\section{Introduction} Let $A_n^+ = \{\mathbf e_i - \mathbf e_j\colon 1 \leq i < j \leq n+1\}\subset\mathbb R^{n+1}$ denote the positive roots of type $A_n$. Subsets of $A_n^+$ can be encoded using a directed acyclic graph $G$ on $n+1$ vertices with edges $(i,j) \in E(G)$ oriented so that $i < j$. Given such a graph $G$, one can consider the \emph{root polytopes} \[ Q_G \overset{\rm def}= \textup{conv}\{\mathbf e_i - \mathbf e_j\colon (i,j) \in E(G)\} \subset \mathbb R^{n+1} \] and \[ \tilde Q_G \overset{\rm def}= \textup{conv}\{\mathbf 0, \mathbf e_i - \mathbf e_j\colon (i,j) \in E(G)\} \subset \mathbb R^{n+1}. \] The purpose of this paper is to completely characterize the faces of the root polytope $\tilde Q_G$ for every $G$. This is accomplished in Theorems~\ref{thm:tilde-faces} and~\ref{thm:non-tilde-faces}. Root polytopes were first studied systematically in~\cite{postnikov2009}, where it was shown that the simplices in a triangulation of a root polytope count lattice points of a generalized permutahedron. The class of root polytopes also includes products of simplices, the triangulations of which are known to have very rich combinatorics (see e.g.\ \cite{hrs2000,santos2005,gnp2018}). Triangulations and subdivision algebras of root polytopes were studied in~\cite{meszaros2011,meszaros2016}, and have been used to solve a variety of other combinatorial problems, e.g.\ in~\cite{em2016,em2018}. Much attention has been devoted to studying the face structure of the convex hull of the entire type $A_n$ root system, and more generally to that of other root systems $\Phi$. The faces of the polytope $\mathcal P_{A_n} = \textup{conv}\{\mathbf e_i - \mathbf e_j\colon i,j \in [n+1]\}$ were characterized combinatorially already in~\cite{cho1999}; computing the $f$-vector of $\mathcal P_{A_n}$ is an easy corollary of the characterization. The $f$-vectors of pulling triangulations of the boundary of $\mathcal P_{A_n}$ were computed in~\cite{hetyei2009}, and the $f$-vectors of unimodular triangulations of the boundary of $\mathcal P_\Phi = \textup{conv}\{\mathbf v\colon\mathbf v \in \Phi\}$, $\Phi = A_n, C_n, D_n$, were given in~\cite{abhps2011}. The orbit classes (under an action of the Weyl group) of the faces of $\mathcal P_\Phi$ were algebraically characterized in~\cite{cm2015}. In contrast, to our knowledge the faces of convex hulls of (subsets of) \emph{positive} roots have been studied only for $\Phi^+ = A_n^+$. Gelfand, Graev, and Postnikov studied faces of $\tilde Q_{K_n}$ not containing the origin in~\cite[Prop.\ 8.1]{ggp1997}, but their result contains a mistake. Cho salvaged this result for facets of $\tilde Q_{K_n}$ in~\cite[Prop.\ 13]{cho1999}. Postnikov generalized Cho's result to facets of $\tilde Q_G$ for \emph{transitively closed} graphs $G$ (Definition~\ref{defn:transitively-closed}) in~\cite[Prop.\ 13.3]{postnikov2009}. To our knowledge, Postnikov's characterization~\cite[Prop.\ 13.3]{postnikov2009} has been the state of the art in this direction. Our results specialize to those of Postnikov straightforwardly (spelled out in Corollary~\ref{cor:transitively-closed-facets}), and correct the mistake in~\cite[Prop.\ 8.1]{ggp1997} in full generality (Corollary~\ref{cor:non-tilde-faces-K_n}; see also Remark~\ref{rem:ggp-comparison}).
When $G$ is an \emph{alternating} graph (Definition~\ref{defn:alternating}), the affine cone generated by $\{\mathbf e_i - \mathbf e_j\colon (i,j) \in E(G)\}$ has algebro-geometric significance: it is related to the deformation theory of a certain toric variety associated to $G$. The faces of this cone, i.e.\ the faces of $\tilde Q_G$ containing the origin, were combinatorially characterized in the recent paper~\cite[Thm.\ 3.17]{portakal2019}, building on the work in~\cite{vv2006}. We highlight and reprove their characterization in Corollaries~\ref{cor:portakal1} and~\ref{cor:portakal2}. The faces of $\tilde Q_G$ are again root polytopes, i.e.\ equal to $\tilde Q_H\subseteq \tilde Q_G$ or $Q_H\subset \tilde Q_G$ for certain subgraphs $H\subseteq G$ (Proposition~\ref{prop:faces-of-root-are-root}). We characterize the subgraphs $H$ for which $\tilde Q_H\subseteq \tilde Q_G$ is a face in Theorem~\ref{thm:tilde-faces}, and separately characterize the subgraphs $H$ for which $Q_H\subset \tilde Q_G$ is a face in Theorem~\ref{thm:non-tilde-faces}. For $G = K_n$, the characterizations of Theorems~\ref{thm:tilde-faces} and~\ref{thm:non-tilde-faces} are particularly nice, and are highlighted in Corollary~\ref{cor:tilde-faces-K_n} and Corollary~\ref{cor:non-tilde-faces-K_n} respectively. \section{Background} \textbf{Conventions.} Unless stated otherwise, $G$ will denote a directed acyclic graph with $V(G) = [n]$. Without loss of generality, we may assume its edges $e = (i,j) \in E(G)$ are directed so that $i < j$. (The adjective \emph{acyclic} will only describe directed graphs, and means that there is no \emph{directed} cycle.) We use the notation $H\subseteq G$ to denote a subgraph $H$ of $G$ with $V(H) = V(G)$ and $E(H) \subseteq E(G)$. We also use the notation $G^\textup{un}$ to denote the underlying undirected graph of $G$. We reserve boldface mathematical notation to denote vectors; in particular $\mathbf e_i$ is the $i$-th basis vector of $\mathbb R^n$. \textbf{Root polytopes.} In~\cite[Sec.\ 12]{postnikov2009}, Postnikov defined the \textbf{root polytopes} \[ Q_G \overset{\rm def}= \textup{conv}\{\mathbf e_i - \mathbf e_j\colon (i,j) \in E(G)\} \subset \mathbb R^n \] and \[ \tilde Q_G \overset{\rm def}= \textup{conv}\{\mathbf 0, \mathbf e_i - \mathbf e_j\colon (i,j) \in E(G)\} \subset \mathbb R^n. \] It is well known that faces of root polytopes are again root polytopes: \begin{prop} \label{prop:faces-of-root-are-root} For every subgraph $H\subseteq G$, the root polytope $Q_H$ is a subpolytope of $\tilde Q_H$, which in turn is a subpolytope of $\tilde Q_G$. Every subpolytope (in particular, every face) of $\tilde Q_G$ is the root polytope $Q_H$ or the root polytope $\tilde Q_H$ for some $H\subseteq G$. \end{prop} \begin{proof} The inclusion of edge sets $E(H) \subseteq E(G)$ implies the inclusion of polytopes $Q_H\subset\tilde Q_H\subseteq\tilde Q_G$. Conversely, every subpolytope $P$ of $\tilde Q_G$ is the convex hull of the vertices of $\tilde Q_G$ which lie in $P$ (see e.g.~\cite[Prop.\ 2.3]{ziegler2007}). The non-origin vertices correspond to edges of $G$, so the collection of such vertices forms a subgraph $H$ of $G$. If $P$ contains (resp.\ doesn't contain) the origin, then $P = \tilde Q_H$ (resp.\ $P = Q_H$). \end{proof} \begin{defn} \label{defn:alternating} A graph $G$ is \textbf{alternating} if there is no vertex $j \in [n] = V(G)$ with $(i,j), (j,k) \in E(G)$ for some $i, k \in [n]$.
\end{defn} We remark that alternating graphs are nothing more than (appropriately oriented) bipartite graphs: \begin{lem} \label{lem:alternating-is-bipartite} Let $G$ be an alternating graph and suppose $G^\textup{un}$ is connected. Then there is a partition of $V(G) = L\sqcup R$ into two parts so that every edge $(i,j) \in E(G)$ connects a vertex $i \in L$ to a vertex $j \in R$. \end{lem} \begin{proof} If $G$ has no edges, the lemma is trivial. Otherwise, we may set \begin{align*} L&\overset{\rm def}=\{v \in V(G)\colon \textup{every edge of $G$ incident to $v$ has $v$ as its source}\},\\ R&\overset{\rm def}=\{v \in V(G)\colon \textup{every edge of $G$ incident to $v$ has $v$ as its sink}\}. \end{align*} Since $G^\textup{un}$ is connected with at least one edge, every vertex of the alternating graph $G$ has an edge incident to it; such an edge cannot have the same vertex as both its source and its sink, so $L$ and $R$ are disjoint. If a vertex $j \in [n]$ is not in $L$, then there is an edge $(i,j) \in E(G)$ with $j$ as its sink; similarly if $j$ is not in $R$, then there is an edge $(j,k) \in E(G)$ with $j$ as its source. Since $G$ is alternating, these cannot simultaneously happen, so $j\in L\sqcup R$. We conclude $L\sqcup R = [n]$. From the definitions of $L$ and $R$, we see that every edge of $G$ connects a vertex in $L$ to a vertex in $R$. \end{proof} The following result can be derived from~\cite{postnikov2009}. Here we include a full proof for completeness. \begin{prop}[{cf.\ \cite[Lem.\ 13.2, Lem.\ 12.5]{postnikov2009}}] \label{prop:root-polytope-dimension-general} Suppose $G^\textup{un}$ has $r$ connected components. Then $\tilde Q_G$ is $(n-r)$-dimensional. If $G^\textup{un}$ has $r$ connected components and $G$ is alternating, then $Q_G$ is $(n-r-1)$-dimensional. \end{prop} \begin{proof} Take a spanning forest $T^\textup{un}\subseteq G^\textup{un}$ and let $T\subseteq G$ be its overlying directed graph. The $n-r+1$ vertices of $\tilde Q_T\subseteq \tilde Q_G$ are affinely independent and hence form an $(n-r)$-dimensional simplex. On the other hand, $\tilde Q_G$ is contained in the $(n-r)$-dimensional subspace \[ W = \bigg\{\mathbf x \in \mathbb R^n \colon \sum_{i \in G_j^\textup{un}} x_i = 0 \textup{ for all connected components $G_j^\textup{un}$ of $G^\textup{un}$}\bigg\}\subset\mathbb R^n. \] It follows that $\tilde Q_G$ is $(n-r)$-dimensional. Suppose now that $G^\textup{un}$ has $r$ connected components and $G$ is alternating. In this case, there is a subset $L\subseteq [n] = V(G)$ so that every edge $e \in E(G)$ has source in $L$ and target not in $L$ (the set $L$ can be thought of as ``source vertices'' of the graph $G$). As before, take a spanning forest $T^\textup{un}\subseteq G^\textup{un}$ and let $T\subseteq G$ be its overlying directed graph. The $n-r$ vertices of $Q_T\subseteq Q_G$ are affinely independent and hence form an $(n-r-1)$-dimensional simplex. On the other hand, $Q_G$ is contained in the $(n-r)$-dimensional subspace $W$ and also in the affine hyperplane \[ \bigg\{\mathbf x \in \mathbb R^n\colon \sum_{i \in L} x_i = 1\bigg\}\subset\mathbb R^n \] intersecting $W$ transversely. Thus $Q_G$ is contained in an $(n-r-1)$-dimensional affine subspace of $\mathbb R^n$, and $Q_G$ is $(n-r-1)$-dimensional. \end{proof}
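The following example illustrates both dimension counts in Proposition~\ref{prop:root-polytope-dimension-general}; the graphs are small instances taken up again later in the paper. \begin{example} For $G = K_3$ (so $n = 3$ and $r = 1$), the polytope $\tilde Q_{K_3}$ is $2$-dimensional; since $K_3$ is not alternating, the second count does not apply, and indeed $Q_{K_3}$ is also $2$-dimensional (cf.\ Example~\ref{ex:path-consistency-necessary}, where $Q_{K_3}$ is a triangle and $\tilde Q_{K_3}$ a rhombus). For the alternating graph $G$ with $E(G) = \{(1,3),(1,4),(2,3),(2,4)\}$ (so $n = 4$ and $r = 1$), the polytope $\tilde Q_G$ is $3$-dimensional while $Q_G$ is $(4-1-1) = 2$-dimensional (cf.\ Example~\ref{ex:admissible-necessary}, where $\tilde Q_G$ is a square pyramid with square base $Q_G$). \end{example}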
\textbf{Polytopes.} We refer to~\cite{ziegler2007} for background on polytopes in general. In what follows, let \[ \ell\colon(x_1, \dots, x_n)\mapsto \sum_{i=1}^nc_ix_i \] denote a linear form. Recall that a \textbf{face} $F$ of a polytope $P\subset\mathbb R^n$ is a subset of the form \[ F = P\cap \{\mathbf x\colon \ell(\mathbf x) = c\} \] for some $c \in \mathbb R$ such that the (affine) hyperplane $\{\ell(\mathbf x) = c\}$ is a \textbf{supporting hyperplane (for $F$)}, i.e.\ such that \[ P\subset\{\mathbf x\colon\ell(\mathbf x)\geq c\} \] holds. A \textbf{facet} of a polytope is a face of codimension 1. We will later use the following lemma. \begin{lem} \label{lem:dual-trick} Let $F$ be a face of a polytope $P$ of codimension $d$. Then $F$ is the intersection of some $d$ facets of $P$. \end{lem} \begin{proof} First recall that every face $F$ of a polytope is the intersection of the facets containing it (see~\cite[Thm.\ 3.1.7]{grunbaum2003} or~\cite[Thm.\ 2.7]{ziegler2007}). Let $F'$ be a face of $P$ of codimension $d-1$ with $F'\supseteq F$. By induction, we may find $d-1$ facets $F_1, \dots, F_{d-1}$ whose intersection is $F'$. It suffices to find a facet $F_*\supseteq F$ not containing $F'$, as $F = F' \cap F_*$ for any such facet $F_*$. Such a facet $F_*$ must exist; otherwise, the intersection of all facets containing $F$ would contain $F'$. \end{proof} \section{Faces of $\tilde Q_G$} \label{sec:faces-of-tilde-QG} This section contains the main results of the paper: Theorem~\ref{thm:tilde-faces} characterizes faces $\tilde Q_H\subseteq\tilde Q_G$, while Theorem~\ref{thm:non-tilde-faces} characterizes faces $Q_H\subset\tilde Q_G$. The latter theorem requires significantly more work than the former, but the technicalities are summarized by Lemma~\ref{lem:ebl}. Both Theorems~\ref{thm:tilde-faces} and~\ref{thm:non-tilde-faces} are proven by analyzing supporting hyperplanes of the relevant subpolytopes (see Lemmas~\ref{lem:tilde-hyperplane-conditions} and~\ref{lem:non-tilde-hyperplane-conditions}), then finding necessary and sufficient combinatorial conditions on $H\subseteq G$ for which a supporting hyperplane exists. We begin with the following useful lemma: \begin{lem} \label{lem:tilde-hyperplane-conditions} Let $H\subseteq G$ be a subgraph, so $\tilde Q_H\subseteq \tilde Q_G$. The hyperplane \[ S = \bigg\{\mathbf x\colon \sum_{i=1}^n c_ix_i = c\bigg\} \] is a supporting hyperplane for $\tilde Q_H$ if and only if: \begin{enumerate}[(a)] \item $c = 0$; \item $c_i \geq c_j$ for all $(i,j) \in E(G)$; \item if $(i,j) \in E(G)$, then $c_i = c_j$ if and only if $(i,j) \in E(H)$. \end{enumerate} \end{lem} \begin{proof} Suppose $S$ is a supporting hyperplane for $\tilde Q_H$, and set \[ S_\geq \overset{\rm def}=\bigg\{\mathbf x\colon \sum_{i=1}^n c_ix_i \geq c\bigg\}. \] Since $\mathbf 0 \in \tilde Q_H$ must be in $S$, condition (a) follows. Conditions (b) and (c) respectively follow from the conditions \begin{equation} \label{eqn:supp-hyp} \tilde Q_G\subset S_\geq \qquad \textup{ and } \qquad \tilde Q_H = \tilde Q_G\cap S \end{equation} applied to vertices of $\tilde Q_G$. Conversely, if all three conditions (a), (b), and (c) hold, then \[ \{\mathbf 0, \mathbf e_i - \mathbf e_j\colon (i,j)\in E(G)\}\subset S_\geq \quad\textup{ and } \quad \{\mathbf 0, \mathbf e_i - \mathbf e_j\colon (i,j)\in E(H)\} = \{\mathbf 0, \mathbf e_i - \mathbf e_j\colon (i,j)\in E(G)\} \cap S. \] Taking convex hulls, we deduce that~\eqref{eqn:supp-hyp} holds. Thus, $S$ is a supporting hyperplane for $\tilde Q_H$. \end{proof} \begin{defn} Let $H\subseteq G$ be a subgraph, and let $H_1^\textup{un}, \dots, H_m^\textup{un}$ be the connected components of the underlying undirected graph $H^\textup{un}$ of $H$.
The directed multigraph $H_\textup{comp}$ is the graph with vertex set \[ V(H_\textup{comp}) = \{H_i^\textup{un}\colon i\in[m]\} \] and edge multiset \[ E(H_\textup{comp}) = \{\!\{(H_i^\textup{un},H_j^\textup{un})\colon \textup{one for each edge } (v_i, v_j) \in E(G)\setminus E(H) \textup{ where } v_i \in V(H_i^\textup{un}),\ v_j \in V(H_j^\textup{un})\}\!\}.\qedhere \] \end{defn} \begin{example} \label{ex:hcomp-loops-cycles} The multigraph $H_\textup{comp}$ may have multiple edges, self-loops, or directed cycles. For example, let $H\subseteq G$ be as in Figure~\ref{fig:hcomp-loops-cycles} below. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.8]{hcomp-loops-cycles.pdf} \end{center} \caption{The graphs $H$ and $G$ in Example~\ref{ex:hcomp-loops-cycles}.} \label{fig:hcomp-loops-cycles} \end{figure} The graph $H^\textup{un}$ has two connected components (with vertex sets $V(H_1^\textup{un}) = \{1,3,4,5\}$ and $V(H_2^\textup{un}) = \{2\}$), and $H_\textup{comp}$ is as in Figure~\ref{fig:hcomp-loops-cycles-2}. \begin{figure}[ht] \begin{center} \includegraphics{hcomp-loops-cycles-2.pdf} \end{center} \caption{The graph $H_\textup{comp}$ for $H$ and $G$ in Example~\ref{ex:hcomp-loops-cycles}. The edges $E(H_\textup{comp})$ are labelled by their corresponding edges in $G$.} \label{fig:hcomp-loops-cycles-2} \end{figure} \end{example} \begin{thm} \label{thm:tilde-faces} Let $H\subseteq G$ be a subgraph. The subpolytope $\tilde Q_H\subseteq \tilde Q_G$ is a face of $\tilde Q_G$ if and only if $H_\textup{comp}$ is loopless and acyclic. \end{thm} \begin{proof} Suppose $\tilde Q_H$ is a face of $\tilde Q_G$, and take a supporting hyperplane $S = \{\ell(\mathbf x) = c\}$ for $\tilde Q_H$. By condition (c) of Lemma~\ref{lem:tilde-hyperplane-conditions}, the numbers $\{c_i\}_{i \in [n]}$ are constant on connected components of $H^\textup{un}$. In particular, if $i$ and $j$ are in the same connected component of $H^\textup{un}$, and $(i,j) \in E(G)$, then $(i,j) \in E(H)$; in other words, $H_\textup{comp}$ is loopless. By conditions (b) and (c) of Lemma~\ref{lem:tilde-hyperplane-conditions}, if $(H_i^\textup{un}, H_j^\textup{un}) \in E(H_\textup{comp})$, then $c_{v_i} > c_{v_j}$, where $v_i \in V(H_i^\textup{un})$ and $v_j \in V(H_j^\textup{un})$. It follows that $H_\textup{comp}$ is acyclic. Suppose now that $H_\textup{comp}$ is loopless and acyclic. We will define numbers $\{c_i\}_{i \in [n]}$ satisfying conditions (b) and (c) of Lemma~\ref{lem:tilde-hyperplane-conditions}, so that \[ S = \bigg\{\mathbf x\colon\sum_{i=1}^nc_ix_i = 0\bigg\} \] is a supporting hyperplane for $\tilde Q_H \subseteq\tilde Q_G$. Since $H_\textup{comp}$ is loopless and acyclic, we may take a linear extension, i.e.\ a function \[ f\colon V(H_\textup{comp}) \to \{1,\dots, |V(H_\textup{comp})|\} \] so that if $(H_i^\textup{un}, H_j^\textup{un}) \in E(H_\textup{comp})$, then $f(H_i^\textup{un}) > f(H_j^\textup{un})$. Each vertex $v \in [n]$ lies in some connected component $H_i^\textup{un}$, and the assignment \[ c_v = f(H_i^\textup{un}) \] works. \end{proof} We pause to highlight an alternative condition equivalent to looplessness of $H_\textup{comp}$. \begin{prop} \label{prop:loopless-criterion} Let $H\subseteq G$ be a subgraph. Then $H_\textup{comp}$ is loopless if and only if $H$ is the disjoint union of induced subgraphs $\{G|_{P_i}\}_{P_i \in \mathcal P}$, where $\mathcal P = \{P_i\}$ is a partition of $[n]$.
\end{prop} \begin{proof} If $H_\textup{comp}$ is loopless, the partition $\mathcal P = \{V(H_i^\textup{un})\}$ works: every edge of $H$ must be contained in some $G|_{V(H_i^\textup{un})}$, so \begin{equation} \label{eqn:loopless-criterion} H\subseteq \bigsqcup_i G|_{V(H_i^\textup{un})}; \end{equation} on the other hand, an edge of $G|_{V(H_i^\textup{un})}$ that is not in $H$ becomes a loop in $H_\textup{comp}$, so equality holds in~\eqref{eqn:loopless-criterion}. Conversely, suppose $H$ is the disjoint union of induced subgraphs $\{G|_{P_i}\}_{P_i \in \mathcal P}$: if an edge $(i,j) \in E(G)$ connects two vertices $i,j$ in the same connected component of $H^\textup{un}$, then $i$ and $j$ are in the same part $P \in \mathcal P$, hence $(i,j)$ must be in $E(G|_P)\subseteq E(H)$. In other words, $H_\textup{comp}$ is loopless. \end{proof} It remains to characterize faces $Q_H\subset\tilde Q_G$ (Theorem~\ref{thm:non-tilde-faces}). To illustrate the difference between faces $\tilde Q_H\subseteq \tilde Q_G$ and faces $Q_H\subset \tilde Q_G$, consider the following example: \begin{example} \label{ex:path-consistency-necessary} When $H = G = K_3$, the polytope \[ Q_{K_3} = \textup{conv}\{\mathbf e_1 - \mathbf e_2, \mathbf e_1 - \mathbf e_3, \mathbf e_2 - \mathbf e_3\} \] is not a face of \[ \tilde Q_{K_3} = \textup{conv}\{\mathbf 0, \mathbf e_1 - \mathbf e_2, \mathbf e_1 - \mathbf e_3, \mathbf e_2 - \mathbf e_3\}. \] (It turns out that $Q_{K_3}$ is a triangle and $\tilde Q_{K_3}$ is a rhombus, as Figure~\ref{fig:QK3-example} below shows.) One explanation for this, which turns out to generalize, goes as follows: Suppose that a supporting hyperplane $\{\ell(\mathbf x) = c\}$ for $Q_{K_3}$ exists. Since $\mathbf 0 \not \in Q_{K_3}$, we must have $0 = \ell(\mathbf 0) > c$; up to scaling, we may assume $c = -1$. On one hand, $\ell(\mathbf e_1 - \mathbf e_2) = -1$ and $\ell(\mathbf e_2 - \mathbf e_3) = -1$, so $\ell(\mathbf e_1 - \mathbf e_3) = \ell(\mathbf e_1 - \mathbf e_2) + \ell(\mathbf e_2 - \mathbf e_3) = -2$. On the other hand, $\mathbf e_1 - \mathbf e_3$ is a vertex of $Q_{K_3}$, so $\ell(\mathbf e_1 - \mathbf e_3) = -1$. This is a contradiction. \begin{figure}[ht] \begin{center}\includegraphics{QK3-example}\end{center} \caption{The root polytopes $Q_{K_3}$ and $\tilde Q_{K_3}$. (The hyperplane $\{x_1 + x_2 + x_3 = 0\}\subset\mathbb R^3$ is identified with $\mathbb R^2$ via the projection $(x_1, x_2, x_3)\mapsto (x_1 - x_2, x_1 - x_3)$; coordinate directions in $\mathbb R^2$ are shown in red.)} \label{fig:QK3-example} \end{figure} \end{example} \begin{defn} \label{defn:path-consistent} A directed acyclic graph $H$ on vertex set $V(H) = [n]$ is \textbf{path consistent} if, for any pair $i,j \in [n]$ and any two undirected paths $\textsf p_{ij}^\textup{un}$ and $\textsf q_{ij}^\textup{un}$ in $H^\textup{un}$ connecting $i$ to $j$, we have \begin{equation} \label{eqn:path-consistent} \#\{(a,b) \in \textsf p_{ij}\colon a < b\} - \#\{(a,b) \in \textsf p_{ij}\colon a > b\} = \#\{(a,b) \in \textsf q_{ij}\colon a < b\} - \#\{(a,b) \in \textsf q_{ij}\colon a > b\}. \end{equation} (Here, $\textsf p_{ij}$ and $\textsf q_{ij}$ record the edges of $E(H)$ traversed by the paths $\textsf p_{ij}^\textup{un}$ and $\textsf q_{ij}^\textup{un}$, each edge written in the direction it is traversed when walking from $i$ to $j$; an edge traversed against its orientation thus appears as a pair $(a,b)$ with $a > b$. In particular, $\textsf p_{ij}$ and $\textsf q_{ij}$ are not necessarily directed paths.) In other words, the difference between the number of ``correctly'' oriented edges and the number of ``incorrectly'' oriented edges in any path depends only on $i$ and $j$.
\end{defn} \begin{example} The complete graph $K_3$ is not path consistent, since the paths $((1,3))$ and $((1,2), (2,3))$ connecting vertices 1 and 3 have one and two correctly oriented edges respectively (cf.\ Example~\ref{ex:path-consistency-necessary}). \end{example} \begin{example} \label{ex:alternating-is-path-consistent} Any alternating graph $G$ is path consistent. Explicitly, we may apply Lemma~\ref{lem:alternating-is-bipartite} to each connected component of $G^\textup{un}$ and obtain a partition of $V(G) = [n]$ into two parts $[n] = L\sqcup R$ such that every vertex $i \in L$ is the source of every edge incident to it, and every vertex $j \in R$ is the sink of every edge incident to it. Then, if $\textsf p_{ij}^\textup{un}$ is a path connecting $i$ to $j$ in $G^\textup{un}$, we have \begin{equation} \label{eqn:transitively-closed-Q_H} \#\{(a,b) \in \textsf p_{ij}\colon a < b\} - \#\{(a,b) \in \textsf p_{ij}\colon a > b\} = \begin{cases} 1 &\textup{ if } i \in L, j \in R\\ 0&\textup{ if } i,j \in L\\ 0&\textup{ if } i,j\in R\\ -1 &\textup{ if } i \in R, j \in L\end{cases} \end{equation} so Equation~\eqref{eqn:path-consistent} is satisfied. \end{example} While path consistency turns out to be a necessary condition, it is not sufficient, as the next example shows. (The necessity will be the easier half of Theorem~\ref{thm:non-tilde-faces}.) \begin{example} \label{ex:admissible-necessary} Let $H\subset G$ be as in Figure~\ref{fig:2K2-C4}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{2K2-C4-admissible.pdf} \end{center} \caption{The graphs $H$ and $G$ in Example~\ref{ex:admissible-necessary}.} \label{fig:2K2-C4} \end{figure} The root polytope $Q_G$ is a square with affine hull \[ \{(x_1, x_2, x_3, x_4)\colon x_1 + x_2 = 1, x_3 + x_4 = -1\}\subset \mathbb R^4, \] so $\tilde Q_G$ is a square pyramid with apex $\mathbf 0$ (see Figure~\ref{fig:Q2C2-QK4-example}). The subpolytope $Q_H= \textup{conv}\{\mathbf e_1 - \mathbf e_3, \mathbf e_2 - \mathbf e_4\}$ is a diagonal of the square face $Q_G$ of $\tilde Q_G$; hence $Q_H$ is not a face of $\tilde Q_G$. \begin{figure}[ht] \begin{center}\includegraphics{Q2C2-QK4-example}\end{center} \caption{The root polytopes $Q_H$ and $\tilde Q_G$ in Example~\ref{ex:admissible-necessary}. (The hyperplane $\{x_1 + x_2 + x_3 + x_4 = 0\}\subset\mathbb R^4$ is identified with $\mathbb R^3$ via the projection $(x_1, x_2, x_3, x_4)\mapsto (x_1 - x_2, x_1 - x_3, x_1 - x_4)$; coordinate directions are shown in red.)} \label{fig:Q2C2-QK4-example} \end{figure} Let us explain why $Q_H$ is not a face of $\tilde Q_G$ in a way that will generalize. Suppose that a supporting hyperplane $\{\ell(\mathbf x) = c\}$ for $Q_H$ exists. Since $\mathbf 0 \not \in Q_H$, we must have $0 = \ell(\mathbf 0) > c$; up to scaling, we may assume $c = -1$. Writing \[ \ell(\mathbf x) = \sum_{i=1}^n c_ix_i, \] we have the four conditions \begin{align*} (1,3) \in E(H) &\implies c_1 = c_3 - 1,\\ (2,3) \in E(G)\setminus E(H) &\implies c_2 > c_3 - 1,\\ (2,4) \in E(H) &\implies c_2 = c_4 - 1,\\ (1,4) \in E(G)\setminus E(H) &\implies c_1 > c_4 - 1 \end{align*} on the $c_i$: the first two say $c_2 > c_1$, whereas the last two say $c_1 > c_2$. \end{example} We now introduce a key notion that generalizes Example~\ref{ex:admissible-necessary}. We begin with: \begin{defn} Let $H$ be a path consistent graph and assume $H^\textup{un}$ is connected.
For any two vertices $u,v \in V(H)$, pick any undirected path $\textsf p^\textup{un}$ connecting $u$ to $v$ and set \[ \ell_{uv} \overset{\rm def}= \#\{(a,b)\in \textsf p\colon a < b\} - \#\{(a,b)\in \textsf p\colon a > b\}. \] (This quantity is well-defined because $H$ is path consistent.) We call $u_*\in V(H)$ a \textbf{weight source} if there is a vertex $v_* \in V(H)$ so that \[ \ell_{u_*v_*} = \max_{u,v}\ell_{uv}. \] Note that a weight source always exists, but is not necessarily unique. \end{defn} Although Definition~\ref{defn:weight-function} requires a choice of a weight source $u_*$, we will show in Proposition~\ref{prop:weight-function} that this choice does not matter. \begin{defn} \label{defn:weight-function} Let $H$ be a path consistent graph and assume that $H^\textup{un}$ is connected. Let $u_*$ be a weight source. The \textbf{weight function} (with respect to $u_*$) of $H$ is the function $w_{u_*}\colon V(H)\to \mathbb Z$ given by \[ w_{u_*}(i) \overset{\rm def}= \ell_{u_*i}. \] \end{defn} \begin{prop} \label{prop:weight-function} Let $H$ be a path consistent graph so that $H^\textup{un}$ is connected. Let $w_{u_*}$ denote the weight function of $H$ with respect to $u_*$. Then: \begin{enumerate} \item $w_{u_*}(i) + 1 = w_{u_*}(j)$ for every edge $(i,j) \in E(H)$. \item $w_{u_*}(i) \geq 0$ for all $i \in V(H)$, and equality holds if and only if $i$ is a weight source. \item If $u_*'$ is another weight source, then $w_{u_*} = w_{u_*'}$. Thus the weight function of $H$ is well-defined, independent of weight source. \end{enumerate} \end{prop} \begin{defn} Let $H$ be a path consistent graph (with $H^\textup{un}$ possibly disconnected). The \textbf{weight function} of $H$ is the function $w\colon V(H)\to \mathbb Z$ obtained by gluing together the weight functions $w_j\colon V(H_j^\textup{un})\to \mathbb Z$ of the connected components of $H^\textup{un}$. \end{defn} \begin{proof}[Proof of Proposition~\ref{prop:weight-function}] Item (1) is a consequence of the fact that concatenating $(i,j)\in E(H)$ to any path connecting $u_*$ to $i$ gives a path connecting $u_*$ to $j$. More generally, concatenation of paths gives the equality \[ \ell_{uv} + \ell_{vw} = \ell_{uw}. \] Let $u_*$ be a weight source, and let $v_* \in V(H)$ satisfy $\ell_{u_*v_*} = \max_{u,v}\ell_{uv}$. The equality \[ \ell_{u_*i} = \ell_{u_*v_*} - \ell_{iv_*} \] and the maximality of $\ell_{u_*v_*}$ guarantee that $\ell_{u_*i} \geq 0$. Furthermore equality holds if and only if $\ell_{iv_*} = \ell_{u_*v_*}$, which in turn holds if and only if $i$ is a weight source. This proves item (2). Finally, let $u_*'$ be another weight source. By part (2), $w_{u_*}(u_*') = 0$ and hence \[ \ell_{u_*i} = \underbrace{\ell_{u_*u_*'}}_{=0} + \ell_{u_*'i}, \] so that $w_{u_*} = w_{u_*'}$. This proves item (3). \end{proof} \begin{defn} Let $H\subseteq G$ be a subgraph, and assume $H$ is path consistent. Let $w$ be the weight function of $H$. Each edge $e = (H_i^\textup{un}, H_j^\textup{un}) \in E(H_\textup{comp})$ of the multigraph $H_\textup{comp}$ corresponds to a unique edge $(v_i, v_j) \in E(G)\setminus E(H)$. We define the \textbf{weight decrease} of $e$ to be the quantity \[ \textsf{wd}(e) \overset{\rm def}= w(v_i) - w(v_j).\qedhere \] \end{defn}
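Before proceeding, we record the weight function of an alternating graph; this computation reappears in the proofs of Proposition~\ref{prop:trans-closed-non-tilde-necessary} and Corollary~\ref{cor:transitively-closed-facets}. \begin{example} \label{ex:alternating-weight-function} Let $H$ be an alternating graph such that $H^\textup{un}$ is connected and has at least one edge, and let $V(H) = L\sqcup R$ be the partition of Lemma~\ref{lem:alternating-is-bipartite}. By Equation~\eqref{eqn:transitively-closed-Q_H}, the maximum value of $\ell_{uv}$ is $1$, attained exactly when $u \in L$ and $v \in R$; hence the weight sources are precisely the vertices of $L$, and the weight function of $H$ is $w \equiv 0$ on $L$ and $w \equiv 1$ on $R$. \end{example}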
While path consistency will be analogous to looplessness of $H_\textup{comp}$, the following condition will be analogous to acyclicity of $H_\textup{comp}$. \begin{defn} \label{defn:admissible} A subgraph $H\subseteq G$ is \textbf{admissible (with respect to $G$)} if, for every directed cycle $\mathcal C$ in $H_\textup{comp}$, the condition \begin{equation} \label{eqn:admissible} \sum_{e \in \mathcal C}\textsf{wd}(e) > -|\mathcal C| \end{equation} holds, where the sum in~\eqref{eqn:admissible} runs over the edges $e$ forming the directed cycle $\mathcal C$. \end{defn} \begin{example}[cf.\ Example~\ref{ex:admissible-necessary}] Returning to Example~\ref{ex:admissible-necessary}, we let $H\subset G$ be as in Figure~\ref{fig:2K2-C4}. The graph $H$ is path consistent; the graph $H^\textup{un}$ has two connected components $H_1^\textup{un}$ and $H_2^\textup{un}$ consisting of vertices $\{1,3\}$ and $\{2,4\}$ respectively. The weight function $w\colon V(H)\to \mathbb N$ is given by $w(1) = w(2) = 0$ and $w(3) = w(4) = 1$. The graph $H_\textup{comp}$ consists of a single cycle of length 2: there is an edge $e = (H_1^\textup{un},H_2^\textup{un}) \in E(H_\textup{comp})$ corresponding to the edge $(1,4) \in E(G)\setminus E(H)$ and its weight decrease is $\textsf{wd}(e) = -1$; there is also an edge $e' = (H_2^\textup{un}, H_1^\textup{un}) \in E(H_\textup{comp})$ corresponding to the edge $(2,3) \in E(G)\setminus E(H)$ and its weight decrease is $\textsf{wd}(e') = -1$. The graph $H_\textup{comp}$ has a single directed cycle $\mathcal C = \{e,e'\}$, and the condition~\eqref{eqn:admissible} \[ \textsf{wd}(e) + \textsf{wd}(e') > -2 \] fails to hold. Thus $H\subseteq G$ is not admissible. \end{example} We now have enough language to state our characterization of subgraphs $H\subseteq G$ for which $Q_H\subset\tilde Q_G$ is a face: \begin{thm} \label{thm:non-tilde-faces} Let $H\subseteq G$ be a subgraph of $G$. The subpolytope $Q_H\subset\tilde Q_G$ is a face of $\tilde Q_G$ if and only if $H$ is path consistent and admissible. \end{thm} To prove Theorem~\ref{thm:non-tilde-faces}, we will use the following technical lemma; Section~\ref{sec:ebl} is dedicated to its proof, which we feel is unenlightening in the context of this paper. \begin{lem} \label{lem:ebl} Let $H\subseteq G$ be an admissible subgraph of $G$. There is a vector $\mathbf d = (d_v)_{v \in V(H_\textup{comp})} \in \mathbb R^{V(H_\textup{comp})}$ so that \[ \textup{\textsf{wd}}(e) + d_{s(e)} - d_{t(e)} > -1 \] for every edge $e \in E(H_\textup{comp})$. \end{lem} (Here, $s(e)$ denotes the source of the edge $e$, and $t(e)$ denotes the target of the edge $e$.) We now prove the following analogue of Lemma~\ref{lem:tilde-hyperplane-conditions} for faces $Q_H\subset\tilde Q_G$: \begin{lem} \label{lem:non-tilde-hyperplane-conditions} Let $H\subseteq G$ be a subgraph, so $Q_H\subset\tilde Q_G$. The hyperplane \[ S = \bigg\{\mathbf x\colon\sum_{i=1}^nc_ix_i = c\bigg\} \] is a supporting hyperplane for $Q_H$ if and only if: \begin{enumerate}[(a)] \item $c < 0$; \item $c_i \geq c_j + c$ for all $(i,j) \in E(G)$; \item if $(i,j) \in E(G)$, then $c_i = c_j + c$ if and only if $(i,j) \in E(H)$. \end{enumerate} \end{lem} \begin{proof} Suppose $S$ is a supporting hyperplane for $Q_H$, and set \[ S_\geq \overset{\rm def}=\bigg\{\mathbf x\colon \sum_{i=1}^n c_ix_i \geq c\bigg\}. \] Since $\mathbf 0 \in \tilde Q_G$ lies in $S_\geq$ but not in $Q_H = \tilde Q_G\cap S$, the origin must be in $S_\geq \setminus S$, and condition (a) follows. Conditions (b) and (c) respectively follow from the conditions \begin{equation} \label{eqn:supp-hyp-2} \tilde Q_G\subset S_\geq \qquad \textup{ and } \qquad Q_H = \tilde Q_G\cap S \end{equation} applied to vertices of $\tilde Q_G$.
Conversely, if all three conditions (a), (b), and (c) hold, then \[ \{\mathbf 0, \mathbf e_i - \mathbf e_j\colon (i,j)\in E(G)\}\subset S_\geq \quad\textup{ and } \quad \{\mathbf e_i - \mathbf e_j\colon (i,j)\in E(H)\} = \{\mathbf 0, \mathbf e_i - \mathbf e_j\colon (i,j)\in E(G)\} \cap S. \] Taking convex hulls, we deduce that~\eqref{eqn:supp-hyp-2} holds. Thus, $S$ is a supporting hyperplane for $Q_H$. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:non-tilde-faces}] Let $Q_H\subset \tilde Q_G$ be a face of $\tilde Q_G$, and take a supporting hyperplane \[ S = \bigg\{\mathbf x\colon \sum_{i=1}^n c_ix_i = c\bigg\} \] of $Q_H$. Applying condition (a) of Lemma~\ref{lem:non-tilde-hyperplane-conditions}, we may assume up to scaling that $c = -1$. Then, if $\textsf p_{ij}^\textup{un}$ is an undirected path in $H^\textup{un}$ connecting $i$ to $j$, and $\textsf p_{ij}$ is the corresponding sequence of edges of $H$ (as in Definition~\ref{defn:path-consistent}), we have \[ \#\{(a,b) \in \textsf p_{ij}\colon a < b\} - \#\{(a,b) \in \textsf p_{ij}\colon a > b\} = c_j - c_i \] by repeatedly applying condition (c) of Lemma~\ref{lem:non-tilde-hyperplane-conditions} to the edges of $\textsf p_{ij}$. This holds for any such path of $H^\textup{un}$, so Equation~\eqref{eqn:path-consistent} is satisfied, and $H$ is path consistent. Importantly, we emphasize that when $i,j \in [n]$ are in the same connected component of $H^\textup{un}$, then \begin{equation} \label{eqn:coeff-weight-equality} c_j - c_i = w(j) - w(i), \end{equation} where $w$ is the weight function of $H$. Furthermore, let $\mathcal C$ be a directed cycle of $H_\textup{comp}$, consisting of edges $\{e_1^\textup{comp}, \dots, e_{|\mathcal C|}^\textup{comp}\}$ corresponding to edges $\{e_1, \dots, e_{|\mathcal C|}\}\subseteq E(G)\setminus E(H)$. Denote by $s_i, t_i \in [n] = V(G)$ the source and target of the edge $e_i$ respectively. Since $\mathbf e_{s_i} - \mathbf e_{t_i} \not \in Q_H$, condition (c) of Lemma~\ref{lem:non-tilde-hyperplane-conditions} says \begin{equation} \label{eqn:coeff-inequality} c_{s_i} - c_{t_i} > -1 \end{equation} so \[ \sum_{i=1}^{|\mathcal C|}\textsf{wd}(e_i^\textup{comp}) = \sum_{i=1}^{|\mathcal C|} (w(s_i) - w(t_i)) = \sum_{i=1}^{|\mathcal C|}(w(s_{i+1}) - w(t_i)), \] with $s_{|\mathcal C| + 1}\overset{\rm def}= s_1$. Since $\mathcal C$ forms a cycle in $H_\textup{comp}$, the target of $e_i^\textup{comp} \in E(H_\textup{comp})$ is equal to the source of $e_{i+1}^\textup{comp} \in E(H_\textup{comp})$. Thus, the vertices $t_i, s_{i+1} \in V(H)$ are in the same connected component of $H^\textup{un}$. Then Equations~\eqref{eqn:coeff-weight-equality} and~\eqref{eqn:coeff-inequality} say \[ \sum_{i=1}^{|\mathcal C|}(w(s_{i+1}) - w(t_i)) = \sum_{i=1}^{|\mathcal C|} (c_{s_{i+1}} - c_{t_i}) = \sum_{i=1}^{|\mathcal C|}(c_{s_i} - c_{t_i})> -|\mathcal C|. \] Thus we have verified that Equation~\eqref{eqn:admissible} holds for every cycle $\mathcal C$, and $H$ is admissible. Suppose now that $H$ is path consistent and admissible. It suffices to provide numbers $c_i$, $i \in [n] = V(H)$, so that conditions (b) and (c) of Lemma~\ref{lem:non-tilde-hyperplane-conditions} hold for $c = -1$, i.e. \begin{equation} \label{eqn:non-tilde-coeff-condition} c_i - c_j > -1 \textup{ for } (i,j) \in E(G)\setminus E(H) \qquad \textup{ and } \qquad c_i - c_j = -1 \textup{ for } (i,j) \in E(H). \end{equation} By Lemma~\ref{lem:ebl}, there exist numbers $d_i$, $i \in V(H_\textup{comp})$, so that \[ \textsf{wd}(e) + d_{s(e)} - d_{t(e)} > -1.
\] Now let $v \in [n] = V(H)$ be a vertex of $H$ and suppose $v \in V(H_{v^\textup{comp}}^\textup{un})$ is in the $(v^\textup{comp})$-th connected component of $H^\textup{un}$. Then \[ c_v \overset{\rm def}= w(v) + d_{v^\textup{comp}}, \] where $w$ is the weight function of $H$, satisfies Equation~\eqref{eqn:non-tilde-coeff-condition}: if $e = (i,j) \in E(G)\setminus E(H)$ corresponds to $e^\textup{comp} = (i^\textup{comp}, j^\textup{comp}) \in E(H_\textup{comp})$ then \[ c_i - c_j = \textsf{wd}(e) + d_{i^\textup{comp}} - d_{j^\textup{comp}} > -1, \] while if $(i,j) \in E(H)$ then (as in Equation~\eqref{eqn:coeff-weight-equality}) \[ c_i - c_j = w(i) - w(j) = -1.\qedhere \] \end{proof} \section{Proof of Lemma~\ref{lem:ebl}} \label{sec:ebl} This section contains a proof of Lemma~\ref{lem:ebl}. We feel that Lemma~\ref{lem:ebl-extension} might be of independent interest, although its full generality is not needed elsewhere in this paper. \textbf{In this section only}, we temporarily allow $G$ to be a directed multigraph. In what follows, we will treat signed multisets $\mathcal S$ of edges of $G$ as formal sums \[ \mathcal S = \sum_{e\in E(G)} m_e(\mathcal S)\cdot e \] of edges, where $m_e(\mathcal S)$ is the signed multiplicity of $e$ in $\mathcal S$. We identify the set of formal $\mathbb Z$-linear combinations of edges of $G$ with $\mathbb Z^{E(G)}$. We treat simple directed cycles $\mathcal C$ and directed paths $\textsf p$ as sets of edges, i.e.\ as sums \[ \mathcal C = \sum_{e\in \mathcal C}e \qquad\textup{ and } \qquad \textsf p = \sum_{e\in\textsf p}e. \] \begin{defn} We let $\mathcal M_G$ denote the abelian group of $\mathbb Z$-linear combinations of simple directed cycles. The elements of $\mathcal M_G$ are called \textbf{formal cycles}, and $\mathcal M_G$ is a $\mathbb Z$-submodule of $\mathbb Z^{E(G)}$. \end{defn} \begin{example} \label{ex:MG-relations} Simple directed cycles may satisfy relations in $\mathcal M_G$. For example, consider $G$ as in Figure~\ref{fig:MG-relations} below. \begin{figure}[ht] \begin{center}\includegraphics{MG-relations}\end{center} \caption{The graph $G$ in Example~\ref{ex:MG-relations}.} \label{fig:MG-relations} \end{figure} Let $\mathcal C_1 = (1,2) + (2,4) + (4,1)$ and $\mathcal C_2 = (1,4) + (4,3) + (3,1)$. Also let $\mathcal C_3 = (1,2) + (2,4) + (4,3) + (3,1)$ and $\mathcal C_4 = (1,4) + (4,1)$. These are all simple directed cycles, and in $\mathcal M_G$ the relation $\mathcal C_1 + \mathcal C_2 = \mathcal C_3 + \mathcal C_4$ holds. \end{example} \begin{defn} For a formal sum of edges $\mathcal S \in \mathbb Z^{E(G)}$ and an edge $e \in E(G)$, we let $m_e(\mathcal S)$ denote the coefficient of $e$ in $\mathcal S$. The \textbf{support} of $\mathcal S$ is the set $\{e\in E(G)\colon m_e(\mathcal S)\neq 0\}$ and is denoted $\textup{supp}(\mathcal S)$.
We also set \[ |\mathcal S|\overset{\rm def}=\sum_{e\in E(G)}|m_e(\mathcal S)|.\qedhere \] \end{defn} \begin{lem} \label{lem:ebl-elements} For any directed multigraph $G$, the abelian group $\mathcal M_G$ is equal to the set of formal sums $\mathcal S \in \mathbb Z^{E(G)}$ satisfying \begin{equation} \label{eqn:ebl-elements-condition} \sum_{\substack{e\in E(G)\\s(e)=v}}m_e(\mathcal S) = \sum_{\substack{e\in E(G)\\t(e)=v}}m_e(\mathcal S)\qquad \textup{ for all }v \in V(G) \end{equation} and \begin{equation} \label{eqn:ebl-elements-condition-supp} \textup{supp}(\mathcal S)\subseteq\bigcup_{\mathcal C}\textup{supp}(\mathcal C), \end{equation} where the union runs over all simple directed cycles $\mathcal C$ of $G$. \end{lem} (As in Section~\ref{sec:faces-of-tilde-QG}, the notation $s(e)$ and $t(e)$ stands for the source and target of the edge $e$ respectively.) \begin{rem} \label{rem:ebl-elements} We will use Lemma~\ref{lem:ebl-elements} in the following way: If $H\subseteq G$ is a subgraph which is obtained as a union of directed cycles, and the formal cycle $\mathcal C \in \mathcal M_G$ has support in $H$, then $\mathcal C \in \mathcal M_H$. (That is to say, although $\mathcal C$ comes as a $\mathbb Z$-linear combination of directed cycles of $G$, it may be replaced by a $\mathbb Z$-linear combination of directed cycles of $H$.) For example, with notation as in Example~\ref{ex:MG-relations}, the formal cycle $\mathcal C_* \overset{\rm def}= \mathcal C_1 + \mathcal C_2 - \mathcal C_3 \in \mathcal M_G$ has support in the subgraph $H\subseteq G$ where $E(H) = \{(1,4), (4,1)\}$. As expected, $\mathcal C_*$ is a $\mathbb Z$-linear combination of directed cycles of $H$, since $\mathcal C_* = \mathcal C_4$. \end{rem} \begin{proof}[Proof of Lemma~\ref{lem:ebl-elements}] Any simple directed cycle satisfies conditions~\eqref{eqn:ebl-elements-condition} and~\eqref{eqn:ebl-elements-condition-supp}; it follows that any formal cycle satisfies conditions~\eqref{eqn:ebl-elements-condition} and~\eqref{eqn:ebl-elements-condition-supp} as well. Conversely, suppose $\mathcal S$ is a formal sum \[ \mathcal S = \sum_{e\in E(G)} m_e(\mathcal S)\cdot e \] satisfying~\eqref{eqn:ebl-elements-condition} and~\eqref{eqn:ebl-elements-condition-supp}; our goal is to show that $\mathcal S$ is a formal cycle. Adding directed cycles to $\mathcal S$ if necessary (every edge of $\textup{supp}(\mathcal S)$ lies on a simple directed cycle by~\eqref{eqn:ebl-elements-condition-supp}), we may assume $m_e(\mathcal S)\geq 0$ for every $e \in E(G)$. Thus, it suffices to show that nonnegative formal sums of edges satisfying~\eqref{eqn:ebl-elements-condition} and~\eqref{eqn:ebl-elements-condition-supp} are formal cycles. The remainder of the proof is by induction on $|\mathcal S|$. Specifically, we argue that there exists a simple directed cycle $\mathcal C$ of $G$ whose support is contained in $\textup{supp}(\mathcal S)$; because $\mathcal S - \mathcal C$ is again a nonnegative formal sum of edges satisfying~\eqref{eqn:ebl-elements-condition} and~\eqref{eqn:ebl-elements-condition-supp}, the inductive hypothesis guarantees that $\mathcal S - \mathcal C$ is a formal cycle. Indeed, pick any edge $e = (s,t) \in \textup{supp}(\mathcal S)$; since condition~\eqref{eqn:ebl-elements-condition} holds at the vertex $t \in V(G)$ and $m_e(\mathcal S) > 0$, there is another edge $e' \in \textup{supp}(\mathcal S)$ whose source is the vertex $t$.
By repeating this process, we obtain edges whose concatenation forms a directed path; this path eventually intersects itself and thus contains a simple directed cycle. \end{proof} \begin{lem} \label{lem:ebl-extension} Let $G$ be a directed multigraph, and let $c\colon \mathcal M_G \to \mathbb R$ be an additive map such that $c(\mathcal C) > -|\mathcal C|$ for any directed cycle $\mathcal C \in \mathcal M_G$. Then $c$ can be extended to an additive map $c\colon \mathbb Z^{E(G)}\to \mathbb R$ so that $c(e) > -1$ for all $e \in E(G)$. \end{lem} \begin{proof} The proof is by induction on $|E(G)|$. Note that if $G$ has no directed cycles, then $\mathcal M_G = 0$ and the lemma is trivial (set $c(e) = 0$ for all $e \in E(G)$), so we may assume $G$ has at least one directed cycle. We enumerate the simple directed cycles of $G$ by $\mathcal C_1, \dots, \mathcal C_r$. Set \[ W\overset{\rm def}= \min_{i\in[r]}\bigg\{\frac{c(\mathcal C_i)}{|\mathcal C_i|}\bigg\}>-1; \quad I\overset{\rm def}= \bigg\{i \in [r]\colon \frac{c(\mathcal C_i)}{|\mathcal C_i|} = W\bigg\}; \quad E_I\overset{\rm def}=\{e\in E(G)\colon e \in \mathcal C_i \textup{ for some } i \in I\}. \] Let us define an additive map $c_1\colon \mathbb Z^{E_I}\to \mathbb R$ by setting $c_1(e) = W$ for all $e \in E_I$. Treating $E_I\subseteq G$ as a subgraph, observe that Lemma~\ref{lem:ebl-elements} implies any formal cycle $\mathcal C \in \mathcal M_G$ with support in $E_I$ is in fact a formal cycle in $\mathcal M_{E_I}$ (cf.\ Remark~\ref{rem:ebl-elements}). In particular, every simple directed cycle $\mathcal D$ of $E_I$ satisfies $c(\mathcal D) = W|\mathcal D|$: minimality of $W$ gives $c(\mathcal D) \geq W|\mathcal D|$, while subtracting $\mathcal D$ from $\sum_{i \in I}\mathcal C_i$ leaves a nonnegative formal cycle of $E_I$, and decomposing that difference into simple directed cycles (each of which satisfies $c \geq W|\cdot|$) gives $c(\mathcal D) \leq W|\mathcal D|$. Additivity of $c$ then implies that any formal cycle $\mathcal C$ of $G$ with support in $E_I$ also satisfies $c(\mathcal C) = W|\mathcal C|$. Again treating $E_I\subseteq G$ as a subgraph, we may form the multigraph $(E_I)_\textup{comp}$. We will argue that formal cycles $\mathcal D$ of $(E_I)_\textup{comp}$ can be described as follows: the corresponding linear combination $\mathcal D'$ of edges of $G\setminus E_I$ is the restriction of some formal cycle $\mathcal C$ of $G$ to $E(G)\setminus E_I$, i.e. \begin{equation} \label{eqn:ebl-restrict} \mathcal D' = \sum_{e\in E(G)\setminus E_I} m_e(\mathcal C)\cdot e \qquad\textup{ for some } \mathcal C \in\mathcal M_G. \end{equation} First note that $E_I$ is a union of directed cycles and hence its weak components are strongly connected; if $\mathcal D$ is a directed cycle of $(E_I)_\textup{comp}$ then the corresponding $\mathcal D' \subseteq G\setminus E_I$ can be completed to a directed cycle $\mathcal C$ of $G$ by appending directed paths in $E_I$. Such a directed cycle $\mathcal C$ satisfies~\eqref{eqn:ebl-restrict}. When $\mathcal D$ is a formal sum of cycles of $(E_I)_\textup{comp}$, the corresponding formal sum of cycles of $G$ also satisfies~\eqref{eqn:ebl-restrict}. We now argue, for any formal cycle $\mathcal D \in \mathcal M_{(E_I)_\textup{comp}}$, that the quantity \[ c_2(\mathcal D)\overset{\rm def}=c(\mathcal C) - W\sum_{e\in E_I}|m_e(\mathcal C)| \] is well-defined, independent of the choice of formal cycle $\mathcal C$ satisfying~\eqref{eqn:ebl-restrict}. Let $\mathcal C_1, \mathcal C_2$ be formal cycles satisfying~\eqref{eqn:ebl-restrict}, and consider the formal sums \[ \mathcal C_1\cap E_I \overset{\rm def}= \sum_{e\in E_I}m_e(\mathcal C_1)\cdot e \qquad\textup{ and } \qquad\mathcal C_2\cap E_I\overset{\rm def}=\sum_{e\in E_I}m_e(\mathcal C_2)\cdot e.
\] By definition, we may decompose $\mathcal C_1\cap E_I$ and $\mathcal C_2 \cap E_I$ into $\mathbb Z$-linear combinations of directed paths of $E_I$, each of which connects endpoints of edges of $\mathcal D'\subseteq G\setminus E_I$. Treating paths as sums of edges, we may write \[ \mathcal C_1\cap E_I = \sum_i a_i\cdot \textsf p_{i,1} \qquad\textup{ and } \qquad \mathcal C_2\cap E_I = \sum_i b_i\cdot \textsf p_{i,2}, \] where $a_i, b_i \in \mathbb Z$. For a directed path $\textsf p$, let $s(\textsf p)$ and $t(\textsf p)$ denote the source and target respectively. For $v \in V(E_I)$ and $j \in \{1,2\}$ let \[ S(v;j)\overset{\rm def}=\{i\colon s(\textsf p_{i,j}) = v\} \qquad\textup{ and } \qquad T(v;j)\overset{\rm def}=\{i\colon t(\textsf p_{i,j}) = v\}. \] Since $\mathcal C_1$ and $\mathcal C_2$ satisfy~\eqref{eqn:ebl-restrict} for the same formal cycle $\mathcal D \in \mathcal M_{(E_I)_\textup{comp}}$, we have \[ \sum_{i\in S(v;1)} a_i = \sum_{i \in S(v;2)} b_i \qquad \textup{ and } \qquad \sum_{i\in T(v;1)} a_i = \sum_{i \in T(v;2)} b_i \] for all $v \in V(E_I)$. Thus, $\mathcal C_1\cap E_I$ and $\mathcal C_2\cap E_I$ can be \emph{simultaneously} completed to a formal cycle of $E_I$, i.e.\ there exist formal cycles $(\mathcal C_1)^I$ and $(\mathcal C_2)^I$ of $E_I$ so that \[ \begin{cases} m_e((\mathcal C_1)^I) = m_e(\mathcal C_1) \qquad\textup{ for all } e \in E_I\cap \textup{supp}(\mathcal C_1)\\ m_e((\mathcal C_2)^I) = m_e(\mathcal C_2) \qquad\textup{ for all } e\in E_I\cap \textup{supp}(\mathcal C_2)\\ m_e((\mathcal C_1)^I) = m_e((\mathcal C_2)^I) \qquad\textup{ for all other } e \in E_I. \end{cases} \] Note that $\mathcal C_1 + (\mathcal C_2)^I = (\mathcal C_1)^I + \mathcal C_2$ in $\mathcal M_G$, and hence \[ c(\mathcal C_1) + W|(\mathcal C_2)^I| = c(\mathcal C_2) + W|(\mathcal C_1)^I|. \] Rearranging terms, we obtain \[ c(\mathcal C_1) - W|\mathcal C_1\cap E_I| = c(\mathcal C_2) - W|\mathcal C_2\cap E_I|. \] We conclude $c_2\colon \mathcal M_{(E_I)_\textup{comp}}\to \mathbb R$ is well-defined. The function $c_2\colon \mathcal M_{(E_I)_\textup{comp}}\to \mathbb R$ is additive, since restriction commutes with summation: if $\mathcal C_1$ and $\mathcal C_2$ are formal cycles of $G$ whose restrictions to $G\setminus E_I$ are $(\mathcal D_1)'$ and $(\mathcal D_2)'$ respectively, then the restriction of $\mathcal C_1 + \mathcal C_2$ to $G\setminus E_I$ is $(\mathcal D_1)'+ (\mathcal D_2)'$. Furthermore, if $\mathcal D$ is a directed cycle of $(E_I)_\textup{comp}$, completed to a directed cycle $\mathcal C$ of $G$ as above, then $|\mathcal C| = |\mathcal C\cap E_I| + |\mathcal D|$, and minimality of $W$ implies $c(\mathcal C) \geq W|\mathcal C|$, so \[ \frac{c_2(\mathcal D)}{|\mathcal D|} = \frac{c(\mathcal C) - W|\mathcal C\cap E_I|}{|\mathcal D|} \geq \frac{W|\mathcal C| - W|\mathcal C\cap E_I|}{|\mathcal D|} = \frac{W|\mathcal D|}{|\mathcal D|}= W > -1. \] Since $|E((E_I)_\textup{comp})| < |E(G)|$, the inductive hypothesis asserts that the function $c_2$ extends to an additive map $c_2\colon \mathbb Z^{E((E_I)_\textup{comp})}\to \mathbb R$ so that $c_2(e) > -1$ for all $e \in E((E_I)_\textup{comp})$; identifying $E((E_I)_\textup{comp})$ with $E(G)\setminus E_I$ we obtain an additive map $c_2\colon \mathbb Z^{E(G)\setminus E_I}\to \mathbb R$.
The functions $c_1\colon \mathbb Z^{E_I}\to \mathbb R$ and $c_2\colon \mathbb Z^{E(G)\setminus E_I}\to \mathbb R$ glue to a function $\mathbb Z^{E(G)}\to \mathbb R$ which we claim extends $c\colon \mathcal M_G\to \mathbb R$. To verify this claim, we must check that if $\mathcal C$ is a simple directed cycle of $G$, then \begin{equation} \label{eqn:ebl-extension-condition} \sum_{e\in \mathcal C\cap E_I}c_1(e) + \sum_{e\in \mathcal C\cap (G\setminus E_I)}c_2(e) = c(\mathcal C). \end{equation} By the definitions of $c_1$ and $c_2$, we have \[ \sum_{e\in \mathcal C\cap E_I}c_1(e) = W|\mathcal C\cap E_I| \qquad\textup{ and } \qquad \sum_{e\in \mathcal C\cap (G\setminus E_I)}c_2(e) = c(\mathcal C) - W|\mathcal C\cap E_I|, \] so Equation~\eqref{eqn:ebl-extension-condition} is satisfied. \end{proof} We can now prove Lemma~\ref{lem:ebl}, restated here for convenience: \newtheorem*{lem:ebl}{Lemma~\ref{lem:ebl}} \begin{lem:ebl} Let $H\subseteq G$ be an admissible subgraph of $G$. There is a vector $\mathbf d = (d_v)_{v \in V(H_\textup{comp})} \in \mathbb R^{V(H_\textup{comp})}$ so that \begin{equation} \label{eqn:ebl-required} \textup{\textsf{wd}}(e) + d_{s(e)} - d_{t(e)} > -1 \end{equation} for every edge $e \in E(H_\textup{comp})$. \end{lem:ebl} \begin{proof}[Proof of Lemma~\ref{lem:ebl}] Let $M^T$ denote the transpose of the incidence matrix of $H_\textup{comp}$, i.e.\ the matrix corresponding to the linear transformation \begin{align*} M^T\colon \mathbb R^{V(H_\textup{comp})} &\to \mathbb R^{E(H_\textup{comp})}\\ \mathbf e_i &\mapsto \sum_{\substack{e\in E(H_\textup{comp})\\s(e) = i}} \mathbf e_e - \sum_{\substack{e\in E(H_\textup{comp})\\ t(e) = i}} \mathbf e_e. \end{align*} Let $\textsf{wd}(H_\textup{comp}) \in \mathbb R^{E(H_\textup{comp})}$ denote the vector whose component indexed by $e \in E(H_\textup{comp})$ is $\textsf{wd}(e)$. Then Lemma~\ref{lem:ebl} asks for a vector $\mathbf d \in \mathbb R^{V(H_\textup{comp})}$ so that \begin{equation} \label{eqn:d-required} \textsf{wd}(H_\textup{comp}) + M^T\mathbf d > -\mathbf 1, \end{equation} where $\mathbf 1\in \mathbb R^{E(H_\textup{comp})}$ is the vector whose components are all equal to 1. The image of $M^T$ is equal to the cut space of $H_\textup{comp}$, i.e.\ the space \[ W = \bigg\{\mathbf x \in \mathbb R^{E(H_\textup{comp})}\colon \sum_{e \in \mathcal C}x_e = 0\textup{ for all directed cycles $\mathcal C$ of $H_\textup{comp}$}\bigg\}; \] see e.g.~\cite[Thm.\ II.3.9, Ex.\ II.4.39]{bollobas1998}. Because $H$ is admissible, the additive function \begin{align*} c\colon \mathcal M_{H_\textup{comp}}&\to \mathbb R\\ \mathcal C&\mapsto \sum_{e\in\textup{supp}(\mathcal C)}m_e(\mathcal C)\cdot \textsf{wd}(e) \end{align*} satisfies $c(\mathcal C) > -|\mathcal C|$ for every directed cycle $\mathcal C$ of $\mathcal M_{H_\textup{comp}}$, so Lemma~\ref{lem:ebl-extension} guarantees that $c$ can be extended to an additive function $c\colon \mathbb Z^{E(H_\textup{comp})}\to \mathbb R$ with $c(e) > -1$ for all $e \in E(H_\textup{comp})$. Let $\mathbf c \in \mathbb R^{E(H_\textup{comp})}$ denote the vector whose component indexed by $e \in E(H_\textup{comp})$ is $c(e)$; by definition, $\mathbf c > -\mathbf 1$.
The condition that \[ \sum_{e \in \mathcal C} c(e) = \sum_{e \in \mathcal C}\textsf{wd}(e) \] for every directed cycle $\mathcal C$ of $H_\textup{comp}$ is exactly the condition \[ \textsf{wd}(H_\textup{comp}) - \mathbf c \in W, \] so $\textsf{wd}(H_\textup{comp}) - \mathbf c = M^T\mathbf v$ for some $\mathbf v \in \mathbb R^{V(H_\textup{comp})}$. Rearranging, \[ \textsf{wd}(H_\textup{comp}) + M^T(-\mathbf v) = \mathbf c > -\mathbf 1, \] so $\mathbf d\colonequals -\mathbf v$ satisfies Equation~\eqref{eqn:d-required}. \end{proof} \section{Consequences of Theorems~\ref{thm:tilde-faces} and~\ref{thm:non-tilde-faces}; relations to previous results} In this section, we explore consequences of Theorems~\ref{thm:tilde-faces} and~\ref{thm:non-tilde-faces}. In Corollaries~\ref{cor:portakal1} and~\ref{cor:portakal2} we highlight a result of Portakal characterizing faces of the form $\tilde Q_H\subseteq \tilde Q_G$ for alternating graphs $G$. In Corollary~\ref{cor:transitively-closed-facets}, we show that Theorem~\ref{thm:non-tilde-faces} specializes to a result of Postnikov characterizing facets of the form $Q_H\subset\tilde Q_G$ for transitively closed graphs $G$ (Definition~\ref{defn:transitively-closed}). We also highlight the special case $G = K_n$ in Corollaries~\ref{cor:tilde-faces-K_n} and~\ref{cor:non-tilde-faces-K_n}; the latter corollary corrects a result of Gelfand, Graev, and Postnikov (see Remark~\ref{rem:ggp-comparison}). We will use the following notation. \begin{defn} Let $G$ be a directed graph and let $A\subseteq V(G)$. The set of \textbf{neighbors} of $A$, denoted $N(A)$, is the set \[ N(A) \overset{\rm def}= \{v\in V(G)\colon (v,a) \in E(G) \textup{ for some } a \in A\}\cup\{v\in V(G)\colon (a,v) \in E(G) \textup{ for some } a \in A\}. \] We say $A\subseteq V(G)$ is \textbf{independent} if it is disjoint from $N(A)$. \end{defn} Recall (see Lemma~\ref{lem:alternating-is-bipartite}) that the vertex set of an alternating graph may be partitioned into disjoint sets $L$ and $R$ consisting of source and sink vertices respectively. In this setting, Theorem~\ref{thm:tilde-faces} may be recast as follows. \begin{cor}[{\cite[Thm.\ 3.12]{portakal2019}}] \label{cor:portakal1} Let $G$ be an alternating graph and suppose $G^\textup{un}$ is connected. The subgraph $H\subseteq G$ defines a facet $\tilde Q_H\subseteq\tilde Q_G$ if and only if $H^\textup{un}$ has two connected components and \begin{equation} \label{eqn:facet-db} H = G|_{A\sqcup N(A)}\sqcup G|_{[n]\setminus (A\sqcup N(A))} \end{equation} for some set $A\subset R$ of sink vertices. \end{cor} \begin{proof} Suppose first that $\tilde Q_H$ is a facet of $\tilde Q_G$. Since $G^\textup{un}$ is connected, we have $\dim(\tilde Q_G) = n-1$ and hence $\dim(\tilde Q_H) = n-2$. Proposition~\ref{prop:root-polytope-dimension-general} implies $H^\textup{un}$ must have two connected components which we denote by $H_1^\textup{un}$ and $H_2^\textup{un}$. Theorem~\ref{thm:tilde-faces} asserts that the two-vertex graph $H_\textup{comp}$ is loopless and acyclic; because $G^\textup{un}$ is connected, the graph $H_\textup{comp}$ must have an edge which, without loss of generality, sends $H_1^\textup{comp} \in V(H_\textup{comp})$ to $H_2^\textup{comp} \in V(H_\textup{comp})$. Proposition~\ref{prop:loopless-criterion} implies that $H$ is a disjoint union of induced subgraphs of $G$: specifically, we may write \[ H = G|_{V(H_1^\textup{un})} \sqcup G|_{V(H_2^\textup{un})}.
\] Set \[ A\overset{\rm def}= V(H_1^\textup{un}) \cap R; \] observe that $H_1\subseteq G|_{A\sqcup N(A)}$: every edge of $H_1$ has its target in $A$ and hence its source in $N(A)$. Furthermore, because $H_1^\textup{comp}$ is a source vertex in $H_\textup{comp}$, every edge $e = (v,a) \in E(G)$ incident to a vertex in $A$ must be in $H_1$. It follows that $H_1 = G|_{A\sqcup N(A)}$. Since $V(H_2^\textup{un}) = [n] \setminus V(H_1^\textup{un})$, we conclude that $H$ has the form~\eqref{eqn:facet-db}. Suppose now that $H^\textup{un}$ has two connected components $H_1^\textup{un}$ and $H_2^\textup{un}$ and that $H$ has the form~\eqref{eqn:facet-db}. Proposition~\ref{prop:loopless-criterion} implies $H_\textup{comp}$ is loopless. Observe that $(G|_{A\sqcup N(A)})^\textup{un}$ and $(G|_{[n]\setminus (A\sqcup N(A))})^\textup{un}$ are both connected: if either had (at least) two connected components, then the other would be empty and $H = G$. We conclude that the two vertices of $H_\textup{comp}$ correspond to $H_1 \colonequals G|_{A\sqcup N(A)}$ and $H_2\colonequals G|_{[n]\setminus (A\sqcup N(A))}$. Note that no edge of $E(G)\setminus E(H)$ can have target in $A$. Furthermore, since $A\subset R$ we have $N(A) \subseteq L$; hence no edge of $E(G)\setminus E(H)$ can have target in $N(A)$. Hence $H_\textup{comp}$ has no edges whose target is $H_1^\textup{comp} \in V(H_\textup{comp})$. In total, we have shown $H_\textup{comp}$ is loopless and acyclic, so Theorem~\ref{thm:tilde-faces} implies $\tilde Q_H\subseteq \tilde Q_G$ is a face. Since $H^\textup{un}$ has two connected components, $\dim(\tilde Q_H) = n-2$ and $\tilde Q_H$ is a facet. \end{proof} \begin{cor}[{\cite[Thm.\ 3.17]{portakal2019}}] \label{cor:portakal2} Let $G$ be an alternating graph and suppose $G^\textup{un}$ is connected. The subgraph $H\subseteq G$ defines a face $\tilde Q_H\subseteq \tilde Q_G$ of codimension $d$ if and only if $H^\textup{un}$ has $d+1$ connected components and $H$ can be written as the intersection $H = H_1\cap \dots \cap H_d$ of $d$ graphs for which $\tilde Q_{H_i}$ is a facet of $\tilde Q_G$. \end{cor} \begin{proof} Suppose $\tilde Q_H\subseteq\tilde Q_G$ is a face of codimension $d$. By Lemma~\ref{lem:dual-trick}, it is the intersection of some $d$ facets $F_1, \dots, F_d$ of $\tilde Q_G$. These facets must contain the origin, so $F_i = \tilde Q_{H_i}$ for some graphs $H_i$; since the vertices of $\tilde Q_H = F_1\cap \dots\cap F_d$ are the common vertices of the $\tilde Q_{H_i}$, we get $H = H_1\cap\dots\cap H_d$. Furthermore, since $\tilde Q_H$ has codimension $d$, Proposition~\ref{prop:root-polytope-dimension-general} asserts $H^\textup{un}$ must have $d+1$ connected components. Now suppose $H^\textup{un}$ has $d+1$ connected components and assume $H = H_1\cap \dots \cap H_d$, where $\tilde Q_{H_i}$ is a facet of $\tilde Q_G$. Observe that the vertices of the polytope $\tilde Q_H$ are precisely the common vertices of the polytopes $\tilde Q_{H_i}$ for $i \in [d]$. It follows that the polytope $\tilde Q_H$ is the intersection of the polytopes $\tilde Q_{H_i}$ for $i \in [d]$. Hence $\tilde Q_H \subseteq \tilde Q_G$ is a face; because $H^\textup{un}$ has $d+1$ connected components, Proposition~\ref{prop:root-polytope-dimension-general} asserts that $\tilde Q_H$ must have codimension $d$. \end{proof} To state Postnikov's result we begin with the following definition: \begin{defn} \label{defn:transitively-closed} A graph $G$ is called \textbf{transitively closed} if whenever $(i,j), (j,k) \in E(G)$ are edges of $G$, then $(i,k) \in E(G)$ is also an edge of $G$. \end{defn} \begin{defn} Let $L, R \subset [n]$ be disjoint subsets of $[n] = V(G)$.
The subgraph $G_{L,R}\subseteq G$ is the (alternating) graph whose edge set is \[ E(G_{L,R}) = \{(i,j) \in E(G)\colon i \in L, j \in R\}. \] We call such graphs \textbf{alternating-induced} subgraphs of $G$. \end{defn} \begin{example} \label{ex:K5-LR} Let $G = K_5$, $L = \{1,3\}$, and $R = \{2,5\}$. Then $G_{L,R}$ is the graph in Figure~\ref{fig:K5-LR}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{K5-LR.pdf} \end{center} \caption{The graph $(K_5)_{\{1,3\},\{2,5\}}$ in Example~\ref{ex:K5-LR}.} \label{fig:K5-LR} \end{figure} \end{example} We will apply the following proposition to obtain corollaries of Theorem~\ref{thm:non-tilde-faces}. Although it is more general than necessary for the purposes of this paper, the proof of the general statement is essentially the same, so we include it here. \begin{prop} \label{prop:trans-closed-non-tilde-necessary} Let $G$ be transitively closed and let $H\subseteq G$ be a path consistent and admissible subgraph. Then \begin{equation} \label{eqn:trans-closed-non-tilde-necessary} H = \bigsqcup_{P_i \in \mathcal P}(G|_{P_i})_{L_i, R_i} \end{equation} for some partition $\mathcal P = \{P_i\}$ of $[n]$ and disjoint subsets $L_i, R_i \subseteq P_i$; that is, $H$ is a disjoint union of alternating-induced subgraphs of induced subgraphs of $G$. \end{prop} \begin{proof} Suppose $H\subseteq G$ is path consistent and admissible. We first claim that $H$ is alternating. Indeed, if there exist vertices $i,j,k\in [n] = V(H)$ with $(i,j), (j,k) \in E(H)$, then $(i,k) \in E(G)$ because $G$ is transitively closed. If $(i,k) \in E(H)$ then $H$ is not path consistent, since $\textsf p_{ik} = ((i,j), (j,k))$ and $\textsf q_{ik} = ((i,k))$ violate Equation~\eqref{eqn:path-consistent}, and if $(i,k) \not \in E(H)$ then $H$ is not admissible, since the edge $e = (i,k) \in E(G)\setminus E(H)$ corresponds to a self-loop $e_\textup{comp}\in E(H_\textup{comp})$ with $\textsf{wd}(e_\textup{comp}) = -2$, violating Equation~\eqref{eqn:admissible}. Now let $H_i$ denote the overlying directed graph of the connected component $H_i^\textup{un}$ of $H^\textup{un}$. If $H_i$ is not an isolated vertex, every vertex $v\in V(H_i)$ is either the source of an edge or the target of an edge in $E(H_i)$. Define the (disjoint) subsets \begin{align*} L_i &\overset{\rm def}=\{v\in V(H_i)\colon v \textup{ is the source of some $e \in E(H_i)$}\},\\ R_i&\overset{\rm def}=\{v\in V(H_i)\colon v \textup{ is the target of some $e \in E(H_i)$}\}. \end{align*} Observe that $H_i \subseteq (G|_{V(H_i)})_{L_i, R_i}$. An edge $e \in E((G|_{V(H_i)})_{L_i, R_i}) \setminus E(H_i)$ corresponds to a self-loop $e_\textup{comp} = (H_i^\textup{un}, H_i^\textup{un}) \in E(H_\textup{comp})$ with $\textsf{wd}(e_\textup{comp}) = -1$, violating Equation~\eqref{eqn:admissible}. It follows that $H_i = (G|_{V(H_i)})_{L_i,R_i}$. \end{proof} We use Proposition~\ref{prop:trans-closed-non-tilde-necessary} to deduce Postnikov's result from Theorem~\ref{thm:non-tilde-faces}. For $G = K_n$, this result appeared in the earlier work of~\cite[Prop.\ 13]{cho1999}. \begin{cor}[{\cite[Prop.\ 13.3]{postnikov2009}}] \label{cor:transitively-closed-facets} Let $G$ be transitively closed and suppose $G^\textup{un}$ is connected. The subgraph $H\subset G$ defines a facet $Q_H\subset\tilde Q_G$ not containing the origin if and only if $H^\textup{un}$ is connected and $H = G_{L,R}$ is alternating-induced by some partition $L \sqcup R = [n]$. \end{cor} \begin{proof} Let $Q_H\subset\tilde Q_G$ be a facet. By Theorem~\ref{thm:non-tilde-faces}, $H\subseteq G$ is path consistent and admissible.
By Proposition~\ref{prop:trans-closed-non-tilde-necessary}, $H$ has the form~\eqref{eqn:trans-closed-non-tilde-necessary}. Since $G^\textup{un}$ is connected, Proposition~\ref{prop:root-polytope-dimension-general} says $\tilde Q_G$ is $(n-1)$-dimensional, so the facet $Q_H\subset \tilde Q_G$ is $(n-2)$-dimensional. Since $H$ is alternating, Proposition~\ref{prop:root-polytope-dimension-general} implies that $H^\textup{un}$ has one connected component. It follows that the partition $\mathcal P$ appearing in~\eqref{eqn:trans-closed-non-tilde-necessary} can only contain one part, i.e.\ $H = G_{L,R}$ for some disjoint $L,R\subset[n]$. Since $H^\textup{un}$ is connected, $H$ contains no isolated vertices, so $L\sqcup R = [n]$. Conversely, suppose $H\subseteq G$ is a subgraph such that $H^\textup{un}$ is connected and $H = G_{L,R}$ for some $L\sqcup R = [n]$. By Proposition~\ref{prop:root-polytope-dimension-general}, $\dim \tilde Q_G = n-1$ and $\dim Q_H = n-2$, so it suffices to show that $Q_H\subset \tilde Q_G$ is a face. Since $H$ is alternating, it is automatically path consistent (as shown in Example~\ref{ex:alternating-is-path-consistent}). Note also that $H_\textup{comp}$ consists of a single vertex with a self-loop corresponding to each edge $e = (i,j) \in E(G)\setminus E(G_{L,R})$, and $\textsf{wd}(e_\textup{comp}) = 0$ when $i,j \in L$ or $i,j \in R$, whereas $\textsf{wd}(e_\textup{comp}) = 1$ when $i \in R$ and $j \in L$. In both cases, Equation~\eqref{eqn:admissible} is satisfied and $H$ is admissible. Since $H\subseteq G$ is path consistent and admissible, Theorem~\ref{thm:non-tilde-faces} implies $Q_H\subset\tilde Q_G$ is a face. \end{proof} The case $G = K_n$ of Theorems~\ref{thm:tilde-faces} and~\ref{thm:non-tilde-faces} is of special interest, and we spell it out here. \begin{cor} \label{cor:tilde-faces-K_n} The subgraph $H\subseteq K_n$ forms a face $\tilde Q_H\subseteq \tilde Q_{K_n}$ if and only if \begin{equation} \label{eqn:tilde-faces-K_n} H = K_{[1,n_1]} \sqcup K_{[n_1+1,n_2]}\sqcup\dots\sqcup K_{[n_\ell+1,n]} \end{equation} is a disjoint union of complete graphs on vertex sets $[n_i + 1, n_{i+1}] \overset{\rm def}=\{n_i+1, n_i+2, \dots, n_{i+1}\}$. \end{cor} \begin{proof} By Theorem~\ref{thm:tilde-faces}, it suffices to characterize subgraphs $H\subseteq K_n$ so that $H_\textup{comp}$ is loopless and acyclic. By Proposition~\ref{prop:loopless-criterion}, $H_\textup{comp}$ is loopless if and only if $H$ is the disjoint union of induced subgraphs $\{(K_n)|_{P_i}\}_{P_i \in \mathcal P}$, which are just complete graphs $\{K_{P_i}\}_{P_i\in\mathcal P}$ on vertex sets $P_i\subseteq[n]$. The acyclicity of $H_\textup{comp}$ implies that if $i < j < k$ and $i \in P_a$, $j \in P_b \neq P_a$, then $k \not \in P_a$. Thus, each $P_a$ consists of consecutive numbers $\{n_i + 1, \dots, n_{i+1}\}$. Conversely, if the partition $\mathcal P = \{P_i\}$ is of the form $P_i = [n_i+1, n_{i+1}]$, it is immediate that $H_\textup{comp}$ is acyclic. \end{proof} \begin{cor} \label{cor:non-tilde-faces-K_n} The subgraph $H\subseteq K_n$ forms a face $Q_H\subset\tilde Q_{K_n}$ if and only if \begin{equation} \label{eqn:non-tilde-faces-K_n} H = (K_{[1,n_1]})_{L_1,R_1} \sqcup (K_{[n_1+1,n_2]})_{L_2,R_2} \sqcup\dots\sqcup (K_{[n_\ell+1,n]})_{L_{\ell+1}, R_{\ell+1}} \end{equation} is a disjoint union of alternating-induced subgraphs of complete graphs on vertex sets $[n_i+1,n_{i+1}]$. \end{cor} \begin{proof} By Theorem~\ref{thm:non-tilde-faces}, it suffices to characterize path consistent, admissible subgraphs $H\subseteq K_n$.
Let $H\subseteq K_n$ be such a graph. Since $K_n$ is transitively closed, Proposition~\ref{prop:trans-closed-non-tilde-necessary} asserts that \[ H = \bigsqcup_{P_i \in \mathcal P} (K_{P_i})_{L_i, R_i} \] is a disjoint union of alternating-induced subgraphs of complete graphs on vertex sets $P_i\in\mathcal P$. To show that $H$ is of the form~\eqref{eqn:non-tilde-faces-K_n}, it suffices to show that if $i,j,k \in [n] = V(H)$ with $i < j < k$, and $(i,k) \in E(H)$, then either $(i,j) \in E(H)$, $(j,k) \in E(H)$, or $j$ is an isolated vertex. (This would imply that the partition $\mathcal P = \{P_i\}$ can be chosen so that $i,k \in P_*$ implies $j \in P_*$, i.e., so that the parts are consecutive blocks of numbers.) With the above goal in mind, consider any triple $i < j < k$ with $(i,k)\in E(H)$, and suppose $j$ is not in the same connected component of $H^\textup{un}$ as $i$ and $k$; we want to show that $j$ is isolated. If there is an edge $(j,\ell) \in E(H)$, then the edges $e_{i\ell} = (i,\ell)$ and $e_{jk} = (j,k)$ of $E(K_n)$ give rise to a directed cycle $\mathcal C = \{(e_{i\ell})_\textup{comp}, (e_{jk})_\textup{comp}\}$ in $H_\textup{comp}$. Since $\textsf{wd}((e_{i\ell})_\textup{comp}) = \textsf{wd}((e_{jk})_\textup{comp}) = -1$, Equation~\eqref{eqn:admissible} is violated and $H$ is not admissible. Similarly, if there is an edge $(\ell,j) \in E(H)$, then the edges $e_{\ell k} = (\ell,k)$ and $e_{ij} = (i,j)$ of $E(K_n)$ give rise to a directed cycle $\mathcal C = \{(e_{\ell k})_\textup{comp}, (e_{ij})_\textup{comp}\}$ in $H_\textup{comp}$. Since $\textsf{wd}((e_{\ell k})_\textup{comp}) = \textsf{wd}((e_{ij})_\textup{comp}) = -1$, Equation~\eqref{eqn:admissible} is violated and $H$ is not admissible. Conversely, if $H$ is of the form~\eqref{eqn:non-tilde-faces-K_n}, then it is alternating and hence path consistent, and the directed graph $H_\textup{comp}$ is nothing more than the complete graph on $V(H_\textup{comp})$; in particular it is acyclic and Equation~\eqref{eqn:admissible} is satisfied, so $H$ is admissible as well. \end{proof} \begin{rem} \label{rem:ggp-comparison} The faces $Q_H\subset\tilde Q_{K_n}$ were studied already in~\cite[Prop.\ 8.1]{ggp1997}. Their result contains a mistake; it states that there is a bijection \[ \rho\colon \{H\colon Q_H\subset\tilde Q_{K_n} \textup{ is a face}\}\longleftrightarrow \{\textup{alternating-induced subgraphs } (K_n)_{L,R}\} \] such that $H\subseteq\rho(H)$. This is false for $n = 4$, as for the graphs $H_1$ and $H_2$ in Figure~\ref{fig:H1-H2}, the condition $H\subseteq\rho(H)$ forces $\rho(H_1) = \rho(H_2) = H_2$. Yet, $Q_{H_1}$ is an edge of the triangular facet $Q_{H_2}$ of $\tilde Q_{K_4}$, and indeed Corollary~\ref{cor:non-tilde-faces-K_n} asserts that \[ H_1 = (K_{\{1,2\}})_{\{1\},\{2\}}\sqcup (K_{\{3,4\}})_{\{3\},\{4\}} \qquad \textup{ and }\qquad H_2 = (K_4)_{\{1,3\},\{2,4\}} \] define distinct faces $Q_{H_1}, Q_{H_2}\subset\tilde Q_{K_4}$.
\begin{figure}[ht] \begin{center} \includegraphics[scale=0.7]{H1-H2.pdf} \end{center} \caption{The graphs $H_1$ and $H_2$ in Remark~\ref{rem:ggp-comparison}.} \label{fig:H1-H2} \end{figure} Compare~\cite[Prop.\ 8.1]{ggp1997} to Corollary~\ref{cor:non-tilde-faces-K_n}, which asserts that the identity map is a bijection \[ \textup{id}\colon \{H\colon Q_H\subset\tilde Q_{K_n}\textup{ is a face}\}\longleftrightarrow \{\textup{disjoint unions of alternating-induced subgraphs $(K_{[n_i+1, n_{i+1}]})_{L_i, R_i}$}\}.\qedhere \] \end{rem} \begin{rem} Corollaries~\ref{cor:tilde-faces-K_n} and~\ref{cor:non-tilde-faces-K_n} give rise to the tantalizing question of explicitly computing the $f$-vector of $\tilde Q_{K_n}$. Specifically, let us highlight that by Proposition~\ref{prop:root-polytope-dimension-general} there are \begin{align*} \#\{\textup{graphs of the form~\eqref{eqn:tilde-faces-K_n} with } &n-d \textup{ connected components}\} \\&+\#\{\textup{graphs of the form~\eqref{eqn:non-tilde-faces-K_n} with $n-d-1$ connected components}\} \end{align*} faces of dimension $d$. The first summand is easily shown to be \[ \#\{\textup{graphs of the form~\eqref{eqn:tilde-faces-K_n} with $n-d$ connected components}\} = \binom {n-1}{n-d-1}, \] as the graph $H$ is uniquely determined by the numbers $1\leq n_1 < \dots < n_{n-d-1} \leq n-1$. We record here that a graph $H$ of the form~\eqref{eqn:non-tilde-faces-K_n} arises from a unique choice of $L_i, R_i$ satisfying the additional condition \begin{equation} \label{eqn:non-tilde-faces-K_n-bijection} \min(L_i \cup R_i) \in L_i \qquad \textup{ and } \qquad \max(L_i\cup R_i) \in R_i, \end{equation} and that conversely any collection of disjoint sets $L_i, R_i$ satisfying condition~\eqref{eqn:non-tilde-faces-K_n-bijection} and $\max(R_i) < \min(L_{i+1})$ uniquely determines the graph $H$, since we may recover \[ E(H) = \{(a,b)\colon a \in L_i, b \in R_i \textup{ for some $i$}\}. \] In other words, we have a bijection \[ \{H \textup{ of the form~\eqref{eqn:non-tilde-faces-K_n}}\} \longleftrightarrow \{\textup{disjoint sets $L_i, R_i \subset[n]$ satisfying~\eqref{eqn:non-tilde-faces-K_n-bijection} and $\max(R_i) < \min(L_{i+1})$}\}. \] The graph $H$ corresponding to the sets $\{L_1, R_1, \dots, L_\ell, R_\ell\}$ under this bijection is such that $H^\textup{un}$ has $\ell$ connected components containing an edge, along with \[ n - \sum_{i=1}^\ell(|L_i| + |R_i|) \] many isolated vertices. \end{rem} \section{Acknowledgements} I am grateful to Karola M\'esz\'aros for her sustained guidance and for many suggestions which improved the quality of this manuscript and its previous drafts. I am also grateful to the anonymous referees for many useful comments. I also thank Florian Frick for helpful discussions about polytopes, as well as Kabir Kapoor, \.Irem Portakal, and especially Seraphina Lee for many stimulating conversations.
\begin{bibdiv} \begin{biblist} \bib{abhps2011}{article}{ author={Ardila, Federico}, author={Beck, Matthias}, author={Ho\c sten, Serkan}, author={Pfeifle, Julian}, author={Seashore, Kim}, title={Root polytopes and growth series of root lattices}, journal={SIAM J.\ Disc.\ Math.}, volume={25}, date={2011}, pages={360--378} } \bib{bollobas1998}{book}{ author={Bollob\'as, B\'ela}, title={Modern graph theory}, series={Graduate Texts in Mathematics}, volume={184}, publisher={Springer, New York}, date={1998} } \bib{cho1999}{article}{ author={Cho, Soojin}, title={Polytopes of roots of type $A_n$}, journal={Bull.\ Austral.\ Math.\ Soc.}, volume={59}, date={1999}, pages={391--402} } \bib{cm2015}{article}{ author={Cellini, Paola}, author={Marietti, Mario}, title={Root polytopes and Borel subalgebras}, journal={Int.\ Math.\ Res.\ Not.}, date={2015}, number={12}, pages={4392--4420} } \bib{em2016}{article}{ author={Escobar, Laura}, author={M\'esz\'aros, Karola}, title={Toric matrix Schubert varieties and their polytopes}, journal={Proc.\ Amer.\ Math.\ Soc.}, volume={144}, date={2016}, number={12}, pages={5081--5096} } \bib{em2018}{article}{ author={Escobar, Laura}, author={M\'esz\'aros, Karola}, title={Subword complexes via triangulations of root polytopes}, journal={Algebr.\ Comb.}, volume={1}, date={2018}, number={3}, pages={395--414} } \bib{ggp1997}{article}{ author={Gelfand, Israel M.}, author={Graev, Mark I.}, author={Postnikov, Alexander}, title={Combinatorics of hypergeometric functions associated with positive roots}, book={ title={The Arnold-Gelfand Mathematical Seminars}, publisher={Birkh\"auser Boston}, address={Boston}, }, pages={205--221}, date={1997} } \bib{gnp2018}{article}{ author={Galashin, Pavel}, author={Nenashev, Gleb}, author={Postnikov, Alexander}, title={Trianguloids and triangulations of root polytopes}, eprint={arXiv:1803.06239}, date={2018} } \bib{grunbaum2003}{book}{ author={Gr\"unbaum, Branko}, title={Convex polytopes}, series={Graduate Texts in Mathematics}, volume={221}, edition={2}, publisher={Springer-Verlag New York}, date={2003} } \bib{hetyei2009}{article}{ author={Hetyei, G\'abor}, title={Delannoy orthants of Legendre polytopes}, journal={Disc.\ Comput.\ Geom.}, volume={42}, date={2009}, pages={705--721} } \bib{hrs2000}{article}{ author={Huber, Birkett}, author={Rambau, J\"org}, author={Santos, Francisco}, title={The Cayley trick, lifting subdivisions and the Bohne-Dress theorem on zonotopal tilings}, journal={J.\ Eur.\ Math.\ Soc.}, volume={2}, date={2000}, pages={179--198} } \bib{meszaros2011}{article}{ author={M\'esz\'aros, Karola}, title={Root polytopes, triangulations, and the subdivision algebra. I}, journal={Trans.\ Amer.\ Math.\ Soc.}, volume={363}, date={2011}, number={8}, pages={4359--4382} } \bib{meszaros2016}{article}{ author={M\'esz\'aros, Karola}, title={Pipe dream complexes and triangulations of root polytopes belong together}, journal={SIAM J.\ Disc.\ Math.}, volume={30}, date={2016}, number={1}, pages={100--111} } \bib{portakal2019}{article}{ author={Portakal, \.Irem}, title={On the classification of rigid toric varieties arising from bipartite graphs}, eprint={arXiv:1905.02445}, date={2019} } \bib{postnikov2009}{article}{ author={Postnikov, Alexander}, title={Permutohedra, associahedra, and beyond}, journal={Int.\ Math.\ Res.\ Not.}, date={2009}, number={6}, pages={1026--1106} } \bib{santos2005}{article}{ author={Santos, Francisco}, title={The Cayley trick and triangulations of products of simplices}, book={ title={Integer points in polyhedra--geometry, number theory, algebra, optimization}, volume={374}, publisher={Amer.\ Math.\ Soc.}, address={Providence, RI}, }, pages={151--177}, date={2005} } \bib{vv2006}{article}{ author={Valencia, Carlos E.}, author={Villarreal, Rafael H.}, title={Explicit representations by halfspaces of the edge cone of a graph}, journal={Int.\ J.\ Contemp.\ Math.\ Sci.}, date={2006}, number={1}, pages={53--66} } \bib{ziegler2007}{book}{ author={Ziegler, G\"unter M.}, title={Lectures on polytopes}, series={Graduate Texts in Mathematics}, volume={152}, edition={7}, publisher={Springer, New York}, date={2007} } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} The role of quantum spin coherence in graphene spin transport is presently poorly understood. It has been neglected in previous theoretical treatments based on semiclassical transport equations~\cite{ferreira2014extrinsic,hy2015extrinsic}. Regarding the magnitude of the spin relaxation time, which determines the degree of quantum coherence, there is substantial disagreement between theory and experiment \cite{Pesin12,Ochoa12,FertMRS14}, as well as between different experiments~\cite{Tombros2007,electron2011wu,zomer2012long,dlubak2012highly}. Nevertheless, it seems that graphene exhibits fairly long spin relaxation times compared to metals, making it a promising material for \emph{passive} spintronics, i.e.~long-distance transport of spin currents~\cite{Tombros2007,electron2011wu,Pesin12,zomer2012long,Ochoa12,dlubak2012highly,stephan2014,FertMRS14}. There is also a solid body of theoretical~\cite{netoguinea09,weeks2011engineering, pachoud2014scattering,fabian2015spin} and experimental \cite{marchenko2012giant,calleja2015natphy,balakrishnan_colossal,balakrishnan2014giant} work indicating that graphene can exhibit strong extrinsic spin-orbit coupling (SOC) induced by proximity to adatom impurities or metallic substrates. This suggests that graphene-based devices can also play an important role in \emph{active} spintronics. Extrinsic SOC induced by proximity to metals and metal clusters has been detected experimentally in graphene, via angle-resolved photoemission~\cite{marchenko2012giant}, scanning tunneling spectroscopy~\cite{calleja2015natphy}, and spin transport~\cite{balakrishnan_colossal,balakrishnan2014giant}. In particular, Balakrishnan \emph{et al}.~\cite{balakrishnan2014giant} have reported observing the spin Hall effect (SHE) in graphene devices sparsely decorated with copper clusters (residues found in graphene grown by chemical vapor deposition); they found that the spin Hall angle (the ratio of spin current to charge current) was $\theta_{\mathrm{sH}} \sim 0.1$, comparable to transition metals~\cite{Kimura2007,sinova2015spin} and 2D transition metal dichalcogenides~\cite{Wang2012,Qian2014}. Although other groups have reproduced the nonlocal resistance measurements in adatom-decorated graphene, they have failed to observe the expected modulation with in-plane magnetic field (Hanle precession)~\cite{neutral2015wang,kaverzin2015electron}. Other recent experiments \cite{oshima2014observation,mendes2015spin,dushenko2016gate} have demonstrated spin-charge conversion in graphene by spin-pumping, and some~\cite{oshima2014observation,dushenko2016gate} reported values of $\theta_{\mathrm{sH}}$ many orders of magnitude below what was obtained in earlier Hall-bar devices \cite{balakrishnan2014giant}. In our view, the confusing experimental situation calls for a more detailed theoretical analysis of how spin currents and charge currents are coupled by extrinsic SOC in graphene. A semi-classical theory of the SHE in graphene with extrinsic SOC has been developed by Ferreira \textit{et al.}~\cite{ferreira2014extrinsic}. That work showed that the spin Hall angle can be enhanced in graphene by resonant skew scattering, and that the enhancement is much stronger than in bulk metals~\cite{Fert_resonant1} due to graphene's 2D Dirac density of states~\cite{wehling2009adsorbates,katsnelson2012graphene}. However, the theory neglected quantum spin coherence, describing the carriers using distinct ``spin-up'' and ``spin-down'' distributions~\cite{ferreira2014extrinsic}.
As shown below, this can be strictly justified only when the extrinsic SOC is purely of the spin-conserving (McClure-Yafet-Kane-Mele) type~\cite{KaneMeleQSHE1995}. Rashba-type SOC, which favors spin polarization parallel to the graphene plane (and appears when reflection symmetry about the plane is broken), gives rise to coherent spin precession and spin relaxation processes that have not been accounted for, which could lead to qualitative deviations from the results of Ref.~\onlinecite{ferreira2014extrinsic}. Indeed, numerous tight-binding and density functional theory studies have found that adatom impurities induce both types of SOC, with comparable strengths \cite{netoguinea09,weeks2011engineering,stabilizing2012hua,fabian2013spin,fabian2015spin}. \begin{figure*}[ht] \includegraphics[width=0.95\textwidth]{triangle6.pdf} \caption{\label{fig:triangle} (a) Relationship between the three basic macroscopic transport quantities (charge current $J$, spin current $\mathcal{J}$, and magnetization $\mathcal{M}$) in systems with time-reversal symmetric microscopic dynamics. The coupling between $J$ and $\mathcal{J}$ is governed by the spin Hall angle $\theta_{\mathrm{sH}}$, and the coupling between $\mathcal{J}$ and $\mathcal{M}$ is governed by the Rashba scattering rate $\alpha_{\mathrm{R}}$; together, these yield the Edelstein effect. The anisotropic spin precession (ASP) scattering rate, $\alpha_{\mathrm{asp}}$, describes a novel direct coupling between $J$ and $\mathcal{M}$. (b) Feynman diagram corresponding to scattering events linear in the impurity density $n_{\text{imp}}$. The scattering amplitude involves the product of $T$-matrix elements, and different terms in the product describe ASP scattering, skew scattering, Rashba scattering, Drude relaxation, and Elliott-Yafet spin relaxation. (c) Schematic of ASP scattering, which arises from the terms in the product given by $\boldsymbol{c}_{3}= i \boldsymbol{B}_{kp} \times \boldsymbol{B}_{kp}^{*}$. For example, the product of $(\boldsymbol{B}_{kp})^z\sigma^z$ and $(\boldsymbol{B}^*_{kp})^x\sigma^x$ can cause a randomly polarized spin to align in the $y$ direction. The former skews while the latter flips the electron spin. This gives rise to spin alignment in the $y$ direction, $\mathcal{M}^y \ne 0$, but no net transverse spin current $\mathcal{J}_y^z$. } \end{figure*} Numerical quantum transport simulations are an alternative approach to studying extrinsic SOC in graphene, and are capable of accounting for quantum spin coherence. So far, however, simulations based on the Kubo formalism \cite{rappoport,stephan2016revisit} have been limited to impurity densities of $\gtrsim 10\%$, significantly higher than in typical experiments~\cite{balakrishnan_colossal,balakrishnan2014giant,kaverzin2015electron} ($\lesssim 1\%$). It is uncertain whether these numerical methods can be extrapolated to the dilute impurity regime, due to the different, density-dependent mechanisms involved in the SHE~\cite{sinova2015spin,mirco2016phase,mirco2016quantum}. This paper presents an analytical theory of spin-coherent transport for extrinsic SOC in graphene, based on the linearized Quantum Boltzmann Equation (QBE). It incorporates coherent spin dynamics into the transport equations by treating both types of extrinsic SOC on an equal footing, and is directly applicable to the experimentally-relevant limit of strongly-scattering but dilute impurities \cite{balakrishnan_colossal,balakrishnan2014giant,kaverzin2015electron}.
From the theory, we uncover a novel extrinsic SOC scattering process, ``anisotropic spin precession'' (ASP) scattering, which involves a combination of skew and spin-flip scattering. ASP scattering directly couples non-equilibrium spin polarization and charge current [cf.~Fig.~\ref{fig:triangle}(a)], and is distinct from other previously-studied scattering processes that couple non-equilibrium spin polarization $\mathcal{M}$, charge current $J$, and spin current $\mathcal{J}$. One of the most striking predictions of the theory is that graphene can exhibit a sizable current-induced spin polarization (CISP). CISP, also known as the inverse spin-galvanic effect \cite{cisp2004kato,sih2005spatial,cisp2014dydra}, refers to the production of non-equilibrium spin polarization (i.e.~magnetization) by passing a charge current through a material. So far, the mechanism that has been identified as the cause of CISP is the Edelstein effect \cite{Edelstein1990233,pikus1991spin,shen2014theory}, in which a charge current $J$ is first converted into a spin current $\mathcal{J}$ via the SHE, and $\mathcal{J}$ is then converted into spin polarization, $\mathcal{M}$, by Rashba SOC~\cite{raimondi2012su2}. However, as we show below, in graphene doped with SOC impurities, CISP arises from both an extrinsic version of the Edelstein effect and ASP scattering, and the latter is dominant when the Rashba SOC induced by the impurities is strong. Apart from CISP, ASP scattering also contributes to the spin current $\mathcal{J}$, and the size of its contribution can be comparable to the standard SHE contribution caused by skew scattering. In particular, the ASP scattering contribution is distinct from side-jump scattering~\cite{mirco2016phase,mirco2016quantum}, which is another mechanism that contributes to the SHE in graphene. ASP scattering also gives important corrections to spin relaxation processes, particularly to the D'yakonov-Perel (DP) relaxation time. The rest of the article is organized as follows. Sec.~\ref{sec:results} summarizes our most important results, including the linear response equation relating charge current, spin current, and non-equilibrium spin polarization to the applied electric field. We also discuss some of the experimental implications from our theory and clarify the spin relaxation mechanisms in graphene with SOC disorder. Sec.~\ref{sec:QBE} provides a brief summary of the QBE, emphasizing the structure of the collision integral. Sec.~\ref{sec:model} explains how the linearized QBE can be solved after introducing a microscopic model for the SOC disorder potential and an \emph{ansatz} for the electron distribution function. Finally, in Sec.~\ref{sec:summary} we close the article with a summary and outlook. Key technical details and the detailed derivation of the QBE from the equation of motion for the density matrix are given in the Appendix. \section{Results} \label{sec:results} In this section, we present the theory's main results, leaving a discussion of its derivation to Sec.~\ref{sec:QBE}. When both types of extrinsic SOC (McClure-Yafet-Kane-Mele and Rashba) are present in adatom-decorated graphene, and quantum spin coherence is accounted for, the linear response of the system becomes qualitatively different from the semi-classical descriptions previously developed in Refs.~\cite{ferreira2014extrinsic,hy2015extrinsic}. Specifically, let us consider an electric field $E_x$ applied along the $\hat{x}$ direction, the graphene plane being the $\hat{x}$-$\hat{y}$ plane.
The response of the system in terms of the longitudinal charge current $J_x$, transverse spin current $\mathcal{J}_y^z$, and magnetization $\mathcal{M}^y$ (in rescaled units; see Sec.~\ref{sec:model}) takes the form: \begin{multline} \label{eq:transport} \begin{pmatrix} J_x \\ \mathcal{J}_y^z \\ \mathcal{M}^y \end{pmatrix} = \begin{pmatrix} 0 & \theta_{\mathrm{sH}} & \tau_\mathrm{D} \alpha_{\mathrm{asp}} \\ -\theta_{\mathrm{sH}} & 0 & \tau_\mathrm{D} \alpha_{\mathrm{R}} \\ \tau_{\mathrm{EY}} \alpha_{\mathrm{asp}} & - \tau_{\mathrm{EY}} \alpha_{\mathrm{R}} & 0 \end{pmatrix} \begin{pmatrix} J_x \\ \mathcal{J}_y^z \\ \mathcal{M}^y \end{pmatrix} \\+ \sigma_{\mathrm{D}} \begin{pmatrix} E_x \\ 0 \\ 0 \end{pmatrix}. \end{multline} \noindent The first term on the right describes the coupling of spin and charge, and the second term describes the out-of-equilibrium drive ($\sigma_\mathrm{D}$ is the Drude conductivity). In the (dimensionless) coupling matrix, $\tau_{\text{D}}$ and $\tau_{\text{EY}}$ are the Drude relaxation time and Elliott-Yafet spin relaxation time (see Sec.~\ref{sec:Spin relaxation mechanisms}); $\alpha_{\mathrm{R}}$ is the scattering rate induced by Rashba SOC; and $\theta_{\text{sH}}$ is the spin Hall angle. In the experimentally relevant dilute-impurity regime \cite{balakrishnan2014giant, neutral2015wang}, the dominant contribution to the SHE arises from skew scattering \cite{ferreira2014extrinsic,sinova2015spin}, and thus $\theta_{\text{sH}}=\tau_{\mathrm{D}}\alpha_{\mathrm{sk}}$, where $\alpha_{\mathrm{sk}}$ is the skew scattering rate. These relationships are depicted in Fig.~\ref{fig:triangle}(a). The matrix elements that directly couple $J_x$ with $\mathcal{M}^y$ are a novel outcome of the theory. They are governed by the ASP scattering rate, $\alpha_{\text{asp}}$. As shown in Fig.~\ref{fig:triangle}(c), ASP scattering arises from quantum interference between skew scattering and spin-flip scattering: in a single scattering event, the electron is skewed and then flipped (or vice versa), and this results in a net spin polarization in the plane. As discussed in Sec.~\ref{sec:QBE}, the existence of ASP scattering is fundamentally due to the presence of a special axis (the out-of-plane direction) in 2D materials, which breaks the full 3D rotational symmetry while preserving rotations about this axis. ASP scattering vanishes in 3D materials possessing time-reversal and 3D rotational symmetries (such as the system studied by Lifshits and Dyakonov in Ref.~\onlinecite{dyakonov_swapcurrent2009}). As shown in Appendix \ref{app:sym}, ASP scattering can also occur in 2D electron gases in quantum wells, and in other spin-orbit coupled electron systems when the full 3D rotational symmetry is broken, such as at interfaces. Note that the coupling between $J$ and $\mathcal{J}$ (the SHE) and between $\mathcal{J}$ and $\mathcal{M}$ are both odd under Onsager reciprocity. In other words, in the matrix in Eq.~\eqref{eq:transport}, the $(1,2)$ and $(2,1)$ elements have opposite signs, and likewise the $(2,3)$ and $(3,2)$ elements have opposite signs. This is consistent with the fact that $J$ and $\mathcal{M}$ are odd, and $\mathcal{J}$ is even, under time-reversal. On the other hand, the new ASP scattering induced couplings between $J$ and $\mathcal{M}$ are even under Onsager reciprocity; correspondingly, the $(1,3)$ and $(3,1)$ matrix elements have the same sign. This has important consequences for the ASP contributions to spin relaxation, as discussed in Section \ref{sec:Spin relaxation mechanisms}.
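Equation~\eqref{eq:transport} can be solved in closed form by matrix inversion; it is instructive to sketch how, keeping only the leading terms in the small dimensionless couplings. Substituting the first two rows into the third gives
\begin{equation*}
\mathcal{M}^y\left[\,1+\tau_{\mathrm{EY}}\tau_{\mathrm{D}}\!\left(\alpha_{\mathrm{R}}^{2}-\alpha_{\mathrm{asp}}^{2}\right)\right] \simeq \tau_{\mathrm{EY}}\left(\alpha_{\mathrm{asp}}+\theta_{\mathrm{sH}}\,\alpha_{\mathrm{R}}\right)\sigma_{\mathrm{D}}E_{x}.
\end{equation*}
The feedback term in the square brackets converts the Elliott-Yafet time $\tau_{\mathrm{EY}}$ into the total spin relaxation time $\tau_s$ introduced below [cf.~Eqs.~\eqref{tauey} and~\eqref{taudp}], and this expression leads directly to the current-induced spin polarization of Eq.~\eqref{eq:edelstein}.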
\subsection{Current-induced spin polarization}\label{sec:cisp} \begin{figure} \includegraphics[width=8cm]{cisp.pdf} \caption{\label{fig:sigmaee} Ratio of magnetization to charge current at zero temperature, plotted versus the chemical potential $\mu$ (measured from the Dirac point). The magnetization is measured in Bohr magnetons. The top panel is plotted for strong Rashba SOC with a bare Rashba potential strength of $30$~meV, while the bottom panel is for weak Rashba SOC with a bare Rashba potential strength of $10$~meV. In both cases, the strengths of the spin-conserving and scalar potentials are set to $10$~meV and $80$~meV, respectively, in line with previous theoretical studies~\cite{ferreira2014extrinsic,balakrishnan2014giant}. See Appendix~\ref{app:micro_model} for details.} \end{figure} To compute the CISP, we expand the last row in Eq.~\eqref{eq:transport} to first order in $\alpha_{\mathrm{R}}\theta_{\mathrm{sH}}$ and $\alpha_{\mathrm{asp}}$. This gives $\mathcal{M}^{y}=\sigma_{\mathrm{cisp}}E_{x}$, where \begin{equation}\label{eq:edelstein} \sigma_{\mathrm{cisp}}=\sigma_{\mathrm{D}} \left(\theta_{\mathrm{sH}}\alpha_{\mathrm{R}} + \alpha_{\mathrm{asp}} \right) \tau_{s}. \end{equation} Here $\tau_{s}$ is the total spin relaxation time, with $\tau_{s}^{-1}= \tau_{\text{EY}}^{-1}+\tau_{\text{DP}}^{-1}$ (see Section~\ref{sec:Spin relaxation mechanisms}). Eq.~(\ref{eq:edelstein}) shows that there are two distinct mechanisms contributing to the CISP: (i) the \emph{extrinsic} Edelstein effect, which is a two-step process associated with $\theta_{\mathrm{sH}}\alpha_{\mathrm{R}}$, and (ii) ASP scattering, which is associated with $\alpha_{\mathrm{asp}}$. The first term is formally identical to the Edelstein effect found in the 2D electron gas, \cite{shen2014theory} with one important difference: the effect here is of extrinsic origin. Unlike the \emph{intrinsic} Edelstein effect \cite{Edelstein1990233,pikus1991spin,shen2014theory}, which arises from a spatially uniform Rashba SOC, the \emph{extrinsic} Edelstein effect arises from impurity scattering. Its strength is determined by the Rashba scattering rate $\alpha_{\mathrm{R}}$, which depends on the chemical potential $\mu$. The second term in Eq.~\eqref{eq:edelstein} describes the enhancement of the magnetization by ASP scattering. Normally, since ASP scattering is present even in the first Born approximation, we would expect it to dominate over the Edelstein effect, which appears only in the third Born approximation. However, in the model calculations below, all the scattering rates are computed to all orders in the impurity potential strength. This results in the two contributions being comparable in magnitude for most $\mu$. In Fig.~\ref{fig:sigmaee}, we plot the dimensionless ratio $\sigma_{\text{cisp}}/\sigma_{\mathrm{D}}$, which serves as a figure of merit for CISP, against the chemical potential $\mu$. It exhibits a peak for positive $\mu$ due to resonant spin-coherent scattering ($\sim 100$\,meV based on our choice of SOC impurity potential; see Sec.~\ref{sec:model}). ASP scattering gives the dominant contribution to CISP when the impurity Rashba SOC is large. Both the extrinsic Edelstein and ASP scattering contributions to CISP are proportional to the total spin relaxation time.
Due to the specific features of graphene, this implies that the net magnetization, normalized by the charge current, can be enhanced both by the resonant enhancement of the SHE~\cite{ferreira2014extrinsic} and by the long spin relaxation times characteristic of graphene~\cite{zomer2012long,dlubak2012highly}. \subsection{Current-induced spin current} \begin{figure} \includegraphics[width=8cm]{cisc.pdf} \caption{\label{fig:cisc} Ratio of spin current to charge current at zero temperature, plotted versus the chemical potential $\mu$ (measured from the Dirac point). In the close vicinity of the Dirac point, the contribution of ASP scattering grows to values much larger than one. This is because $\tau_{\mathrm{EY}}$ diverges much faster than $\alpha_{\mathrm{asp}}$ vanishes, an artefact of the theory. The parameters used here are the same as in Fig.~\ref{fig:sigmaee}.} \end{figure} In addition to the magnetization, we can compute the spin current from the second row of Eq.~\eqref{eq:transport}. To first order in $\theta_{\mathrm{sH}}$ and $\alpha_{\mathrm{asp}}\alpha_{\mathrm{R}}$, we find that $\mathcal{J}^{z}_{y}=\sigma_{\mathrm{cisc}}E_{x}$, where \begin{equation}\label{eq:SHE} \sigma_{\mathrm{cisc}}= \sigma_{\mathrm{D}}\Big(-\theta_{\mathrm{sH}}+ \left( \alpha_{\mathrm{R}}\tau_{\mathrm{D}} \right) \left( \alpha_{\mathrm{asp}}\tau_{\mathrm{EY}} \right)\Big). \end{equation} The first term in Eq.~\eqref{eq:SHE} is the conventional spin Hall conductivity arising from skew scattering. The second term arises from a combination of ASP scattering and Rashba scattering. If Rashba SOC is absent, the ``skewness ratio'' $\sigma_{\mathrm{cisc}}/ \sigma_{\mathrm{D}}$ reduces to $\theta_{\mathrm{sH}}$, conventionally known as the spin Hall angle. Note that both terms are independent of the impurity density, unlike the quantum side-jump contribution to the spin Hall current \cite{mirco2016phase,mirco2016quantum}. (Side-jump is not included in the present theory.) Fig.~\ref{fig:cisc} shows $\sigma_{\mathrm{cisc}}/ \sigma_{\mathrm{D}}$ versus $\mu$. For strong impurity-induced Rashba SOC, the skewness ratio is enhanced for $\mu$ near the scattering resonance ($\sim 100$\,meV; see Sec.~\ref{sec:model}), with the skew scattering and ASP/Rashba contributions having the same sign. However, for weak impurity-induced Rashba, the two contributions can have opposite signs, diminishing the total spin current response. This is consistent with a previous semiclassical (non-spin-coherent) calculation which found a similar reduction under weak Rashba SOC disorder \cite{ferreira2014extrinsic}. These plots also indicate that at small $\mu$, there is a sharp increase in the skewness ratio coming from the ASP/Rashba scattering contribution. Specifically, as $\mu$ tends to zero, the Elliott-Yafet spin relaxation time $\tau_{\mathrm{EY}}$ diverges faster than $\alpha_{\mathrm{asp}}$ vanishes. However, at very small values of $\mu$ ($\sim 10$\,meV), our theory becomes unreliable due to multiple impurity scattering and interband coherence effects (see Sec.~\ref{sec:model}). It is interesting to note that the Edelstein contribution to CISP (blue dotted line in Fig.~\ref{fig:sigmaee}) tracks the SHE contribution to CISC (blue dash-dotted line in Fig.~\ref{fig:cisc}). This is because the Edelstein effect is a two-step conversion process, see Fig.~\ref{fig:triangle}(a).
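The structure of Eq.~\eqref{eq:SHE} can be read off directly from Eq.~\eqref{eq:transport}: the second row gives $\mathcal{J}^{z}_{y}=-\theta_{\mathrm{sH}}J_{x}+\tau_{\mathrm{D}}\alpha_{\mathrm{R}}\mathcal{M}^{y}$, and inserting the leading-order results $J_{x}\simeq\sigma_{\mathrm{D}}E_{x}$ and $\mathcal{M}^{y}\simeq\tau_{\mathrm{EY}}\alpha_{\mathrm{asp}}\sigma_{\mathrm{D}}E_{x}$ (the $\theta_{\mathrm{sH}}\alpha_{\mathrm{R}}$ piece of $\mathcal{M}^{y}$ only renormalizes $\theta_{\mathrm{sH}}$ at higher order) yields
\begin{equation*}
\mathcal{J}^{z}_{y}\simeq\sigma_{\mathrm{D}}\left[-\theta_{\mathrm{sH}}+\left(\alpha_{\mathrm{R}}\tau_{\mathrm{D}}\right)\left(\alpha_{\mathrm{asp}}\tau_{\mathrm{EY}}\right)\right]E_{x}.
\end{equation*}
This makes explicit that the second term of Eq.~\eqref{eq:SHE} is itself a two-step process, in which ASP scattering first generates an in-plane spin polarization that Rashba scattering then converts into a transverse spin current [cf.~Fig.~\ref{fig:triangle}(a)].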
The new ASP scattering contribution to the spin current calls for a revision of the existing SHE theory \cite{abanin2009nonlocal} which has been customarily employed to fit the nonlocal resistance data in Hall bar spin-transport experiments~\cite{balakrishnan2014giant,balakrishnan_colossal,neutral2015wang,kaverzin2015electron}. This is especially true when the impurity-induced Rashba is large, as may be the case in hydrogenated graphene \cite{balakrishnan2014giant,kaverzin2015electron}; such an analysis will be presented elsewhere~\cite{in-prep}. The present theory may also have important implications for understanding recent spin pumping experiments in CVD graphene involving the inverse of CISP \cite{mendes2015spin} and the inverse of current-induced spin current~\cite{oshima2014observation,dushenko2016gate}. \subsection{Spin relaxation} \label{sec:Spin relaxation mechanisms} \begin{figure*} \includegraphics[width=\textwidth]{spinrelax.pdf} \caption{\label{fig:eydp} (a)--(b) Spin relaxation rates of the Elliott-Yafet type, $\tau_{\mathrm{EY}}^{-1}$, and Dyakonov-Perel type, $\tau_{\mathrm{DP}}^{-1}$, for different Rashba SOC strengths. Note that the DP spin relaxation rate becomes negative for a window of chemical potential where $\alpha_{\mathrm{R}}^2 <\alpha_{\mathrm{asp}}^2$. Both scattering rates show resonant enhancements in the vicinity of the Dirac point (i.e.~$\mu=0$). At large $\mu$, the EY scattering rate dominates. (c) Relaxation time versus $\mu$ for weak Rashba SOC. In these calculations, we assume identical impurities and zero temperature; for a random distribution of impurity strengths and/or finite temperatures, these resonant features will be further smoothed out. The parameters are the same as in Fig.~\ref{fig:sigmaee}.} \end{figure*} The theory derived in Sec.~\ref{sec:QBE} also allows us to obtain the spin relaxation rate arising from SOC disorder, $\tau^{-1}_{s}$. In the stationary state, this relaxation rate is obtained by solving Eq.~\eqref{eq:transport} for the spin polarization $\mathcal{M}^y$, as described in Section~\ref{sec:cisp}. We find that $\tau_s^{-1}$ receives two contributions, which add according to Matthiessen's rule: \begin{equation} \tau^{-1}_s = \tau^{-1}_{\mathrm{EY}} + \tau^{-1}_{\mathrm{DP}}. \end{equation} The two contributions can be identified as Elliott-Yafet (EY) relaxation \cite{Ochoa12} and D'yakonov-Perel (DP) relaxation~\cite{vzutic2004spintronics}. The rates are found to be \begin{align} \frac{1}{\tau_{\text{EY}} } =& \frac{C}{\tau_{\mathrm{D}} }, \label{tauey} \\ \frac{1}{\tau_{\text{DP}}} =& \tau_{\mathrm{D}}(\alpha_{\mathrm{R}}^2-\alpha_{\mathrm{asp}}^2), \label{taudp} \end{align} \noindent where $\tau_{\mathrm{D}}$ is the Drude relaxation (elastic scattering) time and $C > 0$ depends on the microscopic model (see Sec.~\ref{sec:model}). Since our theory assumes that the SOC disorder stems from localized impurities (e.g.~adatoms \cite{balakrishnan_colossal,balakrishnan2014giant, kaverzin2015electron}), the EY relaxation is caused by spin-flip scattering events. This form of EY spin relaxation is akin to the one considered by Lifshits and Dyakonov in Ref.~\onlinecite{dyakonov_swapcurrent2009}, but different from other models (e.g.~Ref.~\onlinecite{Ochoa12}) where the Rashba SOC is assumed to be uniform. The DP relaxation time, $\tau_{\mathrm{DP}}$, is related to spin precession and was previously understood to arise from Rashba SOC scattering.
We argue that this understanding is incomplete: $\tau_{\mathrm{DP}}$ also receives a contribution from ASP scattering. It can be seen from Eq.~\eqref{taudp} that, unlike the EY relaxation time $\tau_{\text{EY}}$ which is strictly positive, the sign of the DP relaxation time $\tau_{\text{DP}}$ depends on the competition between $\alpha_{\mathrm{R}}$ and $\alpha_{\mathrm{asp}}$. As noted above, ASP scattering is an Onsager even process. As a result, its contribution to $\tau_{\text{DP}}^{-1}$ is negative. When $\alpha_{\mathrm{R}}^2 > \alpha_{\mathrm{asp}}^2$, $\tau_{\text{DP}}$ describes a spin relaxation process, whereas when $\alpha_{\mathrm{R}}^2 < \alpha_{\mathrm{asp}}^2$, it describes spin amplification (the total spin relaxation time, however, remains strictly positive, cf.~Fig.~\ref{fig:eydp}). This shows that Rashba SOC can either randomize or align the spin polarization, depending on the microscopic details of the system. We also find that the total spin relaxation time is minimal near zero doping, which agrees well with the trend observed in experiments~\cite{spinrelaxation2011han,linear2009jozsa} and suggests that resonant scattering with SOC disorder is an important source of spin relaxation at low temperatures. In Fig.~\ref{fig:eydp}, the EY and DP spin relaxation rates are plotted against the chemical potential $\mu$, for strong and weak Rashba SOC. We find that EY relaxation is the dominant mechanism for spin relaxation at large $\mu$, but DP spin relaxation becomes important at small $\mu$ since it is inversely proportional to the Drude relaxation time. Fig.~\ref{fig:eydp}(c) shows that the total spin relaxation time $\tau_{s}$ approaches a minimum as the Drude relaxation (elastic scattering) time $\tau_{\mathrm{D}}$ peaks near $\mu = 0$ (the Dirac point). This agrees with the experimental observations of spin relaxation in exfoliated graphene~\cite{spinrelaxation2011han} and CVD graphene~\cite{kamalakar2015long}, which indicate that both EY and DP spin relaxation mechanisms are present in graphene. Finally, we note that Eqs.~\eqref{eq:transport}--\eqref{taudp} follow from the general form of the QBE within the linear response regime, and are independent of the underlying microscopic scattering model. The parameters $\{\tau_{\mathrm{D}}, \tau_{\mathrm{EY}}, \alpha_{\mathrm{sk}}, \alpha_{\mathrm{R}}, \alpha_{\mathrm{asp}}\}$ entering into these equations all depend on the chemical potential, the impurity density, and the SOC impurity potential. Their actual values must be derived from a microscopic scattering model, which in turn is fitted to \textit{ab initio} calculations and/or experimental measurements. Details of this derivation are given in Sec.~\ref{sec:model} and the Appendix. \section{Quantum Boltzmann Equation in the strong and dilute disorder regime} \label{sec:QBE} In this section, we discuss the quantum transport equation that leads to the linear response matrix equation \eqref{eq:transport}. To capture the coherent quantum dynamics of electron spins in disordered graphene, we use the method of Kohn and Luttinger~\cite{KohnLuttingerBTE} to derive a quantum kinetic equation. The collision integral we derive is first order in the impurity concentration $n_{\mathrm{imp}}$, but exact to all orders in the strength of the single-impurity potential. Details of this formalism are given in Appendix~\ref{sec:app_QBE}. Unlike the original Kohn-Luttinger treatment, we keep track of the quantum spin coherence by using a $2 \times 2$ density matrix distribution $n_k(t)$.
The deviation from the equilibrium distribution is given by $\delta n_{k}(t) = n_k(t) - n^0_k$, where $n^0_k = f_{\mathrm{FD}}(\epsilon_k)\: \mathbb{1}$ is the equilibrium distribution, $\mathbb{1}$ is the $2\times 2$ unit matrix, $\epsilon_k = \hbar v_F k$ ($\epsilon_k = - \hbar v_F k$) is the dispersion relation of electrons (holes) in graphene, and $f_{\mathrm{FD}}(\epsilon) = (e^{(\epsilon-\mu)/k_B T}+1)^{-1}$ is the Fermi-Dirac distribution at absolute temperature $T$ and chemical potential $\mu$. This results in a linearized quantum Boltzmann equation (QBE) which describes how $\delta n_{k}$ reacts to applied electric and magnetic fields, $\boldsymbol{E}(t)$ and $\boldsymbol{\mathcal{H}}(t)$: \begin{align} \label{eq:QBE} \partial_t \delta n_k + \frac{i}{\hbar} \gamma \left[ \delta n_k , \boldsymbol{s}\cdot \boldsymbol{\mathcal{H}}(t)\right] + e \boldsymbol{E}(t)\cdot \frac{\boldsymbol{\nabla}_{k} n^0_k}{\hbar} = \mathcal{I}\left[ \delta n_k \right]. \end{align} \noindent Here, $\boldsymbol{s} = \frac{\hbar}{2} \boldsymbol{\sigma}$ is the electron spin operator ($\boldsymbol{\sigma} = (\sigma^x, \sigma^y, \sigma^z)$ are the Pauli matrices) and $\gamma$ is the gyromagnetic ratio. In deriving Eq.~\eqref{eq:QBE}, we have assumed that the external (electric and magnetic) fields vary slowly in time compared to the time scale $\hbar/\mu$. To leading order in $n_{\mathrm{imp}}$, the collision integral is \begin{align} \label{eq:coll-int} \mathcal{I}[\delta n_{k}] = \frac{ i}{\hbar} [\delta n_{k} ,\Sigma^R_{k} ] +\frac{2\pi n_{\mathrm{imp}}}{\hbar} \sum_{\boldsymbol{p}} \delta(\epsilon_{k}-\epsilon_{p}) \nonumber \\ \times \left( \mathcal{T}^{+}_{kp} \delta n_{p} \mathcal{T}^{-}_{pk}- \frac{\mathcal{T}^{+}_{kp}\mathcal{T}^{-}_{pk}\delta n_{k} + \delta n_{k} \mathcal{T}^{+}_{kp} \mathcal{T}^{-}_{pk} }{2} \right). \end{align} \noindent Here $\mathcal{T}^{+}_{kp}\equiv\langle k | \mathcal{T}(\epsilon_k + i0^+)| p \rangle$ ($\mathcal{T}_{pk}^{-}$) is the retarded (advanced) on-shell $T$-matrix of a single impurity located at the origin. For graphene, $|k\rangle$ and $ |p\rangle$ are Bloch states of pristine graphene corresponding to the conduction band, for electron doping (valence band, for hole doping). We stress that $\delta n_{k}$ and $\mathcal{T}_{kp}^{\pm}$ are matrices in spin space, so the order of factors in Eq.~\eqref{eq:coll-int} is important. The first term behaves as an effective (impurity-generated and momentum-dependent) magnetic field which respects time-reversal invariance. This ``magnetic field'' is precisely the Hermitian part of the self-energy correction that can be derived in a diagrammatic calculation: \begin{equation}\label{eq:self-energy} \Sigma^R_{k}= \frac{ n_{\mathrm{imp}}}{2}(\mathcal{T}_{kk}^{+}+\mathcal{T}_{kk}^{-}). \end{equation} For impurity potentials that do not act upon the spin degree of freedom, this term has no dynamical consequences and can be absorbed by redefining the chemical potential. However, this is not the case for our problem because the $T$-matrix acts upon the spin degree of freedom. Note that a uniform Rashba coupling that arises from encapsulation \cite{gumar2016spin} or an out-of-plane applied electric field~\cite{zibo2016numerical} will add an energy-independent potential to Eq.~\eqref{eq:self-energy}. Finally, the last two terms in Eq.~\eqref{eq:coll-int} are the quantum analogues of the ``scattered-in'' and ``scattered-out'' terms in the semiclassical Boltzmann equation.
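A useful consistency check on Eq.~\eqref{eq:coll-int} is the limit of purely spin-conserving SOC disorder. If the impurity potential commutes with $s^z$, then so do $\mathcal{T}^{\pm}_{kp}$ and $\Sigma^R_{k}$; writing $\mathcal{T}^{+}_{kp} = \mathrm{diag}(\mathcal{T}^{\uparrow}_{kp},\mathcal{T}^{\downarrow}_{kp})$, the commutator term drops out and the collision integral closes on the diagonal entries $\delta n^{s}_{k}$ ($s=\uparrow,\downarrow$):
\begin{equation*}
\mathcal{I}[\delta n^{s}_{k}] = \frac{2\pi n_{\mathrm{imp}}}{\hbar} \sum_{\boldsymbol{p}} \delta(\epsilon_{k}-\epsilon_{p})\, \big|\mathcal{T}^{s}_{kp}\big|^{2} \left( \delta n^{s}_{p} - \delta n^{s}_{k} \right).
\end{equation*}
This is a pair of decoupled semiclassical Boltzmann equations with spin-dependent (and, in general, skew) scattering probabilities $|\mathcal{T}^{s}_{kp}|^{2}$, which recovers the two-fluid description of Ref.~\onlinecite{ferreira2014extrinsic} and makes precise the statement in the Introduction that neglecting spin coherence is strictly justified only for spin-conserving SOC.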
In the first Born approximation, where $\mathcal{T}^{\pm}_{kp} \to V_{kp}$ ($V$ being the single-impurity potential), the collision integral $\mathcal{I}[\delta n_k]$ reduces to the more familiar form found in Refs.~\onlinecite{Glazov20102157,tarasenko2006scattering,pikus1991spin}. Next, note that the two $T$-matrices are related by Hermitian conjugation: \begin{equation} \mathcal{T}_{pk}^{-} = \left[ \mathcal{T}^{+}_{kp}\right]^{\dag}. \end{equation} Upon using the following parametrization: $\delta n_{k}=\rho_{k} \: \mathbb{1} + \boldsymbol{m}_{k} \cdot \boldsymbol{\sigma}$, where $\rho_{k}$ and $\boldsymbol{m}_{k}$ represent the charge and spin distribution functions, respectively, and $\mathcal{T}^{+}_{kp}=A_{kp} \: \mathbb{1} + \boldsymbol{B}_{k p} \cdot \boldsymbol{\sigma}$, we obtain: \begin{align} \label{eq:QBE_charge} &\partial_t \rho_k + e\boldsymbol{E} \cdot \frac{\boldsymbol{\nabla}_{k} n^{0}_{k}}{\hbar} = \mathcal{I}_{1}[ \rho_{k}, \boldsymbol{m}_{k}], \\ \label{eq:QBE_spin} &\partial_t \boldsymbol{m}_k + \left(\frac{\gamma}{\hbar} \boldsymbol{\mathcal{H}} - \frac{ n_{\mathrm{imp}}}{\hbar}\mbox{Re}\: \boldsymbol{B}_{kk}\right) \times \boldsymbol{m}_{k} =\boldsymbol{\mathcal{I}}_{2}[\rho_{k},\boldsymbol{m}_{k}]. \end{align} Note that the term involving $\boldsymbol{B}_{kk}$ in Eq.~\eqref{eq:QBE_spin} is related to the self-energy correction $\Sigma_{k}^{R}$; this term is moved to the left-hand side of the equation to emphasize the resemblance between the impurity SOC-generated magnetic field $\boldsymbol{B}_{kk}$ and the real magnetic field $\boldsymbol{\mathcal{H}}$. The collision terms on the right of Eqs.~\eqref{eq:QBE_charge} and \eqref{eq:QBE_spin} describe how charge and spin are scattered by (time-reversal invariant) impurities; they are given by the following: \begin{align} \mathcal{I}_{1}[\rho_{k},\boldsymbol{m}_{k}] = \frac{n_{\mathrm{imp}}}{2\pi\hbar}\int\! & d^2 p \, \bigg[ c_1 (\rho_{p}-\rho_{k}) + \boldsymbol{c}_2 \cdot(\boldsymbol{m}_{p}-\boldsymbol{m}_{k}) \nonumber \\ & - \boldsymbol{c}_3 \cdot(\boldsymbol{m}_{p}+\boldsymbol{m}_{k}) \,\bigg]\delta(\epsilon_p -\epsilon_k), \label{eq:I1} \\ \boldsymbol{\mathcal{I}}_{2}[\rho_{k},\boldsymbol{m}_{k}] = \frac{n_{\mathrm{imp}}}{2\pi\hbar} \int\! & d^2 p \, \bigg[ c_1 (\boldsymbol{m}_{p}-\boldsymbol{m}_{k}) + \boldsymbol{c}_2 (\rho_{p}-\rho_{k}) \nonumber \\ & +\boldsymbol{c}_3 \,(\rho_{p}-\rho_{k}) + \boldsymbol{\mathcal{K}}\, \bigg]\delta(\epsilon_p -\epsilon_k), \label{eq:I2} \end{align} where the real-valued $c_1$, $\boldsymbol{c}_2$, $\boldsymbol{c}_3$ and $\boldsymbol{\mathcal{K}}$ are given by: \begin{align} c_1 =& |A_{kp}|^{2}+|\boldsymbol{B}_{kp}|^{2} \; ; \; \boldsymbol{c}_2 = 2\text{Re}\: (A_{kp}\, \boldsymbol{B}^{*}_{kp}), \\ \boldsymbol{c}_3 =& i\,\boldsymbol{B}_{kp}\times\boldsymbol{B}^{*}_{kp}, \\ \boldsymbol{\mathcal{K}} =& 2 \mathrm{Im}(A_{kp}\boldsymbol{B}^{*}_{kp}) \times \boldsymbol{m}_{p} \nonumber +2\boldsymbol{B}^{*}_{kp}\times(\boldsymbol{B}_{kp}\times\boldsymbol{m}_{p}) \\ &+ 2i\,\text{Im}\,[ (\boldsymbol{B}_{kp} \cdot\boldsymbol{m}_{p})\boldsymbol{B}^{*}_{kp} ]. \label{BigKappa} \end{align} The various terms in the collision integrals in Eqs.~\eqref{eq:I1} and \eqref{eq:I2} correspond to second-order scattering processes with specific physical interpretations [see Fig.~\ref{fig:triangle}(b)]. The $c_1$ terms describe conventional elastic scattering, and give rise to the Drude relaxation time.
The $\boldsymbol{c}_2$ terms give the skew scattering rate $\alpha_{\mathrm{sk}}$, which couples the charge current to the spin current and is thus responsible for the extrinsic SHE~\cite{Dyakonov_Perel_2}. The terms in $\boldsymbol{\mathcal{K}}$ contribute to the scattering rate $\alpha_{\mathrm{R}}$ induced by Rashba SOC; the physical interpretation of these terms depends on the symmetry of the $T$-matrix and the dimensionality of the system. For a 3D electron gas with parity, rotational and time-reversal symmetry, the first (second) term in $\boldsymbol{\mathcal{K}}$ corresponds to the swapping spin current (EY spin relaxation) while the last term vanishes~\cite{dyakonov_swapcurrent2009}. On the other hand, in a 2D non-relativistic electron gas with the same symmetry, $\boldsymbol{\mathcal{K}}$ gives rise to the extrinsic Edelstein effect, in addition to the swapping spin current and EY spin relaxation (see App.~\ref{app:sym}). The $\boldsymbol{c}_3$ terms correspond to the ASP scattering mechanism [Fig.~\ref{fig:triangle}(c)]. For example, the terms involving $c_{3}^{y}$ contain the factors $(B_{kp})^x \sigma^x$ and $(B_{kp}^{*})^z \sigma^z$, which flip and skew the electron spin, respectively. Their product is proportional to $\sigma^y$, which polarizes the electron spin in the $y$ direction. Note that the scattering process $\boldsymbol{c}_{2}$ couples to $\boldsymbol{m}_{p}-\boldsymbol{m}_{k}$ in Eq.~\eqref{eq:I1} and cannot lead to a uniform magnetization. \section{Microscopic model and ansatz} \label{sec:model} In order to solve the transport equations \eqref{eq:QBE_charge}--\eqref{BigKappa}, we require (i) a microscopic description \cite{hy2015extrinsic} of the scattering process that gives the single-impurity $T$-matrix $\mathcal{T}_{kp}^{\pm}$, and (ii) an \emph{ansatz} for the distribution function \cite{chunli2016graphene}. As mentioned above, the $T$-matrix is parametrized by $\mathcal{T}^{+}_{kp}=A_{kp} \: \mathbb{1} + \boldsymbol{B}_{k p} \cdot \boldsymbol{\sigma}$. We can calculate $A_{kp}$ and $\boldsymbol{B}_{kp}$ for a microscopic model of 2D Dirac states scattering off an isolated rotationally and time-reversal symmetric impurity. (Inter-valley scattering is neglected, because we are ultimately interested in impurities with characteristic size much larger than the inter-carbon distance in graphene.) As shown in Appendix~\ref{app:micro_model}, the result is \begin{align} \label{eq:t-matrix-A} A_{kp}&= \gamma_{0}\cos\theta,\\ \label{eq:t-matrix-B} \boldsymbol{B}_{kp}&= \Big[\gamma_{R} \sin\phi, \; -\gamma_{R} \cos\phi, \; i \gamma_{I} \sin\theta\Big], \end{align} \noindent where $\theta \equiv (\theta_{k}-\theta_{p})/2$ and $\phi \equiv (\theta_{k}+\theta_{p})/2$, with $\theta_k \equiv \tan^{-1}(k_y/k_x)$ being the azimuthal angle of the vector $\boldsymbol{k}$. Thus, the results below will be expressed in terms of the (complex) renormalized potential strengths $\{\gamma_{0}, \gamma_I, \gamma_R\}$, which vary with the energy of the scattering electron (i.e.~the chemical potential at zero temperature), and can exhibit resonances at certain energies. They are calculated from the bare impurity potentials, which serve as the inputs to the theory. The bare impurity potentials can be fitted to \emph{ab initio} calculations and/or experiments (see Appendix~\ref{app:micro_model}).
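A short calculation makes the microscopic origin of ASP scattering explicit in this parametrization. Substituting Eq.~\eqref{eq:t-matrix-B} into the definition $\boldsymbol{c}_3 = i\,\boldsymbol{B}_{kp}\times\boldsymbol{B}^{*}_{kp}$ of Sec.~\ref{sec:QBE} gives
\begin{equation*}
\boldsymbol{c}_3 = -2\,\mathrm{Re}\left(\gamma_{I}\gamma_{R}^{*}\right)\sin\theta\,\left(\cos\phi,\,\sin\phi,\,0\right),
\end{equation*}
an in-plane vector that is nonzero only when \emph{both} spin-conserving ($\gamma_{I}$) and Rashba ($\gamma_{R}$) SOC are present. This interference between the two SOC channels is the origin of the ASP scattering rate $\alpha_{\mathrm{asp}}\propto\mathrm{Re}\,(\gamma_{I}\gamma_{R}^{*})$ appearing in Eq.~\eqref{eq:alpha_cf} below.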
In order to obtain the steady state solution of the QBE, we introduce a drift velocity \emph{ansatz} for the distribution function (which generalizes the one used in Ref.~\onlinecite{chunli2016graphene}): \begin{multline} \label{eq:ansatz} n^0_k +\delta n_{k} \\= f_{\mathrm{FD}}\left[\epsilon_k - \hbar \boldsymbol{k} \cdot \boldsymbol{v}_{c} - ((\hbar\boldsymbol{k} \cdot \boldsymbol{v}_{s})\boldsymbol{\hat{n}}_{1} + h_0 \boldsymbol{\hat{n}}_{0}) \cdot \boldsymbol{\sigma}\right]. \end{multline} Here, $n^0_k = f_{\mathrm{FD}}(\epsilon_k)$ is the equilibrium Fermi-Dirac distribution function, $\boldsymbol{v}_{c}$ ($\boldsymbol{v}_{s}$) is the drift velocity of the charge (spin) degrees of freedom, $h_0$ is proportional to the magnitude of the magnetization, and $\boldsymbol{\hat{n}_0}$ and $\boldsymbol{\hat{n}_1}$ are the directions of the magnetization and the spin current polarization, respectively. The quantities of interest are the magnetization (i.e.~non-equilibrium spin polarization), $\boldsymbol{M} = (M^x, M^y, M^z)$, the charge current density, $\boldsymbol{J} = (J_x, J_y)$, and the spin current density $\boldsymbol{\mathcal{J}}^{a} = (\mathcal{J}_x^a, \mathcal{J}_y^a)$ (where $a=x,y,z$ is the spin orientation). At zero temperature, they are related to the \textit{ansatz} by \begin{align} M^a =&\frac{\hbar g_{s}g_{v}}{\Omega}\sum_{k} (m_{k})^a = \hbar g_{s}g_{v}N(\mu) h_{0} \: (\hat{n}_{0})^a,\\ J_i =& \frac{eg_{v} g_{s}}{\Omega}\sum_{k} \rho_{k} (v_{k})_{i} =eg_{s}g_{v}N(\mu)\epsilon_{F}\frac{(v_{c})_i}{2},\\ \mathcal{J}^{a}_i =&\frac{eg_{s}g_{v}}{\Omega} \sum_{k} (m_{k})^a (v_{k})_i =eg_{s}g_{v}N(\mu)\epsilon_{F}\frac{(v_{s})_{i} (\hat{n}_{1})^a }{2}, \end{align} where $g_{s}=g_{v}=2$ are the spin and valley degeneracies, $\boldsymbol{v}_{k} = v_F\, (\boldsymbol{k}/k)$ is the group velocity, and $N(\mu)=\mu/(2\pi \hbar^2 v_{F}^2)$ is the density of states at the Fermi energy. We follow the convention where spin current and charge current are measured in the same units. For the sake of notational simplicity, we have rescaled \begin{equation} \boldsymbol{\mathcal{M}} = (ev_{F}/\hbar)\, \boldsymbol{M}. \end{equation} Next, we substitute Eqs.~\eqref{eq:t-matrix-A}--\eqref{eq:ansatz} into Eqs.~\eqref{eq:QBE_charge}--\eqref{eq:QBE_spin} and set $\boldsymbol{E}=E_x\hat{\boldsymbol{x}}$ and $\boldsymbol{\mathcal{H}}=\boldsymbol{0}$. This yields Eq.~\eqref{eq:transport}, the linear response relation. The scattering rates that enter into this equation are given by: \begin{align} \alpha_{\mathrm{sk}}&= \frac{\pi n_{\mathrm{imp}} }{ \hbar}N(\mu)\: \mbox{Im}\left( \gamma_{I}\gamma_{0}^{*}\right), \\ \alpha_{\mathrm{R}}&= \frac{ n_{\mathrm{imp}} }{ \hbar } \left( \frac{1}{2}\mathrm{Re}\: \gamma_{R} + \pi N(\mu) \mathrm{Im}\: (\gamma_{0}+\gamma_{I})\gamma_{R}^* \right), \label{eq:alpha_ext} \\ \alpha_{\mathrm{asp}}&= -\frac{2\pi n_{\text{imp}}}{ \hbar} N(\mu)\: \mbox{Re}\left( \gamma_{I}\gamma_{R}^{*}\right), \label{eq:alpha_cf} \\ \frac{1}{\tau_{\mathrm{D}}}& =\frac{\pi n_{\mathrm{imp}}}{2\hbar}N(\mu) \left( |\gamma_{0}|^2 + 3| \gamma_{I}|^2 + 4| \gamma_{R}|^2 \right), \\ \frac{1}{\tau_{\text{EY}} } &= \frac{8}{\tau_{\mathrm{D}} }\left( \frac{|\gamma_{I}|^2+ |\gamma_{R}|^{2}}{ |\gamma_{0}|^2 + 3| \gamma_{I}|^2 + 4|\gamma_{R}|^2 } \right). \end{align} Note that $\alpha_{\mathrm{R}}$ contains a term linear in the SOC strength arising from forward scattering (i.e., the effective magnetic field induced by $\boldsymbol{B}_{kk}$).
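The analytic structure of these rates reflects the order in perturbation theory at which each process first appears. In the first Born approximation, where the renormalized strengths $\{\gamma_{0},\gamma_{I},\gamma_{R}\}$ reduce to the (real) bare couplings, $\alpha_{\mathrm{sk}}\propto\mathrm{Im}(\gamma_{I}\gamma_{0}^{*})$ vanishes, consistent with the well-known fact that skew scattering requires going beyond the Born approximation, whereas $\alpha_{\mathrm{asp}}\propto\mathrm{Re}(\gamma_{I}\gamma_{R}^{*})$ and the forward-scattering term of $\alpha_{\mathrm{R}}$ survive. This is consistent with the observation in Sec.~\ref{sec:cisp} that ASP scattering is present already in the first Born approximation, while the extrinsic Edelstein effect only appears at third order.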
If we neglect Rashba SOC ($\alpha_{\mathrm{R}}=\alpha_{\mathrm{asp}}=0$) and set the magnetic field $\boldsymbol{\mathcal{H}} = \boldsymbol{0}$, the spin precession vanishes and the QBE reduces to the semi-classical transport equation of Ref.~\onlinecite{ferreira2014extrinsic} (for the case of spin-conserving SOC disorder), which captures the SHE but not CISP. Accounting for finite temperatures would complicate the expressions for the scattering rates without changing the results qualitatively. Our theory neglects multiple impurity scattering and interband coherence effects. Therefore, it becomes less reliable when the Fermi energy $\mu$ becomes comparable to the temperature and, at zero temperature, when the Fermi wavelength is comparable to the average impurity distance. Using the impurity density $n_{\text{imp}}\sim 10^{10}\,\mathrm{cm}^{-2}$ reported in Ref.~\onlinecite{balakrishnan2014giant}, we estimate that our theory breaks down for $\mu \approx 10$~meV. \section{Summary and outlook} \label{sec:summary} We have developed a quantum Boltzmann equation that exhibits three important technical advantages: (i) it accounts for the coherent spin dynamics of the electrons as they undergo scattering with a random ensemble of impurities that induce spin-orbit coupling by proximity; (ii) it goes beyond the standard Born approximation and treats the impurity potential to all orders, thus capturing the important effect of scattering resonances; and (iii) it describes the experimentally relevant dilute impurity regime (higher impurity densities can also be systematically accessed via a virial expansion if necessary; see Appendix \ref{sec:app_QBE}). Upon applying this theoretical framework to spin coherent transport in graphene with extrinsic SOC impurities, we found that, besides the previously known skew scattering and Rashba-induced spin-flip scattering processes, there exists a distinct and novel scattering mechanism: anisotropic spin-precession (ASP) scattering. Conceptually, ASP scattering provides a ``missing link'' among the mechanisms coupling spin polarization, charge current, and spin current, as shown diagrammatically in Fig.~\ref{fig:triangle}(a). The most striking physical consequence is that ASP scattering provides a dominant contribution to current-induced spin polarization (CISP) in graphene. This can be detected experimentally using either spatially resolved magneto-optical Kerr rotation~\cite{sih2005spatial} or suitable non-local transport measurements. ASP scattering also gives a sizeable correction to the SHE, which could be verified by studying the spin current at low chemical potentials. We have also calculated the spin relaxation time arising from SOC disorder, which includes both EY and DP relaxation. The DP relaxation rate turns out to have a significant contribution from ASP scattering. In the future, it will be interesting to extend the transport equations to describe spin diffusion \cite{in-prep}, which may yield insights into ongoing experimental controversies over nonlocal resistance measurements in adatom-decorated graphene~\cite{neutral2015wang,kaverzin2015electron,balakrishnan2014giant}. Apart from the graphene context, analogues of ASP scattering might also be present in other systems where 3D rotational symmetry is broken, such as interfaces between two different materials where roughness \cite{sanchez2013spin} and impurities can generate SOC disorder.
The QBE formalism that we have developed can also be extended to study the anomalous Hall effect in ferromagnetic graphene, for which the anomalous Hall conductivity receives a large extrinsic contribution~\cite{wang2015proxmity}. \textbf{Acknowledgements: } MAC's work is supported by the Ministry of Science and Technology (Taiwan) under contract number NSC 102-2112-M-007-024-MY5, and Taiwan's National Center of Theoretical Sciences (NCTS). CH and CYD were supported by the Singapore National Research Foundation grant No.~NRFF2012-02, and by the Singapore MOE Academic Research Fund Tier 3 grant MOE2011-T3-1-005. We gratefully acknowledge useful discussions with S.~Adam, E.~Farrell, R.~Raimondi, S.~Roche, E. Sherman, J.~Sinova, S.~Valenzuela, and G.~Vignale.
\section{Introduction} Transistors are the main components of semiconductor electronic technology. The core of a transistor is composed of three semiconducting regions connected in series, thus forming a double junction. The middle semiconductor is doped with charged impurities different from those in the two other semiconductors. Since transistors have three ports and currents flow between pairs of ports, two electric currents are coupled together inside transistors, enabling the amplification of signals \cite{SST51,EM54,SS04,CC05,B05,SN07}. The fundamental issue is that the coupling between the electric currents is ruled by microreversibility, as in any type of device or process. In linear regimes close to thermodynamic equilibrium, microreversibility implies the Onsager-Casimir reciprocal relations~\cite{O31a,O31b,C45}. However, transistors function in highly nonlinear regimes beyond the domain of application of the Onsager-Casimir reciprocal relations. Remarkably, the generalizations of these relations beyond the linear regime are known today \cite{S92,AG04,AG07JSM,HPPG11,BG18}. They can be deduced from the fluctuation theorem for currents, which is based on the time-reversal symmetry of the microscopic dynamics of electrons and ions \cite{AG07JSP,AGMT09,AG09,EHM09,CHT11,S12,G13}. The fluctuation theorem is valid not only in the linear regimes, but also in the nonlinear regimes, and can thus be used to investigate the nonlinear transport properties of transistors. In our previous paper \cite{GG18}, the fluctuation theorem was considered for diodes, which are also nonlinear electronic devices. Here, our purpose is to extend these considerations to transistors. The novel aspect is that two currents flow in transistors, instead of only one in diodes. As a consequence of the nonlinear coupling between the two currents, the generalizations of the Onsager-Casimir reciprocal relations to nonlinear transport can be tested in transistors. For this purpose, the stochastic approach of Ref.~\cite{GG18} is extended from the single junction of diodes to the double junction of $n$-$p$-$n$ transistors. The approach is based on diffusion-reaction stochastic partial differential equations for electrons and holes, including their Coulomb interaction described by the Poisson equation. This scheme satisfies local detailed balance, in consistency with microreversibility. The stochastic description is presented in Sec.~\ref{sec:stochastic}. The functionality of transistors is studied in Sec.~\ref{sec:funct}. Section~\ref{sec:FT} is devoted to the fluctuation theorem for the two currents of the transistor. Section~\ref{sec:resp} shows that the linear response coefficients obey the Onsager-Casimir reciprocal relation and the fluctuation-dissipation theorem, and that the next-order nonlinear response coefficients satisfy higher-order generalizations. Section~\ref{sec:conclude} gives concluding remarks. \section{Stochastic description of transistors} \label{sec:stochastic} \subsection{The bipolar $n$-$p$-$n$ junction transistor} There exist many types of transistors \cite{SS04,CC05,B05,SN07}. The bipolar $n$-$p$-$n$ junction transistor (BJT) is one of the most common of them. BJTs consist of three small doped regions in a piece of silicon, of types $n$, $p$, and $n$, respectively, thus forming two junctions, as shown in Fig.~\ref{fig1}.
The electrons~${\rm e}^{-}$ and holes~${\rm h}^{+}$ are the two mobile charge carriers across the bipolar $n$-$p$-$n$ junction, with electrons being the majority carriers in the $n$-type semiconductor and holes the majority carriers in the $p$-type semiconductor. The positively-charged donors and negatively-charged acceptors are anchored in the $n$-type and $p$-type semiconductors, respectively. Each doped region has a port and the three ports are in contact with a charge-carrier reservoir. They are respectively called \textit{Collector}, \textit{Base}, and \textit{Emitter} (see Fig.~\ref{fig1}). \begin{figure*} \begin{minipage}[t]{0.28\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_1a.pdf}} \end{minipage} \begin{minipage}[t]{0.50\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_1b.pdf}} \end{minipage} \caption{Schematic representation of (a) the transistor and (b) the bipolar $n$-$p$-$n$ double junction. In panel~(b), the black (resp. white) dots represent electrons (resp. holes). The three reservoirs, called \textit{Collector}, \textit{Base}, and \textit{Emitter}, fix the values of the electron density, the hole density, and the electric potentials at their contact with the transistor.} \label{fig1} \end{figure*} In order to model the transistor, a Cartesian coordinate system is associated with the system. As shown in Fig.~\ref{fig1}(b), the semiconducting material extends from $x=-l/2$ to $x=+l/2$ and is divided into three parts. The part from $x=-l/2$ to $x=-l_p/2$ is of $n$-type, the one from $x=-l_p/2$ to $x=+l_p/2$ of $p$-type, and the one from $x=+l_p/2$ to $x=+l/2$ of $n$-type. The three parts are respectively of lengths $l_n=(l-l_p)/2$, $l_p$, and $l_n=(l-l_p)/2$. The {\it Collector} is in contact at $x=-l/2$, the {\it Emitter} at $x=+l/2$, and the {\it Base} along a length $l_B$ symmetrically located around the origin $x=0$. The length of the contact with the {\it Base} is smaller than the one of the $p$-type part: $l_B < l_p$. The geometry is chosen to be symmetric with respect to $x=0$ for simplicity. In addition, the bipolar $n$-$p$-$n$ double junction has the section area $\Sigma$ in the transverse $y$- and $z$-directions. The section areas of the contacts with the {\it Collector} and {\it Emitter} are assumed to be equal: $\Sigma_C=\Sigma_E=\Sigma$. Accordingly, the semiconducting material extends over a domain of volume $V=l\Sigma$. Moreover, we denote by $\Sigma_B$ the section area of the contact with the {\it Base}. The donor density $d({\bf r})$ and acceptor density $a({\bf r})$ are assumed to be uniform in the different types of semiconductor. Therefore, they can be expressed as \begin{align} & d({\bf r})=d\, \theta\left(-x-l_p/2\right)+d\, \theta\left(x-l_p/2\right) \text{,} \\ & a({\bf r})=a\, \theta\left(x+l_p/2\right)\, \theta\left(-x+l_p/2\right) \text{,} \end{align} in terms of two constant values $a$ and $d$, combined with Heaviside's step function $\theta(x)$ defined such that $\theta(x)=1$ if $x>0$ and $\theta(x)=0$ otherwise. The charge density is thus given by \begin{align} \rho=e(p-n+d-a) \, , \label{eq_charge_density} \end{align} with the elementary electric charge $e=|e|$, and the densities of holes $p$, electrons $n$, donors $d$, and acceptors $a$. Here, we have assumed that every donor gives one electron and every acceptor one hole. Because of the electrostatic interaction between the charges, these densities are coupled to the electric potential $\phi({\bf r})$.
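The step-function doping profiles and the resulting charge density are easy to encode; a minimal one-dimensional sketch (arbitrary units):
\begin{verbatim}
import numpy as np

def donor_density(x, d, l_p):
    # d(r) = d theta(-x - l_p/2) + d theta(x - l_p/2)
    return d * ((x < -l_p / 2) | (x > l_p / 2))

def acceptor_density(x, a, l_p):
    # a(r) = a theta(x + l_p/2) theta(-x + l_p/2)
    return a * ((x > -l_p / 2) & (x < l_p / 2))

def charge_density(n, p, x, a, d, l_p, e=1.0):
    # rho = e (p - n + d - a), Eq. (eq_charge_density)
    return e * (p - n + donor_density(x, d, l_p)
                - acceptor_density(x, a, l_p))
\end{verbatim}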
The electron and hole densities as well as the electric potential have fixed boundary values at the contacts with the three reservoirs. They are respectively given by $n_C$, $p_C$, $\phi_C$ at the {\it Collector}; $n_B$, $p_B$, $\phi_B$ at the {\it Base}; and $n_E$, $p_E$, $\phi_E$ at the {\it Emitter}. If the transistor is at equilibrium without flow of charge carriers, detailed balance between the generation and recombination of electron-hole pairs requires that $n_{\rm eq} p_{\rm eq}=\nu^2$, where $\nu$ is called the intrinsic carrier density. Moreover, the electron and hole densities are given at equilibrium by \begin{align} n_{\rm eq}({\bf r})\sim {\rm e}^{+\beta e\phi_{\rm eq}({\bf r})}\hspace{1cm}\text{and}\hspace{1cm} p_{\rm eq}({\bf r})\sim{\rm e}^{-\beta e\phi_{\rm eq}({\bf r})} \end{align} in terms of the electric potential determined across the whole system by the Poisson equation and the boundary conditions at the contacts with the three reservoirs. If the BJT is at equilibrium, the inhomogeneous distributions of the charge carriers thus produce the Nernst potentials \begin{align}\label{Nernst_C_E} (\phi_C-\phi_E)_{\rm eq}=\frac{1}{\beta e}\ln\frac{n_C}{n_E}=\frac{1}{\beta e}\ln\frac{p_E}{p_C} \end{align} and \begin{align}\label{Nernst_B_E} (\phi_B-\phi_E)_{\rm eq}=\frac{1}{\beta e}\ln\frac{n_B}{n_E}=\frac{1}{\beta e}\ln\frac{p_E}{p_B} \text{,} \end{align} where $\beta\equiv(k_{\rm B}T)^{-1}$ is the inverse temperature. The transistor is driven out of equilibrium by applying voltage differences with respect to the Nernst potentials \begin{align} & V_C=\phi_C-\phi_E-\frac{1}{\beta e}\ln\frac{n_C}{n_E} \text{,} \label{eq_V_C}\\ & V_B=\phi_B-\phi_E-\frac{1}{\beta e}\ln\frac{n_B}{n_E} \text{,} \label{eq_V_B} \end{align} which induce currents across the BJT. In the following, we use the associated affinities or thermodynamic forces \begin{align}\label{affinities} A_C \equiv \beta e V_C \qquad\mbox{and} \qquad A_B \equiv \beta e V_B\text{,} \end{align} which are dimensionless. The equilibrium state is recovered if they vanish, i.e., if the applied voltages vanish, $V_C=V_B=0$. \subsection{Stochastic diffusion-reaction equations} The thermal agitation inside the BJT generates incessant erratic motion for the electrons and holes, in turn causing local fluctuations in the currents and reaction rates. These fluctuations can be described within the stochastic approach by introducing Gaussian white noise fields in the diffusion-reaction equations for the electron and hole densities. The advantage of this approach is that the usual phenomenological parameters suffice for the stochastic description. The mobilities of electrons and holes are related to their diffusion coefficients through Einstein's relations \begin{align} \mu_n=\beta e D_n \hspace{1cm}\text{and}\hspace{1cm} \mu_p=\beta e D_p \text{.} \end{align} Moreover, electron-hole pairs are randomly generated and recombined according to the reactions \begin{align} {\rm e}^{-}+{\rm h}^{+}\autorightleftharpoons{$\scriptstyle k_-$}{$\scriptstyle k_+$}\emptyset \, \text{,} \label{eq_reaction} \end{align} where $k_{+}$ and $k_{-}$ are respectively the generation and recombination rate constants. In general, the quantities $D_n$, $D_p$, and $k_{\pm}$ are spatially dependent in an inhomogeneous medium. However, for simplicity, we assume that they are uniform across the whole BJT.
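For later reference, the affinities follow directly from the boundary densities and electric potentials; a minimal transcription of Eqs.~(\ref{eq_V_C})--(\ref{affinities}):
\begin{verbatim}
import numpy as np

def affinities(phi_C, phi_B, phi_E, n_C, n_B, n_E, beta=1.0, e=1.0):
    """Affinities A_C = beta e V_C and A_B = beta e V_B, with the
    voltages measured relative to the Nernst potentials."""
    V_C = phi_C - phi_E - np.log(n_C / n_E) / (beta * e)
    V_B = phi_B - phi_E - np.log(n_B / n_E) / (beta * e)
    return beta * e * V_C, beta * e * V_B
\end{verbatim}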
Considering the diffusion and generation-recombination processes as well as the electrostatic interaction between the charges, we have the following stochastic partial differential equations for the charge carrier densities coupled to the Poisson equation for the electric potential, \begin{align} & \partial_tn+{\bf\nabla}\cdot{\bf j}_n=\sigma_n \text{,} \label{eq-n}\\ & \partial_tp+{\bf\nabla}\cdot{\bf j}_p=\sigma_p \text{,} \label{eq-p}\\ & \nabla^2\phi=-\frac{\rho}{\epsilon} \text{,} \label{eq-phi} \end{align} where \begin{align} & \sigma_n=\sigma_p=k_+-k_-np+\delta\sigma \text{,} \label{eq-s}\\ & {\bf j}_n=-\mu_n n\,\pmb{\cal E}-D_n{\bf\nabla}n+\delta{\bf j}_n \text{,} \label{eq-jn}\\ & {\bf j}_p=+\mu_p p\,\pmb{\cal E}-D_p{\bf\nabla}p+\delta{\bf j}_p \text{,} \label{eq-jp}\\ & \pmb{\cal E}=-{\bf\nabla}\phi \text{,} \label{eq-E} \end{align} are the reaction rates, the current densities, and the electric field, while $\rho$ is the charge density given by Eq. (\ref{eq_charge_density}) and $\epsilon$ the dielectric constant of the material \cite{GG18}. The fluctuations $\delta{\bf j}_n$, $\delta{\bf j}_p$, and $\delta\sigma$ are Gaussian white noise fields characterized by \begin{align} & \langle \delta{\bf j}_n({\bf r},t) \rangle = \langle \delta{\bf j}_p({\bf r},t) \rangle = 0 \label{av_j}\text{,} \\ & \langle \delta{\bf j}_n({\bf r},t)\otimes \delta{\bf j}_n({\bf r}',t') \rangle = \Gamma_{nn}({\bf r},t) \, \delta^3({\bf r}-{\bf r'}) \, \delta(t-t') \, {\boldsymbol{\mathsf 1}} \text{,} \\ & \langle \delta{\bf j}_p({\bf r},t)\otimes \delta{\bf j}_p({\bf r}',t') \rangle = \Gamma_{pp}({\bf r},t) \, \delta^3({\bf r}-{\bf r'}) \, \delta(t-t') \, {\boldsymbol{\mathsf 1}} \text{,} \\ & \langle \delta{\bf j}_n({\bf r},t)\otimes \delta{\bf j}_p({\bf r}',t') \rangle = 0 \text{,} \\ & \langle\delta\sigma({\bf r},t)\rangle = 0 \text{,} \label{av_s}\\ & \langle\delta\sigma({\bf r},t)\,\delta\sigma({\bf r'},t')\rangle = \Gamma_{\sigma\sigma}({\bf r},t) \, \delta^3({\bf r}-{\bf r'}) \, \delta(t-t') \text{,} \\ & \langle \delta\sigma({\bf r},t)\, \delta{\bf j}_n({\bf r}',t') \rangle = \langle \delta\sigma({\bf r},t)\, \delta{\bf j}_p({\bf r}',t') \rangle = 0 \text{,} \end{align} where ${\boldsymbol{\mathsf 1}}$ is the $3\times 3$ identity matrix and \begin{align} &\Gamma_{nn}({\bf r},t) \equiv 2\, D_n \, n({\bf r},t) \text{,} \label{GWN-n}\\ &\Gamma_{pp}({\bf r},t) \equiv 2\, D_p \, p({\bf r},t) \text{,} \label{GWN-p}\\ &\Gamma_{\sigma\sigma}({\bf r},t) \equiv k_++k_- n({\bf r},t) p({\bf r},t) \label{GWN-sig} \end{align} are the noise spectral densities associated with the electron and hole diffusions, and the reaction. Because of Eqs.~(\ref{av_j}) and~(\ref{av_s}), we recover the mean-field equations of the macroscopic description by averaging the stochastic partial differential equations over the noises. \subsection{Numerical method for simulating the transistor} For the numerical simulation of the transistor, a Markov jump process is associated with the stochastic partial differential equations~(\ref{eq-n})-(\ref{eq-E}), as described in detail in Appendix~\ref{App:Markov}. Space is discretized into $L$ cells of length $\Delta x=l/L$, section area $\Sigma$, and volume $\Omega=\Sigma\Delta x$, located at the coordinates $x_{i}=(i-0.5)\Delta x-l/2$ ($i=1,2,\dots,L$). Consistently with Fig.~\ref{fig1}(b), there are $L_n=l_n/\Delta x$ cells in both parts of $n$-type, $L_p=l_p/\Delta x$ cells for the part of $p$-type, and $L_B=l_B/\Delta x$ cells in contact with the {\it Base}. 
The numbers of electrons, holes, acceptors, and donors in each cell of the BJT are related to the corresponding densities by $N_{i}=n(x_{i})\Omega$, $P_{i}=p(x_{i})\Omega$, $A_{i}=a(x_{i})\Omega$, and $D_{i}=d(x_{i})\Omega$. The state of the discretized BJT is fully characterized by the electron numbers ${\bf N}=(N_i)_{i=1}^L$ and the hole numbers ${\bf P}=(P_i)_{i=1}^L$ in the cells. The master equation ruling the time evolution of their probability distribution ${\cal P}({\bf N},{\bf P},t)$ is given in Appendix~\ref{master_eq}. Moreover, the Poisson equation~(\ref{eq-phi}) is also discretized along the chain of $L$ cells forming the system, taking into account the electric potentials of the {\it Collector}, the {\it Base}, and the {\it Emitter}, as explained in Appendix~\ref{App:Poisson}. The resulting stochastic process can be simulated numerically by Gillespie's algorithm \cite{G76}, which is an exact method for generating random trajectories in this case. In order to speed up the simulation, the Markov jump process is approximated by a Langevin stochastic process under the assumption that the numbers of electrons and holes are large enough in every cell, $N_i\gg 1$ and $P_i\gg 1$. Accordingly, these numbers obey stochastic differential equations expressed in terms of the fluxes of particles between the cells, the reaction rates, and Gaussian white noises for their fluctuations, as shown in Appendix~\ref{App:Langevin}. At the contacts with the three reservoirs, the boundary conditions on the charge carrier densities determine the boundary values for the corresponding particle numbers \begin{align} & \bar{N}_C=n_C\Omega\text{,}\hspace{1cm} \bar{P}_C=p_C\Omega \text{,}\\ & \bar{N}_B=n_B\Omega\text{,}\hspace{1cm} \bar{P}_B=p_B\Omega \text{,}\\ & \bar{N}_E=n_E\Omega\text{,}\hspace{1cm} \bar{P}_E=p_E\Omega \text{.} \end{align} Furthermore, the three parts of the transistor are assumed to be doped from a semiconducting material of uniform intrinsic density $\nu$, so that the boundary values of the electron and hole densities should satisfy the conditions \begin{align} n_Cp_C=n_Bp_B=n_Ep_E = \nu^2 \text{.} \end{align} We further set \begin{align} n_C=n_E\text{,}\hspace{1cm}p_C=p_E \text{,} \end{align} to have a system that is symmetric with respect to $x=0$, as depicted in Fig.~\ref{fig1}(b). In numerical simulations, the statistical averages of any observable quantity $X$ can be evaluated by the time average $\langle X\rangle=\lim_{T\to\infty}(1/T) \int_0^TX(t)\, dt$, which is equivalent by ergodicity to the ensemble average $\langle X\rangle=\sum_{{\bf N},{\bf P}}X\,{\cal P}_{\rm st}({\bf N},{\bf P})$ over the stationary probability distribution ${\cal P}_{\rm st}$. In the continuum limit, the cell volume vanishes together with the particle numbers, while the electron and hole densities, recovered as $n(x_{i})=N_{i}/\Omega$ and $p(x_{i})=P_{i}/\Omega$, remain well defined.
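To make the Langevin approximation concrete, the sketch below advances the electron numbers of the cells by one explicit Euler--Maruyama step, keeping only diffusion and the generation-recombination reaction; the drift in the electric field, the hole equation, and the exchanges with the reservoirs are handled analogously with the exact discretized rates of Appendix~\ref{App:Langevin}, so this is an illustration rather than the full scheme:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def langevin_step(N, P, dt, D, dx, kp, km, Omega):
    """One Euler-Maruyama step for the electron numbers N of the cells
    (interior diffusion and generation-recombination only)."""
    rate = D / dx**2
    # diffusive fluxes between neighboring cells, with conserving
    # Gaussian noise of variance (D/dx^2)(N_i + N_{i+1}) dt
    flux = rate * (N[:-1] - N[1:]) * dt \
        + np.sqrt(rate * (N[:-1] + N[1:]) * dt) \
        * rng.standard_normal(N.size - 1)
    dN = np.zeros_like(N, dtype=float)
    dN[:-1] -= flux
    dN[1:] += flux
    # reaction e- + h+ <-> 0: generation rate k+ Omega,
    # recombination rate k- N P / Omega in each cell
    w_plus, w_minus = kp * Omega, km * N * P / Omega
    dN += (w_plus - w_minus) * dt \
        + np.sqrt((w_plus + w_minus) * dt) * rng.standard_normal(N.size)
    return N + dN
\end{verbatim}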
\bigstrutjot=2pt \begin{table*}[h] \caption{The values of dimensionless physical quantities and parameters used in simulating the BJT model in rescaled units.} \begin{tabular}{>{\centering\arraybackslash}m{6cm}>{\centering\arraybackslash}m{2cm}||>{\centering\arraybackslash}m{6cm}>{\centering\arraybackslash}m{2cm}} \hline \hline quantity & value & quantity & value \bigstrut \\ \hline permittivity & $\epsilon=0.01$ & length of each cell & $\Delta x=0.1$ \bigstrut \\ \hline elementary charge & $|e|=1.0$ & width of each cell & $\Delta y=0.2$ \bigstrut \\ \hline inverse temperature & $\beta=1.0$ & number of cells in each $n$-type region & $L_n=10$ \bigstrut \\ \hline diffusion coefficient for electrons and holes & $D=0.01$ & number of cells in the $p$-type region & $L_p=3$ \bigstrut \\ \hline generation and recombination rate constants & $k_+=k_-=0.01$ & number of cells in contact with the {\it Base} & $L_B=1$ \bigstrut \\ \hline \hline \end{tabular} \label{tab_physical_quantities} \end{table*} \begin{table*}[h] \caption{The set of parameter values used in Sec.~\ref{sec:funct}.} \begin{tabular}{>{\centering\arraybackslash}m{5cm}>{\centering\arraybackslash}m{3cm}||>{\centering\arraybackslash}m{5cm}>{\centering\arraybackslash}m{3cm}} \hline \hline parameter & value & parameter & value \bigstrut \\ \hline volume of each cell & $\Omega=10^9$ & section area & $\Sigma=10^{10}$, $\Sigma_B=5\times 10^9$ \bigstrut \\ \hline number of electrons for the \textit{Collector} & $\bar{N}_C=10^{13}$ & number of holes for the \textit{Collector} & $\bar{P}_C=10^5$ \bigstrut \\ \hline number of electrons for the \textit{Base} & $\bar{N}_B=10^8$ & number of holes for the \textit{Base} & $\bar{P}_B=10^{10}$ \bigstrut \\ \hline number of electrons for the \textit{Emitter} & $\bar{N}_E=10^{13}$ & number of holes for the \textit{Emitter} & $\bar{P}_E=10^5$ \bigstrut \\ \hline \hline \end{tabular} \label{tab_parameters_1} \end{table*} \begin{figure*}[h] \begin{minipage}[t]{0.99\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_2.pdf}} \end{minipage} \caption{The profiles of (a) the charge carrier densities, (b) the current densities, and (c) the electric potential across the BJT used as a signal amplifier under the working conditions $A_C=20$ and $A_B=6$. The {\it Collector C} is located at $x\leq-1.15$, the {\it Emitter E} at $x\geq+1.15$, and the {\it Base B} around $x=0$. The simulations were carried out with the time step $dt=0.00015$ and $10^6$ iterates for every data point.} \label{fig2} \end{figure*} \begin{figure*}[h] \begin{minipage}[t]{0.99\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_3.pdf}} \end{minipage} \caption{(a) The mean currents $J_C$ and $J_B$ versus the affinity $A_B$, with the other affinity fixed to the value $A_C=20$. The lines join the numerical points depicted by the asterisks. (b) The current $J_C$ versus the other current $J_B$. The solid line joins the asterisks. The dashed line in the middle region is determined from Lagrange interpolation using the five asterisks of this domain. The derivative of $J_C$ with respect to $J_B$ at the point $(A_C=20,A_B=6)$ is evaluated, giving the amplification factor~(\ref{alpha_num}). The simulations were carried out with the time step $dt=0.00015$ and $10^6$ iterates for every data point.} \label{fig3} \end{figure*} We assume for simplicity that the electron and hole diffusion coefficients are equal, $D_n=D_p\equiv D$.
As done in our previous paper~\cite{GG18}, the quantities of interest may be rescaled using the intrinsic carrier density $\nu$, the intrinsic carrier lifetime $\tau=1/(k_-\nu)$, the intrinsic carrier diffusion length before recombination $\ell=\sqrt{D\tau}$, the inverse temperature $\beta$, and the elementary electric charge. After this rescaling, all the quantities become dimensionless. Table~\ref{tab_physical_quantities} gives the values of the so-rescaled quantities used in the following numerical simulations of the BJT model. \section{The functionality of transistors} \label{sec:funct} The purpose of this section is to show that the properties characterizing the functionality of transistors can be described within the stochastic approach. In electronic technology, transistors are primarily used to amplify signals in electric circuits. This amplification results from the coupling between the two electric currents, $J_C$ and $J_B$. By this coupling, one current can serve as input and the other as output. The amplification factor is defined as the ratio of these two currents, $J_C/J_B$. We may also introduce the differential amplification factor as follows. When the affinity $A_C$ is fixed, the variation of the other affinity $A_B$ leads to variations of $J_C$ and $J_B$. The differential amplification factor is then defined as the ratio between these two variations \begin{align} \alpha=\left(\frac{\partial J_C}{\partial J_B}\right)_{A_C} \label{alpha-dfn} \end{align} under specific working conditions. To achieve the functionality of signal amplification, the transistor should satisfy the following requirements: \begin{itemize} \item The concentration of the majority charge carriers in the \textit{Collector} region should be overwhelmingly larger than the concentration of minority charge carriers in the \textit{Base} region. \item The concentration of the majority charge carriers in the \textit{Emitter} region should be overwhelmingly larger than the concentration of minority charge carriers in the \textit{Base} region. \item The \textit{Collector}-\textit{Base} junction should be reverse biased. \item The \textit{Emitter}-\textit{Base} junction should be forward biased. \item The \textit{Base} region should be very thin so that the majority charge carriers in the \textit{Emitter} region can easily be swept to the \textit{Collector} region. \item The contacting section areas $\Sigma_C$ and $\Sigma_E$ should be larger than $\Sigma_B$. \end{itemize} Table~\ref{tab_parameters_1} gives a set of parameter values approaching these requirements, in order to show that the present stochastic model can describe transistors in such regimes. The first two conditions are satisfied since $\bar{N}_C=\bar{N}_E\gg\bar{N}_B$, and the last one because $\Sigma=\Sigma_C=\Sigma_E>\Sigma_B$. If the transistor were at equilibrium without applied voltages ($A_C=A_B=0$), the Nernst potentials~(\ref{Nernst_C_E}) and~(\ref{Nernst_B_E}) would take the values $(\phi_C-\phi_E)_{\rm eq}=0$ and $(\phi_B-\phi_E)_{\rm eq}=-11.5$ with the parameter set of Table~\ref{tab_parameters_1}. At equilibrium, the electric potential would have a symmetric profile around $x=0$, with $(\phi_C-\phi_B)_{\rm eq}=(\phi_E-\phi_B)_{\rm eq}=11.5$. Figure~\ref{fig2} shows the profiles of charge carrier densities and current densities together with the electric potential under nonequilibrium conditions with applied voltages corresponding to $A_C=20$ and $A_B=6$.
In Fig.~\ref{fig2}(a), we see that the {\it Base} region is thin in the model, so that the fifth condition is satisfied. As observed in Fig.~\ref{fig2}(b), the current densities are non-vanishing because the transistor is out of equilibrium. According to Eqs.~(\ref{eq_V_C})-(\ref{eq_V_B}), we have $\phi_C-\phi_E=20$ and $\phi_B-\phi_E=-5.5$, so that $\phi_C-\phi_B=25.5$ and $\phi_E-\phi_B=5.5$, in agreement with the electric field plotted in Fig.~\ref{fig2}(c). Since $\phi_C-\phi_B=25.5$ is larger than $(\phi_C-\phi_B)_{\rm eq}=11.5$, the \textit{Collector}-\textit{Base} junction is reverse biased, as required by the third condition. Moreover, $\phi_E-\phi_B=5.5$ is smaller than $(\phi_E-\phi_B)_{\rm eq}=11.5$, so that the \textit{Emitter}-\textit{Base} junction is forward biased and the fourth condition is also satisfied. Under these conditions, the transistor can indeed achieve signal amplification, as demonstrated in Fig.~\ref{fig3}. The currents $J_C$ and $J_B$ are shown in Fig.~\ref{fig3}(a) as functions of $A_B$, with $A_C$ fixed. Since the current $J_C$ is greater than $J_B$, the amplification factor $J_C/J_B$ is larger than unity, as expected. Furthermore, Fig.~\ref{fig3}(b) depicts how the current $J_C$ increases with the other current $J_B$ and the associated affinity $A_B$. For $A_B=6$, the differential amplification factor~(\ref{alpha-dfn}) is evaluated to be \begin{align} \alpha(A_C= 20,A_B=6)\simeq 4.278 \text{,} \label{alpha_num} \end{align} which is also larger than unity, as required. It should be noted that the amplification factors can take different values for different working conditions of the transistor. These results show that the stochastic approach is relevant to study transistors in their regimes of signal amplification. We proceed in Secs.~\ref{sec:FT} and~\ref{sec:resp} with the study of their fluctuation properties. \section{Fluctuation Theorem for Currents} \label{sec:FT} \subsection{Generalities} We consider the fluctuating electric currents flowing respectively across the contact with the \textit{Collector} and the contact with the \textit{Base}. These electric currents are due to the random motion of electrons and holes crossing the contact sections between the transistor and the corresponding reservoirs. The instantaneous electric currents are thus defined as \begin{align} &{\cal I}_C(t)=\sum_{n=-\infty}^{+\infty}q_n^{(C)}\delta(t-t_n^{(C)}) \text{,}\\ & {\cal I}_B(t)=\sum_{n=-\infty}^{+\infty}q_n^{(B)}\delta(t-t_n^{(B)}) \text{,} \end{align} where $t_n^{(C)}$ (resp. $t_n^{(B)}$) are the random times of the crossing events and $q_n^{(C)}$ (resp. $q_n^{(B)}$) are the transferred charges, equal to $\pm e$ depending on whether the carrier is an electron or a hole and on whether its motion is into or out of the transistor. The corresponding random numbers of charges accumulated over the time interval $[0,\,t]$ are defined as \begin{align} Z_C(t)=\frac{1}{e}\int_0^{t}{\cal I}_C(t')\,dt' \text{,}\qquad Z_B(t)=\frac{1}{e}\int_0^{t}{\cal I}_B(t')\,dt' \, .
\label{Z-dfn} \end{align} We also define the instantaneous total electric currents including the contribution of displacement currents as \begin{align} &\tilde{\cal I}_C(t)={\cal I}_C(t)-\epsilon\, \partial_t\partial_x\phi\, \Sigma_C \text{,}\\ &\tilde{\cal I}_B(t)={\cal I}_B(t)-\epsilon\, \partial_t\partial_y\phi\, \Sigma_B \text{,} \end{align} which are the experimentally measured electric currents \cite{GG18,AG09,BB00,S38,R39}, as well as the corresponding accumulated charge numbers $\tilde Z_C(t)$ and $\tilde Z_B(t)$ with definitions as in Eq.~(\ref{Z-dfn}). The mean values of the charge currents are given by \begin{align} & J_C\equiv \lim_{t\to\infty}\frac{1}{t}\, \langle Z_C(t)\rangle = \lim_{t\to\infty}\frac{1}{t}\, \langle \tilde{Z}_C(t)\rangle\text{,} \label{J_C}\\ & J_B\equiv \lim_{t\to\infty}\frac{1}{t}\, \langle Z_B(t)\rangle = \lim_{t\to\infty}\frac{1}{t}\, \langle \tilde{Z}_B(t)\rangle\text{,} \label{J_B} \end{align} and the corresponding electric currents by $I_C=eJ_C$ and $I_B=eJ_B$. The equality between the mean values without and with the displacement currents comes from the fact that the displacement currents are given by a time derivative. The diffusivities of the currents are defined as \begin{align} &D_{CC}\equiv\lim_{t\to\infty}\frac{1}{2t}\, {\rm var}_{Z_CZ_C}(t)=\lim_{t\to\infty}\frac{1}{2t}\, {\rm var}_{\tilde Z_C\tilde Z_C}(t)\, , \label{D_CC}\\ &D_{BB}\equiv\lim_{t\to\infty}\frac{1}{2t}\, {\rm var}_{Z_BZ_B}(t)=\lim_{t\to\infty}\frac{1}{2t}\, {\rm var}_{\tilde Z_B\tilde Z_B}(t)\, , \label{D_BB}\\ &D_{CB}\equiv\lim_{t\to\infty}\frac{1}{2t}\, {\rm cov}_{Z_CZ_B}(t)=\lim_{t\to\infty}\frac{1}{2t}\, {\rm cov}_{\tilde Z_C\tilde Z_B}(t) \label{D_CB} \end{align} in terms of the variances and the covariances between the accumulated random charge numbers \begin{align} & {\rm var}_{Z_CZ_C}(t)\equiv\langle Z_C(t)Z_C(t)\rangle-\langle Z_C(t)\rangle^2 \text{,}\label{eq_var_CC} \\ & {\rm var}_{Z_BZ_B}(t)\equiv\langle Z_B(t)Z_B(t)\rangle-\langle Z_B(t)\rangle^2 \text{,}\label{eq_var_BB} \\ & {\rm cov}_{Z_CZ_B}(t)\equiv\langle Z_C(t)Z_B(t)\rangle-\langle Z_C(t)\rangle\langle Z_B(t)\rangle = {\rm cov}_{Z_BZ_C}(t)\text{.}\label{eq_cov_CB} \end{align} The diffusivities also take the same value whether the displacement currents are included or not. Since the covariance between two random variables is symmetric under their exchange, we have the symmetry $D_{CB}=D_{BC}$. We suppose that the voltages~(\ref{eq_V_C}) and~(\ref{eq_V_B}) are applied at the boundaries of the transistor. Consequently, the transistor is driven out of equilibrium and the stochastic process of charge transfers between the reservoirs eventually reaches a nonequilibrium steady state. The latter is expected to depend on the applied voltages, or equivalently on the affinities \begin{align} & A_C=\ln\left[\frac{\bar{P}_C}{\bar{P}_E}{\rm e}^{\beta e(\phi_C-\phi_E)}\right]=\ln\left[\frac{\bar{N}_E}{\bar{N}_C}{\rm e}^{\beta e(\phi_C-\phi_E)}\right]=\beta eV_C \text{,} \label{eq_theoretical_affinity_C} \\ & A_B=\ln\left[\frac{\bar{P}_B}{\bar{P}_E}{\rm e}^{\beta e(\phi_B-\phi_E)}\right]=\ln\left[\frac{\bar{N}_E}{\bar{N}_B}{\rm e}^{\beta e(\phi_B-\phi_E)}\right]=\beta eV_B \text{,} \label{eq_theoretical_affinity_B} \end{align} which are determined by the differences of electrochemical potentials between the corresponding reservoirs. The dependences of the mean values of the currents on the affinities define the characteristic functions of the transistor: $J_C(A_C,A_B)$ and $J_B(A_C,A_B)$.
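In practice, the limits in Eqs.~(\ref{J_C})--(\ref{D_CB}) are estimated from the statistics of the accumulated charge numbers over an ensemble of long trajectories; a minimal estimator sketch:
\begin{verbatim}
import numpy as np

def currents_and_diffusivities(ZC, ZB, t):
    """ZC, ZB: accumulated charge numbers at time t, one entry per
    trajectory.  Returns the estimators of the mean currents J_C, J_B
    and of the diffusivities D_CC, D_BB, D_CB."""
    J_C, J_B = ZC.mean() / t, ZB.mean() / t
    D_CC = ZC.var() / (2 * t)
    D_BB = ZB.var() / (2 * t)
    D_CB = np.cov(ZC, ZB)[0, 1] / (2 * t)
    return J_C, J_B, D_CC, D_BB, D_CB
\end{verbatim}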
At equilibrium, the affinities vanish together with the applied voltages and the mean values of the currents, so that $J_C(0,0)=J_B(0,0)=0$. However, the diffusivities do not necessarily vanish at equilibrium. Beyond the mean values of the currents and the diffusivities, the process can be characterized by higher cumulants or the full probability distribution $P_{A_C,A_B}(Z_C,Z_B,t)$ that $Z_C$ and $Z_B$ charges cross the {\it Collector} and the {\it Base} during the time interval $[0,t]$, while the transistor is in a nonequilibrium steady state of affinities $A_C$ and $A_B$. This steady state is given by the stationary solution of the master equation of the Markov jump process described in Appendix~\ref{App:Markov}. Using the network representation of this Markov jump process and its decomposition into cyclic paths \cite{S76}, the process can be shown to obey a fluctuation theorem for all the currents as a consequence of local detailed balance \cite{AG07JSP,AG09}. This theorem states that the joint distribution of the random variables $Z_C$ and $Z_B$ at time $t$ satisfies the following fluctuation relation \begin{align} \frac{P_{A_C,A_B}(Z_C,Z_B,t)}{P_{A_C,A_B}(-Z_C,-Z_B,t)}\simeq_{t\to\infty}\exp(A_CZ_C+A_BZ_B) \text{.} \label{FT} \end{align} A similar fluctuation relation holds if the displacement currents are included in the accumulated charge numbers \cite{AG09}. As a consequence of the fluctuation theorem, the thermodynamic entropy production is always non-negative, in accord with the second law of thermodynamics. The entropy production can indeed be expressed as the Kullback-Leibler divergence between the probability distributions of opposite fluctuations of the currents \cite{G13}, giving the dissipated power divided by the thermal energy \begin{equation} \frac{1}{k_{\rm B}}\frac{d_{\rm i}S}{dt}= A_CJ_C+A_BJ_B=\beta \left( V_CI_C+V_BI_B\right) \geq 0 \, , \end{equation} as expected. We notice that the fluctuation relation~(\ref{FT}) holds in the long-time limit. The convergence time is determined by diffusion~\cite{GGHK18} and it can be estimated to range between the time of diffusion across the middle part, $t_{\rm diff}\sim l_p^2/D\sim 9$, and the recombination time, $\tau=\ell^2/D\sim 100$. \subsection{Numerical results} The direct test of the fluctuation relation~(\ref{FT}) requires an overlap between the probability distributions $P(Z_C,Z_B,t)$ and $P(-Z_C,-Z_B,t)$. Since the maxima of these distributions move apart under nonequilibrium conditions, the overlap rapidly decreases as time increases. Therefore, the direct test of the fluctuation relation is restricted to short times. Nevertheless, the test is possible, as shown in Fig.~\ref{fig4} for the joint probability distributions of the accumulated charge numbers without and with the displacement currents using the set of parameter values given in Table~\ref{tab_parameters_2}. For the bare charge numbers, Fig.~\ref{fig4}(a) depicts the joint distribution itself at time $t=20$, which is roughly Gaussian and shifted with respect to the origin because of the elapsed time. There is a significant overlap with the opposite distribution $P(-Z_C,-Z_B,t)$, and Fig.~\ref{fig4}(b) shows several contours of the two-dimensional function $\ln\left[P(Z_C,Z_B,t)/P(-Z_C,-Z_B,t)\right]$ in the plane of the variables $Z_C$ and $Z_B$.
Within the statistical errors, these contours appear straight, in agreement with the prediction of the fluctuation theorem that this function should be linear. The function $\ln\left[P(Z_C,Z_B,t)/P(-Z_C,-Z_B,t)\right]$ can thus be fitted to a linear function $A_C(t)\,Z_C+A_B(t)\,Z_B$, defining the finite-time affinities $A_C(t)$ and $A_B(t)$. However, their values remain smaller than the applied affinities $A_C=A_B=0.1$ because convergence is expected for $t\gg t_{\rm diff}$ and has not yet been reached in Fig.~\ref{fig4}. \begin{table*}[h] \caption{The set of parameter values used in Secs.~\ref{sec:FT} and~\ref{sec:resp}.} \begin{tabular}{>{\centering\arraybackslash}m{5cm}>{\centering\arraybackslash}m{3cm}||>{\centering\arraybackslash}m{4.5cm}>{\centering\arraybackslash}m{3cm}} \hline \hline parameter & value & parameter & value \bigstrut \\ \hline volume of each cell & $\Omega=1000$ & section areas & $\Sigma=10000$, $\Sigma_B=5000$ \bigstrut \\ \hline number of electrons for the \textit{Collector} & $\bar{N}_C=10000$ & number of holes for the \textit{Collector} & $\bar{P}_C=100$ \bigstrut \\ \hline number of electrons for the \textit{Base} & $\bar{N}_B=100$ & number of holes for the \textit{Base} & $\bar{P}_B=10000$ \bigstrut \\ \hline number of electrons for the \textit{Emitter}& $\bar{N}_E=10000$ & number of holes for the \textit{Emitter} & $\bar{P}_E=100$ \bigstrut \\ \hline \hline \end{tabular} \label{tab_parameters_2} \end{table*} \begin{figure*}[h] \begin{minipage}[t]{0.9\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_4ab.pdf}} \end{minipage} \begin{minipage}[t]{0.9\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_4cd.pdf}} \end{minipage} \caption{(a) Joint probability distribution $P(Z_C,Z_B,t)$ of the transferred charges $Z_C$ and $Z_B$ at time $t=20$. The center of this distribution, marked with the symbol $+$, corresponds to the mean values $\langle Z_B\rangle=117.43$ and $\langle Z_C\rangle=75.21$. Several contours of the distribution are also plotted. (b)~The function $\ln\left[P(Z_C,Z_B,t)/P(-Z_C,-Z_B,t)\right]$ versus $Z_C$ and $Z_B$ at the same time $t=20$. Several contours are shown. The arrows indicate the gradient of the distribution. The finite-time affinities take the values $A_B(t=20)=0.0387$ and $A_C(t=20)=0.0326$. (c)~Joint probability distribution $P(\tilde Z_C,\tilde Z_B,t)$ of the transferred total charges $\tilde Z_C$ and $\tilde Z_B$ including the displacement currents, at the same time $t=20$. This distribution is centered on the same mean values $\langle \tilde Z_B\rangle=117.43$ and $\langle \tilde Z_C\rangle=75.21$. (d)~The corresponding function $\ln\left[P(\tilde Z_C,\tilde Z_B,t)/P(-\tilde Z_C,-\tilde Z_B,t)\right]$ versus $\tilde Z_C$ and $\tilde Z_B$ at the same time $t=20$, giving the finite-time affinities $\tilde A_B(t=20)=0.0659$ and $\tilde A_C(t=20)=0.0752$. For both cases, the affinities are set in the simulation to the value $A_C=A_B=0.1$. The simulation is carried out with the time step $dt=0.1$ and the statistics over $3\times 10^7$ trajectories. The pixels in the four panels are all of size $4\times 4$.} \label{fig4} \end{figure*} \begin{figure*} \begin{minipage}[h]{0.8\hsize} \resizebox{0.65\hsize}{!}{\includegraphics{Fig_5.pdf}} \end{minipage} \caption{The finite-time affinities $\tilde A_C(t)$ and $\tilde A_B(t)$ versus time $t$ in the same conditions as in Fig.~\ref{fig4}(c) and Fig.~\ref{fig4}(d) for the transferred total charges $\tilde Z_C$ and $\tilde Z_B$ including the displacement currents.
These affinities are obtained by fitting $\ln\left[P(\tilde Z_C,\tilde Z_B,t)/P(-\tilde Z_C,-\tilde Z_B,t)\right]$ to the linear function $\tilde A_C(t)\,\tilde Z_C+\tilde A_B(t)\,\tilde Z_B$. The dashed lines show the fits $\tilde A_C(t)\simeq 0.1-0.074\times\exp(-t/16.52)$ and $\tilde A_B(t)\simeq 0.1-0.086\times\exp(-t/20.61)$.} \label{fig5} \end{figure*} \begin{table}[h] \caption{The comparison between the numerical affinities and their theoretical expectations. The statistics used to evaluate the numerical affinities are obtained from simulations with the time step $dt=0.05$, the total time $t=2.5\times 10^3$, and $5\times 10^5$ trajectories for every case.} \vskip 0.3 cm \begin{tabular}{rrrrr} \hline \hline case & $A_C^{\rm (th)}$ & $A_C^{\rm (num)}\qquad$ & $A_B^{\rm (th)}$ & $A_B^{\rm (num)}\qquad$ \bigstrut \\ \hline 1 & $1.0$ & $0.9914\pm 0.0034$ & $0.7$ & $0.6942\pm 0.0027$ \bigstrut \\ 2 & $0.8$ & $0.7919\pm 0.0025$ & $0.4$ & $0.3952\pm 0.0019$ \bigstrut \\ 3 & $0.5$ & $0.5018\pm 0.0033$ & $1.2$ & $1.2007\pm 0.0041$ \bigstrut \\ 4 & $0.0$ & $0.0000\pm 0.0000$ & $0.0$ & $0.0000\pm 0.0000$ \bigstrut \\ 5 & $-0.4$ & $-0.4002\pm 0.0018$ & $0.6$ & $0.5975\pm 0.0020$ \bigstrut \\ 6 & $-0.5$ & $-0.4864\pm 0.0029$ & $-0.7$ & $-0.6864\pm 0.0029$ \bigstrut \\ 7 & $-1.0$ & $-1.0058\pm 0.0039$ & $0.4$ & $0.4022\pm 0.0028$ \bigstrut \\ 8 & $-1.2$ & $-1.3924\pm 0.0084$ & $1.3$ & $1.4118\pm 0.0084$ \bigstrut \\ \hline \hline \end{tabular} \label{tab_cases} \end{table} As shown in Fig.~\ref{fig4}(c) and Fig.~\ref{fig4}(d), similar results hold for the joint probability distribution $P(\tilde Z_C,\tilde Z_B,t)$ of the charge numbers with the displacement currents. As seen in Fig.~\ref{fig4}(c), the effect of the displacement currents is that the distribution $P(\tilde Z_C,\tilde Z_B,t)$ is narrower than the distribution $P(Z_C,Z_B,t)$ depicted in Fig.~\ref{fig4}(a). Consequently, the finite-time affinities $\tilde A_C(t)$ and $\tilde A_B(t)$ are larger than $A_C(t)$ and $A_B(t)$, and the convergence in time towards the asymptotic values of the affinities should be faster for the statistics of the transferred total charges $\tilde Z_C$ and $\tilde Z_B$, which include the displacement currents, than for the statistics of the transferred charges $Z_C$ and $Z_B$. Figure~\ref{fig5} confirms that the finite-time affinities $\tilde A_C(t)$ and $\tilde A_B(t)$ approach their asymptotic value $A_C=A_B=0.1$ as time increases. Since the overlap between the opposite distributions rapidly decreases, statistical errors increase for $t>20$. The exponential fits of the finite-time affinities provide estimations of the convergence times in the range of values expected from charge carrier diffusion. In order to test the convergence of the finite-time affinities towards their asymptotic values over longer time scales, we develop a method using the following coarse-grained model, \begin{equation} \begin{array}{c} \textit{Collector}\autorightleftharpoons{$\scriptstyle W_{CE}$}{$\scriptstyle W_{EC}$}\textit{Emitter} \text{,}\\ \textit{Base}\autorightleftharpoons{$\scriptstyle W_{BE}$}{$\scriptstyle W_{EB}$}\textit{Emitter} \text{,}\\ \textit{Collector}\autorightleftharpoons{$\scriptstyle W_{CB}$}{$\scriptstyle W_{BC}$}\textit{Base} \text{,} \end{array} \label{model_CBE} \end{equation} where the charges are assumed to be transferred between the three reservoirs with the transition rates $\{W_{kl}\}_{k,l=C,B,E}$, as formulated in Appendix~\ref{App:coarse}.
This constitutes the minimal model in the sense that the values of its rates can be fully determined from the knowledge of the mean currents and diffusivities, if the conditions of local detailed balance are satisfied. This simple model is related to the Ebers-Moll transport model of bipolar junction transistors \cite{EM54,SS04}. Given the values $J_C$, $J_B$, $D_{CC}$, $D_{BB}$, and $D_{CB}$ of the mean currents and the diffusivities, the six rates $W_{kl}$ can be determined, giving the values of the affinities according to $A_{kl}=\ln(W_{kl}/W_{lk})$ with $k,l=C,B,E$. Since this model results from the coarse graining of the complete description, it has a domain of validity limited to moderate values of the applied voltages. In this domain, the parameter values of the model can thus be fitted to the numerical values of the mean currents~(\ref{J_C})-(\ref{J_B}) and the diffusivities~(\ref{D_CC})-(\ref{D_CB}) of the full model in order to obtain the affinities. Table~\ref{tab_cases} shows the comparison between the numerical affinities and the theoretical predictions for several cases. Accurate agreement is found if the affinities remain moderate, confirming the convergence of the finite-time affinities $A_C(t)$ and $A_B(t)$ towards their expected asymptotic values~(\ref{eq_theoretical_affinity_C}) and~(\ref{eq_theoretical_affinity_B}) within the domain of validity of the model~(\ref{model_CBE}). Despite the limited scope of application of this method, the agreement between the numerical and theoretical values of the affinities brings further numerical support to the fluctuation relation for the currents. In the next section, the consequences of the fluctuation theorem on the linear and nonlinear transport properties will be tested. \section{Linear and nonlinear response properties} \label{sec:resp} \subsection{Deduction of the properties from the fluctuation theorem} The fluctuation theorem provides a unified framework for deducing the Onsager reciprocal relations and their generalizations to the nonlinear transport properties \cite{S92,AG04,AG07JSM,HPPG11,BG18}. For this purpose, it is convenient to introduce the cumulant generating function \begin{align} Q({\boldsymbol\lambda};{\bf A})\equiv\lim_{t\to\infty}-\frac{1}{t}\ln\int P_{A_C,A_B}(Z_C,Z_B,t)\, {\rm e}^{-\lambda_CZ_C-\lambda_BZ_B}\, dZ_C\, dZ_B \text{,} \label{CGF} \end{align} where ${\boldsymbol\lambda}=(\lambda_C,\lambda_B)$ are the so-called counting parameters and the macroscopic affinities are written in vectorial notation ${\bf A}=(A_C,A_B)$. As a consequence of the fluctuation theorem~(\ref{FT}), the cumulant generating function obeys the following symmetry relation \begin{align} Q({\boldsymbol\lambda};{\bf A})=Q({\bf A}-{\boldsymbol\lambda};{\bf A}) \text{.} \label{eq_symmetric_relation_of_FT} \end{align} Now, the mean currents and the diffusivities can be obtained by taking the successive derivatives of the generating function~(\ref{CGF}) with respect to the counting parameters: \begin{align} & J_{\alpha}({\bf A})=\left.\frac{\partial Q({\boldsymbol\lambda};{\bf A})}{\partial\lambda_{\alpha}}\right\vert_{{\boldsymbol\lambda}={\bf 0}} \text{,} \\ & D_{\alpha\beta}({\bf A})=-\frac{1}{2}\left.\frac{\partial^2Q({\boldsymbol\lambda};{\bf A})}{\partial\lambda_{\alpha}\partial\lambda_{\beta}}\right\vert_{{\boldsymbol\lambda}={\bf 0}} \text{,} \label{dfn-D} \end{align} for $\alpha,\beta=C,B$. 
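The symmetry relation~(\ref{eq_symmetric_relation_of_FT}) can be checked explicitly on the simplest example of a single bidirectional Poisson channel with rates $W_+$ and $W_-$, for which $Q(\lambda)=W_+(1-{\rm e}^{-\lambda})+W_-(1-{\rm e}^{\lambda})$ and $A=\ln(W_+/W_-)$; the following toy verification (not the full transistor model) illustrates the identity $Q(\lambda)=Q(A-\lambda)$:
\begin{verbatim}
import numpy as np

Wp, Wm = 3.0, 0.5      # hypothetical forward/backward transition rates
A = np.log(Wp / Wm)    # affinity of the channel

def Q(lam):
    # cumulant generating function of a bidirectional Poisson channel
    return Wp * (1 - np.exp(-lam)) + Wm * (1 - np.exp(lam))

lam = np.linspace(-1.0, A + 1.0, 201)
assert np.allclose(Q(lam), Q(A - lam))   # Q(lambda) = Q(A - lambda)
\end{verbatim}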
Furthermore, we may expand the mean currents in a power series of the affinities as \begin{align} J_{\alpha}=\sum_{\beta}L_{\alpha,\beta}A_{\beta}+\frac{1}{2}\sum_{\beta,\gamma}M_{\alpha,\beta\gamma}A_{\beta}A_{\gamma}+\cdots \end{align} in terms of the response coefficients defined by \begin{align} & L_{\alpha,\beta}=\left.\frac{\partial J_{\alpha}}{\partial A_{\beta}}\right\vert_{{\bf A}={\bf 0}}=\left.\frac{\partial^2 Q({\boldsymbol\lambda};{\bf A})}{\partial\lambda_{\alpha}\partial A_{\beta}}\right\vert_{{\boldsymbol\lambda}={\bf A}={\bf 0}} \text{,} \\ & M_{\alpha,\beta\gamma}=\left.\frac{\partial^2 J_{\alpha}}{\partial A_{\beta}\partial A_{\gamma}}\right\vert_{{\bf A}={\bf 0}}=\left.\frac{\partial^3 Q({\boldsymbol\lambda};{\bf A})}{\partial\lambda_{\alpha}\partial A_{\beta}\partial A_{\gamma}}\right\vert_{{\boldsymbol\lambda}={\bf A}={\bf 0}} \text{.} \end{align} The coefficients $L_{\alpha,\beta}$ characterize the linear response properties and the coefficients $M_{\alpha,\beta\gamma}$ the nonlinear response properties of the currents at second order in the affinities. The coefficients of higher orders can also be introduced \cite{AG07JSM,BG18}. If we take the derivatives of the symmetry relation Eq.~(\ref{eq_symmetric_relation_of_FT}) with respect to $\lambda_\alpha$ and $A_\beta$, and set ${\boldsymbol\lambda}={\bf 0}$ and ${\bf A}={\bf 0}$, we obtain the fluctuation-dissipation relations \begin{align} L_{\alpha,\beta}= D_{\alpha\beta}({\bf A}={\bf 0}) \label{FDR} \end{align} and the Onsager reciprocal relations \begin{align} L_{\alpha,\beta}=L_{\beta,\alpha} \text{,} \label{ORR} \end{align} as a consequence of the symmetry $D_{\alpha\beta}=D_{\beta\alpha}$ resulting from the definition~(\ref{dfn-D}) of the diffusivities. If we take a further derivative of the symmetry relation~(\ref{eq_symmetric_relation_of_FT}) with respect to $A_\gamma$ before setting ${\boldsymbol\lambda}={\bf 0}$ and ${\bf A}={\bf 0}$, we find that \begin{align} M_{\alpha,\beta\gamma}=\left(\frac{\partial D_{\alpha\beta}}{\partial A_{\gamma}}+\frac{\partial D_{\alpha\gamma}}{\partial A_{\beta}}\right)_{{\bf A}={\bf 0}} \text{,} \label{M-dDdA} \end{align} giving the nonlinear response coefficient $M_{\alpha,\beta\gamma}$ in terms of the first responses of the diffusivities around equilibrium. The relations~(\ref{M-dDdA}) as well as the Onsager reciprocal relations~(\ref{ORR}) find their origin in the microreversibility underlying the fluctuation theorem for currents \cite{S92,AGMT09,EHM09,CHT11,S12,G13}. \begin{figure*}[h] \begin{minipage}[t]{0.99\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_6.pdf}} \end{minipage} \caption{Mean charge currents versus one affinity, with the other set to zero: (a) The {\it Collector} current $J_C$ versus the {\it Collector} affinity $A_C$; (b) the {\it Base} current $J_B$ versus the {\it Base} affinity $A_B$; (c) the {\it Collector} (solid line) and {\it Base} (dashed line) currents versus the affinity of the other reservoir. The asterisks are the numerical data from the simulation. The lines show the polynomials obtained from Lagrange interpolations using the data points. From these Lagrange polynomials, the first partial derivatives around the equilibrium point $(A_C=0,A_B=0)$ can be estimated, with the approximate values given in Table~\ref{tab_lin}. The root-mean-square errors on the data points are evaluated to be $\sigma_{J_C}\simeq 0.0020$ and $\sigma_{J_B}\simeq 0.0021$.
The simulations were carried out with the time step $dt=0.05$ and $10^9$ iterates for every data point.} \label{fig6} \end{figure*} \begin{table*}[h] \caption{The numerical values of the quantities used in the fluctuation-dissipation and the Onsager reciprocal relations.} \vskip 0.3 cm \begin{tabular}{|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{4cm}||>{\centering\arraybackslash}m{3cm}|} \hline $L_{\alpha,\beta}$ & $\left.D_{\alpha\beta}\right\vert_{(0,0)}$ & $L_{\alpha,\beta}-\left.D_{\alpha\beta}\right\vert_{(0,0)}$ \bigstrut \\ \hline $\left.\frac{\partial J_C}{\partial A_C}\right\vert_{(0,0)}=93.106\pm 0.019$ & $\left.D_{CC}\right\vert_{(0,0)}=92.991\pm 1.039$ & $0.115$ \bigstrut \\ \hline $\left.\frac{\partial J_C}{\partial A_B}\right\vert_{(0,0)}=-56.288\pm 0.019$ & $\left.D_{CB}\right\vert_{(0,0)}=-56.343\pm 0.488$ & $0.055$ \bigstrut \\ \hline $\left.\frac{\partial J_B}{\partial A_C}\right\vert_{(0,0)}=-56.303\pm 0.020$ & $\left.D_{BC}\right\vert_{(0,0)}=-56.343\pm 0.488$ & $0.040$ \bigstrut \\ \hline $\left.\frac{\partial J_B}{\partial A_B}\right\vert_{(0,0)}=112.603\pm 0.020$ & $\left.D_{BB}\right\vert_{(0,0)}=113.158\pm 0.487$ & $-0.555$ \bigstrut \\ \hline \end{tabular} \label{tab_lin} \end{table*} \begin{figure*} \begin{minipage}[t]{0.99\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_7.pdf}} \end{minipage} \caption{The mean charge currents as a function of the affinities $A_B$ and $A_C$: (a) The current $J_C$ from the \textit{Collector} to the BJT; (b) the current $J_B$ from the \textit{Base} to the BJT. The asterisks are the numerical data points from the simulation. The surfaces are obtained from Lagrange interpolation using the data points. Furthermore, the data points are used to obtain the second derivatives $\partial^2J_{\alpha}/\partial A_{\beta}\partial A_{\gamma}\vert_{(0,0)}$ around the equilibrium point $(A_C=0,A_B=0)$, as explained in Appendix~\ref{App:num}. The numerical values of these second derivatives are given in Table~\ref{tab_nonlin}. The simulations were carried out with the time step $dt=0.05$ and $10^9$ iterates for every data point.} \label{fig7} \end{figure*} \begin{figure*} \begin{minipage}[t]{0.99\hsize} \resizebox{1.0\hsize}{!}{\includegraphics{Fig_8.pdf}} \end{minipage} \caption{The diffusivities $D_{\alpha\beta}$ versus one affinity $A_{\gamma}$, the other affinity being set equal to zero. The numerical data points are plotted together with the error bars, and the dashed lines give the Lagrange polynomial interpolations of the data points. These interpolations provide the first derivatives $\partial D_{\alpha\beta}/\partial A_{\gamma}\vert_{(0,0)}$ at the equilibrium point $(A_C=0,A_B=0)$. Their numerical values are given in Table~\ref{tab_nonlin}.
The simulations were carried out with the time step $dt=0.05$, the total time $t=2500$, and the statistics of $5\times 10^4$ trajectories for every data point.} \label{fig8} \end{figure*} \begin{table*} \caption{The numerical values of the quantities used in the nonlinear transport relations~(\ref{M-dDdA}).} \vskip 0.3 cm \begin{tabular}{|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{4cm}|>{\centering\arraybackslash}m{4cm}||>{\centering\arraybackslash}m{3cm}|} \hline $M_{\alpha,\beta\gamma}$ & $R_{\alpha\beta,\gamma}$ & $R_{\alpha\gamma,\beta}$ & $M_{\alpha,\beta\gamma}-R_{\alpha\beta,\gamma}-R_{\alpha\gamma,\beta}$ \bigstrut \\ \hline $\left.\frac{\partial^2 J_C}{\partial A_C^2}\right\vert_{(0,0)}=-67.388\pm 0.620$ & $\left.\frac{\partial D_{CC}}{\partial A_C}\right\vert_{(0,0)}=-33.642\pm 9.897$ & $\left.\frac{\partial D_{CC}}{\partial A_C}\right\vert_{(0,0)}=-33.642\pm 9.897$ & $-0.104$ \bigstrut \\ \hline $\left.\frac{\partial^2 J_C}{\partial A_B^2}\right\vert_{(0,0)}=-45.325\pm 0.620$ & $\left.\frac{\partial D_{CB}}{\partial A_B}\right\vert_{(0,0)}=-22.474\pm 4.639$ & $\left.\frac{\partial D_{CB}}{\partial A_B}\right\vert_{(0,0)}=-22.474\pm 4.639$ & $-0.377$ \bigstrut \\ \hline $\left.\frac{\partial^2 J_C}{\partial A_C\partial A_B}\right\vert_{(0,0)}=68.747\pm 0.097$ & $\left.\frac{\partial D_{CC}}{\partial A_B}\right\vert_{(0,0)}=47.409\pm 9.900$ & $\left.\frac{\partial D_{CB}}{\partial A_C}\right\vert_{(0,0)}=20.992\pm 4.642$ & $0.346$ \bigstrut \\ \hline $\left.\frac{\partial^2 J_B}{\partial A_C^2}\right\vert_{(0,0)}=42.064\pm 0.667$ & $\left.\frac{\partial D_{CB}}{\partial A_C}\right\vert_{(0,0)}=20.992\pm 4.642$ & $\left.\frac{\partial D_{CB}}{\partial A_C}\right\vert_{(0,0)}=20.992\pm 4.642$ & $0.080$ \bigstrut \\ \hline $\left.\frac{\partial^2 J_B}{\partial A_B^2}\right\vert_{(0,0)}=90.066\pm 0.665$ & $\left.\frac{\partial D_{BB}}{\partial A_B}\right\vert_{(0,0)}=45.068\pm 4.644$ & $\left.\frac{\partial D_{BB}}{\partial A_B}\right\vert_{(0,0)}=45.068\pm 4.644$ & $-0.070$ \bigstrut \\ \hline $\left.\frac{\partial^2 J_B}{\partial A_C\partial A_B}\right\vert_{(0,0)}=-44.777\pm 0.107$ & $\left.\frac{\partial D_{CB}}{\partial A_B}\right\vert_{(0,0)}=-22.474\pm 4.639$ & $\left.\frac{\partial D_{BB}}{\partial A_C}\right\vert_{(0,0)}=-22.330\pm 4.630$ & $0.027$ \bigstrut \\ \hline \end{tabular} \label{tab_nonlin} \end{table*} \subsection{Numerical test of the linear transport properties} In this subsection, we focus on the numerical test of the fluctuation-dissipation relations~(\ref{FDR}) and the Onsager reciprocal relation~(\ref{ORR}) for $\alpha,\beta=C,B$. Here, we use the methods given in Appendix \ref{App:num} for the numerical evaluation of derivatives and their error analysis. The evaluation of the linear response coefficients relies on the determination of the mean currents as a function of the affinities. To achieve this evaluation, we have computed the mean currents for several values of the affinities, as shown in Fig.~\ref{fig6}. We have used the Lagrange interpolation method to obtain one-variable polynomials approximating $J_C(A_C,A_B= 0)$, $J_C(A_C= 0,A_B)$, $J_B(A_C,A_B= 0)$, and $J_B(A_C= 0,A_B)$ based on the numerical data plotted in Fig.~\ref{fig6}. Subsequently, the linear response coefficients can be computed by taking the first partial derivatives of the Lagrange polynomials at the equilibrium point $(A_C=0,A_B=0)$. Their numerical values are given in the first column of Table~\ref{tab_lin}. 
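The interpolation step is standard; the following sketch (with hypothetical data values) shows how a linear response coefficient such as $L_{C,B}$ can be extracted by differentiating the Lagrange interpolating polynomial at the equilibrium point:
\begin{verbatim}
import numpy as np

A_vals = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])    # affinities A_B (hypothetical)
J_vals = np.array([11.3, 5.7, 0.0, -5.6, -11.1])  # simulated J_C (hypothetical)

# a degree-(n-1) fit through n points is the Lagrange interpolating polynomial
coeffs = np.polyfit(A_vals, J_vals, deg=len(A_vals) - 1)
L_CB = np.polyval(np.polyder(coeffs), 0.0)        # estimate of dJ_C/dA_B at (0,0)
\end{verbatim}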
These computed coefficients already confirm that the Onsager reciprocal relation $L_{C,B}=L_{B,C}$ is satisfied within the numerical accuracy. Furthermore, the equilibrium values of the diffusivities are computed using Eqs.~(\ref{D_CC})-(\ref{D_CB}), giving the values in the second column of Table~\ref{tab_lin}. The differences between the linear response coefficients and the diffusivities are reported in the third column of Table~\ref{tab_lin}, showing that the fluctuation-dissipation relations~(\ref{FDR}) are also satisfied within the numerical accuracy. \subsection{Numerical test of the nonlinear transport properties} The numerical values of the charge currents $J_C$ and $J_B$ are computed for different values of the affinities $A_C$ and $A_B$ in order to construct the two-variable functions $J_C(A_C,A_B)$ and $J_B(A_C,A_B)$ using two-dimensional Lagrange interpolations, as shown in Fig.~\ref{fig7}. The values of the second derivatives at the equilibrium point $(A_C=0,A_B=0)$, \begin{align} \left.\frac{\partial^2 J_{\alpha}}{\partial A_{\beta}\partial A_{\gamma}}\right\vert_{(0,0)} \qquad\mbox{for} \qquad \alpha,\,\beta,\,\gamma=C,\,B, \end{align} are thus numerically evaluated in order to determine the nonlinear response coefficients $M_{\alpha,\beta\gamma}$, using the numerical method explained in Appendix \ref{App:num}. On the other hand, the diffusivities $D_{\alpha\beta}$ are again computed using Eqs.~(\ref{D_CC})-(\ref{D_CB}), but now for the transistor driven away from equilibrium. They are plotted in Fig.~\ref{fig8} as functions of the affinities. From these interpolations, the derivatives of the diffusivities with respect to the affinities \begin{align} R_{\alpha\beta,\gamma}\equiv\left.\frac{\partial D_{\alpha\beta}}{\partial A_{\gamma}}\right\vert_{(0,0)} \qquad\mbox{for} \qquad \alpha,\,\beta,\,\gamma=C,\,B \end{align} can also be evaluated numerically at the equilibrium point $(A_C=0,A_B=0)$. The results for the quantities $M_{\alpha,\beta\gamma}$ and $R_{\alpha\beta,\gamma}$ are given in Table~\ref{tab_nonlin}, where we calculate the differences $M_{\alpha,\beta\gamma}-R_{\alpha\beta,\gamma}-R_{\alpha\gamma,\beta}$, testing the validity of the prediction~(\ref{M-dDdA}) of the fluctuation theorem beyond the linear transport properties. We see that these differences are smaller than the numerical errors, in agreement with the prediction. \section{Conclusion and Perspectives} \label{sec:conclude} Using a spatially extended stochastic description of charge transport in bipolar $n$-$p$-$n$ junction transistors, we have shown in this paper that a fluctuation theorem holds for the two electric currents that are coupled together in the double junction of the transistor. We have also shown that, as a corollary of the fluctuation theorem for the currents, nonlinear transport generalizations of the fluctuation-dissipation and Onsager reciprocal relations are satisfied in the transistor. In particular, we have verified in detail that the second-order nonlinear response coefficients of the currents are related to the first-order responses of the diffusivities, as predicted by theory~\cite{AG04,AG07JSM,BG18}. These results are based on stochastic partial differential equations describing the diffusion of electrons and holes, as well as their generation and recombination. These stochastic diffusion-reaction equations are coupled to the Poisson equation for the electric potential and they obey local detailed balance. The scheme is consistent with the laws of electricity, thermodynamics, and microreversibility.
The stochastic process is driven out of equilibrium by boundary conditions due to the voltages applied to the reservoirs in contact with the three ports of the transistor. Under these conditions, the transistor hosts a nonequilibrium steady state manifesting highly nonlinear transport properties. The key point raised in this paper is that, besides their immense technological importance, transistors can be used to address the fundamental issue of microreversibility in nonequilibrium statistical physics. The one-variable fluctuation theorem has already been experimentally investigated in linear $RC$ electric circuits \cite{GC05,JGC08}. Our previous paper~\cite{GG18} has shown that the one-variable fluctuation theorem can be studied in nonlinear devices such as diodes. In transistors, the experimental test of the two-variable fluctuation theorem can also be envisaged, either by the direct measurement of current fluctuations, or by testing its consequences, namely, the time-reversal symmetry relations generalizing the fluctuation-dissipation and Onsager reciprocal relations to the nonlinear transport properties. Such tests would require accurate noise measurements with large enough statistics. In this way, these symmetry relations, which find their origin in the fundamental law of microreversibility, could be tested experimentally in common devices of modern technology. \vskip 1 cm \section*{Acknowledgments} The authors thank Sergio Ciliberto for stimulating discussions. Financial support from the China Scholarship Council under the Grant No. 201606950037, the Universit\'e libre de Bruxelles (ULB), and the Fonds de la Recherche Scientifique - FNRS under the Grant PDR T.0094.16 for the project ``SYMSTATPHYS" is acknowledged.
\section{Introduction} Observations of galaxies reveal that they evolve over cosmic time from smaller, bluer, more irregular star-forming galaxies at higher redshifts to larger, redder, more elliptical galaxies in the local universe (e.g., \citealt{Glazebrook1995,Lilly1995,Giavalisco1996}). Additionally, the bimodality of galaxy properties such as color, mass, and star formation rate at low redshift implies that galaxies are quenching, or shutting down their star formation, in the local universe as well (e.g., \citealt{Schawinski2009,Masters2010,Weigel2017}). In lower mass (log$(M_*/M_{\odot}) < 10.5$) galaxies, galaxy evolution, i.e., changes in the sizes, structures, and star formation properties of galaxies, is largely driven by the accretion of gas and/or the prevention of this gas from forming stars (\citealt{Robotham2014}). Many different processes can drive this evolution, from internal processes that are dependent on the galaxy properties, to external processes, which are related to the surroundings of the galaxy. Examples of internal processes include feedback from active galactic nuclei (AGNs; \citealt{Croton2006,fabian12,Heckman2014}), star formation driven outflows (\citealt{Rupke2018} and references therein), and morphological quenching due to structures such as bars or stellar bulges (\citealt{Sheth2005}). External processes include galaxy interactions with the hot intracluster medium that remove or heat gas (\citealt{Gunn1972}), `cold flow' accretion from the cosmic web (\citealt{Dekel2009}), and galaxy mergers (\citealt{Silk1998,DiMatteo2005,Kaviraj2013}). While the current $\Lambda$CDM framework for structure formation in the universe points to the importance of mergers for assembling dark matter halos (\citealt{White1978,White1991,Cole2008}), the relative contribution of mergers to galaxy evolution through processes such as star formation, AGN activity, and/or morphological transformation remains unclear. This uncertainty stems largely from the difficulty of building large, unambiguous samples of merging galaxies. Galaxy mergers are inherently difficult to identify; they persist for $\sim$few Gyr and they have a diversity of identifying characteristics that vary with merger stage, mass ratio, gas fraction, orbital parameters, and other merger initial conditions. The difficulty of identifying merging galaxies also contributes to the uncertainty in the merger rate ($R_{\mathrm{merg}}$), which is a key measurement for quantifying the role of mergers in galaxy evolution and comparing observations to simulations (e.g., \citealt{Lopez-Sanjuan2008}). The merger rate can either be measured directly from simulations or empirically, using the observed merger fraction and assuming a merger `observability' timescale (\citealt{Lotz2011}). Both techniques show large scatter between different estimates of the merger rate; semi-analytic models and hydrodynamical simulations have discrepancies of about an order of magnitude (see \citet{Hopkins2010} and references therein), while observations have also not converged, due to uncertainties in merger timescales and the completeness of the different methodologies. Recently, however, \citet{Mantha2018} have demonstrated that different empirical estimates of the merger rate can be brought into agreement. This work demonstrates the importance of careful calibration of the completeness of the merger identification methodology and the observability timescale.
Clean and complete samples of merging galaxies are therefore needed to address the contributions of mergers to evolutionary processes in galaxies and to reduce systematic uncertainties in the galaxy merger rate. This in turn necessitates a thorough understanding of the limitations and observability timescale of the technique used to identify merging galaxies. A variety of imaging techniques exist to identify merging galaxies, all of which are susceptible to their own biases. These often rely upon individual imaging tools, or predictors, such as the $Gini-M_{20}$ methodology or the asymmetry of the galaxy light. One approach to overcome these biases is to utilize simulations of merging galaxies to better understand the shortcomings of individual tools and to characterize the observability timescales of these methods. For example, \citet{Lotz2008,Lotz2010a,Lotz2010b} use simulations of merging galaxies to measure the length of time that a major merger is observable by the $Gini-M_{20}$ and asymmetry metrics. They find $0.3 - 0.5$ Gyr observability timescales, meaning that merging galaxies are visible as mergers using these techniques for only a short time during the $\sim$few Gyr duration of the merger. Another strategy is to combine the predictors to create a single classification tool that dramatically lengthens the observability timescale by capitalizing on the strengths of the individual methods (e.g., \citealt{Goulding2018,Snyder2019}). In \citet{Nevin2019} (henceforth N19), we pursue both of these approaches and utilize \texttt{GADGET-3/SUNRISE} simulated galaxies to build a merger identification technique. This technique combines seven imaging predictors to create one more accurate and precise classifier that incorporates the strength of all of these predictors, lengthening the observability timescale to $>$2 Gyr. Using this approach to simulate merging galaxies, we achieve high temporal resolution (relative to cosmological simulations), which enables us to construct a more complete picture of the different stages of a merger. The suite of simulated mergers also provides a known sample of merging and nonmerging galaxies from which we can understand the limitations of the identification technique before it is applied. Recent years have witnessed an increase in the quantity and quality of integral field spectroscopy (IFS) data sets. With these advancements, kinematic predictors provide a promising addition to imaging predictors in the merger identification toolkit. Kinematic predictors are able to directly probe the dynamical histories of galaxies by tracing baryonic \textit{and} dark matter (\citealt{Glazebrook2013}). Disturbances in the stellar kinematics are dynamically long-lived and can identify a merger long after the imaging signatures have faded. For instance, morphological disturbances like tidal tails can fade on a $\sim$500 Myr timescale following final coalescence and are faint compared to the light of the galaxy (e.g., \citealt{Hung2014,Wen2016}), whereas kinematic disturbance in the stars of a galaxy can persist for longer (up to $\sim$Gyr after final coalescence; \citealt{Hung2016}). Kinematic predictors may additionally clear up ambiguities in imaging.
For instance, some clumpy star-forming galaxies appear to be mergers in imaging due to their disturbed morphologies (\citealt{Miralles-Caballero2011,Petty2014}), yet galaxies with clumpy morphologies can actually be nonmerging spiral galaxies with clumps of star formation in their centers or in their spiral arms (\citealt{Alonso-Herrero2006,Haan2011}). Kinematics have shown promise as an additional tool to determine if a star-forming galaxy is disk-like (e.g., \citealt{White2017}). This type of clumpy star-forming galaxy is even more abundant at intermediate and high redshifts, where a higher fraction of galaxies are expected to be actively merging, yet many isolated (nonmerging) galaxies are also inherently clumpy (e.g., \citealt{Guo2015}). In addition to their clumpy and rapidly evolving morphologies, high redshift galaxies also have distinct kinematic features such as high velocity dispersions regardless of whether they are actively merging or isolated (e.g., \citealt{Law2012a,Law2012b}). The decreasing spatial resolution and surface brightness dimming of high redshift galaxies also confound the identification of mergers. Since high redshift galaxies present a host of additional complications, in this work we focus on local galaxies in order to develop the groundwork for a method that could eventually be extended to the more distant universe. Like every other merger identification tool, kinematic predictors also have their own set of ambiguities and limitations. For instance, in gas-rich mergers, disks are able to survive the merger and these recently merged galaxies can masquerade as isolated disk galaxies (e.g., \citealt{Robertson2006}). \citet{Hung2015} find that relying upon kinematics alone to classify a sample of ULIRGs identifies many merging galaxies as isolated disks and would provide a false-negative merger identification for up to 50\% of ULIRGs. Additionally, the identification technique depends strongly on the merger stage and the choice of kinematic predictor. Other work confirms that some mergers with highly disturbed visual morphology exhibit a distinct lack of disturbance in the stellar kinematics (\citealt{Bellocchi2013,Hung2016}). It is therefore important to probe the kinematics of merging galaxies using simulations in order to understand the biases and limitations of these tools before applying them to real galaxies. There is currently a wealth of work dedicated to the imaging approach to identifying merging galaxies from large surveys. While there are many detailed case studies of the kinematics of individual local mergers (e.g., \citealt{Dasyra2006,Piqueras-Lopez2012}), there is a lack of detailed kinematic studies of statistically significant samples of local mergers. Recent years have brought a revolution in increasingly capable IFS surveys, creating opportunities to identify merging galaxies using kinematic signatures. Surveys such as ATLAS-3D (\citealt{Cappellari2011}), CALIFA (\citealt{Sanchez2012}), SAMI (\citealt{Croom2012}), MaNGA (\citealt{Bundy2015}), and HECTOR (\citealt{Bryant2016}) offer a promising avenue to study the spatially-resolved spectral properties of an unprecedented number of galaxies. Here, we focus on the nearly complete Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey. MaNGA is an IFS survey of $>$10,000 local galaxies (with a median redshift $z\sim0.03$) with a spectral resolution of $R\sim2000$ and a spatial resolution of $1-2$ kpc (\citealt{Bundy2015}).
One of MaNGA's secondary scientific goals is to help disentangle the evolutionary pathways of galaxies and to incorporate simulations of merging galaxies with observations. It is thus uniquely well-suited for this project, where the goal is to create a merger classification technique from the kinematics of simulated galaxies, which we will then apply to the kinematics of the $>$10,000 galaxies in the MaNGA survey in order to identify mergers in future work. This paper is organized as follows: In \S \ref{methods4} we review the \texttt{GADGET-3/SUNRISE} simulations from N19, describe the process of creating mock stellar kinematic maps from the synthetic spectra of the galaxy merger simulations, introduce the kinematic predictors, and review the linear discriminant analysis (LDA) technique used in N19 and in this work. In \S \ref{results4} we describe the results of the LDA classification, including the coefficients of the LDA, the observability timescales, and the accuracy and precision of the method. In \S \ref{discuss4} we discuss the LDA coefficients in the context of previous work on mergers, describe how the classification changes with mass ratio, and examine the performance of the kinematic classification technique in the context of other tools and statistical methods. We present our conclusions in \S \ref{conclusions4}. In this work we focus on creating the kinematic classification from simulated galaxies. In future work we plan to apply the classification to galaxies in the MaNGA survey. A cosmology with $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, and $h = 0.7$ is assumed throughout. \section{Methods} \label{methods4} In order to construct a merger identification framework from the kinematics of simulated galaxies, we follow a detailed procedure to best mimic observations from the MaNGA survey. We introduce the galaxy merger simulations in \S \ref{simdeets} and describe the process for preparing mock kinematic maps from the simulated galaxies in \S \ref{mimic}. Finally, we introduce the kinematic predictors that we utilize in the kinematic classification in \S \ref{kinclass}. We dedicate several appendices to discussions that are informative but ancillary to the goals of this paper. We make the deliberate choice to extract the stellar kinematics from the \texttt{SUNRISE} spectra, as opposed to relying directly on particle velocities. We discuss the implications of this choice and compare the extracted stellar velocity and velocity dispersion maps to the inherent velocity of the simulation particles in \S \ref{scatter}. In \S \ref{scatter} we also discuss the effects of dust on the simulated observations. In \S \ref{noise} we include more details about adding noise to the mock spectra. In \S \ref{AGNprobs} we address AGN contamination and how we extract stellar kinematics from galaxies that host AGN. \subsection{\texttt{GADGET-3/SUNRISE} Overview} \label{simdeets} As in N19, we utilize \texttt{GADGET-3/SUNRISE} simulations of merging galaxies. \texttt{GADGET-3} (\citealt{Springel2003,Springel2005}) is a smoothed particle hydrodynamics (SPH) and N-body code that models processes such as radiative heating, radiative cooling, star formation, supernova feedback, and the multi-phase interstellar medium (ISM) using sub-resolution prescriptions. \texttt{GADGET-3} also includes supermassive black hole (SMBH) accretion as well as AGN feedback (this is achieved by coupling 5\% of the accreted luminosity to the gas as thermal energy).
\texttt{GADGET} has been used for many different astrophysical applications, including wide use in studies of merging galaxies (e.g., \citealt{DiMatteo2005,cox06,Hopkins2006,Robertson2006,Hopkins2008,Blecha2011,Blecha2013b,Hopkins2013a,Hopkins2013b}). We present the five galaxy merger simulations and the matched isolated simulations in Table \ref{simulations}. The framework for these simulations is established in \citet{Blecha2018}, and the simulations themselves are presented in N19. Three of the simulations are major mergers (where the mass ratio, $q$, of the progenitors is greater than $q = 0.25$, or 1:4; \citealt{Rodriguez-Gomez2015,Nevin2019}) and two of the simulations are minor mergers. The major mergers have mass ratios of 1:2, 1:3, and 1:3. We define the gas fraction of these simulations as $f_{\mathrm{gas}} = M_{\mathrm{gas,disk}}/(M_{\mathrm{gas,disk}} + M_{\mathrm{*,disk}})$. The 1:2 and 1:3 mass ratio major mergers have a relatively high gas fraction of 0.3 and one of the 1:3 mass ratio major mergers has a relatively low gas fraction of 0.1. We verify that the different gas fractions of the simulations (0.1 and 0.3) cover the full range in gas fractions of the MaNGA galaxies. The mean gas fraction in MaNGA is defined by \citet{Barrera-Ballesteros2018} as: $$\mu_{\mathrm{gas}} = \frac{\Sigma_{\mathrm{gas}}}{\Sigma_{\mathrm{gas}}+\Sigma_{\mathrm{*}}}$$ where $\Sigma_{\mathrm{gas}}$ is the gas mass surface density and $\Sigma_*$ is the stellar mass surface density. The gas mass and stellar mass surface densities are derived from the Balmer decrement and stellar template fitting, respectively. \citet{Barrera-Ballesteros2018} find that the mean gas fraction for the MaNGA sample ranges from 0.16 to 0.32. A $f_{\rm{gas}}$ value of 0.1 is therefore below the mean for MaNGA galaxies and thus relatively gas poor, while $f_{\rm{gas}} = 0.3$ is at the top of the range of mean values for the sample and relatively gas rich. These simulations are named for their mass ratio and gas fraction; for instance, the gas rich 1:2 mass ratio major merger is q0.5\_fg0.3, the gas rich 1:3 mass ratio major merger is q0.333\_fg0.3, and the gas poor 1:3 mass ratio major merger is q0.333\_fg0.1. All of the major merger progenitors have a bulge-to-total (B/T) mass ratio of 0, meaning that they are a pure disk initially. Both of the progenitor galaxies of the minor mergers have a B/T ratio of 0.2 and both are gas rich. These simulations are q0.2\_fg0.3\_BT0.2, which is the 1:5 mass ratio minor merger, and q0.1\_fg0.3\_BT0.2, the 1:10 mass ratio minor merger. We build the matched sample of isolated galaxy simulations for each merger simulation from two sources. First, we use a stand-alone sample of isolated galaxies that are matched for mass and gas fraction to each of the simulations. Some simulations have more than one matched isolated galaxy, but for the case where there is only one isolated galaxy, it is matched to the mass of the larger merging galaxy from the corresponding merger simulation. Second, we define snapshots of each simulated merger that fall before first pericentric passage or $>0.5$ Gyr after final coalescence as isolated galaxies. We refer to the isolated galaxies that are from the snapshots before first pericentric passage as `pre-merger' isolated galaxies and the snapshots that happen $>0.5$ Gyr after final coalescence as `post-merger' isolated galaxies. This distinction is useful because the properties of these two populations differ.
\begin{table*} \centering \begin{tabular}{c|cccc} Simulation& Mass Ratio& Gas Fraction &Stellar Mass of Primary & Matched Isolated Galaxies \\ & & &[10$^{10}$ M$_{\odot}$] & \\ \hline q0.5\_fg0.3&1:2 & 0.3 & 3.9 & m0.5\_fg0.3, m1\_fg0.3\\ q0.333\_fg0.3 & 1:3 & 0.3 & 3.9& m1\_fg0.3 \\ q0.333\_fg0.1 & 1:3 & 0.1 & 4.7& m0.333\_fg0.1, m1\_fg0.1\\ q0.2\_fg0.3\_BT0.2 & 1:5 & 0.3 & 4.2 &m1\_fg0.3\_BT0.2 \\ q0.1\_fg0.3\_BT0.2 & 1:10 & 0.3 & 4.2 & m1\_fg0.3\_BT0.2 \\ \end{tabular} \caption{Key simulation parameters and matched isolated galaxies. The simulations are named for the mass ratio, gas fraction, and bulge-to-total mass ratio of the merging galaxies. For instance, q0.5\_fg0.3 is a 1:2 mass ratio merger where each progenitor galaxy has a gas fraction of 0.3 and an initial B/T ratio of 0. The stellar mass of the primary (more massive) galaxy is $3.9\times10^{10}$ M$_{\odot}$. The matched isolated galaxies are mass-matched to the merging galaxies and are named for which merging galaxy they are matched to (e.g., m0.5\_fg0.3 is matched to the smaller of the two merging galaxies in the q0.5\_fg0.3 merger). } \label{simulations} \end{table*} We couple \texttt{GADGET-3} with \texttt{SUNRISE} in order to directly compare the simulated galaxies with observations. \texttt{SUNRISE} is a 3D polychromatic Monte-Carlo dust radiative transfer (RT) code (\citealt{Jonsson2006,Jonsson2010}) that is used to model a wide range of isolated and merging galaxies (e.g., \citealt{Narayanan2010,hayward11,Blecha2013a,Snyder2013,Hayward2014}). The full details of the \texttt{SUNRISE} prescription are presented in \citet{Blecha2013b,Blecha2018} and N19. Briefly, \texttt{SUNRISE} performs Monte Carlo radiative transfer on a 3D adaptively-refined grid to compute the emission from stars, HII regions, and AGN. \texttt{SUNRISE} uses the STARBURST99 stellar population synthesis models (\citealt{Leitherer1999}) to calculate the age- and metallicity-dependent spectral energy distributions for each star particle. The treatment for dust includes dust self-absorption and thermal re-emission as well as polycyclic aromatic hydrocarbon (PAH) absorption and emission. We additionally include kinematic (Doppler) effects, which requires very high spectral resolution. Ultimately, \texttt{SUNRISE} calculates the emergent, attenuated resolved UV-to-IR spectra ($3300-6990$\AA, $\Delta \lambda = 0.3$\AA) for seven isotropically positioned viewing angles. We utilize the datacube of \texttt{SUNRISE} optical synthetic spectra from the seven isotropically positioned viewpoints from each merger snapshot to produce the mock datacubes. In N19, a `snapshot' is the \texttt{SUNRISE} image; in this work, we use the term `snapshot' to refer to the full datacube from a specific point in time. These snapshots occur at 50-100 Myr intervals during each merger, and we refer to them as early-stage, late-stage, and post-coalescence stage snapshots. We define these stages using the $r-$band images from N19. The early-stage mergers occur after first pericentric passage and have viewpoint-averaged stellar bulge separations $\Delta$x $\ge$ 10 kpc, late-stage mergers have separations 1 kpc $<$ $\Delta$x $<$ 10 kpc, and post-coalescence mergers, whose nuclei are no longer resolvable ($\Delta$x $\leq$ 1 kpc), extend until 0.5 Gyr after final coalescence. With a 50-100 Myr cadence for snapshots, we have 5-10 snapshots for each of these stages.
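As a concrete illustration of this bookkeeping, the following minimal Python sketch assigns a stage label from a hypothetical viewpoint-averaged bulge separation and time relative to final coalescence; the thresholds are those quoted above, and the function is an illustrative helper rather than part of our pipeline.
\begin{verbatim}
def merger_stage(sep_kpc, t_after_coalescence_gyr):
    """Illustrative stage label from the viewpoint-averaged stellar
    bulge separation (kpc) and the time after final coalescence
    (Gyr, negative before coalescence).  Snapshots before first
    pericentric passage are likewise treated as isolated, which
    would require orbital information not encoded here."""
    if t_after_coalescence_gyr > 0.5:
        return "post-merger isolated"
    if sep_kpc >= 10.0:
        return "early-stage"
    if sep_kpc > 1.0:
        return "late-stage"
    return "post-coalescence"   # nuclei unresolved (<= 1 kpc)
\end{verbatim}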
In total, there are $\sim$20 snapshots per simulation and seven viewpoints per snapshot, which amounts to 100-200 observations per merger simulation. We further discuss the importance of running RT and incorporating dust attenuation and scattering on the merger snapshots in Appendix \ref{scatter}; briefly, the stellar kinematic maps are affected by both the presence of dust and dust scattering. The implication is that for this type of kinematic analysis, it is important to use velocities derived directly from the RT product (synthetic spectra) as opposed to the original SPH particle velocities. \subsection{Preparing Mock MaNGA Kinematic Maps} \label{mimic} \begin{figure} \includegraphics[width=0.47\textwidth]{snap_40.png} \includegraphics[scale=0.66, trim=1.1cm 2.7cm 1.2cm 3.0cm, clip]{mosaic_fig_deg_ani_fg3_m12_180_view_5.png} \includegraphics[scale=0.66, trim=1.1cm 2.7cm 1.2cm 3.0cm, clip]{mosaic_fig_deg_ani_fg3_m12_240_view_5.png} \caption{Snapshots of images (left column), stellar velocity maps (middle column), and stellar velocity dispersion maps (right column) from different epochs of the q0.5\_fg0.3 simulation. The $r-$band image is the log-scaled full-resolution simulation prior to the mock-up process in order to show all of the features of the merger. The colorbar for the middle and right columns is in km s$^{-1}$. The spatial position for all panels is in arcsec and the stellar velocity and stellar velocity dispersion columns have the same spatial coverage. We include a snapshot that is an early-stage merger (first row), a late-stage merger (second row), and a post-coalescence merger (third row). The stellar kinematics change over the course of the merger. For instance, the stellar velocity map is distorted due to the superposition of two merging galaxies, while the velocity dispersion map undergoes a global enhancement with time.} \label{mosaic_fg3_m12} \end{figure} To produce stellar kinematics for our sample of simulated galaxies, we use the specifications of MaNGA to create a datacube of spectra and then we mimic the MaNGA Data Analysis Pipeline (DAP) to extract stellar kinematics to use in our kinematic classification. Examples of finalized `MaNGA-ized' stellar velocity and stellar velocity dispersion maps are presented in Figure \ref{mosaic_fg3_m12}. In this section, we describe how we mimic the specifications of MaNGA. This involves reducing the spatial and spectral resolution of the simulations to create a MaNGA-ized datacube, placing an appropriately sized fiber bundle over each galaxy, and fitting each spaxel with \texttt{ppxf} (a penalized pixel-fitting method from \citealt{Cappellari2004,Cappellari2017}) to obtain the velocity and velocity dispersion of the stars at each spatial position. SDSS-IV/MaNGA is an IFS survey that targets a sample of $>$10,000 nearby galaxies that are selected to span a wide range of environments and stellar masses (\citealt{Gunn2006,Smee2013,Bundy2015,Drory2015,Law2015,Blanton2017}). Spectra are obtained using the BOSS spectrograph on the 2.5m telescope at Apache Point Observatory (\citealt{Gunn2006}), which has a spectral resolution of $R\sim2000$. Fibers are bundled into integral field units (IFUs); MaNGA has five different fiber bundles, equipped with 19, 37, 61, 91, and 127 fibers (the largest fiber bundle is known as the `frankenbundle'); each individual fiber has a $2\farcs0$ diameter with $2\farcs5$ spacing between fibers (\citealt{Smee2013,Drory2015,Yan2016,Wake2017}). 
These fiber bundles range from $12\farcs5$ to $32\farcs5$ in diameter. MaNGA has a median point spread function (PSF) of $2\farcs5$, which roughly corresponds to a spatial resolution of 1-2 kpc. The primary sample of galaxies (which is 2/3 of the full sample) has coverage out to 1.5 times the $r-$band effective radius ($R_e$) and the secondary sample has coverage to 2.5 $R_e$ (\citealt{Yan2016,Wake2017}). The redshift range of the MaNGA survey is $0.01 \leq z \leq 0.15$. The most recent internal MaNGA product launch (MPL-9) includes 8000 unique galaxies, observed and reduced by the Data Reduction Pipeline (DRP; \citealt{Law2016}). The publicly available version is released as DR-15, which includes 4621 unique galaxies (\citealt{Aguado2019}). The derived properties (including the stellar kinematics) are produced by the Data Analysis Pipeline (DAP; \citealt{Westfall2019,Belfiore2019}) in the format of a single datacube per galaxy (\citealt{Yan2016calib}). The MaNGA team also creates and maintains Marvin, which is a useful tool for visualizing MaNGA data (\citealt{Marvin}). To create the mock datacubes, we begin with the \texttt{SUNRISE} synthetic spectra, which we extract at the median redshift of the MaNGA survey ($z = 0.03$). We also use the \texttt{SUNRISE} SDSS $r-$ and $g-$band images to construct the mock datacubes, since they are essential for certain steps of the process. We follow this procedure (which mirrors the MaNGA DAP whenever possible): \begin{enumerate} \item Convolve the datacube with the $2\farcs5$ MaNGA PSF. Here we use a $2\farcs5$ Gaussian kernel, which is a good approximation of the effective PSF of the MaNGA datacubes. A model of the effective PSF is automatically computed for each datacube as part of the MaNGA DRP, which convolves a simulated point source with the fiber footprint of a given set of observations, incorporating as-observed details of the seeing, transparency, differential atmospheric refraction, dithering, and other instrumental effects (\citealt{Law2016}). We briefly investigate the difference between using our simplified Gaussian kernel and the effective PSF model provided by the DRP, and find that there are small differences in the maps when the reconstructed PSF is used, in particular a slight increase in the spread of values in the velocity dispersion maps. This is to be expected, given that the reconstructed PSF does not have a perfectly Gaussian shape. However, the differences in the kinematic maps are minimal and do not cause significant differences in the classification. We define what it means for the classification to be `significantly different' in \S \ref{viewpoint}. Therefore, we conclude that using the $2\farcs5$ Gaussian PSF is adequate for this work. \item Rebin to match the spatial sampling ($0\farcs5$ spaxels) and spectral resolution ($R \sim 2000$) of MaNGA. The spectral sampling varies as a function of wavelength. \item Use a mock $g-$band image that is rebinned to the $0\farcs5$ spatial scale to mask all spaxels in the datacube that fall below a $g-$band S/N cutoff value of 1. Follow the procedure from N19 to convolve, rebin, and introduce noise characteristic of SDSS imaging to the mock $g-$band images to match the $0\farcs5$ spatial binning of the MaNGA cubes. Then find the average $g-$band S/N per spaxel and mask all spaxels that fall below an S/N cutoff value of 1. This procedure directly follows the MaNGA DAP (\citealt{Westfall2019}), which masks all spaxels using the same $g-$band S/N cutoff.
\item Use the MaNGA procedure to select which size of fiber bundle to use for each mock datacube and mask the spaxels that are external to this hexagonal footprint. We use \texttt{statmorph} (\citealt{Rodriguez-Gomez2019}) to measure the effective radius of the mock $r-$band images from N19. We then determine the smallest fiber bundle needed to cover each galaxy to 1.5 R$_e$ (this is how MaNGA's primary sample is defined). We select the smallest fiber bundle if the total angular extent of the galaxy (2$\times$1.5 R$_e$) is smaller than $12\farcs5$ and the largest fiber bundle if the angular size exceeds $32\farcs5$. \item We introduce noise to each spaxel to produce a datacube with noise and a sqrt(variance) datacube (from here on, `error datacube'). We first produce a typical noise spectrum that demonstrates how the noise trends with wavelength for MaNGA observations. We then normalize this noise spectrum using the $g-$band S/N value for each spaxel. The end result is a sqrt(variance), or error spectrum, which we use to introduce random noise to each spaxel in the datacube. The noisy spectra and the accompanying error spectra are the inputs to \texttt{ppxf}. More details of this process can be found in Appendix \ref{noise}. To verify that the S/N of the simulated spectra is representative of the MaNGA sample, we use the peak $g-$band S/N as a comparison statistic, which is the maximum value of the $g-$band S/N (per pixel) from a single galaxy observation. The peak $g-$band S/N ratio of a sample of MaNGA galaxies that span the full range of sizes, surface brightnesses, and stellar masses of the MaNGA sample ranges from 10 to 60, with a median of 25. The same statistic for the simulation suite ranges from 10 to 100, with a median of 30. In \S \ref{limitssnz}, we experiment with changing the S/N of simulated spectra, and investigate how this affects the classification. Since the MaNGA datacubes oversample the effective PSF, they also contain significant covariance in the errors between adjacent spaxels such that the S/N ratio of binned spectra does not increase as $\sqrt{N}$. This covariance is irrelevant for the fitting of individual spectra, but we account for it in our Voronoi binning by following the analytic approximation given by \citet{Law2016}, as we discuss below. \item After completing the masking steps, we further exclude regions that are background dominated. At this stage, we notice that the datacubes have `patchy' outskirts, or regions of low S/N data that are surrounded by masked regions. The MaNGA datacubes do not have this feature; instead, they exclude regions that can be characterized as `background dominated'. This patchiness does not affect the results of the classification; we correct it purely for cosmetic purposes. To do this, we mask spaxels where the $g-$band signal is less than 3$\sigma$ above the background value, where $\sigma$ is the standard deviation of the noise given above. This produces the desired effect, where the mask has a sharper cutoff, matching the appearance of the MaNGA cubes. \item Rebin spatially using a Voronoi binning scheme with $g-$band S/N of 10 (\citealt{Cappellari2003}). We create spatial bins that have a $g-$band S/N of 10, reproducing the procedure described in \citet{Westfall2019}. When a Voronoi bin contains more than one spaxel, the new spectrum is the masked average of all constituent spectra, while the error spectrum for that bin is determined by co-adding the error spectra.
It is important to account for covariance between neighboring spaxels in our Voronoi bin calculation. In order to avoid the computational cost of calculating the covariance matrix for all spaxels, we instead use the correction from \citet{Law2016}. The correction is an analytic function of the number of spaxels in a bin ($N_{\mathrm{bins}}$): $$n_{\rm{measured}}/n_{\rm{no\ covar}} = 1 + 1.62 \times \mathrm{log} (N_{\mathrm{bins}})$$ where $n_{\rm{measured}}$ is the corrected noise level, $n_{\rm{no\ covar}}$ is the co-added noise level computed without covariance, and $N_{\mathrm{bins}}$ is the number of spaxels in a bin. \end{enumerate} The final step of the creation of mock kinematic maps is to pass the Voronoi binned spectra through \texttt{ppxf} (\citealt{Cappellari2004,Cappellari2017}). \texttt{ppxf} is a penalized pixel-fitting method which assumes that a galaxy spectrum is a combination of stellar templates that are convolved with the line-of-sight velocity distribution (LOSVD). To run \texttt{ppxf}, we follow these steps from the DAP: \begin{itemize} \item Normalize the flux data so that the mean over all templates is unity. \item Mask the spectra to match the wavelength range of the \texttt{MILES-HC} library (3600-7400 \AA). \item Mask the emission lines using the DAP module StellarContinuumBitMask(). \item Use the 42-template \texttt{MILES-HC} spectral library to globally fit each datacube. The templates are first convolved to the spectral resolution of MaNGA.\footnote{This is a departure from the DAP. However, as noted in \citet{Westfall2019}, there is no mathematical difference between our approach and later subtracting the difference in resolution in quadrature from the \texttt{ppxf} result.} \item We use the `\texttt{NZT}', or non-zero template, iteration mode to fit all bins with \texttt{ppxf}. In this mode (which is also used in the DAP), we first fit the masked average of all spectra in the datacube and use this global fit to isolate the subset of templates allocated non-zero weights. This template subset is then used to individually fit each bin. \item Each fit iteration of \texttt{ppxf} uses an additive eighth-order Legendre polynomial and a Gaussian line-of-sight velocity distribution (LOSVD) with two moments. As in the DAP, due to limited spectral resolution, we do not solve for the higher order moments $h_3$ and $h_4$ (\citealt{Westfall2019}). \end{itemize} The final product of our MaNGA-izing procedure is the first two moments of the LOSVD, or a stellar velocity map and a stellar velocity dispersion map, both with associated error maps from the fit to the stellar continuum. \subsection{Preparing Kinematic Predictors} \label{kinclass} Here we define and describe the predictors extracted from the stellar kinematic maps. The goal is to create a set of kinematic predictors that adequately describe the different types of merger-induced kinematics in the velocity and velocity dispersion maps. To develop this kinematic identification tool, we use the stellar kinematics instead of the warm ionized gas kinematics (henceforth, `gas kinematics'). The stellar and gas kinematics trace different physical regions and processes in the merging galaxies. We select the stellar kinematics because they directly trace the assembly history of a galaxy's stellar population. On the other hand, the gas kinematics can be subject to a number of non-gravitational forces.
The stellar kinematics and the gas kinematics diverge in the presence of shocks, inflows, and/or outflows, all of which are processes that are not limited to merging galaxies. An analysis built on gas kinematics is a compelling direction for future work but is beyond the scope of this paper (e.g., see \citealt{Khim2020}).\footnote{Gas kinematics are not available for many MaNGA galaxies (since many are non-star-forming), but are easier to obtain than stellar kinematics for many high redshift galaxies and could be a more compelling direction to pursue in this context.} The kinematic predictors are based on previous work to identify merging galaxies from the stellar kinematics of observed and simulated galaxies. All of these predictors are sensitive to different orientations, merger stages, mass ratios, and/or gas fractions of merging galaxies. Our goal is to combine them into one LDA classification to best identify a variety of different types and epochs of merging galaxies. In total, we extract the following predictors (which are all introduced in Table \ref{tab:predictors}): $A$, $A_2$, $\Delta$PA, $\mathrm{v}_{\mathrm{asym}}$, $\sigma_{\mathrm{asym}}$, resid, $\lambda_{R_e}$, $\epsilon$, $\Delta x_V$, $\Delta x_\sigma$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, $\mu_{2,\sigma}$, $|\mu_{3,V}|$, $|\mu_{3,\sigma}|$, $\mu_{4,V}$, and $\mu_{4,\sigma}$. We include a brief definition for all predictors in Table \ref{tab:predictors} but focus the remainder of this section on the kinematic predictors that were selected by the random forest term selection technique described in \S \ref{RFR}: $A_2$, $\Delta$PA, resid, $\lambda_{R_e}$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, $\mu_{2,\sigma}$, $|\mu_{3,V}|$, $|\mu_{3,\sigma}|$, $\mu_{4,V}$, and $\mu_{4,\sigma}$. These terms are the most informative for identifying the merging galaxies, and we discuss them throughout the rest of the paper. We further describe the kinematic predictors that were not selected in Appendix \ref{predcont}.
\renewcommand{\arraystretch}{1.5} \begin{table*} \centering \begin{tabular}{l|l|l} Predictor Name&Description & Derivation \\ \hline $A$ &The weighted asymmetry in the position angle, & $A = \frac{\sum_i \delta\hat{\theta}_i}{2 N_{i,j}}w_{i,j}$\\ & which is calculated from the Radon profile & where $\hat{\theta}$ is the best fit kinematic position angle\\ \hline \textcolor{purple}{$A_2$} & The error-weighted asymmetry in the position angle & $A_2 = \sum_i \frac{\delta\hat{\theta}_i}{\sigma_{\delta\hat{\theta},i}}$ \\ \hline \textcolor{purple}{$\Delta$PA} & The difference between the global kinematic and & $\Delta \rm{PA} = |\rm{PA}_{\rm{kin}} - \rm{PA}_{\rm{img}}|$\\ & photometric position angles, which are measured & \\ &from \texttt{kinemetry} and the $r-$band image& \\ \hline $\sigma_{\mathrm{asym}}$ & Describes the degree of smoothness of the & $\sigma_{\mathrm{asym}} = < {\frac{\sum_{n=1}^5 k_{\mathrm{n}, \sigma}/5}{B_{1,v}}} >_r$ \\ & velocity dispersion map & which is the sum of the higher order \\ & &coefficients of the dispersion map from \texttt{kinemetry} \\ \hline $\mathrm{v}_{\mathrm{asym}}$ &The deviation of the velocity map & $\rm{v}_{\mathrm{asym}} = < {\frac{\sum_{n=2}^5 k_{\mathrm{n}, v}/4}{B_{1,v}}} >_r$\\ &from ordered rotation & the analogous sum for the velocity map, excluding the $k_{1,v}$ term\\ \hline \textcolor{purple}{resid} & The residual between the best fit \texttt{kinemetry} & $\mathrm{resid} = \frac{\sum_{i,j}^N |V_{*} - V_{\rm{model}}|}{N}$ \\ & model and the velocity map& \\ \hline \textcolor{purple}{$\lambda_{R_e}$} & The approximate spin parameter & $\lambda_{R_e}$ $= \frac{\sum_{n=1}^N F_n R_n |V_n|}{\sum_{n=1}^N F_n R_n \sqrt{V_n^2+\sigma_n^2}}$\\ \hline $\epsilon$ & Galaxy ellipticity &Measured using \texttt{statmorph} from the $r-$band imaging \\ \hline $\Delta x_V$ & The spatial distance between the center of the& The imaging center is measured from the $r-$band image and \\ & velocity map and the imaging center in kpc & the kinematic center is from the Radon Transform\\ \hline $\Delta x_{\sigma}$ & Same as above, but for the velocity dispersion map & The center of the velocity dispersion map is determined \\ & & using a low pass filter\\ \hline \textcolor{purple}{$\mu_{1,V}$\ and $\mu_{1,\sigma}$} & The mean of the distribution of the & The distributions for each map are created\\ & velocity and velocity dispersion maps & by collecting the values from all spaxels \\ \hline \textcolor{purple}{$\mu_{2,V}$\ and $\mu_{2,\sigma}$} &The dispersion (standard deviation) of the distributions& \\ \hline \textcolor{purple}{$|\mu_{3,V}|$\ and $|\mu_{3,\sigma}|$} & The skewness of the distributions & \\ \hline \textcolor{purple}{$\mu_{4,V}$\ and $\mu_{4,\sigma}$} & The kurtosis of the distributions & \\ \end{tabular} \caption{Synthesis of all of the kinematic predictors measured in this paper. We highlight the predictors that are selected as important in \textcolor{purple}{purple}. We include a brief description and derivation for each predictor. For more details, see \S \ref{kinclass} for the predictors that are selected as important and Appendix \ref{predcont} for the predictors that are not used in the classification. } \label{tab:predictors} \end{table*} \renewcommand{\arraystretch}{1} To define the asymmetry in the kinematic position angle ($A_2$), we utilize the Radon Transform from \citet{Stark2018}.
We transform the velocity maps into circular coordinates ($\rho$,$\theta$) where $\rho$ is the distance from the spaxel to the center of the velocity map, which is the kinematic center (defined below), and $\theta$ is the angle between the positive x-axis and the line segment from the kinematic center to the spaxel. The angle $\theta$ ranges from $0$ to $180$ degrees in the CCW direction. Positive values of $\rho$ are regions of the velocity map above the x-axis and negative values of $\rho$ are below the x-axis. The Radon Transform is defined as: \begin{equation} R(\rho,\theta) = \int_0^L v(x,y) dl \label{eq:radon} \end{equation} where the velocity is summed along line segments that are centered on the point ($\rho$, $\theta$) and perpendicular to the radius vector from the kinematic center of the galaxy. The Radon Transform is a 2D array that is calculated at all values of $\rho$ and $\theta$. \begin{figure*} \centering \raisebox{-0.5\height}{\includegraphics[scale=0.43, trim=0cm 2cm 0cm 2cm, clip]{radon_stel_vel_RAB_fg3_m12_180_5editit.png}} \raisebox{-0.5\height}{\includegraphics[scale=0.43]{radon_prof_fg3_m12_180_5.png}} \caption{Stellar velocity map (left), bounded Absolute Radon Transform (middle), and Radon profile (right) for a snapshot during the late stages of the q0.5\_fg0.3 merger. In this case, the primary galaxy is blueshifted at the center of the velocity map (systemic velocity $\sim -100$ km s$^{-1}$), and the secondary galaxy approaches from the right and is redshifted relative to the primary galaxy. To compute the Radon Transform, the velocity field is transformed into $\theta$ and $\rho$ coordinates, where $\theta \in [0,180]$ and $\rho \in (-\infty,+\infty)$, where $\theta$ is measured CCW from the top of the map. The bounded Absolute Radon Transform is then calculated by creating line integrals over a grid of ($\rho$,$\theta$) positions, where the line is perpendicular to the radius vector from the kinematic center of the map. It is `bounded' because the line integral is limited to the length $R_e$. In the left panel, the kinematic center is a yellow star and the magenta and purple line segments demonstrate the calculation of the Absolute Radon Transform at $\theta \sim 45$ degrees for positive and negative $\rho$ values, respectively. The magenta and purple regions in the middle panel have large and small values, respectively, which demonstrates that the value of the Absolute Radon Transform is smaller in the regions where the spaxel velocities vary less along the line integral. We find the minima (shown in lighter yellow) of $R_{AB}$ at each value of $\rho$ to measure the Radon profile (right), which is used to calculate the error-weighted asymmetry in the kinematic position angle, $A_2$.} \label{radon} \end{figure*} We then calculate the bounded Absolute Radon Transform, $R_{AB}$, which integrates the absolute value of the difference between the velocity at each point and the mean velocity along the line segment, over a segment of length $R_e$. We present the bounded Absolute Radon Transform and the Radon profile in Figure \ref{radon}. The Radon profile is computed by determining, for each value of $\rho$, the value of $\theta$ that minimizes the bounded Absolute Radon Transform ($\hat{\theta}$, where the hat operator denotes an estimated value). The value of $\hat{\theta}$ traces the direction of maximal rotation in the stellar velocity maps at each radial position.
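To make the construction explicit, the following minimal Python sketch evaluates a bounded Absolute Radon Transform on a masked velocity map. The sampling grid, pixel units, and argument conventions are our own simplifying assumptions rather than the exact implementation of \citet{Stark2018}.
\begin{verbatim}
import numpy as np

def bounded_absolute_radon(vmap, xc, yc, r_e, n_theta=60, n_rho=41):
    """Sketch of R_AB: for each (rho, theta), sample the map along a
    segment of half-length r_e perpendicular to the radius vector
    and sum |v - <v>|.  vmap is a 2D array with NaNs outside the
    footprint; xc, yc, and r_e are in pixels; theta is in radians."""
    ny, nx = vmap.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-0.5 * min(nx, ny), 0.5 * min(nx, ny), n_rho)
    s = np.linspace(-r_e, r_e, 2 * int(np.ceil(r_e)) + 1)
    R_AB = np.full((n_rho, n_theta), np.nan)
    for j, th in enumerate(thetas):
        for i, rho in enumerate(rhos):
            # Point at distance rho from the kinematic center along
            # direction theta; the segment runs perpendicular to it.
            xs = np.rint(xc + rho * np.cos(th) - s * np.sin(th)).astype(int)
            ys = np.rint(yc + rho * np.sin(th) + s * np.cos(th)).astype(int)
            ok = (xs >= 0) & (xs < nx) & (ys >= 0) & (ys < ny)
            v = vmap[ys[ok], xs[ok]]
            v = v[np.isfinite(v)]
            if v.size > 1:
                R_AB[i, j] = np.abs(v - v.mean()).sum()
    return rhos, thetas, R_AB

# Radon profile: theta_hat(rho) is the angle minimizing R_AB at
# each rho, tracing the direction of maximal rotation.
\end{verbatim}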
We follow the procedure from \citet{Stark2018} to determine the galaxy's kinematic center, which we describe in more detail in Appendix \ref{predcont}. We quantify the asymmetry of the Radon profiles using the kinematic predictor $A_2$, from \citet{Stark2018}: \begin{equation} A_2 = \sum_i \frac{\delta\hat{\theta}_i}{\sigma_{\delta\hat{\theta},i}} \label{eq:A2} \end{equation} where $\delta \hat{\theta}_i$ is the absolute magnitude of the difference between $\hat{\theta}_i$ on one side of the Radon profile and the corresponding value on the other (same $\rho$ magnitude, different sign), $\sigma_{\delta \hat{\theta},i}$ is the uncertainty on $\delta\hat{\theta}_i$, and the expression is summed over the $i$ values of $\hat{\theta}$. The $A_2$ predictor incorporates the absolute magnitude of the difference between the measured kinematic PA from one side of the galaxy to the other. We therefore expect that $A_2$ will be enhanced for merging galaxies, since mergers can cause warps in the stars in a galaxy (e.g., \citealt{Shapiro2008}). We use \texttt{kinemetry} to measure both $\Delta$PA and resid from the LOSVD (\citealt{Krajnovic2006}). Functionally, \texttt{kinemetry} measures the kinematic asymmetry from the line-of-sight velocity maps by dividing them into a set of nested elliptical rings. The best fit model at each radius is determined using a ring defined by the kinematic PA and the flattening factor $q_f$ = 1-$e$, where $e$ is the ellipticity of the ring in the plane of the sky. These models use a decomposition of the moment maps into harmonic Fourier coefficients in polar coordinates. For instance, a velocity map, $K(r, \psi)$, can be expanded into a finite number of harmonic frequencies: \begin{equation} K(r, \psi) = A_0(r) + \sum^N_{n=1} A_n(r)\ \mathrm{sin} \ (n\psi) + B_n(r)\ \mathrm{cos} \ (n \psi) \label{eq:harmonics} \end{equation} where $r$ is the semimajor axis of the ellipse, $\psi$ is the azimuthal angle, $A_0(r)$ is the systemic velocity, $N$ is the number of harmonic terms in the expansion, and $A_n$ and $B_n$ are the coefficients of the harmonic expansion. The best-fitting ellipses are obtained by minimizing $\chi^2$ for the linear combination of the $A_n$ and $B_n$ coefficients. An ideal rotating disk can be described using only the $B_1$ term, which represents the cosine term for the circular velocity of a galaxy's rotating disk: \begin{equation} V(r,\psi) = V_c(r)\ \mathrm{sin} \ i \ \mathrm{cos} \ \psi \label{eq:vel} \end{equation} where $r$ is the radius in the plane of the galaxy, $\psi$ is the azimuthal angle, $V_c(r)$ is the circular velocity, and $i$ is the inclination of the galaxy disk. To determine the best fit Fourier coefficients, we run \texttt{kinemetry} multiple times. We first allow the best fit kinematic PA and value of $q_f$ to vary for each radius. We define the kinematic position angle (PA$_{\mathrm{kin}}$) to be the median value of the best fit kinematic PAs. We then fix the PA, allow the value of $q_f$ to vary, and adopt its median value. After determining the global values for kinematic PA and $q_f$, we do a final run to determine the values of the higher order kinematic moments and therefore the best fit disk model. We then compare PA$_{\mathrm{kin}}$ to the imaging major axis (PA$_{\mathrm{img}}$, which is measured using \texttt{statmorph} from the $r-$band imaging) to create the predictor $\Delta$PA. Since $\Delta$PA traces the recent global misalignments of stars, it should be elevated for the merging galaxies that have misaligned stellar disks.
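As an illustration of this expansion, the following minimal Python sketch performs the linear least-squares fit of Eq.~(\ref{eq:harmonics}) along a single ring; it is a simplified stand-in for one \texttt{kinemetry} ring fit, with hypothetical input angles and velocities.
\begin{verbatim}
import numpy as np

def fit_ring_harmonics(psi, v, n_max=5):
    """Least-squares fit of v(psi) = A0 + sum_n [An sin(n psi)
    + Bn cos(n psi)] along one ellipse (one kinemetry ring)."""
    cols = [np.ones_like(psi)]
    for n in range(1, n_max + 1):
        cols += [np.sin(n * psi), np.cos(n * psi)]
    coeffs, *_ = np.linalg.lstsq(np.column_stack(cols), v, rcond=None)
    return coeffs[0], coeffs[1::2], coeffs[2::2]   # A0, A_n, B_n

# For an ideal rotating disk, B_1 dominates and the higher-order
# amplitudes k_n = sqrt(A_n^2 + B_n^2) are at the noise level:
rng = np.random.default_rng(0)
psi = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
v = 150.0 * np.cos(psi) + rng.normal(0.0, 5.0, psi.size)
A0, A, B = fit_ring_harmonics(psi, v)
print(np.hypot(A, B))   # k_1 ~ 150 km/s; k_2..k_5 small
\end{verbatim}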
We use the global kinematic position angle from \texttt{kinemetry} to measure $\Delta$PA instead of the median of the kinematic position angles from the Radon Transform. The main motivation for this choice is that \texttt{kinemetry} uses an adaptive binning scheme; at each step outwards, the ellipses are larger, which gives less weight to the kinematic confusion at the outskirts of the galaxy. The Radon Transform, however, is equally sampled in $\rho$ (see Figure \ref{radon}), so it can be more influenced by the measurement of the kinematic PA at the outskirts of the galaxy. In most cases, the two measurements agree within error, but in cases where the kinematic maps are disturbed, the global kinematic PA from \texttt{kinemetry} more closely matches our by-eye assessment of the kinematic PA. \begin{figure*} \centering \includegraphics[scale=0.8, trim=0cm 5cm 0cm 5cm, clip]{kinemetry_result_fg3_m12_180_5_degraded.png} \caption{An example \texttt{kinemetry} fit to a snapshot of the q0.5\_fg0.3 merger simulation with observed stellar velocity map (left), best fit \texttt{kinemetry} model (middle), and the model velocity subtracted from the stellar velocity map (right). Note that this is the same snapshot shown in Figure \ref{radon}. The color bars show the velocity in km s$^{-1}$. In the left panel we overplot the contours from the $r-$band imaging and the imaging position angle. The kinematic position angle (from \texttt{kinemetry}) is the straight line in the middle panel. We utilize the normalized residuals (right) as a predictor, which we refer to as `resid'; it is the sum over all spaxels of the absolute value of the difference between the stellar velocity and the model velocity, normalized by the number of spaxels in the model.} \label{kinexample} \end{figure*} We also extract `resid', or the \texttt{kinemetry} residuals between the best fit rotating disk and the velocity map. This predictor is defined as: \begin{equation} \mathrm{resid} = \frac{\sum_{i,j}^N |V_{*} - V_{\rm{model}}(r,\psi)|}{N} \label{eq:resid} \end{equation} where $V_{*}$ is the observed velocity map, $V_{\rm{model}}(r,\psi)$ is the circular velocity model from \texttt{kinemetry}, and $N$ is the number of spaxels fit. We include this normalization factor in order to penalize the fits that converge to a very inclined galaxy. For these galaxies, the fit is attempting to avoid fitting disordered kinematics in the exterior regions of the galaxy by fitting a smaller region. We show an example of a simulated galaxy snapshot from the q0.5\_fg0.3 simulation fit with \texttt{kinemetry} and its velocity residuals in Figure \ref{kinexample}. We measure $\lambda_{R_e}$, the approximate spin parameter, from the stellar velocity and velocity dispersion maps, which is defined by \citet{Emsellem2007}: \begin{equation} \lambda_{R_e} = \frac{\sum_{n=1}^N F_n R_n |V_n|}{\sum_{n=1}^N F_n R_n \sqrt{V_n^2+\sigma_n^2}} \label{eq:lambdare} \end{equation} where $F_n$ is the ($r-$band) flux of a spaxel, $R_n$ is the distance from the kinematic center, $V_n$ is the stellar velocity, and $\sigma_n$ is the stellar velocity dispersion. We measure $\lambda_{R_e}$ to the $r-$band effective radius. Since the fiber bundles are designed to provide coverage of each galaxy to 1.5$R_e$, if a secondary nucleus falls towards the outside edge of the hexagonal FOV, it is excluded from the measurement of $\lambda_{R_e}$. This effect is more relevant for the minor mergers, where the secondary component covers a smaller effective area of the hexagonal FOV.
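For reference, a minimal Python sketch of Eq.~(\ref{eq:lambdare}); the two-dimensional flux, radius, velocity, and dispersion maps are hypothetical inputs, and the NaN-based masking convention is a simplifying assumption.
\begin{verbatim}
import numpy as np

def lambda_re(flux, radius, vel, sigma, r_e):
    """Approximate spin parameter summed over the spaxels within
    one effective radius; all inputs are 2D maps in matching
    units (flux in the r-band, radius in the same units as r_e)."""
    m = (radius <= r_e) & np.isfinite(vel) & np.isfinite(sigma)
    F, R, V, S = flux[m], radius[m], vel[m], sigma[m]
    # np.hypot(V, S) = sqrt(V**2 + S**2)
    return np.sum(F * R * np.abs(V)) / np.sum(F * R * np.hypot(V, S))
\end{verbatim}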
We measure the ellipticity of a galaxy, $\epsilon$, from the $r-$band photometry using \texttt{statmorph}. It is distinct from the ellipticity parameter used by \texttt{kinemetry} to fit rotation curves. We do not use $\epsilon$ as a kinematic predictor. Instead, we use it to construct the $\lambda_{R_e}$-$\epsilon$ diagnostic diagram in \S \ref{discuss:useless}, where the division between fast and slow rotators is defined by \citet{Cappellari2016}: \begin{equation} \lambda_{R_e} = 0.08 + \epsilon/4 \label{eq:slowfast} \end{equation} where slow rotators fall below this line. On the $\lambda_{R_e}$-$\epsilon$ diagram, $\lambda_{R_e}$\ is the more predictive of the two axes; it decreases dramatically for the `slow-rotating' population of galaxies, which are dynamically disordered and dispersion-dominated. We predict that $\lambda_{R_e}$\ will decrease for merging galaxies since mergers are kinematically disordered and can contribute to bulge growth, which is associated with enhanced velocity dispersion. \begin{figure} \centering \includegraphics[scale=0.4, trim=2cm 0cm 0cm 0cm]{sig_inplot_fg3_m12_180_5_degraded.png} \caption{Distribution of the velocity dispersion values (in km s$^{-1}$) taken from each spaxel in the velocity dispersion map (inset, velocity dispersion bar is in km s$^{-1}$ and the spatial axis is in arcsec). This snapshot is also showcased in Figures \ref{radon} and \ref{kinexample}. We also include the measured values of the mean ($\mu_{1,\sigma}$), dispersion ($\mu_{2,\sigma}$), skew ($|\mu_{3,\sigma}|$), and kurtosis ($\mu_{4,\sigma}$) of this distribution. A distribution with a larger skew is asymmetric about the mean. A distribution with a positive kurtosis has a high degree of peakedness relative to a normal distribution. } \label{fig:distribution} \end{figure} In addition to kinematic predictors that were utilized in previous work, we define a new set of predictors based on the distributions of values in the velocity and velocity dispersion maps. These predictors include $\mu_{1,V}$/$\mu_{1,\sigma}$, $\mu_{2,V}$/$\mu_{2,\sigma}$, $|\mu_{3,V}|$/$|\mu_{3,\sigma}|$, and $\mu_{4,V}$/$\mu_{4,\sigma}$, which are the standardized moments of the stellar velocity/velocity dispersion maps. These predictors are similar to the formulation from \citet{Sweet2020}, which calculates the moments of PDF($s$), where $s$ is the normalized specific angular momentum. To determine the values of these predictors, we measure the four standardized moments of the distribution: mean ($\mu_1$), dispersion (standard deviation; $\mu_2$), skewness ($\mu_3$), and kurtosis ($\mu_4$). This produces eight different predictors (four each from the velocity and velocity dispersion distributions). These quantities are different from the higher order moments $h_3$ and $h_4$, which are typically measured by \texttt{ppxf}. We show an example of these predictors measured from a velocity dispersion map in Figure \ref{fig:distribution}. We expect to see an offset in the mean velocity ($\mu_{1,V}$) from systemic for the merging systems and an enhanced mean velocity dispersion ($\mu_{1,\sigma}$). The spread in the velocity distribution ($\mu_{2,V}$) and the dispersion of the velocity dispersion distribution ($\mu_{2,\sigma}$) could identify superpositions of dynamically distinct stellar components. This could include a secondary merging galaxy or features like a stellar bulge. The higher order moments could be useful for identifying subtler features of mergers, beyond bulk shifts in $\mu_{1,V}$, for example.
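The following minimal Python sketch shows how these four distribution predictors can be measured from a single kinematic map; the NaN-based masking convention is a simplifying assumption.
\begin{verbatim}
import numpy as np
from scipy.stats import skew, kurtosis

def map_moments(kin_map):
    """Standardized-moment predictors of a velocity or velocity
    dispersion map, computed from all unmasked spaxel values."""
    x = kin_map[np.isfinite(kin_map)].ravel()
    return (np.mean(x),          # mu_1: mean
            np.std(x),           # mu_2: dispersion
            np.abs(skew(x)),     # |mu_3|: absolute skewness
            kurtosis(x))         # mu_4: excess (Fisher) kurtosis
\end{verbatim}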
The skewness of a distribution is sensitive to the tails; we take the absolute value to treat positive and negative skew identically. A skewed velocity or velocity dispersion distribution ($|\mu_{3,V}|$\ and $|\mu_{3,\sigma}|$) could indicate a faint secondary source in the field of view, where the distribution is actually a combination of two galaxy rotation curves. Kurtosis measures how peaked a distribution is relative to a normal distribution; a flatter distribution has a negative kurtosis and a more peaked distribution has a positive kurtosis. A smoothly rotating velocity dispersion field has a normally shaped distribution whereas a disturbed field may have a negative (flatter) kurtosis ($\mu_{4,\sigma}$). On the other hand, post-coalescence mergers with recent bulge growth could have a positive kurtosis in the velocity dispersion distribution. To summarize, we extract the following kinematic predictors: $A$, $A_2$, $\Delta$PA, $\mathrm{v}_{\mathrm{asym}}$, $\sigma_{\mathrm{asym}}$, resid, $\lambda_{R_e}$, $\epsilon$, $\Delta x_V$, $\Delta x_\sigma$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, $\mu_{2,\sigma}$, $|\mu_{3,V}|$, $|\mu_{3,\sigma}|$, $\mu_{4,V}$, and $\mu_{4,\sigma}$. We then use the techniques described in the following sections (\S \ref{LDA}, \S \ref{outliers}, and \S \ref{RFR}) to select the most informative of these predictors. We ultimately use the following predictors in the LDA classification: $A_2$, $\Delta$PA, resid, $\lambda_{R_e}$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, $\mu_{2,\sigma}$, $|\mu_{3,V}|$, $|\mu_{3,\sigma}|$, $\mu_{4,V}$, and $\mu_{4,\sigma}$. \section{Results} \label{results4} After creating mock MaNGA datacubes from the five simulations of merging galaxies (and matched isolated galaxies), we extract the kinematic predictors introduced in \S \ref{kinclass}. We then prepare the input data, select the predictors that are most informative, and create and assess the classification itself. In \S \ref{LDA}, we describe the LDA technique. We then provide an overview of our process for preparing the data and examining it in the context of the assumptions made by the LDA in \S \ref{outliers}. Prior to running the LDA classification, we perform an initial term selection using a random forest regressor, which we describe in \S \ref{RFR}. We present the classification results in \S \ref{LDAresults} and measure performance statistics in \S \ref{accuracy4}. We present the LDA observability time in \S \ref{analyzeobservability}. Then, in \S \ref{fails} we explore some failure modes of the classification. Finally, we analyze how the classification changes with redshift and decreasing signal-to-noise (S/N) in \S \ref{limitssnz}. More details of the classification are discussed in the appendices, where we analyze possible biases of the classification in \S \ref{fair}. \subsection{Linear Discriminant Analysis} \label{LDA} The classification in this work relies upon an LDA technique that separates nonmerging galaxies from merging galaxies based upon a combination of the input predictors (for a review of LDA, see \citealt{James2013}). This approach was first presented in N19 for imaging predictors; here, we use this approach for kinematic predictors. LDA is one of many statistical learning tools that perform classification tasks. Using pre-defined features (predictors) as inputs, LDA solves for the hyperplane in multi-dimensional predictor space that maximizes the separation between different classes of objects (i.e., mergers and nonmergers). The solution is a linear combination of the input predictors; the classification is therefore relatively easy to interpret because its complexity is low.
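For orientation, a minimal sketch of this setup using the \texttt{scikit-learn} implementation of LDA is shown below. Here \texttt{X} and \texttt{y} are hypothetical arrays of predictors and merger labels; our actual pipeline additionally includes interaction terms and term selection (\S \ref{RFR}):
\begin{verbatim}
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from scipy.special import expit

# X: (n_snapshots, n_predictors); y: 0 = nonmerger, 1 = merger
X_std = StandardScaler().fit_transform(X)  # linear standardization
# class priors mirror f_merg = 0.1 (major mergers): p(nonmerger) = 0.9
lda = LinearDiscriminantAnalysis(priors=[0.9, 0.1]).fit(X_std, y)

ld1 = lda.decision_function(X_std)  # distance from the decision boundary
p_merg = expit(ld1)  # posterior probability; ld1 = 0 <-> p_merg = 0.5
\end{verbatim}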
Recent work has employed other techniques to identify merging galaxies, such as random forest regressors (e.g., \citealt{Snyder2019,Goulding2018}) and convolutional neural networks (CNNs; e.g., \citealt{Bottrell2019}). These techniques have various advantages and disadvantages based upon the dataset at hand and the goals of the work. Since we aim to optimize the interpretability of the method, we select LDA over an approach like a CNN. CNNs might increase the number of correct classifications, but they achieve this using complex non-linear features, which are not easily interpreted. In this work, we have made several important changes to the technique from N19. We first recap the relevant details from the LDA in N19, and then we discuss the changes. Relevant details of the LDA technique from N19 include: \begin{itemize} \item All predictors are linearly standardized prior to the LDA technique, meaning that predictors with large numerical values (such as $A_2$) do not have an outsized effect on the analysis. \item We utilize priors on the relative fraction of merging and nonmerging galaxies in nature versus in the simulations. This accounts for the fact that we have more merging galaxy snapshots (relative to nonmerging snapshots) for each simulation. We use the same priors from N19; $f_{\mathrm{merg}} = 0.1$ for the major mergers and $f_{\rm{merg}} = 0.3$ for the minor mergers. These priors are based on the fraction of nonmerging and merging galaxies from observation and simulations (e.g., \citealt{Rodriguez-Gomez2015, Lotz2011,Conselice2009,Lopez-Sanjuan2009,Shi2009}). \item We include interaction terms to explore correlations between predictors. \item In order to select which coefficients are necessary for the classification, we use a forward stepwise selection technique, which orders and includes only the most important terms. This technique adds additional terms to LD1 only if they improve the F1 statistic, which is defined in \S \ref{accuracy4}. It also protects against the unnecessary addition of terms by finding the minimum number of terms that produce an F1 statistic that is consistent with the maximum (within 1$\sigma$ errors, where $\sigma$ is the standard deviation on the F1 statistic measured from each $k-$fold cross-validation set). \item We use $k$-fold cross-validation to obtain 1$\sigma$ errors on the predictor coefficients. At each step of the forward stepwise selection process, we divide the sample into $k$ subsets. We then train the LDA on the first $k-1$ subsamples and test on the remaining subsample, which is the `cross-validation' set. We repeat these steps $k$ times for all combinations of subsamples, and the variation in predictor coefficient values from the cross-validation subsamples is the 1$\sigma$ error. \end{itemize} For complete details, including the full mathematical formulation for LDA, see N19. We make several changes to the technique motivated by the additional challenges of the kinematic data: \begin{itemize} \item Due to the number of predictor terms in this work, instead of including all of the kinematic predictors in the final classification, we first utilize the RFR technique as a selection technique to eliminate uninformative predictors from the analysis (see \S \ref{RFR}). \item We adjust the model optimization statistic.
In N19, we minimized the number of misclassifications in order to both select the predictors and determine their coefficients. Here, we utilize the F1 statistic defined in \S \ref{accuracy4} instead; it does a better job of balancing the number of false negatives and false positives in each classification. \item We also adjust the $k-$fold cross-validation. Instead of using $k=10$, we find that a value of $k=5$ improves the performance of the LDA by creating a training set that is 80\% of the sample and a cross-validation set that is 20\% of the sample (as opposed to the 90\%/10\% divide in N19). We then train the LDA on four of the subsamples and test on the fifth. We repeat this procedure five times, and the mean value of the optimization statistic across all five cross-validation samples allows us to decide which set of input predictors to select. \end{itemize} We use the LDA both as a term selector and to determine the coefficients and standard errors for each selected predictor. In order to directly compare the imaging classification to the kinematic classification, we utilize the same snapshots from all simulations and we rerun the imaging analysis using all the same methods as the kinematic classification. \subsection{Data Preparation and LDA Assumptions} \label{outliers} Prior to term selection and classification, we examine the distributions of predictor values. We screen for outliers and examine the data in the context of the assumptions made by the LDA. The goal is to gain an understanding of the properties of the data prior to classification. First, we remove outliers by transforming the distribution of each predictor into log space. We define data points that fall more than 5$\sigma$ above or below the mean of the distribution for each predictor as outliers. The combination of the log transformation and 5$\sigma$ cutoff allows us to identify outliers that are caused by errors in the creation of the mock maps and not simply related to very disturbed kinematics. There are $\sim$4 outliers per simulation, out of the 100 or 200 datapoints in each simulation. Second, we check the input data for significant violations of the LDA assumptions. LDA operates under the assumptions that the predictors are normally distributed (multivariate normality), the covariance among the merging and nonmerging classes is equal (homoscedasticity), and the predictors are not strongly correlated with one another (multicollinearity). Here we test these three assumptions by closely examining the data. We carry out the same statistical tests from N19 to test for normality, homoscedasticity, and multicollinearity. We find that the data violates all three assumptions. Additionally, we plan to introduce interaction terms into the LDA classification, which further increases the multicollinearity. As discussed in N19, LDA has been shown to be robust to the violations of multivariate normality and homoscedasticity (\citealt{Duda2001,TaoLi2006}). To ensure that the LDA technique in this work is robust to these violations, we directly compare the LDA results to those of a logistic regression. The logistic regression and LDA produce similar results both in terms of the relative importance of the predictors for each simulation and in the performance of the method. This is an indication that the LDA is converging even though it nominally violates several assumptions. \subsection{Random Forest Regressor Term Selection} \label{RFR} In N19, we used a forward stepwise selection technique within the LDA to select informative predictors.
Here, motivated by both an increase in the number of initial terms\footnote{This is partially due to a dearth of historically utilized kinematic predictors, so we initially introduce many more terms to determine which are informative.} and a decrease in the predictive power of these terms, we modify the term selection procedure. We introduce a random forest regressor (RFR) into the methodology to select a subset of predictors for each simulation, which will then be presented to the LDA classifier. An RFR (\citealt{Ho1995}) is an ensemble learning technique that aggregates the result of many individual decision trees run in parallel. We specifically utilize the \texttt{scikit-learn} implementation of RFR (\citealt{scikit-learn}). In an RFR, the number of features that can be used to split at each node of the decision tree is limited to a percentage of the total number of features, ensuring that the ensemble model does not rely too heavily on any one feature. This means that the RFR is able to combine all potentially predictive variables in a fair way. It is also able to incorporate non-linear features to capture some higher-order interaction terms. In practice, we find that the RFR is an efficient method to initially identify the useful features in the dataset from the extensive list of kinematic predictors.\footnote{We do not use it as the main classification technique because the features can be highly nonlinear and more opaque to interpretation. Additionally, this technique is designed to directly complement the LDA technique in N19 for comparison's sake.} In order to select the informative terms from the RFR, we include an additional predictor. This predictor is assigned a random number for each galaxy snapshot and therefore shows no significant difference between the nonmerging and merging galaxies. We use this technique to eliminate all of the terms that have a feature importance less than the random term for all simulations. In this step we eliminate the $\mathrm{v}_{\mathrm{asym}}$, $\sigma_{\mathrm{asym}}$, $A$, $\Delta x_V$, and $\Delta x_{\sigma}$ predictors. Then, for each individual simulation, we additionally eliminate predictors that have an importance less than the random value prior to initiating the LDA classification. The terms eliminated in this step vary from simulation to simulation. \subsection{Classification Results} \label{LDAresults} After using the RFR term selection to narrow the number of kinematic terms down to 12 ($\Delta$PA, resid, $\lambda_{R_e}$, $A_2$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, $\mu_{2,\sigma}$, $|\mu_{3,V}|$, $|\mu_{3,\sigma}|$, $\mu_{4,V}$, and $\mu_{4,\sigma}$), we run the LDA classification for each simulation individually. We also combine the three major mergers into a combined major merger classification and the two minor mergers into a combined minor merger classification. We run the LDA with interaction terms; the result is a linear combination of selected predictors and coefficients which is unique for each simulation. We present the term coefficients and standard errors for the four most important terms and the intercept term in Table \ref{table:LDAall}. Finally, we briefly discuss the main results of the LDA classification for each simulation, which we will examine in more detail in \S \ref{discuss4}. \begin{table*} \begin{center} \caption{The final LD1 predictor coefficients ($\hat{\vec{w}}$) with 1$\sigma$ confidence intervals after term selection for the four most important terms and the intercept ($\hat{w_0}$) for all simulations.
} \label{table:LDAall} \begin{tabular}{c|cccc|c} & \multicolumn{4}{c}{\Large{$\hat{\vec{w}}$}} & \Large{$\hat{w_0}$} \\ \hline Simulation & 1 & 2 & 3 & 4 & \\ \hline All Major &-6.76 $\pm$ 0.45 $\lambda_{R_e}$ & 4.99 $\pm$ 0.6 $|\mu_{3,\sigma}|$ & 4.54 $\pm$ 0.36 $\mu_{1,\sigma}$*$\lambda_{R_e}$ & -4.44 $\pm$ 0.51 $\mu_{1,\sigma}$*$|\mu_{3,\sigma}|$ & -1.21 $\pm$ 0.07\\ All Minor &-4.99 $\pm$ 0.74 $\mu_{2,\sigma}$ & -4.97 $\pm$ 0.59 $\mu_{2,\sigma}$*$\mu_{4,V}$ & 3.47 $\pm$ 0.62 $\mu_{4,V}$*$\mu_{4,\sigma}$ & 2.44 $\pm$ 0.38 $\mu_{4,V}$ & -0.76 $\pm$ 0.04 \\ q0.5\_fg0.3 &-7.15 $\pm$ 0.78 $\mu_{1,\sigma}$*$|\mu_{3,\sigma}|$ & -6.7 $\pm$ 0.63 $\mu_{1,\sigma}$*$\mu_{2,\sigma}$ & 6.65 $\pm$ 0.53 $\mu_{2,\sigma}$ & 5.75 $\pm$ 0.2 $\mu_{1,\sigma}$ &-2.57 $\pm$ 0.05 \\ q0.333\_fg0.3 & 8.27 $\pm$ 0.35 $\mu_{1,\sigma}$ & -7.84 $\pm$ 0.71 $\mu_{1,\sigma}$*$\mu_{2,\sigma}$ & 5.92 $\pm$ 0.52 $\mu_{2,\sigma}$ & 5.21 $\pm$ 0.73 $|\mu_{3,\sigma}|$ & -0.77 $\pm$ 0.18 \\ q0.333\_fg0.1 & -7.78 $\pm$ 0.91 $\mu_{1,\sigma}$*$\mu_{2,\sigma}$ & 7.09 $\pm$ 0.59 $\mu_{2,\sigma}$ &5.97 $\pm$ 0.61 $\mu_{1,\sigma}$ &-- &-0.26 $\pm$ 0.28 \\ q0.2\_fg0.3\_BT0.2 & -6.51 $\pm$ 1.09 $\mu_{1,\sigma}$ & -6.2 $\pm$ 0.93 $\mu_{2,\sigma}$*$\lambda_{R_e}$ & -5.75 $\pm$ 1.65 $A_2$ & 5.5 $\pm$ 0.67 $\mu_{1,\sigma}$*$\lambda_{R_e}$ &-0.79 $\pm$ 0.05\\ q0.1\_fg0.3\_BT0.2 &25.06 $\pm$ 5.11 $\mu_{1,\sigma}$*$\mu_{4,V}$ & 16.02 $\pm$ 3.19 $\mu_{1,\sigma}$ & -12.88 $\pm$ 2.51 $\mu_{4,V}$ & 6.8 $\pm$ 0.99 $\mu_{4,\sigma}$ &-1.06 $\pm$ 0.07 \\ \end{tabular} \end{center} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.3]{Hist_kin_major_merger_degraded.pdf} \includegraphics[scale=0.3]{Hist_kin_minor_merger_degraded.pdf} \caption{Histograms of LD1 for the populations of merging and nonmerging galaxies for the combined major merger (top) simulation and the combined minor merger (bottom) simulation. The blue nonmerging samples include both the stand-alone isolated galaxies and the pre- and post-merger isolated galaxies. The nonmerging galaxies in the top and bottom plots span different ranges in LD1 because they are composed of different samples of nonmerging galaxies and because the selected linear combination of predictors is different for the major and minor merger combined simulations. The vertical black line is the decision boundary; it is the midway point between the mean of the nonmerger and merger populations. If the LD1 value of a galaxy falls above this line, the galaxy is more likely to be a merger.} \label{histograms_major} \end{figure*} We first introduce the mechanics of the classification. LD1, which is the first linear discriminant axis, is formed from the linear combination of coefficients multiplied by the standardized predictors and an intercept term: $$\mathrm{LD1} = C*X + B$$ where $C$ is the matrix of coefficients, $X$ is the standardized values of the selected predictors, and $B$ is the intercept term. LD1 is the hyperplane that best separates the populations of merging and nonmerging galaxies for each simulation. We use the result from the major merger classification as an illustrative example of how to interpret the LDA results. 
The LD1 for the major merger combined run (truncated after seven terms) is: \newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}} \begin{align*} \mathrm{LD1}_{\mathrm{all\ major}} & = -6.8 \lambda_{R_e} + 5.0 |\mu_{3,\sigma}| + 4.5 \mu_{1,\sigma}*\lambda_{R_e} \\ & \ \ \ \ - 4.4 \mu_{1,\sigma}*|\mu_{3,\sigma}| -1.0 \mu_{1,\sigma}*\mathrm{resid} \\ & \ \ \ \ +1.7 \mu_{1,V}*\lambda_{R_e} +1.7 \mu_{4,\sigma}*\mu_{4,V} \\ & \ \ \ \ + ... -1.2 \numberthis \label{eq:LD1interaction} \end{align*} where LD1 is a linear combination of all selected terms, which are composed of a coefficient (positive or negative) followed by the standardized value of a predictor. The last term is the intercept term. The higher the value of LD1, the more likely the galaxy is to be classified as merging. We present the values of LD1 for all of the galaxies from the major and minor combined simulations in Figure \ref{histograms_major}. The vertical line at an LD1 value of zero is the decision boundary that corresponds to a $p_{\mathrm{merg}}$ value of 0.5; all galaxies with an LD1 value greater than zero would be classified as merging using a threshold value of 0.5\footnote{This decision boundary can be moved either before the creation of the LDA or after to be more or less tolerant of false negatives and false positives.}. We find that the classification is better able to separate the merging and nonmerging classes for the major merger simulations, and this is reflected in Figure \ref{histograms_major}. There are important nuances to the interpretation of the selected predictors and their coefficients in Equation \eqref{eq:LD1interaction} because the interaction terms complicate the analysis. For instance, in Equation \eqref{eq:LD1interaction}, the first selected predictor is $\lambda_{R_e}$, which has a negative coefficient. Ignoring the rest of the equation, this means that if the $\lambda_{R_e}$\ value is large, then the probability that a given galaxy is merging will decrease. However, there are other $\lambda_{R_e}$\ terms in the equation that are coupled with other predictors in interaction terms. This means that tweaking the value of $\lambda_{R_e}$\ will not linearly change the value of LD1. While the interaction terms complicate the analysis, they are an integral part of the classification. Many of the most important terms for LD1 in Table \ref{table:LDAall} are interaction terms, and including them significantly improves the performance of the LDA. As we discuss in more detail in \S \ref{discuss:interaction}, these terms are able to capture the non-monotonic movement of mergers through predictor parameter space. While it may be difficult to untangle many of the contributing terms, we can use Table \ref{tab:predictors} to determine which predictors are most prevalent and therefore informative for each simulation. For instance, the $\mu_{2,\sigma}$\ and $\mu_{1,\sigma}$\ predictors are selected as either primary or interaction terms for all simulations. They are therefore universally useful kinematic predictors (for a full discussion of why these terms are important see \S \ref{discuss:musigma}). The selected predictors from the q0.333\_fg0.1 simulation are similar to those from the q0.333\_fg0.3 simulation. These two simulations are matched for mass ratio but not for gas fraction. However, the difference between a gas fraction of 0.1 and 0.3 appears to be insignificant, so we hesitate to make any conclusions about the impact of gas fraction on the stellar kinematics.
On the other hand, the minor merger simulations differ from the major merger simulations in the selected predictors. We find that $\lambda_{R_e}$\ and $|\mu_{3,\sigma}|$\ are important for the major mergers while the minor mergers rely more on some of the higher order terms like $\mu_{4,V}$\ and $\mu_{4,\sigma}$. We explore the implications of these findings for the physical nature of the kinematics of mergers in the discussion (\S \ref{discuss4}). \subsection{Performance Statistics and Hyperparameter Tuning} \label{accuracy4} Here, we define and measure the accuracy, precision, recall, and F1 statistic of the simulations (for a review, see \citealt{Fawcett2006}). We present these results using a confusion matrix for the major and minor combined simulations in Figure \ref{confusion4}, which shows the mean number of known mergers and nonmergers in the cross-validation samples that are classified by the LDA as merging and nonmerging. These quantities are derived by taking the mean of the performance statistics measured on each of the cross-validation samples. We quantify the accuracy, precision, recall, and F1 score for all simulations in Table \ref{tableaccuracy}. \begin{figure} \centering \includegraphics[scale=0.45, trim = 0cm 0cm 2cm 0cm, clip]{matrix_major_merger_0.03_degraded.png} \includegraphics[scale=0.45, trim = 0cm 0cm 2cm 0cm, clip]{matrix_minor_merger_0.03_degraded.png} \caption{Confusion matrices with the number of true negatives (upper left quadrant), false positives (upper right), false negatives (lower left), and true positives (lower right) for the major merger (top) and minor merger (bottom) combined simulations. These matrices show the mean number of galaxy snapshots in each category from the five ($k=5$) different CV samples. } \label{confusion4} \end{figure} \begin{table} \centering \begin{tabular}{c|cccc} Simulation & Accuracy & Precision & Recall & F1 \\ \hline All Major & 0.81 & 0.95 & 0.76 & 0.84 \\ All Minor & 0.69 & 0.87 & 0.51 & 0.64 \\ q0.5\_fg0.3 & 0.81 & 0.92 & 0.60 & 0.73 \\ q0.333\_fg0.3 & 0.80 & 0.90 & 0.79 & 0.84 \\ q0.333\_fg0.1 & 0.80 & 0.93 & 0.80 & 0.86 \\ q0.2\_fg0.3\_BT0.2 & 0.73 & 0.83 & 0.62 & 0.71 \\ q0.1\_fg0.3\_BT0.2 & 0.81 & 0.83 & 0.69 & 0.75 \\ \end{tabular} \caption{Accuracy, precision, recall, and F1 score for all LDA runs. We define these statistics in Equations \ref{eq:accuracy}, \ref{eq:precision}, \ref{eq:recall}, and \ref{eq:F1}, respectively. The recall value is much lower than precision in all cases because there is a much higher fraction of false negatives, or mergers that are missed by the method, yet a low number of contaminants, or false positives. The performance statistics of the major merger classifications are $\sim$10\% higher than the minor merger classifications.} \label{tableaccuracy} \end{table} The accuracy for a given simulation is defined as the number of correct classifications of mergers as mergers (true positives) and the number of correct classifications of nonmergers as nonmergers (true negatives) divided by the number of total classifications: \begin{equation} A = \frac{TP+TN}{TP+TN+FP+FN} \label{eq:accuracy} \end{equation} where $FP$ is the number of false positives, or nonmerging galaxies that are classified as mergers, and $FN$ is the number of false negatives, or mergers that are classified as nonmerging. A classifier has a higher accuracy when it is able to increase the number of true classifications relative to false classifications.
Precision is defined as the number of true positive classifications over the total number of positive classifications: \begin{equation} P = \frac{TP}{TP+FP} \label{eq:precision} \end{equation} A precise classifier maximizes the fraction of true positive classifications relative to false positives. Precision is also known as the `positive predictive value'. In this work, we seek to eliminate false positives from the sample, or nonmerging galaxies that are incorrectly classified as mergers. Recall is defined as the number of true positive classifications over the total number of known mergers: \begin{equation} R = \frac{TP}{TP+FN} \label{eq:recall} \end{equation} A classifier with high recall is also known as `complete' because it correctly identifies the majority of mergers as such. Finally, we measure the F1 score, or F1 statistic, which is the harmonic mean of the recall and precision: \begin{equation} F1 = \frac{2PR}{P+R} \label{eq:F1} \end{equation} F1 ranges in value from 0 to 1 and is strongly penalized if either precision ($P$) or recall ($R$) is small. We maximize the F1 statistic within the LDA during cross-validation in order to select the predictor terms that we use in the classification.
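These four statistics follow directly from the confusion-matrix counts; a minimal Python sketch of Equations \eqref{eq:accuracy}--\eqref{eq:F1}:
\begin{verbatim}
def performance_stats(tp, tn, fp, fn):
    """Accuracy, precision, recall, and F1 score from the counts of
    true/false positive and negative classifications."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2.0 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
\end{verbatim}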
Figure \ref{confusion4} presents the number of true negatives, false positives, false negatives, and true positives (left to right, top to bottom) for the combined major and minor merger simulations. It also quantifies the accuracy, precision, recall, and F1 score. The major merger classification performs better, with accuracy/precision/recall values of 0.81/0.95/0.76, while the minor mergers have values of 0.69/0.87/0.51. The imbalance between precision and recall is due to the priors utilized in the classification (we use the priors from N19 where $f_{\mathrm{merg}} = 0.1$ for the major mergers and 0.3 for the minor mergers). We have designed the classification with these strong priors so that when it is applied to galaxy surveys (where there are many fewer mergers), the classifier will be more balanced. As a result, the classifier produces more false negatives than false positives when tested on the training set. We experiment with adjusting the performance statistics of the classification, which is also known as `hyperparameter tuning'. It is possible to increase the number of false positives while decreasing the number of false negatives by either adjusting the decision boundary (i.e., the threshold of $p_{\mathrm{merg}}$) or by changing the priors. This could be a direction to pursue in future work if we find that we are no longer tolerant of false negatives or if we wish to adjust the priors on $f_{\rm{merg}}$. As a test, we adjust the priors so that $f_{\mathrm{merg}} = 0.5$ and find that it produces a similar classification, lower precision, and higher recall for the major merger classifications, and results in slightly different selected predictors and higher performance statistics for the minor mergers. While the classification with adjusted priors performs better on the training and cross-validation datasets, we find that it is not a fair representation of the fraction of merging galaxies in nature, so we present the original classification with $f_{\rm{merg}} = 0.1,0.3$ in this work. Overall, we find that the kinematic classifications generally score lower on all performance statistics than the imaging classifications. This is true for all viewpoints (see \S \ref{viewpoint} for a discussion of the effect of viewpoint on the kinematic classification). For instance, the accuracy/precision/recall/F1 values for the combined major merger run with the imaging predictors are 0.88/0.98/0.84/0.90. For the combined minor merger run with imaging predictors, the values are 0.80/0.89/0.72/0.80. Another result is that the kinematic minor merger classifications generally score $\sim$10\% lower on all statistics than the major mergers. We discuss the implications of the performance of the kinematic predictors in comparison to the imaging predictors and the performance of the major versus minor merger classifications in \S \ref{discussaccuracy}. \subsection{Observability Timescale} \label{analyzeobservability} \begin{table*} \centering \begin{tabular}{c|ccc} Simulation & LDA Observability Time [Gyr] & Total Merger Time [Gyr] & Observability Fraction\\ \hline q0.5\_fg0.3 & 0.9 & 2.2 & 0.4\\ q0.333\_fg0.3 & 2.2 & 2.6 & 0.8\\ q0.333\_fg0.1 & 2.4 & 2.8 & 0.9\\ q0.2\_fg0.3\_BT0.2 & 3.0 & 3.5 & 0.9 \\ q0.1\_fg0.3\_BT0.2 & 6.6 & 9.2 & 0.7\\ \end{tabular} \caption{The LDA observability time, total merger time, and observability fraction (LDA observability time/total merger time) for each simulation. } \label{time} \end{table*} \begin{figure*} \centering \includegraphics[scale=0.21, trim=6cm 0cm 0cm 0cm]{Mountain_plot_sep_kin_fg3_m12_0.03_degraded.pdf} \includegraphics[scale=0.21, trim=6cm 0cm 0cm 0cm]{Mountain_plot_sep_kin_fg3_m15_0.03_degraded.pdf} \caption{LD1 sensitivity with time for the q0.5\_fg0.3 (top) and q0.2\_fg0.3\_BT0.2 simulations (bottom). These two simulations are chosen because they are representative of the major and minor merger simulations, respectively. The points are the viewpoint-averaged value of LD1 for each snapshot in time along with the shaded 1$\sigma$ confidence intervals (darker shade) based on the scatter of the LD1 values for each snapshot. We also include the full range of values for each snapshot (lighter shade). We divide each plot into the early, late, and post-coalescence stages of the merger. The blue lines and shaded 1$\sigma$ confidence intervals are associated with the isolated galaxies for each simulation. This includes the pre- and post-merger isolated galaxies (circles) and the stand-alone isolated galaxies (squares). The horizontal black line is the decision boundary, which marks the divide between the merging and nonmerging galaxies, or $p_{\mathrm{merg}} = 0.5$. This figure demonstrates that the major mergers have little to no overlap with the isolated galaxies, which produces a more accurate and complete classification (see \S \ref{accuracy4}). The LD1 sensitivity plots for all of the simulations will be available in an interactive figure.} \label{mountain4} \end{figure*} The LDA observability timescale is defined as the total time spanned by all consecutive snapshots where the viewpoint-averaged LD1 value for a given snapshot is greater than zero. We present the observability timescales for all of the simulations in Table \ref{time} along with the total merger duration for each simulation and the observability fraction, or the fraction of the duration of the merger during which it is observable by the LDA technique. We exclude the combined major and minor mergers from this table since they are built from mergers that progress at different rates. All of the simulations have a relatively long timescale of observability ($2.2-6.6$ Gyr). The exception is the q0.5\_fg0.3 simulation, where the observability timescale is 0.9 Gyr due to a decline in the LD1 values for a handful of snapshots in the late stage of the merger.
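One way to operationalize this definition, assuming an array of snapshot times (in Gyr) and the corresponding viewpoint-averaged LD1 values (hypothetical names), is sketched below; the interval between consecutive snapshots is counted toward the observability time only when both snapshots lie above the decision boundary:
\begin{verbatim}
import numpy as np

def lda_observability_time(times_gyr, ld1_mean):
    """Total time covered by consecutive runs of snapshots whose
    viewpoint-averaged LD1 exceeds the decision boundary (LD1 > 0)."""
    above = ld1_mean > 0.0
    dt = np.diff(times_gyr)  # interval between consecutive snapshots
    # count an interval only when both endpoints are above the boundary
    return np.sum(dt[above[:-1] & above[1:]])
\end{verbatim}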
We present a visualization of how the mean values of LD1 change throughout the lifetime of each merger in Figure \ref{mountain4}. Here and throughout the remainder of this paper, we show the q0.5\_fg0.3 and q0.2\_fg0.3\_BT0.2 simulations as examples of major and minor mergers, respectively. The figure shows the viewpoint-averaged value of LD1 for each snapshot as well as the 1$\sigma$ confidence interval on this value and the total range. We also plot the decision boundary for each simulation, which falls at an LD1 value of zero (horizontal line). The minor mergers do not fall significantly above this line; even though the viewpoint-averaged LD1 values for the q0.2\_fg0.3\_BT0.2 simulation fall above the decision boundary, they overlap the decision boundary within their 1$\sigma$ confidence intervals at almost all points in time. This means that not all viewpoints are significantly above this boundary. On the other hand, the major merger simulations are significantly above this boundary for the majority of their duration. While this is not readily apparent in Figure \ref{mountain4}, the q0.5\_fg0.3 simulation is an outlier when it comes to the LDA observability time. Figure \ref{mountain4} also demonstrates how sensitive the LDA classification is to the merger stage. For instance, there are a number of false negatives from the early stage of the mergers when the galaxies are more disk-like. Another key finding is that the sensitivity of the technique decays slowly during the post-coalescence and post-merger stages. We discuss the implications of the different observability timescales and the variations with time in more depth in \S \ref{discussobservability}, \ref{discuss:end}, and \ref{discuss:decoupled}. \subsection{Where and why does the LDA fail?} \label{fails} \begin{figure*} \centering \includegraphics[scale=0.28]{fp_fn_q0.5.png} \caption{Correct and incorrect classifications from the cross-validation sets for the q0.5\_fg0.3 simulation, which is representative of the major merger simulations. The correct classifications include true positives (first row) and true negatives (second row) and the incorrect classifications include false negatives (third row) and false positives (fourth row). We include the number of (non-repeated) galaxies in each category and three examples per row of galaxies from the cross-validation sample. The velocity and velocity dispersion maps for each example galaxy cover two consecutive panels, which is shown with alternating white and grey backgrounds. } \label{fig:fp_fn_maj} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.28]{fp_fn_q0.2.png} \caption{Same as Figure \ref{fig:fp_fn_maj} but for the q0.2\_fg0.3\_BT0.2 simulation, which is representative of the minor merger simulations.} \label{fig:fp_fn_min} \end{figure*} Here we summarize the factors that are most likely to lead to false classifications (false positives and false negatives) for the different simulations. Our goal is to identify the primary failure modes of the classification and assess if it is making reasonable choices. In other words, we should be concerned if our \textit{by-eye} classification disagrees with the majority of the false classifications. We present a visual version of a confusion matrix for the q0.5\_fg0.3 and q0.2\_fg0.3\_BT0.2 classifications in Figures \ref{fig:fp_fn_maj} and \ref{fig:fp_fn_min}, respectively. These simulations are representative of the results from the major and minor mergers, respectively.
We generate example velocity and velocity dispersion maps for each classification category (in rows, top to bottom: TP, TN, FN, FP) by combining the results of each iteration of the $k-$fold cross-validation and then randomly selecting example snapshots from each category. In Figure \ref{fig:fp_fn_maj}, after a by-eye examination, it makes sense that many of the false negatives and false positives are incorrectly classified in the q0.5\_fg0.3 simulation. The false negatives (third row) show orderly rotation with relatively low velocity dispersions. These look like the pre-merger isolated population shown in the true negatives row. The majority of the false positives (fourth row) are post-merger snapshots that have kinematic disturbances. The incorrect classifications for the major mergers can mostly be attributed to two factors. First, the false negatives are due to a lack of disturbed features, meaning that it is difficult to correctly classify many of these snapshots as mergers. Despite these limitations, the classification does correctly identify the majority of early stage snapshots as mergers, meaning that it is out-performing the by-eye assessment in many cases. Second, kinematic disturbances that are induced by the merger persist into the post-merger stages, producing a number of false positives. These kinematic features are very similar to the features in the post-coalescence stages, so it makes sense that these are commonly classified as false positives. As we discuss in more detail in \S \ref{discuss:end}, our definition of the `end' of the merger (the dividing line between post-coalescence and post-merger stages) is somewhat arbitrarily defined, and results in a number of misclassifications from these two stages. In Figure \ref{fig:fp_fn_min} we find that it is more challenging to correctly classify the nonmerging and merging galaxies in the q0.2\_fg0.3\_BT0.2 simulation using a by-eye assessment. For instance, the false negatives are very visually similar to the true negatives in their kinematic features. The same is true for the false positives, which are similar to the example true positives. The exception is a number of obvious merger snapshots (we show an example of one such snapshot in the upper middle pair of panels) where the kinematics are dramatically affected. However, these disturbances are short-lived, so the majority of the merging snapshots appear like the example in the upper right corner of the diagram. This diagram illustrates the crux of the problem for the minor mergers. While the LDA is able to pick up on a number of subtle features (such as stellar bulge enhancements), it ultimately struggles because many stages of the merger are not identifiable as such from their kinematic maps. These challenges contribute to a minor merger classification that has lower performance statistics than the major merger classification. Overall, the LDA is not misclassifying obvious (by-eye) mergers or nonmergers. The lack of identifiability of mergers/nonmergers given their kinematic maps is therefore the largest challenge for this technique. Other work highlights this same challenge with kinematic predictors; \citet{Hung2016} find that a significant fraction of merging galaxy kinematics remain indistinguishable by eye from the nonmerging kinematics. This indicates something fundamental about galaxy kinematics: namely, that we are not missing obvious features and instead that merging galaxies are often indistinguishable from nonmergers.
\subsection{The role of viewing angle in the classification} \label{viewpoint} It is well known that many kinematic predictors, such as $\lambda_{R_e}$, are correlated with galaxy inclination (e.g., \citealt{Cappellari2007,Emsellem2011,Harborne2019}). In this section we examine how the viewing angle, which is a proxy for inclination, affects the kinematic predictors and ultimately, the LDA. As described in \S \ref{simdeets}, there are seven isotropically distributed viewpoints (0-6) at each snapshot. Critically, the inclinations are not an exact match between the stand-alone isolated galaxies and the merging galaxies. For instance, viewpoints 0 and 4 are the most face-on viewpoints for the merging galaxies, but viewpoints 4, 5, and 6 are the most face-on for the stand-alone isolated galaxies. \begin{figure*} \centering \includegraphics[scale=0.55]{viewpoint_lambdar.png} \caption{Distribution of the mean values of $\lambda_{R_e}$\ and $\epsilon$ as a function of viewpoint (top) and the full resolution $r-$band images (middle) and stellar velocity maps (bottom) for all of the different viewpoints from a snapshot in time for the q0.5\_fg0.3 simulation. The more face-on viewpoints (e.g., viewpoint 4) tend towards lower values of $\lambda_{R_e}$\ and $\epsilon$, while the more edge-on viewpoints (e.g., viewpoint 5) tend to have larger $\lambda_{R_e}$\ values. We also include error bars to demonstrate the standard deviation of the spread at each viewpoint from all of the different moments in time of this simulation. While there is a relationship between inclination and $\lambda_{R_e}$, the trend is borderline significant. \textbf{This is consistent with the trend of varying $\lambda_{R_e}$\ and $\epsilon$ values with viewing angle from \citet{Emsellem2011}}. } \label{fig:inc_lambdar} \end{figure*} We first explore how inclination affects the $\lambda_{R_e}$\ predictor in Figure \ref{fig:inc_lambdar}, where $\lambda_{R_e}$\ increases as the galaxy inclination increases. For instance, viewing angles 0 and 4 are the most face-on and they also have the lowest values of $\lambda_{R_e}$. When the 1$\sigma$ errorbars are taken into consideration, the difference in $\lambda_{R_e}$\ values is marginally significant. \textbf{This is fully consistent with the results from \citet{Emsellem2011}, who predict that the measured $\lambda_{R_e}$\ and $\epsilon$ values of an axisymmetric rotating oblate spheroid vary with viewing angle (see Figure 3 of \citealt{Emsellem2011}).} These errorbars are the standard deviation in $\lambda_{R_e}$\ values from all of the different moments in time of the merger. We observe a larger difference in the $\lambda_{R_e}$\ values as a function of time (\S \ref{approxspin}) than as a function of viewpoint. \begin{figure*} \centering \includegraphics[scale=0.42]{stacked_hist_fg3_m12_kin.png} \includegraphics[scale=0.42]{stacked_hist_fg3_m15_kin.png} \caption{As in Figure \ref{histograms_major}, histograms of the LD1 value for the q0.5\_fg0.3 simulation (left) and the q0.2\_fg0.3\_BT0.2 simulation (right). We also overplot the time-averaged LD1 values for each viewpoint and errorbars to demonstrate the 1$\sigma$ variation amongst these values for all moments in time. There is less variation as a function of viewpoint than as a function of time (shown in Figure \ref{mountain4}).} \label{fig:stackedhist} \end{figure*} We next investigate how the LDA classification changes as a function of viewpoint. To visualize this, we plot the distribution of LD1 values in Figure \ref{fig:stackedhist}.
We include the histograms of the LD1 value for the nonmerging (blue) and merging (red and orange) galaxies from both the q0.5\_fg0.3 simulation (left panel) and the q0.2\_fg0.3\_BT0.2 simulation (right panel). We then overplot the mean and standard deviation of the LD1 values for all snapshots of each specific viewpoint. Focusing on the mean LD1 values for the merging sample, we can determine if the LDA is varying as a function of viewpoint. Focusing first on the left panel of this figure, which is for the q0.5\_fg0.3 simulation, the means for both the nonmerging and merging galaxies are not significantly different as a function of viewing angle. In fact, the mean LD1 values are more similar than the variation we observe in LD1 as a function of time in Figure \ref{mountain4}. The implication is that the major merger LDAs are fairly robust to viewing angle. For the q0.2\_fg0.3\_BT0.2 simulation, we observe slightly more variation in the LD1 distribution as a function of viewpoint, and the most face-on viewpoints (0 and 4) have lower LD1 values, which result in more false negative detections at these viewpoints. To further quantify if the LDA is changing as a function of viewpoint, we iteratively drop the merging galaxies at each viewpoint from the analysis and rerun the classification for the q0.5\_fg0.3 and q0.2\_fg0.3\_BT0.2 simulations. If the classification changes, this could indicate that a given viewing angle and/or inclination is significantly more or less accurate than the other viewpoints, which would point to inclination itself being the primary driver of this difference. From here on, we determine that the LDA is `significantly different' from the fiducial run if either of the following criteria is met: first, the majority of the predictors in the top four selected terms must change; or second, the performance statistics in Table \ref{tableaccuracy} must change by more than 10\% on average. This quantification of a significantly different classification applies to this section, where we explore the role of different viewing angles, and also to \S \ref{limitssnz} where we experiment with changes in the data reduction (i.e., changing the S/N or redshift of the simulated galaxies). When we rerun the LDA classification for the q0.5\_fg0.3 simulation iteratively without each viewpoint, the LDAs are not significantly different from the fiducial run. This confirms our findings in Figure \ref{fig:stackedhist}. For the q0.2\_fg0.3\_BT0.2 classification, we find that when viewpoints 2, 5, and 6 are absent, the classification is significantly different with lower performance statistics and different selected predictors. Our interpretation is that the minor mergers are best identified when the secondary nucleus is within the field of view, which happens most often in viewpoints 2, 5, and 6. Therefore, the significant changes to the classification as a function of viewpoint both in Figure \ref{fig:stackedhist} and in the rerun of the LDA without these viewpoints can be attributed to the chance positioning of the secondary galaxy as a function of viewpoint. This means that inclination-related effects on the intrinsic kinematic properties of the primary galaxy are not primarily responsible for the differences in the q0.2\_fg0.3\_BT0.2 LDA as a function of viewpoint. As a final note, in Appendix \ref{fair} in our discussion of between-class biases, we introduce the inclination itself as a predictor in the LDA.
We ultimately determine that $\epsilon$, which we use as a proxy for inclination, is not an important predictor. This further supports the finding that changes in the kinematic predictors purely due to inclination effects are not biasing the LDA classification itself. To conclude, we have determined that while the kinematic predictors themselves can vary as a function of viewing angle and/or galaxy inclination, the LDA classification is only sensitive to viewpoint through the positioning of the secondary nucleus relative to the line of sight. \subsection{Limitations of the technique in $z$ and S/N} \label{limitssnz} \begin{figure} \centering \includegraphics[scale=0.47, trim = 4cm 0.75cm 1cm 1.25cm, clip]{g_band_stel_vel_fg3_m12_185_1.png} \caption{The $g-$band S/N (left), stellar velocity (middle), and velocity dispersion (right) maps from a snapshot of the q0.5\_fg0.3 simulation. We have decreased the S/N by a factor of two (second row) and redshifted the galaxy to $z = 0.1$ (third row) to demonstrate the point at which the classification begins to change. The classification has a higher failure rate when the S/N is decreased by a factor of two, mostly due to the sparsity of the Voronoi bins. Additionally, when the spaxel size is increased to mock a galaxy that is redshifted to $z=0.1$, the classification begins to change, as this is the point at which the larger-scale kinematic features are distorted by the large spaxel size.} \label{fig:comparison} \end{figure} As with the imaging identification technique in N19, the kinematic technique is sensitive to both S/N and resolution, meaning that as the S/N decreases in the spectra and/or as the redshift of the galaxy increases, the technique will undergo significant changes. We test the sensitivity by decreasing the S/N and by moving the mock galaxies to higher redshift. To test how sensitive the classification is to decreased S/N, we decrease the average S/N of the q0.5\_fg0.3 simulation by factors of 1.5 and 2. In Figure \ref{fig:comparison} we compare a snapshot with S/N that has been decreased by a factor of two to the same snapshot from the fiducial run. When the S/N is decreased by a factor of 2, the classification is significantly different. While many of the predictors stay the same, the performance statistics decrease overall and there is an increase in the number of false negatives during the early stages of the merger. When the S/N is decreased, the Voronoi bins increase in size in the exterior regions of the galaxy. This obscures the large-scale kinematic features, which lowers the performance of the classification. We predict that MaNGA galaxies with low S/N may therefore be more likely to be misclassified. There are a couple of approaches we plan to explore when classifying MaNGA galaxies with different S/N ratios. One option is to implement a S/N cut when we apply the fiducial classification to the MaNGA galaxies. However, MaNGA galaxies with lower S/N need not be excluded from classification. Instead, another option is to use the classification and completeness correction from the lower S/N ratio LDA run to classify these galaxies separately. We predict that we should be able to classify the majority of the MaNGA sample since the fiducial (non-decreased) S/N of the simulation suite is representative of the MaNGA sample. For comparison's sake, in Figure \ref{fig:manga} we present a sample of MaNGA galaxies that span a range in surface brightness, redshift, and stellar mass.
The approximate stellar mass is from the NSA catalog and is estimated using the \texttt{kcorrect} code (\citealt{Blanton2007}). The simulated galaxies, which have stellar masses log M$_* \sim$ 10.6, are intermediate mass galaxies relative to the full MaNGA sample. \begin{figure} \centering \includegraphics[scale=0.63, trim=5.5cm 8cm 1.4cm 2cm, clip]{MaNGA_galaxy_sample.png} \caption{Same as Figure \ref{fig:comparison} but for a sample of MaNGA galaxies that span a range in surface brightness, redshift, and stellar mass. At low S/N (i.e., first and third rows) the velocity maps have large Voronoi bins and some kinematic predictors will be difficult to measure. As discussed in the text, the higher redshift galaxies (first and fourth row) in MaNGA also tend to be larger and more massive.} \label{fig:manga} \end{figure} We also experiment with increasing the S/N by a factor of 2. The classification undergoes minimal changes, with a slight increase in the performance statistics. When the S/N is higher, we predict that the current classification will be better able to determine if a galaxy is merging. Since the selected predictors do not significantly change, we can conclude that the fiducial run is identifying all relevant kinematic features. The mock galaxies are placed at a redshift of $z$ = 0.03, which is the median redshift of galaxies observed by MaNGA. In order to understand the limitations of the identification over the full range of redshifts for the MaNGA survey ($0.01 < z < 0.15$), we experiment with increasing the redshift of the mock galaxies. To do this, we increase the spaxel size from $0\farcs5$ to $1\farcs0$ and $1\farcs5$ and we increase the PSF size to $5\farcs0$ and $7\farcs5$. This mimics the effects of moving the simulated galaxies to a redshift of $z=0.07$ and $z=0.1$, respectively. When we artificially redshift the galaxies we do not introduce cosmological dimming, meaning that the galaxies have the same S/N as the sample at $z = 0.03$. This is because we want to understand the effects of the apparent size of galaxies independently of S/N effects. The classification does not change significantly when the galaxies are placed at a redshift of $z=0.07$. At $z=0.1$, the classification is significantly different; the number of misclassifications increases and the selected terms are different. However, these results are based on a galaxy-galaxy merger where each galaxy has a stellar mass of order $10^{10}\,M_{\odot}$. While it is a valid conclusion that the technique will struggle on an intermediate mass galaxy at $z=0.1$, the MaNGA sample does not tend to include this type of galaxy. Instead, MaNGA is designed to maintain roughly uniform coverage in log $M_*$ and radial coverage, meaning that higher mass galaxies ($> 10^{11} M_{\odot}$), which are more luminous and have larger angular sizes (i.e., the fourth row of Figure \ref{fig:manga}), are observed primarily at higher redshift, somewhat alleviating this concern (\citealt{Bundy2015,Wake2017}). \subsection{Limitations of the technique in stellar mass and B/T} \label{results:mass} The simulation suite of merging galaxies is limited in stellar mass and B/T ratio. The mergers can all be characterized as intermediate-mass disk-dominated galaxies that span a range of $3.9 - 4.7 \times 10^{10}$ M$_{\odot}$ in stellar mass and $0-0.2$ in initial B/T ratio.
These limitations are especially important given that many of the leading kinematic predictors ($\lambda_{R_e}$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, and $\mu_{2,\sigma}$) are related to the intrinsic kinematic properties of galaxies. For instance, $\mu_{1,\sigma}$\ is a proxy for stellar mass, so we are skeptical that this classification can be reliably applied to galaxies with different properties, e.g., bulge-dominated elliptical galaxies. One possible approach to circumvent these concerns is to remove these predictors from the classification. We rerun the LDA for all simulations without the $\lambda_{R_e}$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, and $\mu_{2,\sigma}$\ predictors and find that the performance significantly decreases. Specifically, the accuracy, recall, and F1 score of the major mergers decrease by $20-50$\%. This is unsurprising given that the leading predictors presented in Table \ref{tab:predictors} include all of the predictors that are tied to the intrinsic kinematic properties of galaxies ($\lambda_{R_e}$, $\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, $\mu_{2,\sigma}$). Interestingly, when the intrinsic kinematic predictors are excluded, the performance of the minor merger simulations is not significantly affected. In fact, the performance of the major merger simulations is comparable to or worse than that of the minor merger simulations. This highlights that the major mergers undergo a more dramatic global transformation during the merging process, which is reflected in the intrinsic kinematic properties of the remnant galaxy. Since removing these predictors significantly decreases the performance of the classification, we choose to include all predictors in this work and to attach the following caveat to this paper: Since this analysis focuses on kinematic predictors that are sensitive to intrinsic galaxy properties, we advise against applying this classification to all galaxy types in MaNGA. In \S \ref{discuss:extrapolate}, we discuss possible strategies for carefully applying the classification to MaNGA galaxies. \section{Discussion} \label{discuss4} In the discussion portion of this paper, we consider the implications of the kinematic LDA classifications for merging galaxies. We focus on the individual LD1 coefficients in \S \ref{discuss:useless} where we examine why some of the kinematic predictors that have been useful in the past are not informative in this technique. We then examine the most important kinematic predictors in \S \ref{discusssign}. In \S \ref{discussmass} we explore the impact of mass ratio on the stellar kinematics of mergers. We consider the physical meaning of the interaction terms and their importance to the classification in \S \ref{discuss:interaction}. We examine the observability timescale of the kinematic LDA technique and how the observability of a merger varies with time in \S \ref{discussobservability}. We specifically focus on the definition of the `end' of a merger in \S \ref{discuss:end} and the kinematics of the merger remnants in \S \ref{discuss:decoupled}. In \S \ref{discussaccuracy} we compare the performance of the imaging classifications to the kinematic classifications. Finally, we end with a note on applying this technique to MaNGA IFS observations in \S \ref{discuss:extrapolate}.
\subsection{Why are some traditionally utilized kinematic predictors not useful in this classification?} \label{discuss:useless} Some of the kinematic predictors that are traditionally utilized to identify merging galaxies are uninformative in this analysis. In this case, `uninformative' means that the predictor is discarded during the RFR term selection steps or that it has a small relative coefficient value in the LDA. The uninformative predictors are $\Delta$PA, $\mathrm{v}_{\mathrm{asym}}$, $\sigma_{\mathrm{asym}}$, $\Delta x_{V}$, and $\Delta x_{\sigma}$. \subsubsection{The misalignment between the kinematic PA and imaging PA ($\Delta$PA) is most sensitive to galaxy inclination} A small fraction of the most dramatic mergers have significantly disturbed stellar kinematic maps. However, this does not translate into a global kinematic PA that is misaligned from the imaging PA, for two reasons. First, many of the warps seen in the stellar kinematic disks are symmetric, which can produce a global kinematic PA that is not misaligned. Second, neither the PA from the kinematics nor the PA from imaging is well determined during the most disturbed stages of the merger. This contributes to random deviations around a low $\Delta$PA value. \subsubsection{The asymmetries in the velocity and velocity dispersion maps ($\mathrm{v}_{\mathrm{asym}}$\ and $\sigma_{\mathrm{asym}}$) are only sensitive to the most disturbed times in the major mergers} \begin{figure} \centering \includegraphics[scale=0.5, trim=0cm 0cm 0cm 0cm, clip]{time_evo_va_sa_fg3_m12_6.pdf} \includegraphics[scale=0.5, trim=0cm 0cm 0cm 0cm, clip]{time_evo_va_sa_fg3_m15_6.pdf} \caption{Time evolution of the merging (red and orange) and matched nonmerging (blue) galaxies for the q0.5\_fg0.3 (top, red) and q0.2\_fg0.3\_BT0.2 (bottom, orange) simulations on the $\mathrm{v}_{\mathrm{asym}}$-$\sigma_{\mathrm{asym}}$\ diagram. The blue squares indicate the matched isolated sample of galaxies, while the blue circles are the pre- and post-mergers. Here, we show the pre-standardized predictor values. Time begins at zero at the start of the simulations and progresses in Gyr. The $K_{\mathrm{asym}} = 0.15$ (dotted) and 0.135 (dashed) threshold lines are included from \citet{Hung2016} and \citet{Bellocchi2012}, respectively, where galaxies above the diagnostic lines are classified as merging.} \label{fig:va_sa} \end{figure} Previous work with the gas kinematics of simulated and observed mergers finds that merging galaxies have enhanced values of both $\mathrm{v}_{\mathrm{asym}}$\ and $\sigma_{\mathrm{asym}}$\ (e.g., \citealt{Shapiro2008,Bellocchi2012,Hung2016,Bloom2017}). These studies define a threshold value in $K_{\mathrm{asym}}$ to identify merging galaxies, where $K_{\mathrm{asym}} = \sqrt{\mathrm{v}_{\mathrm{asym}}^2 + \sigma_{\mathrm{asym}}^2}$. For instance, \citet{Bellocchi2012} study local luminous infrared galaxies (LIRGs) and find a threshold value of $K_{\mathrm{asym}} > 0.135$. \citet{Hung2016} define a $K_{\mathrm{asym}}$ threshold of 0.15 for the galaxies in their work, which is calculated from the velocities of gas particles from simulated \texttt{SUNRISE}\ mergers. The $\sigma_{\mathrm{asym}}$\ and $\mathrm{v}_{\mathrm{asym}}$\ predictors are ultimately unimportant in our analysis. We present the viewpoint-averaged values of $\sigma_{\mathrm{asym}}$-$\mathrm{v}_{\mathrm{asym}}$\ in Figure \ref{fig:va_sa} for the q0.2\_fg0.3\_BT0.2 and the q0.5\_fg0.3 simulations. We include the $K_{\mathrm{asym}}$ diagnostic lines used to identify mergers from \citet{Bellocchi2012} and \citet{Hung2016}.
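For reference, this combined asymmetry statistic and the literature thresholds can be written as a short sketch (we do not ourselves adopt a $K_{\mathrm{asym}}$ cut in this work):
\begin{verbatim}
import numpy as np

def k_asym(v_asym, sigma_asym):
    """Combined kinematic asymmetry, K_asym = sqrt(v_asym^2 + sigma_asym^2)."""
    return np.sqrt(v_asym**2 + sigma_asym**2)

# Literature threshold classifications (merger if K_asym exceeds the cut):
# K_asym > 0.135  (Bellocchi et al. 2012, local LIRGs)
# K_asym > 0.15   (Hung et al. 2016, simulated gas-rich mergers)
\end{verbatim}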
We include the $K_{\mathrm{asym}}$ diagnostic lines used to identify mergers from \citet{Bellocchi2012} and \citet{Hung2016}. The figure demonstrates that there is minimal time evolution in the predictor values for the minor mergers. The predictor values for the major mergers are only slightly enhanced, falling above the diagnostic line from \citet{Hung2016} for a few snapshots during the late stages of the merger. While Figure \ref{fig:va_sa} presents the predictor values in log space, the standardized predictor values of $\sigma_{\mathrm{asym}}$\ and $\mathrm{v}_{\mathrm{asym}}$\ used to construct the classification also have minimal separation between the merging and nonmerging populations. Because none of the mergers show a significant enhancement in the values of these predictors, both are excluded during the RFR selection step. Ultimately, the $\mathrm{v}_{\mathrm{asym}}$\ and $\sigma_{\mathrm{asym}}$\ predictors are unimportant in this work because they are only elevated at specific points in time during the late stages of merging. These predictors are more useful in studies such as \citet{Bellocchi2012}, which focuses on LIRGs (prototypical gas-rich major mergers), and \citet{Hung2016}, where the simulated galaxies include gas-rich major mergers. Of additional interest, \citet{Hung2016} find that the mergers in their sample exceed the $K_{\mathrm{asym}}$ value only during the `strong interaction' or late stage of merging.

\subsubsection{The offsets between the centers of the velocity and velocity dispersion maps and the imaging center ($\Delta x_V$ and $\Delta x_{\sigma}$) are not very sensitive to mergers}

We design the $\Delta x_V$ and the $\Delta x_{\sigma}$ statistics to identify galaxies that have offsets in their kinematic centers. We find that these values are most elevated during the late stages of the merger, where there are two visible nuclei. However, since the kinematic maps are disky throughout the merger and not dramatically disturbed at most stages, these two statistics are not noticeably elevated for the duration of the merger and are therefore relatively unimportant. Statistics like these have been used in the past for galaxies such as NGC 4473, which is a `double sigma (2$\sigma$)' galaxy, meaning that it has two peaks in its 2D stellar velocity dispersion map that are aligned with the photometric major axis of the galaxy (\citealt{Krajnovic2011}). This type of velocity dispersion map is rare in observations (e.g., \citealt{Krajnovic2011}) and is associated with the co-addition of counter-rotating stellar disks, produced by retrograde 1:1 mass ratio mergers in simulations (e.g., \citealt{Jesseit2007,Bois2011}). Therefore, these statistics may not be as useful for identifying the more typical types of mergers, which are often more unequal in mass ratio and do not occur under idealized conditions.

\subsection{What can we learn from the most important LDA predictors about the kinematics of stars in mergers?}
\label{discusssign}

Here we examine the most important predictors in the LDA for all simulations and make connections between these predictors and the dynamical evolution of the stars during a merger. We split the discussion by predictor and, for brevity, focus only on the leading predictors presented in Table \ref{table:LDAall}. The top predictors include $\lambda_{R_e}$, $\mu_{1,\sigma}$, $\mu_{2,\sigma}$, $|\mu_{3,\sigma}|$, and $\mu_{4,V}$.
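Four of these five predictors are moments of the map-value distributions. To make them concrete before discussing each in turn, the following is a minimal sketch of how moments of this type can be measured from a kinematic map with numpy and scipy; the flattening of the map into a one-dimensional distribution of bin values is a simplifying assumption here, and the precise measurement procedure for the $\mu$ predictors is defined earlier in the paper.

\begin{verbatim}
import numpy as np
from scipy import stats

def dispersion_moments(sigma_map):
    # Sketch: collapse a 2D velocity dispersion map into a 1D
    # distribution of (finite) bin values and compute its moments,
    # analogous to the mu_{1-4,sigma} predictors in the text.
    values = sigma_map[np.isfinite(sigma_map)]
    mu1 = np.mean(values)               # mean       ~ mu_{1,sigma}
    mu2 = np.std(values)                # dispersion ~ mu_{2,sigma}
    mu3 = np.abs(stats.skew(values))    # |skewness| ~ |mu_{3,sigma}|
    mu4 = stats.kurtosis(values)        # kurtosis   ~ mu_{4,sigma}
    return mu1, mu2, mu3, mu4
\end{verbatim}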
\subsubsection{The approximate spin parameter tracks a global `slow-down' in the velocity maps of the major mergers}
\label{approxspin}

The approximate spin parameter ($\lambda_{R_e}$) is a key predictor for all merger simulations and is especially important for the major mergers. The angular momentum of the merging galaxies is therefore significantly different from that of the nonmerging population; this effect is more apparent for the major mergers. In this section we examine how $\lambda_{R_e}$\ changes with time for the various simulations and how this compares to previous work.

We first directly examine the pre-standardized values of $\lambda_{R_e}$\ and $\epsilon$ for the q0.2\_fg0.3\_BT0.2 and q0.5\_fg0.3 simulations in Figure \ref{fig:lambdar_epsilon}. On this diagram, we indicate the `slow-rotator' territory, which lies in the lower left corner. The $\lambda_{R_e}$-$\epsilon$ diagram is often used to kinematically distinguish the slow-rotator population of early-type galaxies from the fast-rotating population. This fast-slow rotator distinction probes the evolutionary histories of galaxies through disk assembly (see \citealt{Cappellari2016} for a review). Much recent work has focused both on examining the observed populations of fast and slow rotators and on making predictions for how merging galaxies move through this territory. For instance, \citet{Naab2014} utilize cosmological merger tree simulations to show that major mergers significantly affect the angular momentum content of a galaxy; they can either spin up or spin down the remnant. In our case, all of the simulated galaxies begin with a $\lambda_{R_e}$\ value of $\sim0.7$. For the major mergers, the $\lambda_{R_e}$\ value decreases dramatically, reaching the boundary of the slow-rotator region. This confirms that major mergers can dramatically affect the kinematic properties of the remnant, kinematically transforming the galaxy from one that can be described as disk-dominated to one that is still rotating but is dispersion-dominated.

\begin{figure} \centering \includegraphics[scale=0.5]{time_evo_epsilon_lambdar_fg3_m12_6.pdf} \includegraphics[scale=0.5]{time_evo_epsilon_lambdar_fg3_m15_6.pdf} \caption{Same as Figure \ref{fig:va_sa} but for the time evolution of the merging (red and orange) and nonmerging (blue) galaxies for the q0.5\_fg0.3 (top, red) and q0.2\_fg0.3\_BT0.2 (bottom, orange) on the $\lambda_{R_e}$-$\epsilon$ diagram. As the merger progresses (red points), the galaxies evolve towards decreased values of $\lambda_{R_e}$, which corresponds to increasing levels of disorder in the kinematic maps. Slow rotators, defined by \citet{Cappellari2016}, fall below the dashed line on these plots.} \label{fig:lambdar_epsilon} \end{figure}

\subsubsection{The mean and dispersion of the velocity dispersion distribution ($\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$) track the growth of a stellar bulge component}
\label{discuss:musigma}

The overall importance of the $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$\ predictors reflects the fact that the velocity dispersion map is informative for identifying mergers. We examine how these two predictors evolve with time during a merger in Figure \ref{fig:musigma} for the q0.5\_fg0.3 and q0.2\_fg0.3\_BT0.2 mergers. Here we present the average value for all of the viewpoints of a given snapshot. We also include representative velocity dispersion maps for a handful of informative snapshots.
\begin{figure*} \centering \includegraphics[scale=0.45]{mu_12.png} \caption{Time evolution of the mean values of $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$\ for the q0.5\_fg0.3 (left plot) and q0.2\_fg0.3\_BT0.2 simulations (right plot). We show the time evolution of the merging galaxies with the red and orange points and of the nonmerging galaxies with the blue points. The stand-alone isolated galaxies are squares and the pre- and post-merger isolated galaxies are circles. Above each plot, we additionally include representative velocity dispersion maps from key snapshots. We find that the major mergers (left plot) tend to show a consistent evolution in $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$\ with time; both values increase as the stellar bulge component is built. We include three example velocity dispersion maps (above each plot) that belong to the early (left), late (middle), and post-coalescence (right) stages of the merger. The late and post-coalescence snapshots have elevated values of $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$; during the late stage the area between the two nuclei has an enhanced velocity dispersion (middle) and during the post-coalescence stage (right), the center of the galaxy has a larger velocity dispersion value. On the other hand, the minor mergers (right plot) show an increase in $\mu_{1,\sigma}$\ with time but no significant change in $\mu_{2,\sigma}$. While both types of mergers are contributing to a stellar bulge component, the change to the dispersion maps of the major mergers is more dramatic and global. } \label{fig:musigma} \end{figure*}

\textit{For all merger simulations, the $\mu_{1,\sigma}$\ value increases throughout a merger, tracing the assembly of a stellar bulge component.} This increase is more dramatic for the major mergers, which reach a $\mu_{1,\sigma}$\ value of $\sim$200 km s$^{-1}$. Even for the minor mergers, the merger incites growth of the central velocity dispersion with time. This enhancement is still present 0.5 Gyr after coalescence, so the post-merger isolated stages are mixed with the merger snapshots along the $\mu_{1,\sigma}$\ axis in Figure \ref{fig:musigma}. \textit{The signatures of the bulge growth are therefore dynamically long-lived as opposed to imaging features that fade quickly with time following a merger (i.e., in N19 the imaging predictors fade within 0.5 Gyr of final coalescence).}

The $\mu_{2,\sigma}$\ predictor serves different roles in major versus minor mergers, which is reflected in the different evolution of the $\mu_{2,\sigma}$\ values with time. An increase of $\mu_{2,\sigma}$\ with time for the major mergers traces the presence of two kinematic components by capturing the `bridge' of higher velocity dispersion values between two merging galaxies. This is formed by two overlapping counter-rotating features. Additionally, the post-coalescence major mergers have more significant bulge growth, which is reflected both in the enhancement in $\mu_{1,\sigma}$\ and in an increase in $\mu_{2,\sigma}$, since the entire distribution is broadened in this process. The minor mergers show less change in the value of $\mu_{2,\sigma}$\ with time. While $\mu_{2,\sigma}$\ is still informative (because it increases during the late stages of the merger to track the bridge of higher velocity dispersion), it does not continue to increase into the post-coalescence stages.
This could indicate that a smaller fraction of the stars is involved in the buildup of the stellar bulge in the case of the minor mergers.

\subsubsection{The skewness of the velocity dispersion distribution ($|\mu_{3,\sigma}|$) identifies secondary kinematic components}

The skewness in the velocity dispersion distribution, $|\mu_{3,\sigma}|$, is an important predictor for the major mergers; it is sensitive to secondary kinematic components and disturbances in the velocity dispersion maps. For instance, the stellar bulge region has a higher dispersion, which manifests itself as a small wing on the velocity dispersion distribution (the main contribution to this distribution is the disk rotation component). In this case, the skewness predictor identifies similar features to the $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$\ predictors. The $|\mu_{3,\sigma}|$\ predictor is additionally important for identifying early-stage mergers. These snapshots tend to have low values of $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$. Since they have undergone their first pericentric passage, they have a slight enhancement in the velocity dispersion map in the area of the primary galaxy that is perturbed by the merger. An example of such a snapshot with this type of velocity dispersion enhancement is the leftmost galaxy in the q0.5\_fg0.3 panel in Figure \ref{fig:musigma}. This galaxy is classified as merging by the classification, and the most important predictor leading to this decision is $|\mu_{3,\sigma}|$.

\subsubsection{The kurtosis of the velocity distribution ($\mu_{4,V}$) identifies the superposition of two merging galaxies}

The kurtosis of the velocity distribution, $\mu_{4,V}$, is important both for the individual minor merger classifications and for the combined minor merger classification. This predictor is sensitive to perturbations in the velocity field, specifically cases where there are high velocities in the velocity distribution. When there are extreme velocities in the velocity distribution (due to the superposition of two merging nuclei), the kurtosis becomes more negative due to the flattening of the distribution. The $\mu_{4,V}$\ predictor is significant because it is able to track smaller changes in the velocity distribution, as opposed to the global disruptions seen in the major mergers. It is a significant predictor for the minor mergers because the velocity dispersion distributions are not dramatically changing during a minor merger. Instead, the LDA must rely upon the extreme velocities caused by the superposition of a secondary nucleus.

\subsection{The classification changes with mass ratio}
\label{discussmass}

Past studies have investigated how the properties of simulated mergers affect the kinematic predictors. For instance, \citet{Hung2016} investigate a set of simulated merging galaxies with mass ratios of 1:1 and 1:4. They find that the merger signatures in kinematics are most apparent for the 1:1 major merger, where they can be visible for up to twice as long as for the 1:4 major merger. While there are significant differences between the work in this paper and \citet{Hung2016} (e.g., we perform a full RT calculation and create mock IFS maps, while \citet{Hung2016} use the \texttt{GADGET-3} particle velocities to create velocity maps), we also find that the classification differs significantly with mass ratio.

The major and minor merger classifications are different in several ways. The minor merger classifications have performance statistics that are $\sim$10-30\% lower.
The minor mergers are also more unstable in the terms that are selected, meaning that the coefficient values fluctuate slightly when the classification is re-run. The LDA for the minor mergers therefore relies on a handful of key snapshots to create the classification. If these snapshots are excluded from the training set and instead fall in the CV set, then the classification is slightly different. The overall effect is that the minor mergers are less stable and many of the selected terms have similar coefficient values, making it difficult to assess which are the most important.

Another difference between the major and minor merger classifications is that they are composed of different predictors. While some predictors, such as $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$, are important for all classifications, $\lambda_{R_e}$\ is more important for the major mergers and $\mu_{4,V}$\ is more important for the minor mergers. As we discuss in \S \ref{discusssign}, both major and minor mergers demonstrate bulge growth, which leads to an enhancement in $\mu_{1,\sigma}$, but the change is more apparent in the major mergers. The global kinematic properties of the major mergers are more significantly impacted; this includes the $\lambda_{R_e}$\ predictor, which traces a global slow-down in the velocity maps of the major mergers. On the other hand, the minor mergers are most sensitive to smaller-scale changes in the kinematic maps, which can be traced by predictors like $\mu_{4,V}$, which traces the superposition of the secondary stellar nucleus.

\subsection{The predictors evolve non-linearly with time; the LDA incorporates this behavior with interaction terms}
\label{discuss:interaction}

Many of the kinematic predictors in this analysis evolve with time throughout the merger. In most cases, this evolution is non-monotonic, meaning that merging galaxies evolve back and forth in predictor space as a function of merger time. The LDA technique accounts for this behavior using interaction terms. An example of an interaction term in action is the $\mu_{\sigma}*\sigma_{\sigma}$ term, which has a negative coefficient for several of the major merger simulations. This means that if the value of $\mu_{1,\sigma}$\ is relatively large, then $\mu_{2,\sigma}$\ must be relatively small for the merger probability to increase. The opposite is also true: if $\mu_{1,\sigma}$\ is relatively small, then $\mu_{2,\sigma}$\ must be relatively large. To be clear, `relatively large' and `relatively small' refer to the standardized values of these predictors, which are measured relative to the distribution of values for the entire merger; a predictor that is relatively small therefore has a negative standardized value. For instance, if $\mu_{1,\sigma}$\ is large and $\mu_{2,\sigma}$\ is small, then the term becomes (negative coefficient) $\times$ (positive) $\times$ (negative) = positive value = an increase in LD1 (see the worked example below). Consider the $\mu_{\sigma}-\sigma_{\sigma}$ diagram for the q0.5\_fg0.3 merger presented in the left panel of Figure \ref{fig:musigma}. The $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$\ predictors are correlated, so they occupy the diagonal space of this diagram. Together, this correlation and the interaction term ensure that the center of the $\mu_{\sigma}-\sigma_{\sigma}$ diagram is the `merger territory'. This picture is somewhat complicated by the fact that there are other terms in the LDA, but if the $\mu_{\sigma}*\sigma_{\sigma}$ interaction term has a large coefficient, then this interpretation generally applies.
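As a worked numerical illustration of this sign logic, consider a hypothetical interaction term with a negative LD1 coefficient acting on standardized predictor values; the numbers are invented for illustration and are not coefficients from this work.

\begin{verbatim}
# Hypothetical interaction term with a negative LD1 coefficient.
coef = -0.8                   # coefficient of mu_sigma * sigma_sigma

# Late-stage merger: mu_{1,sigma} large, mu_{2,sigma} small (standardized).
mu1, mu2 = 1.2, -0.9
print(coef * mu1 * mu2)       # +0.864: pushes LD1 up (merger-like)

# Both predictors large: the same term now lowers the merger probability.
mu1, mu2 = 1.2, 1.1
print(coef * mu1 * mu2)       # -1.056: pushes LD1 down
\end{verbatim}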
The interaction terms account for the non-monotonic evolution of the predictor values with time. When we create a classifier with only the primary predictors, it is fundamentally different from the LDA with interaction terms: the purely linear classifier is inaccurate, classifying many of the isolated post-merger snapshots as mergers. When the LDA is forced to generalize to a single, monotonic direction of movement across these diagrams, it loses key information. When the interaction terms are included, the LDA becomes sensitive to the values of the other predictors and is therefore able to create `if-then' cases for the predictor space. For example, \textbf{if} the galaxy is in the later stages of merging (which are characterized by significant bulge growth and a large value of $\mu_{1,\sigma}$), \textbf{then} to avoid ambiguity with the post-merger isolated galaxies, the classification requires a relatively negative standardized value of $\mu_{2,\sigma}$\ to classify the galaxy as merging. On the other hand, \textbf{if} the galaxy has a relatively small value of $\mu_{1,\sigma}$, \textbf{then} $\mu_{2,\sigma}$\ should be large for the galaxy to be classified as merging. An example is a galaxy in the early stages of the merger, where the bulge growth has not yet begun, so $\mu_{1,\sigma}$\ is small. However, the galaxy has undergone its first pericentric passage and is experiencing an enhancement in the velocity dispersion, which leads to an increase in $\mu_{2,\sigma}$. One such galaxy is shown in the left panel of Figure \ref{fig:musigma}.

We conclude that it is critical to include the interaction terms in the LDA classification. Not only do they improve the performance of the technique, but they are physically motivated by the non-monotonic evolution in kinematic predictors over the course of a merger lifetime.

\subsection{The observability of a merger varies with time; mergers are missed during the early stages when the kinematics are disk-like}
\label{discussobservability}

In \S \ref{analyzeobservability}, we present the observability timescales of the various simulated mergers. We conclude that the kinematic LDA technique lengthens the observability timescale of the simulated mergers over that measured from the individual predictors. Here we focus specifically on how the observability of a merger changes with the merger stage, and we refer the reader to Figure \ref{mountain4} for a useful visualization of the mean value of LD1 with time in all of the simulated mergers.

When we examine the observability of the mergers in terms of the pre-defined early-, late-, and post-coalescence stages, we notice several differences with stage. First, during the early stages of merging, some simulations show a larger standard deviation in the LD1 values. For all simulations, the value of LD1 tends to fall below the decision boundary for these early stages. \citet{Hung2015} find that using kinematic predictors to identify mergers results in a significant fraction of false negatives from epochs where the merger is indistinguishable from a rotating disk. We also find that a significant fraction of false negatives occur during the early stages where the rotation is indeed disk-like. During the late stage of the merger, the minor mergers do not show much variation from one snapshot to the next; the LD1 values are relatively flat. On the other hand, the major mergers, in particular q0.5\_fg0.3, show variation between the late-stage snapshots.
For q0.5\_fg0.3, this variation contributes to the relatively short observability timescale. These changes in LD1 values are significantly greater than the variance due to different viewpoints. \textit{The kinematic features are therefore changing significantly and rapidly with time for some of the major mergers during the late stage.} We also find that most simulations have relatively high LD1 values during the late stage of the merger. The late stage of the merger is therefore characterized by short-lived dramatic kinematic features. This is consistent with \citet{Hung2015}, who find that kinematic tracers of mergers tend to be most informative during the late stage of the merger, which is when the imaging predictors are also most useful. As the merger progresses into the post-coalescence and post-merger isolated stages, we find that the LD1 value is more stable, which is characteristic of longer-lived kinematic features. The LD1 value does not significantly decline during the post-merger stages. We focus on the kinematics of the post-coalescence and post-merger stages in \S \ref{discuss:end} and \S \ref{discuss:decoupled}.

\subsection{When does a merger end? The kinematic disturbances due to mergers are long-lived}
\label{discuss:end}

In N19, we define the end of the merger as 0.5 Gyr after final coalescence. This cutoff was selected so that the imaging predictors and therefore the value of LD1 decayed smoothly until the end of the merger. The imaging technique is therefore very accurate and precise during the transition from a post-coalescence merger to an isolated post-merger galaxy. In contrast, in this work the kinematic predictors and therefore the LD1 value remain elevated during the period of post-merger isolated snapshots. There are visually apparent warps in the velocity maps, and the velocity dispersion maps have elevated values of $\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$\ for the post-merger isolated galaxies. \citet{Hung2016} also find that kinematic merger signatures persist on $\sim$Gyr timescales following coalescence. We find that the kinematic disturbances fade $2-2.5$ Gyr after coalescence, meaning that in order to improve the classification, we would need to significantly extend the post-coalescence phase. \textit{Instead of changing the definition of the end of the merger, a more relevant task could be to define the merger more specifically by stage.} For instance, if it is a priority to distinguish post-coalescence from post-merger isolated galaxies (post-coalescence snapshots occur immediately following coalescence until 0.5 Gyr after coalescence and post-merger snapshots occur $>$0.5 Gyr after coalescence), we should rely on the imaging predictors, which do a better job in this specific case. In future work we plan to combine the imaging and the kinematic tools and more directly address this question. One path forward could be to create separate classifications that target different stages of the merger. This could provide a more flexible definition of galaxies that are merging, and allow other users of the technique to target stages of interest in a merger.

\subsection{The kinematics of the post-merger stages track the growth of a stellar bulge and a kinematically decoupled core for the 1:2 mass ratio merger}
\label{discuss:decoupled}

Here we focus on the kinematics of the post-coalescence and post-merger stages. During these stages we observe the build-up of a central component in the stellar velocity dispersion maps.
The change is more dramatic for the major mergers and can best be explained as tracing the growth of a stellar bulge component. \citet{Hopkins2009} investigate the effect of mass ratio on the merger remnant and find that the fraction of the primary stellar disk that is relaxed into the bulge is directly proportional to the mass ratio of the merger. This supports the hypothesis that a stellar bulge is built in the post-merger stages, since we would expect major mergers to contribute a larger fraction of stars to the bulge component.

\begin{figure} \centering \includegraphics[width=0.47\textwidth]{snap_210.png} \includegraphics[scale=0.67, trim=1.1cm 2.7cm 1.2cm 3.0cm, clip]{mosaic_fig_deg_ani_fg3_m12_260_view_1.png} \includegraphics[scale=0.67, trim=1.1cm 2.7cm 1.2cm 3.0cm, clip]{mosaic_fig_deg_ani_fg3_m12_285_view_1.png} \includegraphics[scale=0.67, trim=1.1cm 2.7cm 1.2cm 3.0cm, clip]{mosaic_fig_deg_ani_fg3_m12_311_view_1.png} \caption{Evolution of the long-lived kinematically decoupled feature in the $r$-band image (left), stellar velocity map (middle), and stellar velocity dispersion map (right) of the q0.5\_fg0.3 merger. The top panel is the last snapshot before coalescence. At 2.54 Gyr the galaxy is in the post-coalescence stage and at 2.79 and 5.18 Gyr the galaxy is in the post-merger isolated stages. The central kinematically decoupled component appears in the velocity map around 2.54 Gyr, which is $\sim$0.4 Gyr after coalescence, and does not disappear until $\sim$3 Gyr after coalescence. } \label{fig:endgameall} \end{figure}

The q0.5\_fg0.3 merger has unique kinematic features in the stellar velocity map during the post-merger stages, so we focus the remainder of our discussion on this merger. We present several post-coalescence and post-merger snapshots from the q0.5\_fg0.3 merger in Figure \ref{fig:endgameall}. The stellar velocity maps are particularly intriguing because they have a distinct central component that appears in the post-coalescence phase and persists into the post-merger phase. This central feature in the stellar velocity maps is spatially coincident with the bulge-like feature in the stellar velocity dispersion maps. It is not fully counter-rotating, but is misaligned with the main stellar disk. We therefore hypothesize that we have discovered a decoupled kinematic component.

Previous theoretical work has predicted that major mergers with mass ratios of 1:1 or 1:2 can produce this type of intriguing kinematic component (e.g., \citealt{Bendo2000,Jesseit2007,Crocker2009}). For instance, \citet{Bendo2000} and \citet{Jesseit2007} find that equal-mass simulated mergers display a much wider range of kinematic features, including counter-rotating cores and global misalignments, while the unequal-mass mergers tend to have disk-like kinematics. The merger remnants with complex kinematics are intriguing because these decoupled central components have been discovered in observational studies as well. For example, the ATLAS$^{\mathrm{3D}}$ survey finds that a significant fraction of slow-rotating ETGs have decoupled kinematic components (e.g., \citealt{Emsellem2011}). Our finding of a decoupled kinematic core in the remnant of the 1:2 ($q=0.5$) mass ratio merger supports these past findings that mergers with a large mass ratio can produce dramatic central features in the kinematic maps. Furthermore, this result suggests that selecting galaxies with kinematically decoupled cores would produce a sample of post-coalescence major mergers with a mass ratio $q \gtrsim 0.5$.
\subsection{The kinematic LDA is not as good at identifying merging galaxies as the imaging technique}
\label{discussaccuracy}

The kinematic LDA technique has a significant number of false negatives, which drives down the accuracy and recall of the technique. This is partially a result of the chosen priors, which skew the classification towards minimizing false positives (see \S \ref{accuracy4} for a full discussion). As we discuss in \S \ref{fails}, this is also due to the lack of identifiability of certain snapshots as mergers, meaning that they are indistinguishable from nonmerging disks and/or post-merger remnants both visually and using their kinematic predictors.

The imaging LDA performs better for all runs, with improvements in all performance statistics. This means that fewer total galaxies will be correctly classified by the kinematic technique due to the shortcomings listed above. While the major merger combined run is slightly improved when run with imaging predictors (this improves the accuracy/recall/F1 score by 9\%/11\%/7\%), the minor merger combined run experiences significant improvement (16\%/41\%/25\%). The kinematic minor merger combined classification therefore scores lower on all performance statistics relative to both the imaging minor merger classification and the kinematic major merger classification. This reflects the particular inability of the kinematic classification to identify minor mergers.

The imaging LDA also has longer observability times on average, ranging from $2.2-2.8$ Gyr for the major mergers to $3.5-9.2$ Gyr for the minor mergers. The kinematic LDA has observability times of $0.9-2.4$ Gyr for the major mergers and $3.0-6.6$ Gyr for the minor mergers. So while the observability times for the kinematic LDA significantly improve upon those from individual kinematic predictors, the mergers are still observable for somewhat longer by the imaging LDA.

This has important implications for the relative capability of imaging versus kinematic predictors in identifying merging galaxies. The stellar kinematics of mergers take longer to exhibit disturbance and then remain disturbed for a longer time following a merger. The imaging predictors are better contained to the duration of the merger, and better able to identify all of the different merger stages. \citet{Hung2016} recommend using kinematic predictors in combination with imaging predictors due to the frequency of false negatives in their investigation of kinematically-identified merging galaxies. We support this conclusion, finding that the kinematic predictors have failure modes that can be mitigated by incorporating imaging predictors.

However, there are some advantages of kinematic predictors relative to the imaging predictors. In \S \ref{discuss:end} and \S \ref{discuss:decoupled} we find that the kinematic predictors are particularly useful for identifying the post-coalescence and post-merger stages because the kinematic disturbances persist long after the imaging predictors fade. The implication is that while the imaging predictors are more informative overall, the kinematic predictors are powerful in certain domains and provide additional information. If forced to select between the imaging and kinematic classification methods, we would choose the imaging approach. However, the best overall approach is to combine the two techniques into one imaging + kinematic classification. We plan to discuss this topic further in future work.
\subsection{Applying the technique to MaNGA IFS in future work}
\label{discuss:extrapolate}

In \S \ref{results:mass}, we discuss the implications of creating a kinematic classification from a suite of simulations with a narrow range in stellar mass ($3.9\times10^{10} < \mathrm{M_*}/\mathrm{M}_{\odot} < 4.7\times10^{10}$) and in initial B/T ratio ($0-0.2$). Many of the kinematic predictors, specifically the most important predictors for the major merger simulations, also probe the intrinsic properties of galaxies. In other words, while these predictors are useful for identifying major mergers, they also change as a function of stellar mass and morphology. The implication is that a classification created from disk-dominated intermediate-mass mergers may not extrapolate well to the MaNGA sample, which spans a wider range in stellar mass ($10^8 < \mathrm{M_*}/\mathrm{M}_{\odot} < 10^{11}$) and morphology. At present, it is unclear whether this will be a concern only for the extreme cases, i.e., the most massive bulge-dominated ETGs, or whether it will also be a concern for systems with a mix of rotational and dispersion support in their kinematics. We have preliminarily investigated the distribution of bulge- versus disk-dominated galaxies in MaNGA; while \citet{Wang2020} find that MaNGA galaxies are predominantly disk-dominated, \citet{Graham2018} find that a significant fraction of MaNGA galaxies (across all masses) are bulge-dominated.

Since the MaNGA sample includes a diversity of different galaxy types, we plan to tread carefully when we apply the classification. One option could be to select MaNGA galaxies that have high values of $\lambda_{R_e}$\ to include in the classification; in this way, we could exclude bulge-dominated galaxies. Another option could be to de-emphasize certain domains of predictor space in the classification, or to remove the kinematic predictors that are most sensitive to intrinsic galaxy properties altogether. The details of this approach will be developed in future work, since they are beyond the scope of this paper, which focuses mostly on the creation of the technique.

\section{Conclusions}
\label{conclusions4}

In this work, we build on the stand-alone imaging merger classifier in N19 to create a parallel LDA classifier that utilizes kinematic predictors to identify merging galaxies. To produce the classification, we use \texttt{SUNRISE}\ synthetic spectra from \texttt{GADGET-3} simulated merging galaxies to create mock `MaNGA-ized' datacubes. We convolve and rebin the synthetic spectra to the spatial and spectral resolution of MaNGA, introduce noise, and implement the Voronoi binning scheme used for the MaNGA datacubes. With \texttt{ppxf}, we extract stellar velocity and stellar velocity dispersion maps from each datacube. We then measure a number of kinematic predictors from the velocity and velocity dispersion maps. We use a random forest regressor (RFR) followed by the LDA classifier to select the most informative kinematic predictors and to carry out the classification. The selected predictors are: the difference between the kinematic PA and the imaging PA ($\Delta$PA), the \texttt{kinemetry} residuals (resid), the approximate spin parameter ($\lambda_{R_e}$), the asymmetry in the Radon profile ($A_2$), and the moments of the velocity and velocity dispersion distributions ($\mu_{1,V}$, $\mu_{1,\sigma}$, $\mu_{2,V}$, $\mu_{2,\sigma}$, $|\mu_{3,V}|$, $|\mu_{3,\sigma}|$, $\mu_{4,V}$, and $\mu_{4,\sigma}$).
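For concreteness, a minimal sketch of this two-stage term selection and classification with scikit-learn might look as follows; the feature matrix, labels, importance threshold, and term construction below are placeholders chosen for illustration, not the exact settings used in this work.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# X: table of kinematic predictors; y: 1 = merging, 0 = nonmerging.
X = np.random.rand(500, 12)          # placeholder predictor table
y = np.random.randint(0, 2, 500)     # placeholder labels

# Append pairwise interaction terms (e.g., mu_sigma * sigma_sigma)
# and standardize all terms.
terms = PolynomialFeatures(degree=2, interaction_only=True,
                           include_bias=False).fit_transform(X)
terms = StandardScaler().fit_transform(terms)

# Stage 1: a random forest regressor ranks terms by importance.
rfr = RandomForestRegressor(n_estimators=300).fit(terms, y)
keep = rfr.feature_importances_ > 0.01   # placeholder threshold

# Stage 2: LDA on the selected terms yields the LD1 coefficients.
lda = LinearDiscriminantAnalysis().fit(terms[:, keep], y)
ld1 = terms[:, keep] @ lda.coef_[0]      # LD1 value for each galaxy
\end{verbatim}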
We then run the LDA as a classifier for all simulations individually as well as for the combined major merger simulation and the combined minor merger simulation. We first use the LDA classification as an agnostic approach to determine the most useful kinematic predictors for identifying different types of mergers. Our main conclusions are:

\begin{itemize} \item Many kinematic predictors that are used in previous work to identify mergers are not as useful in this work (i.e., the deviations of the velocity and velocity dispersion maps from ordered rotation, $\mathrm{v}_{\mathrm{asym}}$\ and $\sigma_{\mathrm{asym}}$\ respectively, and $\Delta$PA). These predictors are sensitive to specific stages of equal mass ratio mergers and are not as sensitive to the full range of merger parameters and stages used in this simulation suite (\S \ref{discuss:useless}). \item The mean and variance of the values in the velocity dispersion maps ($\mu_{1,\sigma}$\ and $\mu_{2,\sigma}$, respectively) are the most useful predictors for identifying mergers across all simulations, because they are sensitive to the growth of a stellar bulge component during the merger (\S \ref{discusssign}). \item The selected predictors differ as a function of mass ratio. The major mergers exhibit large-scale kinematic changes (i.e., a global slow-down of the rotation), so they rely more on predictors like $\lambda_{R_e}$. The minor mergers are identified using predictors like $\mu_{4,V}$\ which trace the superposition of a secondary stellar nucleus (\S \ref{discussmass}). \end{itemize}

We also examine the performance of the LDA classification, which is measured using the four performance statistics (accuracy, precision, recall, and F1 score) as well as the observability timescale. Our main findings are:

\begin{itemize} \item The LDA performance significantly improves when the interaction terms are included. These terms are capable of accounting for the non-monotonic evolution of the kinematic predictors with time (\S \ref{discuss:interaction}). \item By combining many different kinematic predictors, we create a classification where the observability timescale is a large fraction of the overall merger time (40-90\%). This corresponds to mergers that are observable for 0.9-6.6 Gyr and is an improvement on the observability timescale from any of the individual kinematic predictors (\S \ref{discussobservability}). \item The sensitivity of the LDA technique varies with epoch during the mergers. We find that there are more missed mergers (i.e., false negatives) during the early stage of the merger, where the stellar kinematics are disk-like. The mergers are most detectable during the late and post-coalescence stages (\S \ref{discussobservability}). \item The kinematic predictors (and the LD1 value) are long-lived and remain elevated for $\sim$2 Gyr following final coalescence. The stellar kinematics of the post-coalescence and post-merger epochs capture the formation of a stellar bulge component (\S \ref{discuss:end}). \item For the (major, gas rich) q0.5\_fg0.3 merger, a kinematically decoupled component is visible in the stellar velocity maps (\S \ref{discuss:decoupled}). \item The imaging classification performs better than the kinematic classification and the improvement is larger ($\sim$15\% increase in accuracy, recall, and F1 score) for the minor mergers. The kinematic LDA can be improved by adding imaging predictors (\S \ref{discussaccuracy}).
\item The kinematic predictors add unique information about merging galaxies to the toolkit; for instance, the kinematic classification is better than the imaging classification at identifying post-coalescence and post-merger galaxies (\S \ref{discussaccuracy}). \item The kinematic classification is created from a suite of simulations that are limited in their scope (i.e., the simulated galaxies are disk-dominated and span a range of $3.9-4.7\times10^{10} \ \mathrm{M}_{\odot}$ in stellar mass). We conclude that the results may not be applied to all MaNGA galaxies (which have a range of morphologies and an approximately flat stellar mass distribution $10^8 < \mathrm{M_*}/\mathrm{M}_{\odot} < 10^{11}$). We plan to further address this concern in future work (\S \ref{discuss:extrapolate}). \end{itemize}

In Nevin et al. (2021, in prep) we will combine the kinematic classification with the imaging classification presented in N19 and apply the classifier to MaNGA galaxies. At this point, we will release the Python tools for implementing these classifications. These tools are designed to be adaptable to the specifications of other imaging and/or IFS surveys, with the goal of applying the classification to other IFS surveys (e.g., SAMI, CALIFA, and HECTOR). In the same work, we will further investigate whether (and why) various kinematic parameters enhance the existing imaging classifier, and we will revisit the hyperparameter tuning to determine the optimal location of the decision boundary. We also plan to investigate the possibility of splitting the classification by merger stage. Our scientific goals include identifying how the star formation histories, metallicities, and AGN activity change for these different stages as well as for different mass ratios of merging galaxies.

\section{Acknowledgements}

We thank the anonymous referee for their thorough and thoughtful comments that have improved the quality and clarity of this paper. R. N. and J. M. C. are supported by NSF AST-1714503. L. B. acknowledges support from NSF award \#1715413. JAVM acknowledges support from the CONACyT postdoctoral fellowship program. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. We specifically utilized Comet and Oasis through the XSEDE allocation for `An Imaging and Kinematic Approach for Improved Galaxy Merger Identifications' (TG-AST130041). We would also like to acknowledge the help of Martin Kandes, who assisted with the optimization of the LDA tool. The authors acknowledge University of Florida Research Computing for providing computational resources and support that have contributed to the research results reported in this publication. The website for HiPerGator (the well-named supercomputer) is http://researchcomputing.ufl.edu. This research made use of Marvin, a core Python package and web framework for MaNGA data, developed by Brian Cherinka, Jos\'{e} S\'{a}nchez-Gallego, Brett Andrews, and Joel Brownstein (\citealt{Marvin}). Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org.
SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'isica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"ur Astrophysik Potsdam (AIP), Max-Planck-Institut f\"ur Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"ur Astrophysik (MPA Garching), Max-Planck-Institut f\"ur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observat\'ario Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'onoma de M\'exico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University. \software{astropy (\citealt{Astropy2013,Astropy2018}), matplotlib (\citealt{Matplotlib}), mangadap (\citealt{Westfall2019,Belfiore2019}), numpy (\citealt{Numpy}), openmpi (\citealt{Openmpi}), scikit-learn (\citealt{scikit-learn}), scipy (\citealt{Scipy2020}), seaborn (\citealt{Seaborn}), sdss-marvin (\citealt{Marvin}), pandas (\citealt{Pandas})} \bibliographystyle{apj}
\section{Introduction} \vspace{-0.25cm} Policy gradient algorithms maximize the expectation of cumulative reward by following the gradient of this expectation with respect to the policy parameters. Most existing algorithms estimate this gradient in a model-free manner by sampling returns from the real environment and rely on a likelihood ratio estimator \cite{williams1992simple,sutton1999policy}. Such estimates tend to have high variance and require large numbers of samples or, conversely, low-dimensional policy parameterizations. A second approach to estimate a policy gradient relies on backpropagation instead of likelihood ratio methods. If a differentiable environment model is available, one can link together the policy, model, and reward function to compute an analytic policy gradient by backpropagation of reward along a trajectory \cite{nguyen1990neural,jordan1992forward,deisenroth2011pilco,grondman2015online}. Instead of using entire trajectories, one can estimate future rewards using a learned value function (a critic) and compute policy gradients from subsequences of trajectories. It is also possible to backpropagate analytic action derivatives from a Q-function to compute the policy gradient without a model \cite{werbos1990menu,riedmiller2005neural,silver2014deterministic}. Following Fairbank \cite{fairbank2012value}, we refer to methods that compute the policy gradient through backpropagation as \emph{value gradient} methods. In this paper, we address two limitations of prior value gradient algorithms. The first is that, in contrast to likelihood ratio methods, value gradient algorithms are only suitable for training deterministic policies. Stochastic policies have several advantages: for example, they can be beneficial for partially observed problems \cite{singh94learningwithout}; they permit on-policy exploration; and because stochastic policies can assign probability mass to off-policy trajectories, we can train a stochastic policy on samples from an experience database in a principled manner. When an environment model is used, value gradient algorithms have also been critically limited to operation in deterministic environments. By exploiting a mathematical tool known as ``re-parameterization'' that has found recent use for generative models \cite{Rezende:ICML:2014,kingma2013auto}, we extend the scope of value gradient algorithms to include the optimization of stochastic policies in stochastic environments. We thus describe our framework as \emph{Stochastic Value Gradient} (SVG) methods. Secondly, we show that an environment dynamics model, value function, and policy can be learned jointly with neural networks based only on environment interaction. Learned dynamics models are often inaccurate, which we mitigate by computing value gradients along real system trajectories instead of planned ones, a feature shared by model-free methods \cite{williams1992simple,sutton1999policy}. This substantially reduces the impact of model error because we only use models to compute policy gradients, not for prediction, combining advantages of model-based and model-free methods with fewer of their drawbacks. We present several algorithms that range from model-based to model-free methods, flexibly combining models of environment dynamics with value functions to optimize policies in stochastic or deterministic environments. 
Experimentally, we demonstrate that SVG methods can be applied using generic neural networks with tens of thousands of parameters while making minimal assumptions about plants or environments. By examining a simple stochastic control problem, we show that SVG algorithms can optimize policies where model-based planning and likelihood ratio methods cannot. We provide evidence that value function approximation can compensate for degraded models, demonstrating the increased robustness of SVG methods over model-based planning. Finally, we use SVG algorithms to solve a variety of challenging, under-actuated, physical control problems, including swimming of snakes, reaching, tracking, and grabbing with a robot arm, fall-recovery for a monoped, and locomotion for a planar cheetah and biped. \vspace{-0.25cm} \section{Background} \label{sec:background} \vspace{-0.25cm} We consider discrete-time Markov Decision Processes (MDPs) with continuous states and actions and denote the state and action at time step $t$ by $\mathbf{s}^t \in \mathbb{R}^{N_S}$ and $\mathbf{a}^t \in \mathbb{R}^{N_A}$, respectively. The MDP has an initial state distribution $\mathbf{s}^0 \sim p^0(\cdot)$, a transition distribution $\mathbf{s}^{t+1} \sim p(\cdot | \mathbf{s}^t, \mathbf{a}^t)$, and a (potentially time-varying) reward function $r^t = r(\mathbf{s}^t, \mathbf{a}^t, t)$.\footnote{We make use of a time-varying reward function only in one problem to encode a terminal reward.} We consider time-invariant stochastic policies $\mathbf{a} \sim p(\cdot | \mathbf{s}; \theta)$, parameterized by $\theta$. The goal of policy optimization is to find policy parameters $\theta$ that maximize the expected sum of future rewards. We optimize either finite-horizon or infinite-horizon sums, i.e.,\ $J(\theta) = \mathbb{E} \left [ \sum_{t=0}^T \gamma^t r^t \big | \theta \right ]$ or $J(\theta) = \mathbb{E} \left [ \sum_{t=0}^\infty \gamma^t r^t \big | \theta \right ]$ where $\gamma \in [0,1]$ is a discount factor.\footnote{$\gamma < 1$ for the infinite-horizon case.} When possible, we represent a variable at the next time step using the ``tick'' notation, e.g., $\mathbf{s}' \triangleq \mathbf{s}^{t+1}$. In what follows, we make extensive use of the state-action-value Q-function and state-value V-function. \begin{align} Q^t(\mathbf{s}, \mathbf{a}) & = \mathbb{E} \left [ \sum_{\tau=t} \gamma^{\tau - t} r^{\tau} \big | \mathbf{s}^t = \mathbf{s}, \mathbf{a}^t = \mathbf{a}, \theta \right ]; V^t(\mathbf{s}) = \mathbb{E} \left [ \sum_{\tau=t} \gamma^{\tau - t} r^{\tau} \big | \mathbf{s}^t = \mathbf{s}, \theta \right ]. \end{align} For finite-horizon problems, the value functions are time-dependent, e.g., $V' \triangleq V^{t+1}(\mathbf{s}')$, and for infinite-horizon problems the value functions are stationary, $V' \triangleq V(\mathbf{s}')$. The relevant meaning should be clear from the context. The state-value function can be expressed recursively using the stochastic Bellman equation \begin{align} V^t(\mathbf{s}) & = \int \left [ r^t + \gamma \int V^{t+1}(\mathbf{s}') p(\mathbf{s}' | \mathbf{s},\mathbf{a}) d \mathbf{s}' \right ] p(\mathbf{a}|\mathbf{s}; \theta) d \mathbf{a}. \label{eq:V_recursive} \end{align} We abbreviate partial differentiation using subscripts, $g_x \triangleq \partial g(x, y) / \partial x$. 
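As a minimal numerical illustration of these definitions, the following toy sketch (a one-dimensional MDP of our own invention, not a system from this paper) estimates $V(\mathbf{s})$ by averaging finite-horizon discounted returns over sampled rollouts:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma, T, M = 0.95, 50, 2000   # discount, horizon, number of rollouts

def policy(s):                 # toy stochastic policy a ~ p(.|s; theta)
    return -0.5 * s + 0.1 * rng.standard_normal()

def step(s, a):                # reward r(s, a); transition s' ~ p(.|s, a)
    r = -s**2 - 0.01 * a**2
    s_next = s + a + 0.05 * rng.standard_normal()
    return s_next, r

def V_estimate(s0):            # Monte-Carlo estimate of V(s0)
    returns = []
    for _ in range(M):
        s, ret = s0, 0.0
        for t in range(T):
            a = policy(s)
            s, r = step(s, a)
            ret += gamma**t * r
        returns.append(ret)
    return np.mean(returns)

print(V_estimate(1.0))
\end{verbatim}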
\vspace{-0.25cm} \section{Deterministic value gradients} \vspace{-0.25cm} The deterministic Bellman equation takes the form $V(\mathbf{s}) = r(\mathbf{s}, \mathbf{a}) + \gamma V'(\mathbf{f}(\mathbf{s}, \mathbf{a}))$ for a deterministic model $\mathbf{s}' = \mathbf{f}(\mathbf{s}, \mathbf{a})$ and deterministic policy $\mathbf{a}=\pi(\mathbf{s};\theta)$. Differentiating the equation with respect to the state and policy yields an expression for the value gradient \begin{align} V_\mathbf{s} & = r_\mathbf{s} + r_\mathbf{a} \pi_\mathbf{s} + \gamma V'_{\mathbf{s}'} (\mathbf{f}_\mathbf{s} + \mathbf{f}_\mathbf{a} \pi_\mathbf{s} ), \label{eq:DetVgradS} \\ V_\theta & = r_\mathbf{a} \pi_\theta + \gamma V'_{\mathbf{s}'} \mathbf{f}_\mathbf{a} \pi_\theta + \gamma V'_\theta. \label{eq:DetVgradTheta} \end{align} In eq. \ref{eq:DetVgradTheta}, the term $\gamma V'_\theta$ arises because the total derivative includes policy gradient contributions from subsequent time steps (full derivation in Appendix \ref{sec:Appendix:Recursive}). For a purely model-based formalism, these equations are used as a pair of coupled recursions that, starting from the termination of a trajectory, proceed backward in time to compute the gradient of the value function with respect to the state and policy parameters. $V_\theta^0$ returns the total policy gradient. When a state-value function is used after one step in the recursion, $r_\mathbf{a} \pi_\theta + \gamma V'_{\mathbf{s}'} \mathbf{f}_\mathbf{a} \pi_\theta$ directly expresses the contribution of the current time step to the policy gradient. Summing these gradients over the trajectory gives the total policy gradient. When a Q-function is used, the per-time step contribution to the policy gradient takes the form $Q_\mathbf{a} \pi_\theta$. \vspace{-0.25cm} \section{Stochastic value gradients} \vspace{-0.25cm} One limitation of the gradient computation in eqs. \ref{eq:DetVgradS} and \ref{eq:DetVgradTheta} is that the model and policy must be deterministic. Additionally, the accuracy of the policy gradient $V_\theta$ is highly sensitive to modeling errors. We introduce two critical changes: First, in section \ref{sec:MACA:sbp}, we transform the stochastic Bellman equation (eq. \ref{eq:V_recursive}) to permit backpropagating value information in a stochastic setting. This also enables us to compute gradients along real trajectories, not ones sampled from a model, making the approach robust to model error, leading to our first algorithm ``SVG($\infty$),'' described in section \ref{sec:Maca:Trajectory}. Second, in section \ref{sec:Maca:Value}, we show how value function critics can be integrated into this framework, leading to the algorithms ``SVG($1$)'' and ``SVG($0$)'', which expand the Bellman recursion for 1 and 0 steps, respectively. Value functions further increase robustness to model error and extend our framework to infinite-horizon control. \vspace{-0.2cm} \subsection{Differentiating the stochastic Bellman equation} \label{sec:MACA:sbp} \paragraph{Re-parameterization of distributions} Our goal is to backpropagate through the stochastic Bellman equation. To do so, we make use of a concept called ``re-parameterization'', which permits us to compute derivatives of deterministic and stochastic models in the same way. A very simple example of re-parameterization is to write a conditional Gaussian density $p(y | x) = \mathcal{N}(y | \mu(x), \sigma^2(x))$ as the function $y = \mu(x) + \sigma(x) \xi$, where $\xi \sim \mathcal{N}(0, 1)$. 
From this point of view, one produces samples procedurally by first sampling $\xi$, then deterministically constructing $y$. Here, we consider conditional densities whose samples are generated by a deterministic function of an input noise variable and other conditioning variables: $\mathbf{y} = \mathbf{f}(\mathbf{x}, \mathbf{\xi})$, where $\mathbf{\xi} \sim \rho(\cdot)$, a fixed noise distribution. Rich density models can be expressed in this form \cite{Rezende:ICML:2014,kingma2013auto}. Expectations of a function $\mathbf{g}(\mathbf{y})$ become $\mathbb{E}_{p(\mathbf{y} | \mathbf{x})} \mathbf{g}(\mathbf{y}) = \int \mathbf{g}(\mathbf{f}(\mathbf{x},\xi)) \rho(\xi) d \xi$. The advantage of working with re-parameterized distributions is that we can now obtain a simple Monte-Carlo estimator of the derivative of an expectation with respect to $\mathbf{x}$: \begin{align} \nabla_\mathbf{x} \mathbb{E}_{p(\mathbf{y} | \mathbf{x})} \mathbf{g}(\mathbf{y}) &= \mathbb{E}_{\rho(\xi)} \mathbf{g}_\mathbf{y} \mathbf{f}_\mathbf{x} \approx \frac{1}{M} \sum_{i=1}^M \mathbf{g}_\mathbf{y} \mathbf{f}_\mathbf{x} \big |_{\xi = \xi_i}. \end{align} In contrast to likelihood ratio-based Monte Carlo estimators, $\nabla_\mathbf{x} \log p(\mathbf{y} | \mathbf{x}) \mathbf{g}(\mathbf{y})$, this formula makes direct use of the Jacobian of $\mathbf{g}$. \vspace{-0.2cm} \paragraph{Re-parameterization of the Bellman equation} \label{sec:MACA:Reparameterization} We now re-parameterize the Bellman equation. When re-parameterized, the stochastic policy takes the form $\mathbf{a} = \pi(\mathbf{s}, \eta; \theta)$, and the stochastic environment the form $\mathbf{s}' = \mathbf{f}(\mathbf{s}, \mathbf{a}, \xi)$ for noise variables $\eta \sim \rho(\eta)$ and $\xi \sim \rho(\xi)$, respectively. Inserting these functions into eq.\ (\ref{eq:V_recursive}) yields \begin{align} V(\mathbf{s}) & = \mathbb{E}_{\rho(\eta)} \bigg [ r(\mathbf{s}, \pi(\mathbf{s},\eta; \theta)) + \gamma \mathbb{E}_{\rho(\xi)} \big [ V'(\mathbf{f}(\mathbf{s}, \pi(\mathbf{s},\eta; \theta),\xi)) \big] \bigg]. \label{eq:V_recursiveReparameterized} \end{align} Differentiating eq.\ \ref{eq:V_recursiveReparameterized} with respect to the current state $\mathbf{s}$ and policy parameters $\theta$ gives \begin{align} V_\mathbf{s} & = \mathbb{E}_{\rho(\eta)} \bigg [ r_\mathbf{s} + r_\mathbf{a} \pi_\mathbf{s} + \gamma \mathbb{E}_{\rho(\xi)} V'_{\mathbf{s}'} (\mathbf{f}_\mathbf{s} + \mathbf{f}_\mathbf{a} \pi_\mathbf{s} ) \bigg ], \label{eq:VgradS} \\ V_\theta & = \mathbb{E}_{\rho(\eta)} \bigg [ r_\mathbf{a} \pi_\theta + \gamma \mathbb{E}_{\rho(\xi)} \big [ V'_{\mathbf{s}'} \mathbf{f}_\mathbf{a} \pi_\theta + V'_\theta \big ] \bigg ]. \label{eq:VgradTheta} \end{align} We are interested in controlling systems with \emph{a priori} unknown dynamics. Consequently, in the following, we replace instances of $\mathbf{f}$ or its derivatives with a learned model $\hat{\mathbf{f}}$. \vspace{-0.2cm} \paragraph{Gradient evaluation by planning} A planning method to compute a gradient estimate is to generate a trajectory by running the policy in a loop with a model while sampling the associated noise variables, yielding a trajectory $\tau = (\mathbf{s}^1, \eta^1, \mathbf{a}^1, \xi^1, \mathbf{s}^2, \eta^2, \mathbf{a}^2, \xi^2, \dots)$.
On this sampled trajectory, a Monte-Carlo estimate of the policy gradient can be computed by the backward recursions: \vspace{-0.1cm} \begin{align} v_\mathbf{s} & = [r_\mathbf{s} + r_\mathbf{a} \pi_\mathbf{s} + \gamma v'_{\mathbf{s}'} (\hat{\mathbf{f}}_\mathbf{s} + \hat{\mathbf{f}}_\mathbf{a} \pi_\mathbf{s} )] \big|_{\eta, \xi}, \label{eq:VgradMCS} \\ v_\theta & = [r_\mathbf{a} \pi_\theta + \gamma ( v'_{\mathbf{s}'} \hat{\mathbf{f}}_\mathbf{a} \pi_\theta + v'_\theta)] \big|_{\eta, \xi}, \label{eq:VgradMCTheta} \end{align} where we have written lower-case $v$ to emphasize that the quantities are one-sample estimates\footnote{In the finite-horizon formulation, the gradient calculation starts at the end of the trajectory for which the only terms remaining in eq.\ (\ref{eq:VgradMCS}) are $v_\mathbf{s}^T \approx r_\mathbf{s}^T + r_\mathbf{a}^T \pi_\mathbf{s}^T$. After the recursion, the total derivative of the value function with respect to the policy parameters is given by $v_\theta^0$, which is a one-sample estimate of $\nabla_\theta J$.}, and ``$\big|_x$'' means ``evaluated at $x$''. \vspace{-0.3cm} \paragraph{Gradient evaluation on real trajectories} An important advantage of stochastic over deterministic models is that they can assign probability mass to observations produced by the real environment. In a deterministic formulation, there is no principled way to account for mismatch between model predictions and observed trajectories. In this case, the policy and environment noise $(\eta, \xi)$ that produced the observed trajectory are considered unknown. By an application of Bayes' rule, which we explain in Appendix \ref{sec:Appendix:Noise}, we can rewrite the expectations in equations \ref{eq:VgradS} and \ref{eq:VgradTheta} given the observations $(\mathbf{s}, \mathbf{a}, \mathbf{s}')$ as \vspace{-0.25cm} \begin{align} V_\mathbf{s} & = \mathbb{E}_{p(\mathbf{a} | \mathbf{s})} \mathbb{E}_{p(\mathbf{s}' | \mathbf{s}, \mathbf{a})} \mathbb{E}_{p(\eta, \xi | \mathbf{s}, \mathbf{a}, \mathbf{s}')} \bigg [ r_\mathbf{s} + r_\mathbf{a} \pi_\mathbf{s} + \gamma V'_{\mathbf{s}'} (\hat{\mathbf{f}}_\mathbf{s} + \hat{\mathbf{f}}_\mathbf{a} \pi_\mathbf{s} ) \bigg ], \label{eq:PVgradS} \\ V_\theta & = \mathbb{E}_{p(\mathbf{a} | \mathbf{s})} \mathbb{E}_{p(\mathbf{s}' | \mathbf{s}, \mathbf{a})} \mathbb{E}_{p(\eta, \xi | \mathbf{s}, \mathbf{a}, \mathbf{s}')} \bigg [ r_\mathbf{a} \pi_\theta + \gamma ( V'_{\mathbf{s}'} \hat{\mathbf{f}}_\mathbf{a} \pi_\theta + V'_\theta ) \bigg ], \label{eq:PVgradTheta} \end{align} where we can now replace the two outer expectations with samples derived from interaction with the real environment. In the special case of additive noise, $\mathbf{s}' = \hat{\mathbf{f}}(\mathbf{s}, \mathbf{a}) + \xi$, it is possible to use a deterministic model to compute the derivatives $(\hat{\mathbf{f}}_\mathbf{s}, \hat{\mathbf{f}}_\mathbf{a})$. The noise's influence is restricted to the gradient of the value of the next state, $V'_{\mathbf{s}'}$, and does not affect the model Jacobian. If we consider it desirable to capture more complicated environment noise, we can use a re-parameterized generative model and infer the missing noise variables, possibly by sampling from $p(\eta, \xi | \mathbf{s}, \mathbf{a}, \mathbf{s}')$. \vspace{-0.25cm} \subsection{SVG($\infty$)} \label{sec:Maca:Trajectory} \vspace{-0.25cm} SVG($\infty$) computes value gradients by backward recursions on finite-horizon trajectories. After every episode, we train the model, $\hat{\mathbf{f}}$, followed by the policy, $\pi$.
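To make these recursions concrete, a minimal one-sample sketch of the SVG($\infty$) backward pass might look as follows. It assumes the per-step noise variables $(\eta, \xi)$ have already been sampled or inferred (e.g., $\xi = \mathbf{s}' - \hat{\mathbf{f}}(\mathbf{s}, \mathbf{a})$ for an additive-noise model), and the Jacobian helpers bundled in \texttt{jac} are hypothetical stand-ins for quantities an autodiff framework would supply:
\begin{verbatim}
# Schematic one-sample SVG(infinity) backward pass (finite horizon).
# traj is a list of (s, a, eta, xi) tuples ordered t = 0..T; jac.r_s(s, a)
# etc. are hypothetical helpers returning the Jacobians of r, pi and f_hat
# evaluated at the stored points (shapes: r_s (ds,), r_a (da,),
# pi_s (da,ds), pi_th (da,dth), f_s (ds,ds), f_a (ds,da)).
def svg_inf_policy_gradient(traj, gamma, jac):
    v_s, v_th = None, None               # v'_s = v'_theta = 0 at the horizon
    for (s, a, eta, xi) in reversed(traj):
        r_s,  r_a   = jac.r_s(s, a),     jac.r_a(s, a)
        pi_s, pi_th = jac.pi_s(s, eta),  jac.pi_th(s, eta)
        f_s,  f_a   = jac.f_s(s, a, xi), jac.f_a(s, a, xi)
        if v_s is None:                  # last step: only reward terms remain
            v_th = r_a @ pi_th
            v_s  = r_s + r_a @ pi_s
        else:                            # v_theta uses the incoming v'_s'
            v_th = r_a @ pi_th + gamma * (v_s @ f_a @ pi_th + v_th)
            v_s  = r_s + r_a @ pi_s + gamma * v_s @ (f_s + f_a @ pi_s)
    return v_th                          # one-sample estimate of grad_theta J
\end{verbatim}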
We provide pseudocode for this in Algorithm \ref{alg:MACA} but discuss further implementation details in section \ref{sec:Maca:ModelLearning} and in the experiments. \begin{minipage}[t]{0.48\textwidth} \begin{algorithm}[H] \caption{SVG($\infty$)} \label{alg:MACA} \begin{algorithmic}[1] \small \STATE Given empty experience database $\mathcal{D}$ \FOR{trajectory $= 0 ${ \bfseries to} $\infty$} \FOR{$t = 0 ${ \bfseries to} $T$} \STATE Apply control $\mathbf{a} = \pi(\mathbf{s}, \eta; \theta)$, $\eta \sim \rho(\eta)$ \STATE Insert $(\mathbf{s}, \mathbf{a}, r, \mathbf{s}')$ into $\mathcal{D}$ \ENDFOR \STATE Train generative model $\hat{\mathbf{f}}$ using $\mathcal{D}$ \STATE $v'_\mathbf{s} = 0$ (finite-horizon) \STATE $v'_\theta = 0$ (finite-horizon) \FOR{$t = T$ {\bfseries down to} $0$} \STATE Infer $\xi | (\mathbf{s}, \mathbf{a}, \mathbf{s}')$ and $\eta | (\mathbf{s}, \mathbf{a})$ \STATE $v_\theta = [ r_\mathbf{a} \pi_\theta + \gamma (v'_{\mathbf{s}'} \hat{\mathbf{f}}_\mathbf{a} \pi_\theta + v'_\theta) ] \big |_{\eta, \xi}$ \STATE $v_\mathbf{s} = [ r_\mathbf{s} + r_\mathbf{a} \pi_\mathbf{s} + \gamma v'_{\mathbf{s}'} (\hat{\mathbf{f}}_\mathbf{s} + \hat{\mathbf{f}}_\mathbf{a} \pi_\mathbf{s} ) ] \big |_{\eta, \xi}$ \ENDFOR \STATE Apply gradient-based update using $v_\theta^0$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage}\hfill \begin{minipage}[t]{0.48\textwidth} \begin{algorithm}[H] \caption{SVG($1$) with Replay} \label{alg:MACAZeroER} \begin{algorithmic}[1] \small \STATE Given empty experience database $\mathcal{D}$ \FOR{$t = 0 ${ \bfseries to} $\infty$} \STATE Apply control $\mathbf{a} = \pi(\mathbf{s}, \eta; \theta)$, $\eta \sim \rho(\eta)$ \STATE Observe $r, \mathbf{s}'$ \STATE Insert $(\mathbf{s}, \mathbf{a}, r, \mathbf{s}')$ into $\mathcal{D}$ \STATE // Model and critic updates \STATE Train generative model $\hat{\mathbf{f}}$ using $\mathcal{D}$ \STATE Train value function $\hat{V}$ using $\mathcal{D}$ (Alg. \ref{alg:FPE}) \STATE // Policy update \STATE Sample $(\mathbf{s}^k, \mathbf{a}^k, r^k, \mathbf{s}^{k+1})$ from $\mathcal{D}$ ($k \leq t$) \STATE $w = \frac{p(\mathbf{a}^k | \mathbf{s}^k ; \theta^t)}{p( \mathbf{a}^k | \mathbf{s}^k ; \theta^k)}$ \STATE Infer $\xi^k | (\mathbf{s}^k, \mathbf{a}^k, \mathbf{s}^{k+1})$ and $ \eta^k | (\mathbf{s}^k, \mathbf{a}^k)$ \STATE $v_\theta = w ( r_\mathbf{a} + \gamma \hat{V}_{\mathbf{s}'}' \hat{\mathbf{f}}_\mathbf{a} ) \pi_\theta \big |_{\eta^k, \xi^k}$ \STATE Apply gradient-based update using $v_\theta$ \ENDFOR \end{algorithmic} \end{algorithm} \end{minipage} \vspace{-0.25cm} \subsection{SVG(1) and SVG(0)} \label{sec:Maca:Value} \vspace{-0.25cm} In our framework, we may learn a parametric estimate of the expected value $\hat{V}(\mathbf{s}; \nu)$ (critic) with parameters $\nu$. The derivative of the critic value with respect to the state, $\hat{V}_\mathbf{s}$, can be used in place of the sample gradient estimate given in eq.\ (\ref{eq:VgradMCS}). The critic can reduce the variance of the gradient estimates because $\hat{V}$ approximates the \textit{expectation} of future rewards while eq.\ (\ref{eq:VgradMCS}) provides only a single-trajectory estimate. Additionally, the value function can be used at the end of an episode to approximate the infinite-horizon policy gradient. Finally, eq.\ (\ref{eq:VgradMCS}) involves the repeated multiplication of Jacobians of the approximate model $\hat{\mathbf{f}}_\mathbf{s}$, $\hat{\mathbf{f}}_\mathbf{a}$. Just as model error can compound in forward planning, model gradient error can compound during backpropagation.
Furthermore, SVG($\infty$) is on-policy. That is, after each episode, a single gradient-based update is made to the policy, and the policy optimization does not revisit those trajectory data. To increase data-efficiency, we construct an off-policy, experience replay \cite{lin1992self,wawrzynski2009cat} algorithm that uses models and value functions, SVG(1) with Experience Replay (SVG(1)-ER). This algorithm also has the advantage that it can perform an infinite-horizon computation. To construct an off-policy estimator, we perform importance-weighting of the current policy distribution with respect to a proposal distribution, $q(\mathbf{s}, \mathbf{a})$: \begin{align} \hat{V}_\theta & = \mathbb{E}_{q(\mathbf{s}, \mathbf{a})} \mathbb{E}_{p(\mathbf{s}' | \mathbf{s}, \mathbf{a})} \mathbb{E}_{p(\eta, \xi | \mathbf{s}, \mathbf{a}, \mathbf{s}')} \frac{ p(\mathbf{a} | \mathbf{s}; \theta)}{q(\mathbf{a}|\mathbf{s})} \bigg [ r_\mathbf{a} \pi_\theta + \gamma \hat{V}'_{\mathbf{s}'} \hat{\mathbf{f}}_\mathbf{a} \pi_\theta \bigg ]. \label{eq:DeltaThetaExpectation} \end{align} Specifically, we maintain a database with tuples of past state transitions $(\mathbf{s}^k, \mathbf{a}^k, r^k, \mathbf{s}^{k+1})$. Each proposal drawn from $q$ is a sample of a tuple from the database. At time $t$, the importance-weight $w \triangleq p/q = \frac{ p(\mathbf{a}^k | \mathbf{s}^k; \theta^t) }{ p(\mathbf{a}^k | \mathbf{s}^k; \theta^k) }$, where $\theta^k$ denotes the policy parameters in use at the historical time step $k$. We do not importance-weight the marginal distribution over states $q(\mathbf{s})$ generated by a policy; this is widely considered to be intractable. Similarly, we use experience replay for value function learning. Details can be found in Appendix \ref{sec:Appendix:Models}. Pseudocode for the SVG($1$) algorithm with Experience Replay is in Algorithm \ref{alg:MACAZeroER}. We also provide a model-free stochastic value gradient algorithm, SVG($0$) (Algorithm \ref{alg:SVGZeroER} in the Appendix). This algorithm is very similar to SVG($1$) and is the stochastic analogue of the recently introduced Deterministic Policy Gradient algorithm (DPG) \cite{silver2014deterministic,lillicrap2015continuous,balduzzi2015compatible}. Unlike DPG, instead of assuming a deterministic policy, SVG(0) estimates the derivative around the policy noise $\mathbb{E}_{p(\eta)} \big [Q_\mathbf{a} \pi_\theta \big| \eta \big ]$.\footnote{Note that $\pi$ is a function of the state and noise variable.} This, for example, permits learning the policy noise variance. The relative merit of SVG(1) versus SVG(0) depends on whether the model or value function is easier to learn and is task-dependent. We expect that model-based algorithms such as SVG($1$) will show the strongest advantages in multitask settings where the system dynamics are fixed, but the reward function is variable. SVG(1) performed well across all experiments, including ones introducing capacity constraints on the value function and model. SVG(1)-ER demonstrated a significant advantage over all other tested algorithms. \vspace{-0.25cm} \section{Model and value learning} \label{sec:Maca:ModelLearning} \vspace{-0.25cm} We can use almost any kind of differentiable, generative model. In our work, we have parameterized the models as neural networks. Our framework supports nonlinear state- and action-dependent noise, notable properties of biological actuators.
For example, this can be described by the parametric form $\hat{\mathbf{f}}(\mathbf{s}, \mathbf{a}, \xi) = \hat{\mu}(\mathbf{s}, \mathbf{a}) + \hat{\sigma}(\mathbf{s}, \mathbf{a}) \xi$. Model learning amounts to a purely supervised problem based on observed state transitions. Our model and policy training occur \emph{jointly}. There is no ``motor-babbling'' period used to identify the model. As new transitions are observed, the model is trained first, followed by the value function (for SVG($1$)), followed by the policy. To ensure that the model does not forget information about state transitions, we maintain an experience database and cull batches of examples from the database for every model update. Additionally, we model the state-change by $\mathbf{s}' = \hat{\mathbf{f}}(\mathbf{s}, \mathbf{a}, \xi) + \mathbf{s}$ and have found that constructing models as separate sub-networks per predicted state dimension improved model quality significantly. Our framework also permits a variety of means to learn the value function models. We can use temporal difference learning \cite{sutton1988learning} or regression to empirical episode returns. Since SVG($1$) is model-based, we can also use Bellman residual minimization \cite{baird1995residual}. In practice, we used a version of ``fitted'' policy evaluation. Pseudocode is available in Appendix \ref{sec:Appendix:Models}, Algorithm \ref{alg:FPE}. \vspace{-0.25cm} \section{Experiments} \label{sec:Experiments} \vspace{-0.25cm} We tested the SVG algorithms in two sets of experiments. In the first set of experiments (section \ref{sec:Experiments:Analyzing}), we test whether evaluating gradients on real environment trajectories and value function approximation can reduce the impact of model error. In our second set (section \ref{sec:Experiments:ScalingUp}), we show that SVG(1) can be applied to several complicated, multidimensional physics environments involving contact dynamics (Figure \ref{fig:Experiments:mujoco}) in the MuJoCo simulator \cite{todorov2012mujoco}. Below we only briefly summarize the main properties of each environment: further details of the simulations can be found in Appendix \ref{sec:Appendix:ExpDetails} and supplement. In all cases, we use generic, 2 hidden-layer neural networks with \textit{tanh} activation functions to represent models, value functions, and policies. A video montage is available at \url{https://youtu.be/PYdL7bcn_cM}. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{all_envs_revised.pdf} \end{center} \vspace{-0.5cm} \caption{\emph{From left to right}: 7-Link Swimmer; Reacher; Gripper; Monoped; Half-Cheetah; Walker } \label{fig:Experiments:mujoco} \end{figure} \vspace{-0.2cm} \subsection{Analyzing SVG} \label{sec:Experiments:Analyzing} \vspace{-0.2cm} \paragraph{Gradient evaluation on real trajectories vs. planning} To demonstrate the difficulty of planning with a stochastic model, we first present a very simple control problem for which SVG($\infty$) easily learns a control policy but for which an otherwise identical planner fails entirely. Our example is based on a problem due to \cite{munos2006policy}. The policy directly controls the velocity of a point-mass ``hand'' on a 2D plane. By means of a spring-coupling, the hand exerts a force on a ball mass; the ball additionally experiences a gravitational force and random forces (Gaussian noise). The goal is to bring hand and ball into one of two randomly chosen target configurations with a relevant reward being provided only at the final time step. 
With a simulation time step of $0.01s$, this demands controlling and backpropagating the distal reward along a trajectory of $1,000$ steps. Because this experiment has a non-stationary, time-dependent value function, this problem also favors model-based value gradients over methods using value functions. SVG($\infty$) easily learns this task, but the planner, which uses trajectories from the model, shows little improvement. The planner simulates trajectories using the learned stochastic model and backpropagates along those simulated trajectories (eqs. \ref{eq:VgradMCS} and \ref{eq:VgradMCTheta}) \cite{nguyen1990neural}. The extremely long time-horizon lets prediction error accumulate and thus renders roll-outs highly inaccurate, leading to much worse final performance (c.f.\ Fig.\ \ref{fig:hand_cartpole}, \textit{left}).\footnote{We also tested REINFORCE on this problem but achieved very poor results due to the long horizon.} \vspace{-0.25cm} \paragraph{Robustness to degraded models and value functions} We investigated the sensitivity of SVG($\infty$) and SVG(1) to the quality of the learned model on Swimmer. Swimmer is a chain body with multiple links immersed in a fluid environment with drag forces that allow the body to propel itself \cite{coulom2002reinforcement,tassa2008receding}. We build chains of 3, 5, or 7 links, corresponding to 10, 14, or 18-dimensional state spaces with 2, 4, or 6-dimensional action spaces. The body is initialized in random configurations with respect to a central goal location. Thus, to solve the task, the body must turn to re-orient and then produce an undulation to move to the goal. To assess the impact of model quality, we learned to control a 3-link swimmer with SVG($\infty$) and SVG(1) while varying the capacity of the network used to model the environment (5, 10, or 20 hidden units for each state-dimension subnetwork; see Appendix \ref{sec:Appendix:ExpDetails}); i.e., in this task we intentionally shrink the neural network model to investigate the sensitivity of our methods to model inaccuracy. While with a high capacity model (20 hidden units per state dimension), both SVG($\infty$) and SVG(1) successfully learn to solve the task, the performance of SVG($\infty$) drops significantly as model capacity is reduced (c.f.\ Fig.\ \ref{fig:swimmer}, \textit{middle}). SVG(1) still works well for models with only 5 hidden units, and it also scales up to 5 and 7-link versions of the swimmer (Figs. \ref{fig:swimmer}, \textit{right} and \ref{fig:physics}, \textit{left}). To compare SVG(1) to conventional model-free approaches, we also tested a state-of-the-art actor-critic algorithm that learns a $V$-function and updates the policy using the TD-error $\delta = r + \gamma V' - V$ as an estimate of the advantage, yielding the policy gradient $v_\theta = \delta \nabla_\theta \log \pi$ \cite{wawrzynski2009real}. (SVG(1) and the AC algorithm used the same code for learning $V$.) SVG(1) outperformed the model-free approach in the 3-, 5-, and 7-link swimmer tasks (c.f.\ Fig.\ \ref{fig:swimmer}, \textit{left}, \textit{right}; Fig.\ \ref{fig:physics}, \textit{top left}). In figure panels \ref{fig:hand_cartpole}, \textit{middle}, \ref{fig:swimmer}, \textit{right}, and \ref{fig:physics}, \textit{left column}, we show that experience replay for the policy can improve the data efficiency and performance of SVG(1). Similarly, we tested the impact of varying the capacity of the value function approximator (Fig. \ref{fig:hand_cartpole}, \textit{right}) on a cart-pole.
The V-function-based SVG(1) degrades less severely than the Q-function-based DPG, presumably because it computes the policy gradient with the aid of the dynamics model. \vspace{-0.25cm} \subsection{SVG in complex environments} \label{sec:Experiments:ScalingUp} \vspace{-0.15cm} \begin{figure} \centering \includegraphics[width=0.975\textwidth]{hand_cartpole_mujoco_edited_revised.pdf} \vspace{-0.5cm} \caption{\textit{Left}: Backpropagation through a model along observed stochastic trajectories is able to optimize a stochastic policy in a stochastic environment, but an otherwise equivalent planning algorithm that simulates the transitions with a learned stochastic model makes little progress due to compounding model error. \textit{Middle}: SVG and DPG algorithms on cart-pole. SVG(1)-ER learns the fastest. \textit{Right}: When the value function capacity is reduced from 200 hidden units in the first layer to 100 and then again to 50, SVG(1) exhibits less performance degradation than the Q-function-based DPG, presumably because the dynamics model contains auxiliary information about the Q function.} \label{fig:hand_cartpole} \end{figure} \vspace{-0.25cm} \begin{figure} \centering \includegraphics[width=0.975\textwidth]{swimmer3_5_edited_revised3.pdf} \vspace{-0.4cm} \caption{\textit{Left}: For a 3-link swimmer, with relatively simple dynamics, the compared methods yield similar results and possibly a slight advantage to the purely model-based SVG($\infty$). \textit{Middle}: However, as the environment model's capacity is reduced from 20 to 10 then to 5 hidden units per state-dimension subnetwork, SVG($\infty$) dramatically deteriorates, whereas SVG(1) shows undisturbed performance. \textit{Right}: For a 5-link swimmer, SVG(1)-ER learns faster and asymptotes at higher performance than the other tested algorithms.} \vspace{-0.4cm} \label{fig:swimmer} \end{figure} \begin{figure} \centering \includegraphics[width=0.975\textwidth]{physics_domains_edited_revised2_new.pdf} \vspace{-0.4cm} \caption{Across several different domains, SVG(1)-ER reliably optimizes policies, clearly settling into similar local optima. On the 4-target Reacher, SVG(1)-ER shows a noticeable efficiency and performance gain relative to the other algorithms.} \vspace{-0.4cm} \label{fig:physics} \end{figure} In a second set of experiments we demonstrated that SVG(1)-ER can be applied to several challenging physical control problems with stochastic, non-linear, and discontinuous dynamics due to contacts. {\it Reacher} is an arm stationed within a walled box, with 6 state dimensions and 3 action dimensions; including the $(x,y)$ coordinates of a target site gives 8 state dimensions in total. In 4-Target Reacher, the site was randomly placed at one of the four corners of the box, and the arm in a random configuration at the beginning of each trial. In Moving-Target Reacher, the site moved at a randomized speed and heading in the box with reflections at the walls. Solving this latter problem implies that the policy has generalized over the entire workspace. {\it Gripper} augments the reacher arm with a manipulator that can grab a ball in a randomized position and return it to a specified site. {\it Monoped} has 14 state dimensions, 4 action dimensions, and ground contact dynamics. The monoped begins falling from a height and must remain standing.
Additionally, we apply Gaussian random noise to the torques controlling the joints with a standard deviation of $5 \%$ of the total possible actuator strength at all points in time, reducing the stability of upright postures. {\it Half-Cheetah} is a planar cat robot designed to run based on \cite{wawrzynski2009cat} with 18 state dimensions and 6 action dimensions. Half-Cheetah has a version with springs to aid balanced standing and a version without them. {\it Walker} is a planar biped, based on the environment from \cite{DBLP:journals/corr/SchulmanLMJA15}. \vspace{-0.5cm} \paragraph{Results} Figure \ref{fig:physics} shows learning curves for several repeats for each of the tasks. We found that in all cases SVG(1) solved the problem well; we provide videos of the learned policies in the supplemental material. The 4-target reacher reliably finished at the target site, and in the tracking task followed the moving target successfully. SVG(1)-ER has a clear advantage on this task as also borne out in the cart-pole and swimmer experiments. The cheetah gaits varied slightly from experiment to experiment but in all cases made good forward progress. For the monoped, the policies were able to balance well beyond the 200 time steps of training episodes and were able to resist significantly higher adversarial noise levels than used during training (up to $25 \%$ noise). We were able to learn gripping and walking behavior, although walking policies that achieved similar reward levels did not always exhibit equally good walking phenotypes. \vspace{-0.3cm} \section{Related work} \label{sec:Related} \vspace{-0.3cm} Writing the noise variables as exogenous inputs to the system to allow direct differentiation with respect to the system state (equation \ref{eq:VgradS}) is a known device in control theory \cite{jacobson1970differential,fairbank2014value} where the model is given analytically. The idea of using a model to optimize a parametric policy around real trajectories is presented heuristically in \cite{narendra1990identification} and \cite{abbeel2006using} for deterministic policies and models. Also in the limit of deterministic policies and models, the recursions we have derived in Algorithm \ref{alg:MACA} reduce to those of \cite{atkeson2012efficient}. Werbos defines an actor-critic algorithm called Heuristic Dynamic Programming that uses a deterministic model to roll-forward one step to produce a state prediction that is evaluated by a value function \cite{werbos1990menu}. Deisenroth et al. have used Gaussian process models to compute policy gradients that are sensitive to model-uncertainty \cite{deisenroth2011pilco}, and Levine et al. have optimized impressive policies with the aid of a non-parametric trajectory optimizer and locally-linear models \cite{levine2014learning}. Our work in contrast has focused on using global, neural network models conjoined to value function approximators. \vspace{-0.3cm} \section{Discussion} \vspace{-0.3cm} We have shown that two potential problems with value gradient methods, their reliance on planning and restriction to deterministic models, can be exorcised, broadening their relevance to reinforcement learning. We have shown experimentally that the SVG framework can train neural network policies in a robust manner to solve interesting continuous control problems. The framework includes algorithm variants beyond the ones tested in this paper, for example, ones that combine a value function with $k$ steps of back-propagation through a model (SVG(k)). 
Augmenting SVG(1) with experience replay led to the best results, and a similar extension could be applied to any SVG(k). Furthermore, we did not harness sophisticated generative models of stochastic dynamics, but one could readily do so, presenting great room for growth. \small{ \paragraph{Acknowledgements} We thank Arthur Guez, Danilo Rezende, Hado van Hasselt, John Schulman, Jonathan Hunt, Nando de Freitas, Martin Riedmiller, Remi Munos, Shakir Mohamed, and Theophane Weber for helpful discussions and John Schulman for sharing his walker model.} \bibliographystyle{plain}
\section{Introduction} The inherently large and tunable spin-orbit interaction (SOI) energies of holes and their reduced hyperfine coupling with nuclear spins are behind the surging interest in hole spin qubits with fast all-electrical control.\cite{Hendrickx2020,Maurand2016,Watzinger2018,Scappucci2020,Hu2012} Holes can also host superconducting pairing correlations, a key ingredient for the emergence of Majorana zero modes\cite{Kloeffel2011,Maier2014a,Mao2012,Maier2014,Lutchyn2018} for topological quantum computing. Because of its attractive properties,\cite{Hendrickx2020,Watzinger2016,Sammak2019,Moutanabbir2010,Miyamoto2010,Bulaev2007,Wang2019,Hendrickx2018,Mizokuchi2018,Hendrickx2019,Vigneau2019,Gao2020,Lawrie2020} the strained Ge low-dimensional system has been proposed as an effective building block to develop these emerging quantum devices. Interestingly, the simplicity of this system makes it a textbook model to uncover and elucidate subtle hole spin-related phenomena leading, for instance, to the recent observation of pure cubic Rashba spin-orbit coupling.\cite{Moriya2014} Measuring the Zeeman splitting (ZS) of hole states under an external magnetic field has been central in probing hole spin properties, as it is directly related to the hole g-factor, which is itself strongly influenced by the underlying SOI, strain, symmetry, and confinement.\cite{Kotlyar2001,Winkler2003} In III-V semiconductors,\cite{Kotlyar2001,Traynor1997,Lawless1992,Warburton1993,Jovanov2012,Danneau2006,Kubisa2011,Grigoryev2016,Fischer2007,FariaJunior2019,Tedeschi2019,Bardyszewski2014,Broido1985,Ekenberg1985} hole spin splitting depends nonlinearly on the out-of-plane magnetic field strength $B$, causing Landau level crossings/anti-crossings\cite{Warburton1993,Moriya2013} and Zeeman crossings/anti-crossings.\cite{Sammak2019,Lodari2019,Winkler1996} The nonlinearity is usually modeled by a quadratic-in-field contribution to ZS,\cite{Kotlyar2001} which owes its existence to valence band mixing. Depending on the sign of the splitting, the Zeeman energy can even vanish at some finite critical field, $B_c$. Theoretical studies attribute these nonlinearities to the mixing of heavy-hole (HH) and light-hole (LH) bands at finite energy.\cite{Traynor1997} Alongside valence band mixing, Rashba and Dresselhaus spin-orbit coupling were also shown to influence the crossing field, due to the lattice inversion asymmetry and the confining potential. Detailed mechanisms of the ZS of hole states are yet to be unravelled and understood; furthermore, ZS treatments for zinc-blende or diamond crystals that explicitly consider strain and SOI strength remain conspicuously missing in the literature. Note that in early calculations\cite{Winkler1996} of Landau levels in a Ge/SiGe quantum well (QW) to interpret the cyclotron resonance experiments of Ref.~\onlinecite{Engelhardt1994}, the crossing of spin split states within the first HH subband was present and the corresponding field position was found to be sensitive to the strength of spin-orbit coupling. In that work, the authors insisted on the importance of including explicitly the split-off hole band, which was required to achieve a good agreement with experiments. Crucially, studies that included both strain and SOI diagonalized numerically the full $k\cdot p$ matrix.\cite{Jovanov2012,Winkler1996} However, this mathematical rigor comes at the expense of identifying the physics governing the non-linearities in ZS.
To overcome these limitations and elucidate the underlying mechanisms of ZS, herein we uncover the clear signature of ZS crossings in a Ge high-mobility two-dimensional hole gas (2DHG). We also derive a theoretical framework describing the crossing of Zeeman split states that includes explicitly the SOI strength and strain. A closed formula for the crossing fields is obtained and validated by experiment. In addition to establishing the key parameters in Zeeman crossings, this analysis also provides a toolkit for a direct quantification from simple magnetotransport measurements of important physical quantities including HH out-of-plane g-factor, HH-LH splitting, and cubic Rashba spin-orbit coefficient. \section{Experimental details} The investigated 2DHG consists of a Ge/SiGe heterostructure including a strain-relaxed Si$_{0.2}$Ge$_{0.8}$ buffer setting the overall lattice parameter, a compressively-strained Ge QW, and a Si$_{0.2}$Ge$_{0.8}$ barrier separating the QW from a sacrificial Si cap layer. The growth was carried out in an Epsilon 2000 (ASMI) reduced pressure chemical vapor deposition reactor on a $100\,\text{mm}$ n-type Si(001) substrate. The growth sequence starts with the deposition of a Si$_{0.2}$Ge$_{0.8}$ virtual substrate. This virtual substrate is obtained by growing a $1.6\,\mu\text{m}$ strain-relaxed Ge buffer layer, a $1.6\,\mu\text{m}$ reverse-graded Si$_{1-x}$Ge$_x$ layer with final Ge composition $x = 0.8$, and a $500\,\text{nm}$ strain-relaxed Si$_{0.2}$Ge$_{0.8}$ buffer layer. A $16\,\text{nm}$ compressively-strained Ge quantum well is then grown on top of the Si$_{0.2}$Ge$_{0.8}$ virtual substrate, followed by a strain-relaxed $17\,\text{nm}$-thick Si$_{0.2}$Ge$_{0.8}$ barrier. An in-plane compressive strain $\epsilon_\parallel = -0.63\%$ is found in the QW via X-ray diffraction measurements.\cite{Sammak2019} A thin ($<2\,\text{nm}$) sacrificial Si cap completes the heterostructure. This cap is readily oxidized upon exposure to the cleanroom environment after unloading the Ge/SiGe heterostructure from the growth reactor. \begin{figure}[t!] \centering \includegraphics[scale=0.8]{Fig1_2020_08_28.pdf} \caption{(a) Optical micrograph of a Hall-bar shaped Ge/SiGe heterostructure field effect transistor and cross section of the gate stack and active regions of the strained Ge/SiGe heterostructure below the red cut. The strained Ge (sGe) quantum well is $16\,\text{nm}$ thick and the Si$_{0.2}$Ge$_{0.8}$ barrier on top is $17\,\text{nm}$ thick. (b) Landau level fan diagram reporting the magnetoresistance $\Delta\rho_{xx}/\rho_0 = (\rho_{xx}-\rho_0)/\rho_0$ as a function of out-of-plane magnetic field $B$ and energy $E$. Labels of filling factors $\nu = 1$ -- $4$ are shown.} \label{fig:exp-fan-chart} \end{figure} Hall-bar field effect transistors (H-FETs) are fabricated and operated with a negatively biased gate to accumulate a 2D hole gas into the QW and tune the carrier density. Fig.~1a shows an optical micrograph of the H-FET and a cross-section schematic of the active layers and the gate stack. A $170\,\text{nm}$ deep trench mesa is dry-etched around the Hall-bar shaped H-FET in order to isolate the bonding pads from the device. The sample is dipped in HF to remove the native oxide prior to a $60\,\text{nm}$ Pt layer deposition via e-beam evaporation. Ohmic contacts are obtained by diffusion of Pt into the quantum well occurring during the atomic layer deposition of a $30\,\text{nm}$ Al$_2$O$_3$ dielectric layer at a temperature of $300\,^\circ\text{C}$. 
Finally, a $10/200\,\text{nm}$-thick Ti/Au gate layer is deposited. An optimized Si$_{0.2}$Ge$_{0.8}$ barrier thickness of $17\,\text{nm}$ was chosen, which is thin enough to allow for a large saturation carrier density\cite{Sammak2019} (up to $7.5\times 10^{11}\,\text{cm}^{-2}$), while providing sufficient separation to reduce scattering of carriers in the QW from remote impurities,\cite{Lodari2019} leading to a large maximum hole mobility ($2.6\times 10^5\,\text{cm}^{2}\,\text{V}^{-1}\,\text{s}^{-1}$). A large density range and high mobility are key ingredients to observe Landau level fan diagrams in magnetotransport with the clarity required to reveal subtle spin-related features. In the magnetotransport studies, the longitudinal and transversal ($\rho_{xx}$ and $\rho_{xy}$) components of the 2DHG resistivity tensor were measured via a standard four-probe low-frequency lock-in technique. The measurements are recorded at a temperature of $T = 260\,\text{mK}$, measured at the cold finger of a $^3$He dilution refrigerator. A source-drain voltage bias $V_{sd} = 0.1\,\text{mV}$ is applied at a frequency of $7.7\,\text{Hz}$. The magnetoresistance characterization of the device is performed by sweeping the gate voltage $V_g$ and stepping $B$ with a resolution of $15\,\text{mV}$ and $25\,\text{mT}$, respectively. The energy $E$ is obtained using the relation $E = p\pi\hbar^2 / m^*$, where we obtain the carrier density $p$ by Hall effect measurements at low $B$ and we use the effective mass $m^*$ measured as a function of density in similar heterostructures.\cite{Lodari2019} The $\rho_{xx}$ vs. energy profiles in the upper panels of Figs.~3(a)-3(d) have been smoothed for clarity by using a Matlab routine based on the Savitzky-Golay filtering method. \section{Magnetotransport studies of strained Ge 2DHG} The fan diagram in Fig.~1b shows the normalized magnetoresistance oscillation amplitude $\Delta\rho_{xx}/\rho_0 = (\rho_{xx} - \rho_0) / \rho_0$ as a function of energy and out-of-plane external magnetic field $B$ aligned along the growth direction $\mathbf{\hat{z}}$ and perpendicular to the 2DHG plane, where $\rho_0$ is the $\rho_{xx}$ value at $B = 0$. The Zeeman split energy gap, corresponding to odd integer filling factors $\nu$, deviates from its linear dependence on $B$, vanishes when the magnetic field reaches a critical value $B_c$, and then reopens at higher $B$ values. We clearly observe the associated crossing of Zeeman split states for odd integers $\nu = 3, 5, 7$, and $9$. Partial signatures of Zeeman crossings occurring at similar magnetic fields were observed in earlier studies,\cite{Sammak2019,Lodari2019} although the fan diagram measurements were limited in density range\cite{Sammak2019} or affected by thermal broadening.\cite{Lodari2019} These observations point to an underlying mechanism that is independent of the QW position with respect to the surface gate. \section{Theoretical framework for hole dispersion in strained Ge 2DHG} To identify the mechanisms behind the non-linearities in ZS and the parameters affecting the crossing field, we developed a perturbative model to describe the hole dispersion as a function of the out-of-plane magnetic field. The model assumes an abrupt and infinite band offset between the QW and its barriers and is based on a 6-band $k\cdot p$ Hamiltonian for HH, LH and split-off (SO) bands.
The total Hamiltonian $H$ for the hole dispersion is written as\cite{Eissfeller2011}~: $H = H_k + H_\epsilon + H_{\text{SO}} + H_B + V$, where $H_k$ is a function of the wavevector operator $\mathbf{k} = (k_x,k_y,k_z)$, $H_\epsilon$ is the Bir-Pikus Hamiltonian and depends on the strain tensor components $\epsilon_{ij}$, $H_{\text{SO}}$ is the spin-orbit term proportional to the spin-orbit energy $\Delta$ and $H_B$ includes the interaction of the free electron spin with the magnetic field. $V$ is the infinite well potential for a square well of width $L$. We consider QWs grown along the [001] direction and subjected to biaxial bi-isotropic strain. Thus, $\epsilon_{ij} = 0$ if $i\neq j$, $\epsilon_{xx} = \epsilon_{yy}\equiv \epsilon_\parallel$ and $\epsilon_{zz}=-D_{001}\epsilon_\parallel$, where $D_{001}$ is the Poisson ratio and $\epsilon_\parallel$ is the in-plane lattice strain. We first rewrite the total Hamiltonian $H$ in two terms~: $H = H_0(\epsilon_\parallel; k_z) + H'(n, B; k_z)$, where the integer $n\geq 1$ labels the spin-split Landau pairs such that $\nu = 2n-1$ at crossings. The eigenstates of $H_0$ consist of pure HH subbands of energy $E_l^{\text{HH}}$ and two superpositions of LH and SO holes of energy $E_l^\eta$. Here, $\eta = \{+, -\}$ is a generic label to distinguish the two orthogonal LH-SO states and $l\geq 1$ is the subband index. The perturbation $H'$ introduces the magnetic field and is eliminated to second order by a Schrieffer-Wolff transformation, resulting in an effective Hamiltonian for the 2-fold HH subband. Remarkably, the resulting effective $2\times 2$ Hamiltonian for the HH subband does not couple spin-up ($+$) and spin-down ($-$) projections. The HH dispersion as a function of $B$ is thus simply given by the diagonal entries of the effective matrix.
We have \begin{align} \begin{split}\label{up} E_{+,l,n}^{(2)}(B) &= E_l^{\text{HH}} + 3n(n + 1)\left(\kappa - F_l\right)\frac{\mu_{\text{B}}B^2}{B_l^*}\\ &- \left[\left(2n-1\right)\left(\gamma_1 + \gamma_2\right) + 3\kappa - 6nF_l\right]\mu_{\text{B}}B, \\ \end{split} \end{align} \begin{align} \begin{split}\label{down} &E_{-,l,n}^{(2)}(B) = E_l^{\text{HH}} + 3(n - 2)(n - 1)\left(\kappa - F_l\right)\frac{\mu_{\text{B}}B^2}{B_l^*}\\ &- \left[\left(2n-1\right)\left(\gamma_1 + \gamma_2\right) - 3\kappa - 6(n-1)F_l\right]\mu_{\text{B}}B, \\ \end{split} \end{align} \noindent with \begin{subequations}\label{B*} \begin{align} B_l^* &= \frac{\kappa - F_l}{\mu_{\text{B}}\left(\gamma_2 + \gamma_3\right)^2}\left[\sum_{\eta = \pm}{\frac{\left(l_l^\eta + \sqrt{2}s_l^\eta\right)^2}{E_l^{\text{HH}} - E_l^\eta}}\right]^{-1}\label{B*_exact} \\ &\approx \frac{\kappa - F_l}{\mu_{\text{B}}\left(\gamma_2 + \gamma_3\right)^2}\left(E_l^{\text{HH}} - E_l^{\text{LH}}\right)\label{B*_approx} \end{align} \end{subequations} \noindent and \begin{subequations}\label{F} \begin{align} F_l &= \frac{32\alpha_0\gamma_3^2}{L^2}\sum_{\substack{j = 1 \\ j \neq l}}^\infty{\frac{\left[1-(-1)^{l+j}\right]l^2j^2}{\left(l^2 - j^2\right)^2}\sum_{\eta = \pm}{\frac{\left(l_j^\eta - s_j^\eta / \sqrt{2}\right)^2}{E_l^{\text{HH}} - E_j^\eta}}}\label{F_exact} \\ &\approx \frac{32\alpha_0\gamma_3^2}{L^2}\sum_{\substack{j = 1 \\ j \neq l}}^\infty{\frac{1}{E_l^{\text{HH}} - E_j^{\text{LH}}}\frac{\left[1-(-1)^{l+j}\right]l^2j^2}{\left(l^2 - j^2\right)^2}}.\label{F_approx} \end{align} \end{subequations} Here, $\mu_\text{B}$ is the Bohr magneton, $\gamma_i$ and $\kappa$ are the Luttinger parameters, $\alpha_0 = \hbar^2 / (2m_0)$ with $m_0$ the free electron mass and $l_l^\eta$ and $s_l^\eta$ are respectively the LH and SO contributions of the $l$th $\eta$ subband. The characteristic field $B_l^*$ controls the crossing positions and is filling factor-independent, while $F_l$ indicates the coupling strength between the HH subband and neighboring $\eta$ states. As we focus on the HH ground subband ($l = 1$), $l$ subscripts will be omitted for simplicity. The obtained Zeeman splitting energy $E_\text{Z}\equiv E_{-,n}^{(2)}(B) - E_{+,n}^{(2)}(B)$ of the $n$th spin-split Landau pair is~: \begin{equation}\label{zeeman} E_\text{Z} = 6(\kappa - F)\mu_\text{B}B\left[1-(2n-1)\left(\frac{B}{B^*}\right)\right]. \end{equation} \begin{figure*}[t!] \centering \includegraphics[scale=0.8]{Fig2_2020_05_28.pdf} \caption{(a) Fan diagram of the ground HH subband in a $16\,\text{nm}$ Ge well subject to $0.6\%$ compressive strain. Solid curves are the dispersion obtained from the numerical solution of $H$, while the dashed curves are obtained from the second order dispersion assuming finite or infinite SOI respectively. Circles indicate the Zeeman crossings. Filling factors $\nu$ are also indicated. (b) $\nu = 3$ crossing field as a function of the well thickness at various strain values obtained from the numerical solution of $H$ (solid curves) and through Eq. \eqref{crossing} assuming finite or infinite SOI (dashed curves).} \label{fig:theory} \end{figure*} Solving for $E_\text{Z} = 0$ results in a second order approximation for the filling factor-dependent crossing field $B_c$~: \begin{equation}\label{crossing} B_c^{(2)}(n) = \frac{B^*}{2n-1}. \end{equation} The energy difference that separates the HH subband edge from the energy at a crossing position can also be found from the second order equations. 
When $n\to\infty$ (or $\nu\to\infty$) this energy difference is independent of $n$~: \begin{equation}\label{deltaE} \Delta E = \left[\gamma_1 + \gamma_2 - \frac{3}{4}\left(\kappa + 3F\right)\right]\mu_{\text{B}}B^*. \end{equation} Equation \eqref{zeeman} also yields the HH weak-field g-factor~: \begin{equation}\label{g-factor} g^* = 6(\kappa - F). \end{equation} The approximations \eqref{B*_approx} and \eqref{F_approx} hold only when SOI is large enough so that the SO band can be neglected from the $k\cdot p$ framework. An explicit criterion for this is (Appendix \ref{app:H0})~: \begin{equation}\label{criterion} \Delta \gg \alpha_0\gamma_2\left(\frac{\pi}{L}\right)^2 + \frac{(-b)}{2}(1 + D_{001})|\epsilon_\parallel|, \end{equation} \noindent where $b$ is a valence band deformation potential. In addition to the perturbation scheme, $H$ is also numerically diagonalized by projecting it into the position basis via the substitution $k_z\to-i\partial/\partial z$, in which the $z$-derivative is implemented by finite differences over the simulation domain. A constant mesh grid size of $0.01\,\text{nm}$ is used for every diagonalization. The Matlab \texttt{eigs()} routine is used to retrieve the desired subset of eigenvalues. The Ge Luttinger parameters $\gamma_{1,2,3}$ and deformation potentials are taken from Ref.~\onlinecite{Paul2016}, while the parameter $\kappa$ is taken from Ref.~\onlinecite{Lawaetz1971}. Explicit matrix representations of $H_0$ and $H'$ are presented in Appendix \ref{app:matrices}. See Appendix \ref{app:H0} for additional details on the eigenvalues and eigenvectors of $H_0$. Let us now test the accuracy of the perturbative model against the dispersion obtained by solving $H$ numerically. We take Ge as the QW material with width $L$ and strain $\epsilon_\parallel$ as free parameters. Since Ge has a rather high spin-orbit energy $\Delta = 260\,\text{meV}$,\cite{Polak2017} it is also worthwhile to look at the behavior of the model with approximations \eqref{B*_approx} and \eqref{F_approx}. We also focus on relaxed or compressively strained wells, which always result in a HH-like valence band edge. The calculated fan diagram of the ground HH subband is displayed in Fig.~2a for a $16\,\text{nm}$-thick well with $\epsilon_\parallel = -0.6\%$, similar to the system analyzed in Fig.~1. Assuming finite $\Delta$, the model reproduces the numerical fan diagram perfectly well up to $\sim 2\,\text{T}$, which implies that $6(\kappa - F)$ is a very accurate approximation for the HH g-factor at low fields. As the magnetic field increases, quadratic terms in $B$ become more important and the dispersions eventually cross. The dispersion of a state with spin-up projection in a given spin-split Landau pair always has a larger curvature than the spin-down one, which can be straightforwardly inferred from the coefficients $n(n + 1)$ and $(n-2)(n-1)$ in \eqref{up} and \eqref{down}. For that reason, a Zeeman crossing cannot occur, at least to second order, if the spin-up state lies closer to the band gap than the spin-down one. Crossing fields are indicated in Fig.~2a for filling factors $\nu = 3$ and $\nu = 5$. The numerical solution of $H$ gives a crossing field $B_c = 7.27\,\text{T}$ for $\nu = 3$, whereas the second order formula (Eq. \eqref{crossing}) gives $B_c^{(2)} = 5.04\,\text{T}$. Here, the second order approximation underestimates $B_c$ as it diverges from the numerical dispersion before the crossing.
When assuming $\Delta\to\infty$, however, the dispersion diverges less dramatically than its finite SOI counterpart and instead overestimates the crossing field. Assuming an infinite SOI for this particular system turns out to be a good approximation, because the right-hand side of \eqref{criterion} equals $21.2\,\text{meV}$, which is much smaller than the spin-orbit gap in Ge. \begin{figure}[t] \centering \includegraphics[scale=0.8]{Fig3_2020_05_29_c.pdf} \caption{{\bf Experiment vs. Theory.} (a)-(d) $\rho_{xx}$ as a function of filling factor $\nu$ and energy $E$ around the crossings of Zeeman split states. The upper part of each panel shows a cross-section at odd filling factors $\nu = 3,5,7,9$. (e) Experimental crossing fields (dots) for $\nu = 3, 5, 7, 9, 11, 13, 15, 17$ fitted using Eq. \eqref{crossing} (solid line). The fitting parameter is $B^*=25.258\,\text{T}$.} \label{fig:exp-vs-theory} \end{figure} Fig.~2b depicts the behavior of the crossing field as a function of the well thickness and strain, with and without the assumption of an infinite SOI. The crossing field $B_c$ is well approximated by $B_c^{(2)}$ for a well thickness $> 10\,\text{nm}$ with reduced strain levels, as in our experiments. For narrower and highly strained wells, third or higher perturbative terms become more important. These could be included in the model, but at the cost of extremely cumbersome equations, even with infinite SOI. On the other hand, for $\Delta\to\infty$, $B_c^{(2)}$ completely misses the increase of the crossing field for thin wells, which highlights the explicit role of the SOI strength. This is consistent with criterion \eqref{criterion}~: thin wells increase the right-hand side in \eqref{criterion} as $1/L^2$, thus requiring $\Delta$ to be even larger for this criterion to be satisfied. \section{Discussion} From the present model, we see that Zeeman crossings still occur under the assumption of an infinite QW (no barrier effects), an infinite band gap (6-band $k\cdot p$), and even an infinite spin-orbit gap (4-band $k\cdot p$ for HH and LH). Consequently, LH-HH mixing plays a crucial role in the crossing of spin-split states. Our assumptions also imply that structure inversion asymmetry (SIA) has no role in the observed crossing in ZS energy. SIA is indeed suppressed in infinite wells without external electric fields. Thus, Rashba SOI does not have a dominant effect on the value of $B_c$. The role of SOI and strain is, however, more evident in Eqs. \eqref{crossing} and \eqref{B*}. SOI and strain affect $B_c^{(2)}$ mostly through the energy splitting $E^{\text{HH}} - E^\eta$ and the parameter $F$. Compressive strain typically increases $E^{\text{HH}} - E^\eta$, which explains the increase of $B_c$ at higher compressive strain. SOI also increases $E^{\text{HH}} - E^\eta$, mainly through the spin-orbit energy $\Delta$ for $\eta = +$ or through the out-of-plane effective mass for $\eta = -$. At $\Delta = 0$ and any strain, the HH subbands share the same spectrum as the $\eta = +$ or $\eta = -$ states. Eq. \eqref{B*} then gives $B^* = 0$, hence no Zeeman crossing occurs. SOI lifts this degeneracy between HH and $\eta$ states and thus allows the existence of Zeeman crossings. The experimental observation of Zeeman crossings is further highlighted by plotting portions of the fan diagram from Fig.~1b as a function of energy and filling factor (Fig.~3a-d). The upper part of each panel shows $\rho_{xx}$ as a function of the energy $E$ at odd-integer values of filling factors from $\nu = 3$ to $9$.
Fingerprints of Zeeman crossing are observed for filling factors up to $\nu = 17$. In addition to describing the crossings in Zeeman split states, the theoretical framework described above also allows a straightforward evaluation of several parameters. First, we fit the crossing fields extracted from Fig.~3a-d ($\nu = 3,5,...,17$) with Eq. \eqref{crossing} using $B^*$ as the sole fitting parameter. This yields $B^* = 25.258\,\text{T}$ and the crossing fields obtained from Eq. \eqref{crossing} match the experimental values with a relative error $<4\%$ for $\nu = 3,5,7,9,11$ and $<10\%$ for $\nu = 13,15,17$ (Fig.~3e). Zeeman crossings also approach a fixed energy value as $\nu$ increases, as demonstrated in Eq. \eqref{deltaE}. From Fig.~1(b), we have $\Delta E \approx 17\,\text{meV}$. Knowing $B^*$ and $\Delta E$ gives the value of $F$, leading to the HH effective mass and weak-field g-factor. A rearrangement of Eq. \eqref{deltaE} gives \begin{equation}\label{F_value} F = \frac{4}{9}\left(\gamma_1 + \gamma_2 - \frac{\Delta E}{\mu_{\text{B}}B^*}\right) - \frac{\kappa}{3} \approx 1.52. \end{equation} From Eqs. \eqref{g-factor} and \eqref{F_value}, we extract $g^* = 11.35$, which is close to the value of $12.9$ obtained by solving $H$ numerically. An expression for the subband-edge HH in-plane effective mass $m^*$ involving the parameter $F$ can also be derived by inserting Eq. (5) from Ref.~\onlinecite{Drichko2018} into Eq. \eqref{g-factor}~: $m^* / m_0 = \left(\gamma_1 + \gamma_2 - 3F\right)^{-1} \approx 0.077$. This value is also close to those reported in the literature at similar hole density.\cite{Lodari2019,Terrazos2018} A close relation exists between the crossing fields, the HH g-factor and the HH-$\eta$ splitting (Eqs. \eqref{B*} and \eqref{B*_approx}). Knowing two of these quantities is enough to obtain the third. For the system described in Fig.~1, the criterion \eqref{criterion} is also satisfied, thus the HH-LH splitting is found directly from Eq. \eqref{B*_approx}~: \begin{equation}\label{HH-LH} E^{\text{HH}} - E^{\text{LH}} = \frac{6\left(\gamma_2 + \gamma_3\right)^2\mu_{\text{B}}B^*}{g^*} \approx 76.0\,\text{meV}. \end{equation} A numerical solution of $H$ yields a HH-LH splitting of $62.8\,\text{meV}$. This value does not change significantly when an effective out-of-plane electric field is introduced in $H$. This is expected from square QWs whose HH-LH splitting is dominated by strain and quantum confinement.\cite{Moriya2014} For that reason, we assume that the HH-LH splitting does not change with hole concentration or applied gate voltages. From the HH-LH splitting energy (Eq. \eqref{HH-LH}), one can finally estimate the cubic Rashba coefficient $\alpha_3$~: \begin{equation} \alpha_3 = \frac{e\alpha_0^2\gamma_3}{12\left(\gamma_2 + \gamma_3\right)^3}\left(\frac{g^*}{\mu_{\text{B}}B^*}\right)^2 \approx 4.25\times10^5\,e\,\text{\AA}^4, \end{equation} where $e$ is the elementary charge. $\alpha_3$ appears in the cubic Rashba SOI Hamiltonian of HH states\cite{Moriya2014}~: $H_3 = \beta_3i(k_-^3\sigma_+ - k_+^3\sigma_-)$, where $k_\pm = k_x \pm ik_y$ and $\sigma_\pm = (\sigma_x \pm i\sigma_y)/2$ with $\sigma_{x,y}$ the Pauli spin matrices, and $\beta_3 = \alpha_3 E_z$, with $E_z = ep/\epsilon$ the effective out-of-plane electric field in the accumulation mode 2DHG,\cite{Winkler2003} $p$ the hole density and $\epsilon$ the Ge dielectric constant.
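As a consistency check, this extraction chain from the two measured quantities ($B^* = 25.258\,\text{T}$ and $\Delta E \approx 17\,\text{meV}$) can be scripted in a few lines via Eqs. \eqref{F_value}--\eqref{HH-LH} and the expression for $\alpha_3$. The sketch below uses representative literature values for the Ge band parameters ($\gamma_1 = 13.38$, $\gamma_2 = 4.24$, $\gamma_3 = 5.69$, $\kappa = 3.41$), which may differ slightly from those of the references adopted above, so the outputs reproduce the quoted numbers only approximately:
\begin{verbatim}
# Sketch of the parameter extraction from the fitted B* and Delta E.
g1, g2, g3, kap = 13.38, 4.24, 5.69, 3.41  # representative Ge band parameters
muB = 5.7883818e-2                         # Bohr magneton [meV/T]
a0  = 3.81                                 # hbar^2/(2 m0) [eV Angstrom^2]
Bs, dE = 25.258, 17.0                      # fitted B* [T] and Delta E [meV]

F   = 4.0/9.0 * (g1 + g2 - dE/(muB*Bs)) - kap/3.0   # coupling parameter F
gf  = 6.0 * (kap - F)                               # weak-field g-factor g*
ms  = 1.0 / (g1 + g2 - 3.0*F)                       # in-plane m*/m0
Ehl = 6.0 * (g2 + g3)**2 * muB * Bs / gf            # HH-LH splitting [meV]
a3  = a0**2 * g3 / (12.0*(g2 + g3)**3) * (gf/(muB*Bs*1e-3))**2  # [e A^4]
print(F, gf, ms, Ehl, a3)  # ~1.5, ~11.3, ~0.077, ~77 meV, ~4.2e5 e Angstrom^4
\end{verbatim}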
The obtained $\alpha_3$ is almost twice as large as the one obtained for the Ge QW in Ref.~\onlinecite{Moriya2014}, which had a larger HH-LH splitting of $110\,\text{meV}$. As mentioned above, we expect $\alpha_3$ to be independent of the gate voltage or hole concentration, since it depends mostly on the HH-LH splitting. The Zeeman crossings appear at a density $p\sim 6.1\times 10^{11}\,\text{cm}^{-2}$, corresponding to $E_z \approx 6.8\times 10^{-4}\,\text{V}\,\text{\AA}^{-1}$ (by taking $\epsilon = 16.2\epsilon_0$ for Ge), which yields $\beta_3 \approx 290\,\text{eV}\,\text{\AA}^3$. Note that $\alpha_3$ and $\beta_3$ are hitherto hard to measure in these high mobility systems with established methodologies~: weak anti-localization measurements are impractical due to the small characteristic transport field $B_L$ associated with $\mu$m-scale mean free paths\cite{Hikami1980,Iordanskii1994}~; Shubnikov-de Haas oscillations lack sufficient spectral resolution before the onset of ZS to resolve the beatings associated with spin-split subbands.\cite{Hendrickx2018} \section{Conclusion} In summary, Zeeman energy crossing of HH states is observed in a Ge 2DHG under out-of-plane magnetic fields and discussed within a perturbative model describing the hole dispersion. Only a second-order perturbation in the magnetic field is necessary to describe the crossing, in which SOI emerges as an essential feature. However, our analysis indicates that SIA has no effective role. Additionally, this analysis also provides a straightforward framework to evaluate several physical parameters defining the hole states from simple magnetotransport measurements. Crucially, the detailed knowledge of parameters such as the effective g-factor, the in-plane effective mass, and the cubic Rashba coefficient of the underlying material platform will provide the necessary input to further advance the design and modelling of hole spin qubits and other hole-based quantum devices. \subsection*{Acknowledgment} O.~M. acknowledges support from NSERC Canada (Discovery, SPG, and CRD Grants), Canada Research Chairs, Canada Foundation for Innovation, Mitacs, PRIMA Qu\'ebec, Defence Canada (Innovation for Defence Excellence and Security, IDEaS), and NRC Canada (New Beginnings Initiative). G.~S. and M.~L. acknowledge financial support from The Netherlands Organization for Scientific Research (NWO). \subsection*{Data availability} Datasets supporting the findings of this study are available at 10.4121/uuid:c64b0509-2247-4d51-adc0-90e361b928a4
\section{Introduction} As it is well known, the Standard Model (SM) of particle physics has proven to be a very successful theory, but still presents some shortcomings. Among these, one is the Hierarchy Problem, which, according to the current literature, might be solved by considering extra dimensions. In higher dimensional spacetimes, Einstein's gravitational theory can be generalized in several ways: the Kaluza-Klein approach (or slight modifications of it) is currently used by a great percentage of the higher dimensional models treating the hierarchy problem (see, for example, \cite{basini}). Another important generalization of General Relativity (GR) is gravity with torsion, which considers the possibility of a general non-symmetric connection. This point of view is also assumed in several Extended Theories of Gravity like $f(R)$-gravity \cite{rept,cianci,fabbri} that have recently gained interest for dark energy and dark matter issues \cite{francaviglia, sergei}. The first direct geometrical generalization in this direction is due to Cartan \cite{cartan}. The approach adopted by theoretical physicists in the last decades is that, assuming as the starting point Cartan's generalization of GR, torsion can be coupled to fermion matter in a straightforward way. The trick in such approaches is that, since the field equation for the spin connection is a constraint related to the contortion, it can be used to eliminate the torsion from the original action. Then the torsion field is a non-dynamical one and the fermionic matter is added by hand. Consequently, the new ``artificial'' action contains standard GR and matter fields with an additional contact 4-fermion interaction \citep{1}, where the Dirac equation for the fermions is not derived from the geometrical structure of space-time. Because the effective 4-fermion interaction term has a coupling constant proportional to Newton's gravitational constant, to first approximation this interaction is highly suppressed. Nevertheless, it is currently claimed that extra dimensions could explain the hierarchy problem, and thus the (higher dimensional) fundamental gravity scale might be roughly $M^{\ast}\sim O(1)$ TeV \citep{basini, 2,3,4,5}. The limits to the size of extra dimensions have been set by direct searches for quantum black holes \cite{6} and the exchange of virtual gravitons in di-lepton events \cite{7}. On the other hand, the ATLAS collaboration has presented experimental limits for the coupling constant of the 4-fermion contact interaction \cite{7,8}. These results are currently used for imposing bounds on the value of the fundamental gravity scale, $M^{\ast}$, and, by extension, to find limits on the dimensionality of the space-time. However, as we will show here, these claims could have shortcomings from the theoretical and phenomenological viewpoints. On the other hand, there exists the cosmological constant problem, which is repeatedly faced not only from the Quantum Field Theory (QFT) point of view but also from the Quantum Gravity and early cosmology viewpoints. There are many mechanisms and scenarios trying to explain the problem consistently, some of them running against physical intuition. For example, recently Brodsky et al. have argued that quark and gluon condensates are spatially restricted to the interiors of hadrons and do not extend throughout all of space \cite{9,10}. Such an argument seems problematic. Consequently, alternative possibilities need to be studied and developed.
The other problem that arises here is how to generate the 4-fermion interaction from first principles and how to obtain its contributions to the cosmological constant. An approach where the cosmological constant comes out of fermion condensation is discussed in \cite{capolupo}. The question that immediately arises is whether there exist other mechanisms that faithfully explain the 4-fermion interaction without the drawbacks inherent in the standard Einstein-Cartan theory. Affirmative answers to this question are possible, as we will discuss below. Our argument is based on a gravity theory where a pure affine geometry is adopted, with the gravitational Lagrangian given by
\begin{equation}
L_{g}=\sqrt{\det \left(\mathcal{R}_{\ \mu }^{a}\mathcal{R}_{a\nu }\right)}, \tag{1}
\end{equation}
where the specific Ricci curvature tensor is determined by
\begin{equation}
\mathcal{R}_{\ \mu }^{a}=\lambda \left( e_{\ \mu }^{a}+f_{\ \mu }^{a}\right) +R_{\ \mu }^{a}, \tag{2} \label{R}
\end{equation}
which corresponds to the breaking of the $SU(2,2)$ symmetry of a group manifold in higher dimensions, with original Riemann curvature $\mathcal{R}_{\mu \nu }^{AB}=\partial _{\mu }\omega _{\nu }^{AB}-\partial _{\nu }\omega _{\mu }^{AB}+\omega _{\mu }^{AC}\omega _{\nu C}^{\ \ B}-\omega _{\nu }^{AC}\omega _{\mu C}^{\ \ B}$ (see \cite{11,11a} for details), down to the $SO(2,2)$ group. The absolute value of the determinant in (1) is assumed; however, imposing (anti) self-duality conditions on the generalized curvature $\mathcal{R}$, a Euclidean condition is obtained. In eq. (\ref{R}), $e_{\ \mu }^{a}$ is the tetrad field,
\begin{equation}
g_{\mu \nu }\equiv e_{\mu }^{a}e_{\nu a}\ , \qquad \eta _{ab}\equiv e_{a}^{\nu }e_{\nu b}\ , \tag{3}
\end{equation}
and $f_{\ \mu }^{a}$ is antisymmetric with respect to index permutation,
\begin{equation}
e_{a\mu }f_{\ \nu }^{a}=f_{\mu \nu }=-f_{\nu \mu }, \tag{4}
\end{equation}
which is associated with a central tensorial part of the original $SU(2,2)$ group. Both $e_{\ \mu }^{a}$ and $f_{\ \mu }^{a}$ can be taken as fundamental fields to which the Palatini variational principle is applied. $R_{\mu \nu }$ is the Ricci curvature tensor in a manifold with torsion, $M$ (e.g. $U_{4}$), and $\lambda =(1-d)$ with $d$ being the spacetime dimension. Notice that the Ricci tensor has symmetric and antisymmetric parts, corresponding to the Christoffel and torsion contributions to the connection. Here and below, we consider Greek letters $\mu ,\nu $ as coordinate indices and Latin letters $a,b$ as tetrad indices. With this formalism, we have $M_{\mu }^{a}\equiv e^{a\nu }M_{\nu \mu }$. By using eq. (\ref{R}), the Lagrangian becomes
\begin{equation}
L_{g}=\sqrt{\det \left[ \lambda ^{2}\left( g_{\mu \nu }+f_{\ \mu }^{a}f_{a\nu }\right) +2\lambda R_{\left( \mu \nu \right) }+2\lambda f_{\ \mu }^{a}R_{[a\nu ]}+R_{\ \mu }^{a}R_{a\nu }\right] }, \tag{5}
\end{equation}
where the Ricci tensor can be split into its symmetric and antisymmetric parts, $R_{\mu \nu }=R_{(\mu \nu )}+R_{[\mu \nu ]}$ (see references \cite{11,11a,11b,12,13,14} for details).
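As a quick sanity check on the algebra leading to (5) --- our illustration, not part of the original derivation --- the expansion of the determinant argument can be verified numerically. The Python sketch below assumes a Minkowski tetrad metric and random invertible fields, and confirms that the $e$--$f$ cross terms cancel by the antisymmetry (4), leaving exactly the four groups of terms appearing in (5).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 4
lam = 1 - d                            # lambda = 1 - d, as in the text
eta = np.diag([1., -1., -1., -1.])     # assumed flat tetrad metric

E = rng.normal(size=(d, d))            # tetrad e^a_mu (row a, column mu)
g = E.T @ eta @ E                      # g_{mu nu} = e^a_mu eta_ab e^b_nu
f2 = rng.normal(size=(d, d))
f2 = f2 - f2.T                         # antisymmetric f_{mu nu}
F = np.linalg.inv(E.T @ eta) @ f2      # f^a_mu with e_{a mu} f^a_nu = f_{mu nu}
R = rng.normal(size=(d, d))            # stand-in for R^a_mu

A = lam * (E + F) + R                  # generalized curvature, eq. (2)
M = A.T @ eta @ A                      # M_{mu nu} = R^a_mu R_{a nu}

# e-f cross terms: E.T@eta@F + F.T@eta@E = f2 + f2.T = 0 by antisymmetry,
# so only four groups survive; the two cross terms with R correspond to the
# 2*lam*R_(mu nu) and 2*lam*f^a_mu R_[a nu] pieces of eq. (5).
expansion = (lam**2 * (g + F.T @ eta @ F)
             + lam * (E.T @ eta @ R + R.T @ eta @ E)
             + lam * (F.T @ eta @ R + R.T @ eta @ F)
             + R.T @ eta @ R)
assert np.allclose(M, expansion)
\end{verbatim}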
The basis of the considered approach is a hypercomplex construction of the (metric compatible) space-time manifold $M$ \cite{11,11a,11b,12,13,14}, where for each point of $M$ there exists a local affine space $A$. The connection over $A$, $\widetilde{\Gamma }$, defines a generalized affine connection $\Gamma $ on $M$, specified by $\nabla $ and $K$, where $K$ is an invertible $\left( 1,1\right) $ tensor over $M$. The connection is compatible and rectilinear, i.e.
\begin{equation}
\nabla _{\mu }K_{\rho \sigma }=K_{\rho \alpha }T_{\mu \sigma }^{\alpha },\qquad \nabla _{\lambda }g_{\mu \nu }=0, \tag{6}
\end{equation}
where $T_{\mu \sigma }^{\alpha }$ is the torsion tensor and $g_{\mu \nu }$ is the metric tensor preserved under parallel transport. This compatibility condition ensures that the affine connection $\widetilde{\Gamma }$ maps auto-parallel curves of $\Gamma $ on $M$ into straight lines over the affine space $A$ (locally). The first equation is the condition determining the connection $\Gamma $ in terms of the fundamental tensor $K$. As is well known, the Palatini variational principle determines the connection required by the space-time symmetry as well as the field equations. From here on we assume a four-dimensional spacetime. Consequently, and by construction, the action $(1)$ yields the $G$-invariant conditions (namely, the intersection of the 4-dimensional Lorentz group $L_{4}$, the symplectic group $Sp\left( 4\right) $ and the almost complex group $K\left( 4\right) $), without prior assumptions. As a consequence, the gravitational, Dirac and Maxwell equations arise from the Lagrangian $L_{g}$ as a causally connected closed system. The self-consistency is given by
\begin{equation}
f_{\mu \nu }\equiv \frac{1}{2}\varepsilon _{\mu \nu \rho \sigma }\varphi ^{\rho \sigma }=\ast \varphi _{\mu \nu }, \tag{7}
\end{equation}
where $\varphi _{\nu \lambda }$ is related to the torsion by ${\displaystyle \frac{1}{6}\left( \partial _{\mu }\varphi _{\nu \lambda }+\partial _{\nu }\varphi _{\lambda \mu }+\partial _{\lambda }\varphi _{\mu \nu }\right) =T_{\ \nu \mu }^{\rho }\varphi _{\rho \lambda }}$, and $f_{\mu \nu }$ plays the role of the electromagnetic field. As was shown in \cite{11,11a,11b,12,13,14} for this model of gravity (see Ref.~\cite{15} for astrophysical neutrino applications), the Dirac equation is derived from the same space-time manifold and acquires a coupling modification of the form
\begin{equation}
\gamma^{\alpha}j\left(\frac{1-d}{d}\right)\gamma_{5}h_{\alpha}, \tag{8}
\end{equation}
where $h_{\alpha}=\varepsilon_{\alpha}^{\ \nu\rho\sigma}T_{\nu\rho\sigma}$ is the torsion vector defined by the duality operation in 4 dimensions, and $j$ is a parameter of pure geometrical nature. Here, the torsion described by $h_{\alpha}$ is a dynamical field, and the theory is Lorentz invariant by construction. This dynamical torsion vector is responsible for the generation of the 4-fermion interaction, as will be shown below. The aim of this work is twofold: first, we discuss the possibility of explaining the nature, magnitude, bounds and contributions to the cosmological constant of the 4-fermion interaction from the point of view of unified theories with torsion based on affine geometries. Secondly, we compare our approach with other attempts coming from the context of Riemann-Cartan theory. The layout of the paper is the following. In Sec.~II, we discuss the fermion interaction and vector torsion in view of the Hodge-de Rham decomposition.
Sec.~III is devoted to the derivation of the gravitational field and Dirac equations, while in Sec.~IV we discuss the fermionic structure. The effective action and the 4-fermion interaction, together with the energy-momentum tensor, are considered in Sec.~V. Conclusions are drawn in Sec.~VI. \section{Generalized Hodge-de Rham decomposition, the vector torsion $h$ and the fermion interaction} As pointed out in references \cite{11,11a,11b,12,13,14}, the torsion vector $h=h_{\alpha}dx^{\alpha}$ (the 4-dimensional dual of the torsion field $T_{\beta\gamma\delta}$) plays multiple roles and can be constrained in several different physical situations. Mathematically, it is defined by the Hodge-de Rham decomposition given by the \textbf{4-dimensional Helmholtz theorem}, which states: \textit{If $h=h_{\alpha }dx^{\alpha }\notin F^{\prime }\left( M\right)$ (the set of derivatives of functions on $M$) is a 1-form on $M$, then there exist a zero-form $\Omega $, a 2-form $\alpha =A_{\left[ \mu \nu \right] }dx^{\mu }\wedge dx^{\nu }$ and a harmonic 1-form $q=q_{\alpha }dx^{\alpha }$ on $M$ such that}
\begin{equation*}
h=d\Omega +\delta \alpha +q\rightarrow h_{\alpha }=\nabla _{\alpha }\Omega +\varepsilon _{\alpha }^{\beta \gamma \delta }\nabla _{\beta }A_{\gamma \delta }+q_{\alpha }\,.
\end{equation*}
Notice that even if $q_{\alpha }$ is not harmonic, assuming that $q_{\alpha }=\left( P_{\alpha }-eA_{\alpha }\right)$ is a vector, an axial vector can be added such that the above expression takes the form
\begin{align}
h_{\alpha }& =\nabla _{\alpha }\Omega +\varepsilon _{\alpha }^{\beta \gamma \delta }\nabla _{\beta }A_{\gamma \delta }+\varepsilon _{\alpha }^{\beta \gamma \delta }M_{\beta \gamma \delta }+\left( P_{\alpha }-eA_{\alpha }\right) \tag{10} \\
& =\nabla _{\alpha }\Omega +\varepsilon _{\alpha }^{\beta \gamma \delta }\nabla _{\beta }A_{\gamma \delta }+b_{\alpha }+\left( P_{\alpha }-eA_{\alpha }\right) \,, \notag
\end{align}
where $M_{\beta \gamma \delta }$ is a completely antisymmetric tensor. In such a way, $\varepsilon _{\alpha }^{\beta \gamma \delta }M_{\beta \gamma \delta }\equiv b_{\alpha }$ (an axial vector). One can immediately see that, due to the theorem given above, one of the roles of $h_{\alpha}$ is precisely to be a generalized energy-momentum vector, avoiding the addition ``by hand'' of a matter Lagrangian in the action (5). As is well known, the addition of a matter Lagrangian leads, in general, to non-minimally coupled terms in the equations of motion of the physical fields. Consequently, by avoiding the addition of an energy-momentum tensor, the fields and their interactions are effectively restricted by the same geometrical structure of the space-time itself.
\section{Gravitational field and Dirac equations} It is possible to show \cite{11,11a,11b,12,13,14} that, to derive the Dirac equation, one needs as a starting point the symmetric part of the gravitational field equations derived from $\delta _{g}L_{g}=0$, that is,
\begin{align}
\overset{\circ }{R}_{\mu \nu }& =-2\lambda g_{\mu \nu }+T_{\mu \rho }^{\ \ \ \alpha }T_{\alpha \nu }^{\ \ \ \rho } \tag{11} \\
& =-2\lambda g_{\mu \nu }-2w\left( g_{\mu \nu }h_{\alpha }h^{\alpha }-h_{\mu }h_{\nu }\right) =-2\lambda g_{\mu \nu }-2\left( g_{\mu \nu }\Pi _{\alpha }\Pi ^{\alpha }-\Pi _{\mu }\Pi _{\nu }\right) \tag{12}
\end{align}
Here we use the obvious duality relation between $T$ and $h$ and define the generalized momentum vector as $\sqrt{w}\,h_{\mu }=\Pi _{\mu }$, with $w$ some arbitrary constant that will be conveniently fixed. Then, a mass-like shell condition is immediately obtained,
\begin{equation}
\Pi ^{2}=m^{2}\Rightarrow m=\pm \sqrt{\frac{\overset{\circ }{R}}{2(1-d)}+d}\,, \tag{13}
\end{equation}
where the mass definition (for which the mass-like shell condition holds true) is connected with the spacetime structure, due to the unified character of the theory. Notice that there exists a link between the dimension of the spacetime and the scalar ``Einsteinian'' curvature $\overset{\circ }{R}$. Moreover, the curvature and the mass are constrained to take definite values in order that $d\in \mathbb{N}$, the natural number characteristic of the dimension. On the other hand, knowing that $\lambda =1-d$ and assuming that the parameter $m\in \mathbb{R}$, the limiting condition on the physical values of the mass is $\frac{\overset{\circ }{R}}{2(1-d)}+d\geqslant 0$. Admitting $\Pi _{\mu }\rightarrow \widehat{P}_{\mu }-e\widehat{A}_{\mu }+\gamma ^{5}b_{\mu }$ (with $b_{\mu }\equiv \epsilon _{\mu }^{\ \nu \rho \sigma }M_{\nu \rho \sigma }$ an axial vector), together with the quantum condition whereby the classical quantity is converted to an operator, $\Pi _{\mu }\rightarrow \widehat{\Pi }_{\mu }$, we have
\begin{equation}
\left\{ \left[ \gamma ^{\mu }\left( \widehat{P}_{\mu }-e\widehat{A}_{\mu }+c_{1}\gamma ^{5}b_{\mu }\right) +m\right] \left[ \gamma ^{\nu }\left( \widehat{P}_{\nu }-e\widehat{A}_{\nu }+c_{1}\gamma ^{5}b_{\nu }\right) -m\right] \right\} \Psi =0\,, \tag{14}
\end{equation}
(where $\Psi =\mathbf{u}+i\mathbf{v}$ is a complex function) which leads to the Dirac equation
\begin{equation}
\left[ \gamma ^{\mu }\left( \widehat{P}_{\mu }-e\widehat{A}_{\mu }+c_{1}\gamma ^{5}\widehat{b}_{\mu }\right) -m\right] \Psi =0\,, \tag{15}
\end{equation}
with $m$ given by (13). Notice that this condition, in the Dirac case, is not obtained merely by passing from classical variables to quantum operators; moreover, in the case that the action does not explicitly contain $\widehat{A}_{\mu }$, $h_{\mu }$ remains unspecified due to the gauge freedom in the momentum. Notice that the unified character of the theory makes the number of equations and the field transformations above self-consistent, as will be clear in Section V (see the effective action).
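As a simple numerical illustration of the mass formula (13) --- our example, not part of the original text --- consider $d=4$, so that $\lambda =-3$ and
\begin{equation*}
m=\pm \sqrt{\frac{\overset{\circ }{R}}{2(1-4)}+4}=\pm \sqrt{4-\frac{\overset{\circ }{R}}{6}}\,,
\end{equation*}
so the reality condition $m\in \mathbb{R}$ translates into the curvature bound $\overset{\circ }{R}\leq 24$ (in the units implicit in (13)): a Ricci-flat background $\overset{\circ }{R}=0$ gives $m=\pm 2$, while the mass vanishes exactly at the limiting curvature $\overset{\circ }{R}=24$.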
From the second-order version of (14), it is not difficult to show that for $u^{\lambda }$ (recall $\Psi =\mathbf{u}+i\mathbf{v}$):
\begin{gather}
\left\{ \left( \widehat{P}_{\mu }-e\widehat{A}_{\mu }+c_{1}\gamma ^{5}\widehat{b}_{\mu }\right) ^{2}-m^{2}-\frac{1}{2}\sigma ^{\mu \nu }\left[ \underset{\equiv F_{\mu \nu }}{\underbrace{\left( \nabla _{\mu }\widehat{A}_{\nu }-\nabla _{\nu }\widehat{A}_{\mu }\right) }}-c_{1}\gamma ^{5}\underset{\equiv S_{\mu \nu }}{\underbrace{\left( \nabla _{\mu }\widehat{b}_{\nu }-\nabla _{\nu }\widehat{b}_{\mu }\right) }}\right] \right\} u^{\lambda } \tag{16} \\
+\frac{1}{2}\sigma ^{\mu \nu }R_{\rho \left[ \mu \nu \right] }^{\lambda }u^{\rho }-\frac{1}{2}e\sigma ^{\mu \nu }\left( \widehat{A}_{\mu }\widehat{P}_{\nu }-\widehat{A}_{\nu }\widehat{P}_{\mu }\right) u^{\lambda }=0 \notag
\end{gather}
(the same, obviously, holds for $v^{\lambda }$). It is interesting to see that eq. (16) differs from the standard expression derived in \cite{16} due to the appearance of the last two terms: the term involving the curvature tensor is due to the spin interaction with the gravitational field (through the torsion term in $R_{\rho \left[ \mu \nu \right] }^{\lambda }$), and the last term is the spin interaction with the electromagnetic and mechanical momenta. The important point here is that the spin-gravity interaction term is derived because the spinors are represented as space-time vectors whose covariant derivatives are defined in terms of the $G$-(affine) connection (see also \cite{cosimo} for the classification of the torsion tensor). Another important point to remark is that, in order for the Dirac equation to be global (or to be covariant with respect to spin transformations if working locally), global topological conditions on $M$ are needed. Throughout this work we work locally, and global issues can be taken into account as in standard references (e.g. \cite{25} and references quoted therein, and \cite{26,27} for the relation between spin structures and Dirac equations). \section{Fermionic structure, electromagnetic field and anomalous gyromagnetic factor} If we introduce an expression corresponding to the antisymmetric part of the gravitational field, namely $\nabla_{\alpha}T^{\alpha\beta\gamma}=-2\lambda f^{\beta\gamma}$, into (16), then
\begin{gather}
\left[ \left( \widehat{P}_{\mu}-e\widehat{A}_{\mu}+c_{1}\gamma^{5}\widehat{b}_{\mu}\right) ^{2}-m^{2}-\frac{1}{2}\sigma^{\mu\nu}\left( eF_{\mu\nu}-c_{1}\gamma^{5}S_{\mu\nu}\right) \right] u^{\lambda}-\frac{\lambda}{d}\frac{1}{2}\sigma^{\mu\nu}f_{\left[ \mu\nu\right] }u^{\lambda} \tag{17} \\
-\frac{1}{2}e\sigma^{\mu\nu}\left( \widehat{A}_{\mu}\widehat{P}_{\nu }-\widehat{A}_{\nu}\widehat{P}_{\mu}\right) u^{\lambda}=0 \notag
\end{gather}
and, as a consequence, we have
\begin{equation}
\left[ \left( \widehat{P}_{\mu}-e\widehat{A}_{\mu}+c_{1}\gamma^{5}\widehat{b}_{\mu}\right) ^{2}-m^{2}-\frac{1}{2}\sigma^{\mu\nu}\left( eF_{\mu\nu}\underset{\text{anomalous term}}{\underbrace{-c_{1}\gamma^{5}S_{\mu\nu}+\frac{\lambda}{d}f_{\mu\nu}+e\left( \widehat{A}_{\mu}\widehat{P}_{\nu}-\widehat{A}_{\nu}\widehat{P}_{\mu}\right) }}\right) \right] u^{\lambda}=0 \tag{18}
\end{equation}
where the contributions to the $(g-2)$ factor, due to the axial vector $\widehat{b}_{\mu}$ and to the geometry through the commutation relation between the covariant derivatives $\nabla$, clearly appear.
We can go ahead and see that if $\omega_{2}F_{\mu\nu}=S_{\mu\nu}$ and $\omega_{1}F_{\mu\nu}=\sigma_{\mu\nu }^{\prime}$, the last expression assumes the suggestive form
\begin{gather}
\left\{ \left( \widehat{P}_{\mu}-e\widehat{A}_{\mu}+c_{1}\gamma^{5}\widehat{b}_{\mu}\right) ^{2}-m^{2}-\frac{1}{2}\sigma^{\mu\nu}\left[ \left( e-c_{1}\omega_{2}\gamma^{5}-\omega_{1}\frac{\lambda}{d}\right) F_{\mu\nu }+\omega_{1}\frac{\lambda}{d}\sigma_{\mu\nu}\right] \right\}u^{\lambda }- \tag{19} \\
-\frac{e}{2}\sigma^{\mu\nu}\left( \widehat{A}_{\mu}\widehat{P}_{\nu}-\widehat{A}_{\nu}\widehat{P}_{\mu}\right) u^{\lambda}=0 \notag
\end{gather}
with the result that the gyromagnetic factor is modified accordingly, and a 4-fermion coupling is introduced constructively, thanks to $f_{\mu\nu}$. Although the anomalous term is clearly determined from the above equations, it is extremely useful for comparing the present scheme with other theoretical approaches. With these considerations in mind, it is important to derive the anomalous moment for the electron. Specifically, from the last expression, one gets the correction to the lepton anomalous moment in the form
\begin{equation*}
\Delta a_{e}=-\frac{\omega _{1}}{e}\frac{\lambda }{d}\equiv \frac{\omega _{1}}{e}\left( 1-\frac{1}{d}\right) .
\end{equation*}
The experimental precision in the measurement of this quantity is \cite{17}
\begin{equation*}
\Delta a_{e}^{\exp }=0.28\times 10^{-12},
\end{equation*}
and then the upper bound on the universal geometric parameter $\omega _{1}$ is
\begin{equation*}
\omega _{1}<e\left( \frac{d}{d-1}\right) 0.28\times 10^{-12};
\end{equation*}
in 4 dimensions we therefore have $\omega _{1}<\frac{4}{3}\,e\times 0.28\times 10^{-12}$. This result is useful for constraining the theory. Another important consideration is related to the anomalous magnetic moment. As is well known from the quantum point of view, in the lowest-order diagram, the anomalous magnetic moment term is given by
\begin{equation*}
\Delta \Gamma _{\mu }\left( p,p^{\prime }=p+q\right) =-2iA^{2}\int \frac{d^{4}k}{\left( 2\pi \right) ^{4}}\Gamma _{a}S\left( k^{\prime }=k+q\right) \gamma _{\mu }S\left( k\right) \Gamma _{a},
\end{equation*}
where
\begin{equation*}
S\left( k\right) =\frac{1}{\widehat{k}-M\left( k\right) }
\end{equation*}
is the formal propagator and $\Gamma _{a}=\left\{ I,\gamma _{5},\gamma _{\nu },\gamma _{\nu }\gamma _{5},\sigma _{\nu \lambda }\right\} $ for $a=S,P,V,A,T$ (scalar, pseudoscalar, vector, axial-vector and tensor bilinears). The Fierz transformation of the integrand, necessary to reduce and rewrite the matrix quantities in a convenient computational form, has the general form
\begin{align*}
& \Gamma _{a}\left( \widehat{k^{\prime }}-M\left( k^{\prime }\right) \right) \gamma _{\mu }\left( \widehat{k}-M\left( k\right) \right) \Gamma _{a} \\
& =\sum_{\alpha =s,p,v,a,t}C_{\alpha }\mathrm{Tr}\left[ \left( \widehat{k^{\prime }}+M\left( k^{\prime }\right) \right) \gamma _{\mu }\left( \widehat{k}+M\left( k\right) \right) \Omega _{\alpha }\right] \Omega _{\alpha },
\end{align*}
where $C_{\alpha }$ are the coefficients of the Fierz transformation. This makes the link between experimental data and the theory, through masses, phase space and matrix elements from cross sections. See also \cite{20} for details. \section{Effective action, $\Theta$ term and 4-fermion interaction} Now, let us analyze the theory from a different point of view. In some gravitational models, a link between torsion and CP-violating terms certainly appears.
An illustrative example is given by Ashtekar, who rewrote Einstein's theory, in its Hamiltonian formulation, as a set of differential equations obeyed by an SO(3) connection and its canonically conjugate momenta corresponding to the SO(3) gauge \cite{ashtekar}. Bengtsson and Peldan \cite{18} have shown that if one performs a particular canonical transformation involving Ashtekar's variables and the corresponding SO(3) gauge fields, the expression for the Hamiltonian constraint changes while the other constraints remain unaffected. This corresponds precisely to the addition of a ``CP-violating'' term to the corresponding Lagrangian. Mullick and Bandyopadhyay have shown that this CP-violating term is responsible for nonzero torsion \cite{19}. This $\theta $-term effectively corresponds to the chiral anomaly when a fermion chiral current interacts with a gauge field. Here, the converse statement is found from first principles: the theory with torsion leads, at the effective level, to a $\theta $-term directly related to the space-time dimension through the ``cosmological'' constant $\lambda =1-d$, as we will soon see. \subsection{Deriving the effective Lagrangian} As in the case of a massive vector particle with spin 1, let us derive the effective Lagrangian. The procedure (see for example \cite{16}) has to be performed in two steps in order to avoid several subsidiary conditions: all the information on the dynamics must be obtained from the same variational procedure. The starting effective Lagrangian is
\begin{equation}
L_{eff}=\theta f_{\mu \nu }^{\ast }f^{\mu \nu }+\frac{\theta }{2\lambda }f^{\mu \nu }\left( \nabla _{\mu }h_{\nu }-\nabla _{\nu }h_{\mu }\right) +\frac{\theta }{2\lambda }f^{\ast \mu \nu }\nabla _{\rho }T_{\mu \nu }^{\rho }+A\overline{\Psi }\left( \rho ^{\mu }h_{\mu }+m\right) \Psi +Bh_{\mu }h^{\mu }, \tag{20}
\end{equation}
where
\begin{align}
\rho _{\mu }& =\left( \mathbf{a}_{c}+\mathbf{b}_{c}\gamma ^{5}\right) \gamma _{\mu }^{c}+\varepsilon _{\mu }^{\alpha \beta \gamma }\left( \mathbf{c}_{c}\gamma _{\alpha }^{c}\sigma _{\beta \gamma }\right) \tag{21} \\
& =\left[ \mathbf{a}_{c}+(\mathbf{b}_{c}+\mathbf{c}_{c})\gamma ^{5}\right] \gamma _{\mu }^{c}, \notag
\end{align}
which corresponds to the decomposition of a general vector element of the Lie algebra of $SU(2,2)$. Let us notice that if $\mathbf{b}_{c}+\mathbf{c}_{c}=0$, the pseudo-vectorial part of $\rho _{\mu }$ is eliminated. This fact is directly related, through the variational procedure applied to the effective Lagrangian (20), to the generalized Hodge-de Rham decomposition that we have considered before, that is,
\begin{align}
A\overline{\Psi }\rho ^{\mu }\Psi & =\overline{\Psi }\left( \mathbf{a}^{c}\gamma _{c}^{\mu }\right) \Psi +\overline{\Psi }\left( \left( \mathbf{b}^{c}+\mathbf{c}^{c}\right) \gamma ^{5}\gamma _{c}^{\mu }\right) \Psi \tag{22} \\
& =B\underset{h^{\mu }}{\underbrace{\left[ \nabla ^{\mu }\Omega +\varepsilon ^{\mu \beta \gamma \delta }\nabla _{\beta }A_{\gamma \delta }+\gamma ^{5}b^{\mu }+\left( P^{\mu }-eA^{\mu }\right) \right] }}\,. \notag
\end{align}
Then, following the standard procedure (Berestetsky et al.
\cite{16}), after deriving the equations of motion and the related constraints, the effective Lagrangian takes the form
\begin{equation}
L_{eff}=L_{1}+L_{2}+L_{3}, \tag{23}
\end{equation}
with
\begin{align}
L_{1}& =\left( \theta +\lambda \right) f_{\mu \nu }^{\ast }f^{\mu \nu }\rightarrow \text{($\theta$ term)} \tag{24} \\
L_{2}& =A\overline{\Psi }\left[ \rho ^{\mu }\left( P_{\mu }-eA_{\mu }+\gamma ^{5}b_{\mu }+\nabla _{\mu }\Omega +\varepsilon _{\mu }^{\beta \gamma \delta }\nabla _{\beta }A_{\gamma \delta }\right) +m\right] \Psi \rightarrow \text{(Dirac-like term)} \tag{25} \\
L_{3}& =A^{2}\overline{\Psi }\rho _{\mu }\Psi \overline{\Psi }\rho ^{\mu }\Psi \rightarrow \text{(4-fermion term)} \tag{26}
\end{align}
It is also important to note that all the dependence on the coefficient values is charged on the respective parameters (e.g.\ $\theta$, $A$, etc.) in order to avoid the unboundedness problem for the Lagrangian. \subsection{Energy-momentum tensor and cosmological term} It is worth noticing that the mass term in the Dirac equation (15) contains the GR curvature scalar plus the cosmological term $\lambda=(1-d)$. In the analysis already made in \cite{20}, the mass is a constant; it is thus naturally included in the Dirac equation and hence in the energy-momentum tensor. Also here, the gravitational part of the Lagrangian (containing the curvature) has been avoided. We can write the effective energy-momentum tensor derived from the effective Lagrangian density $L_{eff}$ as
\begin{align*}
T_{\rho \sigma }& \propto 4\left( \theta +\lambda \right) \left[ f_{\alpha \rho }^{\ast }f_{\ \sigma }^{\alpha }-g_{\rho \sigma }\frac{f_{\mu \nu }^{\ast }f^{\mu \nu }}{4}\right] -A\overline{\Psi }\left[ g_{\rho \sigma }\rho ^{\mu }h_{\mu }-\left( \rho _{\rho }h_{\sigma }+h_{\rho }\rho _{\sigma }\right) \pm \frac{\left( \overset{\circ }{R}+\lambda d\right) g_{\rho \sigma }\mp \overset{\circ }{R}_{\rho \sigma }}{\sqrt{\overset{\circ }{R}+\lambda d}}\right] \Psi \\
& -2A^{2}\left[ \frac{g_{\rho \sigma }}{2}\overline{\Psi }\rho _{\mu }\Psi \overline{\Psi }\rho ^{\mu }\Psi -\overline{\Psi }\rho _{\rho }\Psi \overline{\Psi }\rho _{\sigma }\Psi \right] \,.
\end{align*}
Using the Dirac equation and rearranging the 4-fermion term, the above tensor can be rewritten in order to identify the effective contribution to the cosmological term from the fermion sector. We obtain
\begin{align*}
T_{\rho \sigma }& \propto 4\left( \theta +\lambda \right) \left[ f_{\alpha \rho }^{\ast }f_{\ \sigma }^{\alpha }-g_{\rho \sigma }\frac{f_{\mu \nu }^{\ast }f^{\mu \nu }}{4}\right] -A\overline{\Psi }\left[ -\left( \rho _{\rho }h_{\sigma }+h_{\rho }\rho _{\sigma }\right) \mp \frac{\overset{\circ }{R}_{\rho \sigma }}{\sqrt{\overset{\circ }{R}+\lambda d}}\right] \Psi \\
& +A^{2}g_{\rho \sigma }\overline{\Psi }\rho _{\mu }\Psi \overline{\Psi }\rho ^{\mu }\Psi \,.
\end{align*}
As first pointed out by Eddington \cite{21}, the mass term is directly related to the curvature and implied by the Mach principle. Here, we want to stress that fermion interactions can also contribute to the cosmological term and can thus take part in the cosmic dynamics as a sort of dark energy contribution \cite{capolupo}.
Notice that, from the above expression, the pure fermionic contribution to the cosmological constant, due to the 4-fermion interaction, is
\begin{equation*}
\Lambda _{f}\equiv \kappa \rho _{\Lambda _{f}}=+\kappa A^{2}g_{\rho \sigma }\overline{\Psi }\rho _{\mu }\Psi \overline{\Psi }\rho ^{\mu }\Psi
\end{equation*}
(where the units of the constant are $\left[ \kappa \right] =m_{Pl}^{-2}$). Considering the possibility of quark condensates, it was conjectured \cite{24} that a nonzero vacuum expectation value of the 4-fermion term arises from a spontaneous breaking of the global chiral symmetry by the $\left\langle \overline{q}q\right\rangle $ condensate, which sets the energy scale of the condensation to the QCD scale of the running strong-interaction coupling, $\Lambda _{QCD}$. To see this, the Shifman-Vainshtein-Zakharov (SVZ) approximation can be effectively used \cite{25}, giving the following result:
\begin{equation*}
\left\langle 0\left\vert \Lambda _{f}\right\vert 0\right\rangle =\frac{16\kappa A^{2}}{9}\left( \mathbf{a}_{c}\mathbf{a}^{c}-\left( \mathbf{b}_{c}+\mathbf{c}_{c}\right) \left( \mathbf{b}^{c}+\mathbf{c}^{c}\right) \right) \left\langle 0\left\vert \overline{\Psi }\Psi \right\vert 0\right\rangle ^{2}\,.
\end{equation*}
Here, the contribution corresponding to the mixed axial/vector channel is identically zero (only \textbf{A-A} and \textbf{V-V} expectation values give contributions to the cosmological constant; see the explicit channel computations below), and the arbitrary constants $\mathbf{a}_{c}$, $\mathbf{b}_{c}$ and $\mathbf{c}_{c}$ can be defined accordingly. It is also useful to consider that, from the above formula, all the parameters can be fixed from the corresponding experimental data. The traces for the various channels explicitly are
\begin{align*}
\mathrm{Tr}\left[ \left( \widehat{k^{\prime }}+M\left( k^{\prime }\right) \right) \gamma _{\mu }\left( \widehat{k}+M\left( k\right) \right) I\right] & =4\left( k^{\prime }M\left( k\right) +kM\left( k^{\prime }\right) \right) _{\mu }, \\
\mathrm{Tr}\left[ \left( \widehat{p_{2}}+m\right) \gamma _{\mu }\left( \widehat{p_{1}}+m\right) \gamma _{5}\right] & =0, \\
\mathrm{Tr}\left[ \left( \widehat{p_{2}}+m\right) \gamma _{\mu }\left( \widehat{p_{1}}+m\right) \gamma _{\nu }\right] & =4\left[ m^{2}g_{\mu \nu }+p_{2\mu }p_{1\nu }+p_{2\nu }p_{1\mu }-\left( p_{1}\cdot p_{2}\right) g_{\mu \nu }\right] , \\
\mathrm{Tr}\left[ \left( \widehat{p_{2}}+m\right) \gamma _{\mu }\left( \widehat{p_{1}}+m\right) \gamma _{\nu }\gamma _{5}\right] & =-4i\varepsilon ^{\alpha \mu \beta \nu }p_{2\alpha }p_{1\beta }, \\
\mathrm{Tr}\left[ \left( \widehat{p_{2}}+m\right) \gamma _{\mu }\left( \widehat{p_{1}}+m\right) \sigma _{\lambda \rho }\right] & =4im\left[ g_{\mu \lambda }\left( p_{2}-p_{1}\right) _{\rho }-g_{\mu \rho }\left( p_{2}-p_{1}\right) _{\lambda }\right] ,
\end{align*}
and then the above result for $\left\langle 0\left\vert \Lambda _{f}\right\vert 0\right\rangle $ is explicitly recovered. \section{Discussion and conclusions} Let us now analyze each term of the Lagrangian (23). There is a possible screening between the $\theta $ and $\lambda $ Lagrangian terms. A similar relation between $\theta $ and $\lambda $ has been conjectured in Ref.~\cite{19}. There are no Holst or FMT terms, in contrast with other theories involving gravitation in canonical formulation (the possibility of obtaining Holst-like terms in $f(R)$ theories was analyzed in \cite{28} and \cite{29}). The vector-vector and axial-axial terms are also present in the FMT Lagrangian, but the term that we have here is not constrained by any extra parameter such as the Barbero-Immirzi one.
It is clear that the term proportional to the mixed axial-vector coupling is not present in the model discussed here, due to the fundamental geometrical structure of our construction. Here we have
\begin{equation*}
A\overline{\Psi }\rho ^{\mu }\Psi =\overline{\Psi }\left( \mathbf{a}_{c}\gamma _{\mu }^{c}\right) \Psi +\overline{\Psi }\left( \left( \mathbf{b}_{c}+\mathbf{c}_{c}\right) \gamma ^{5}\gamma _{\mu }^{c}\right) \Psi \equiv \mathbf{V+A},
\end{equation*}
where the sum of axial and vector terms appears. We immediately see that the 4-fermion interaction $\overline{\Psi }\rho _{\mu }\Psi \overline{\Psi }\rho ^{\mu }\Psi $ geometrically picks up only \textbf{V-V} and \textbf{A-A} interactions, if we suppose the coefficients to be real (in particular $\mathbf{b}_{c}$ and $\mathbf{c}_{c}$). This important fact has been experimentally probed, as pointed out in \cite{22}. In that paper, the effective 4-fermion interaction was focused on the case of neutrinos endowed with non-standard interactions. These are a natural outcome of many neutrino mass models \cite{13} and can be of two types: flavour-changing (FC) and non-universal (NU). As is well known, see-saw-type models lead to a non-trivial structure of the lepton mixing matrix characterizing the charged and neutral current weak interactions. This leads to gauge-induced non-standard interactions which may violate lepton flavor and CP even with massless neutrinos. Alternatively, non-standard neutrino interactions may also arise in models where neutrino masses are \textquotedblleft calculable\textquotedblright\ from radiative corrections. Finally, in some supersymmetric unified models, the strength of non-standard neutrino interactions may be a calculable renormalization effect. How sizable non-standard interactions are is a model-dependent issue. In some models, non-standard interaction strengths are too small to be relevant for neutrino propagation, because they are suppressed by some large scale and/or restricted by limits on neutrino masses. However, this need not be the case, and there are interesting models where moderate-strength non-standard interactions survive in the limit of light (or even massless) neutrinos. Such a fact may occur even in the context of fully unified models like SO(10). Non-standard interactions may, in principle, affect neutrino propagation properties in matter as well as the detection cross sections. Thus their existence can modify the solar neutrino signal observed by the experiments. There appears, at the effective level, a $\theta $ (parity-violating) term that is not present in other formulations, such as the standard Einstein-Cartan one (see for example the discussion in \cite{20}) and loop-quantum-gravity-inspired ones \cite{18,19}. For quarks, a non-zero vacuum expectation value of the 4-fermion term arises from a spontaneous breaking of the global chiral symmetry by the $\left\langle \overline{q}q\right\rangle $ condensate, which sets the energy scale of the condensation to the QCD scale of the running strong-interaction coupling, $\Lambda _{QCD}$. Quark condensates are associated with the color degree of freedom; they characterize the confined phase of quark matter and constitute, together with gluon condensates, the QCD vacuum. For leptons, which do not interact strongly and are not subject to confinement, less is known about the form and scale of condensation.
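The \textbf{V-V}/\textbf{A-A} selection rule and the trace identities quoted above lend themselves to a direct numerical check. The following Python sketch is our illustration only; it assumes the Dirac representation with $\eta =\mathrm{diag}(1,-1,-1,-1)$, $\gamma ^{5}=i\gamma ^{0}\gamma ^{1}\gamma ^{2}\gamma ^{3}$ and $\varepsilon ^{0123}=+1$ (so the overall signs of the $\varepsilon $-traces are convention-dependent), and it works with flat gamma matrices, the tetrad structure being absorbed into the coefficients. It verifies the reduction $\varepsilon _{\mu }^{\ \alpha \beta \gamma }\gamma _{\alpha }\sigma _{\beta \gamma }\propto \gamma ^{5}\gamma _{\mu }$ used in the second line of (21), as well as the vector, axial and tensor channel traces of Sec.~V.
\begin{verbatim}
import numpy as np
from itertools import permutations

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g0 = np.block([[np.eye(2), Z2], [Z2, -np.eye(2)]])
gam = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
eta = np.diag([1., -1., -1., -1.])
g5 = 1j * gam[0] @ gam[1] @ gam[2] @ gam[3]
sig = [[0.5j * (gam[a] @ gam[b] - gam[b] @ gam[a]) for b in range(4)]
       for a in range(4)]

def sgn(p):                       # parity of a permutation of (0,1,2,3)
    s, p = 1, list(p)
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

eps = np.zeros((4, 4, 4, 4))      # epsilon^{0123} = +1
for p in permutations(range(4)):
    eps[p] = sgn(p)

def slash(p):                     # gamma^mu p_mu for contravariant p
    return sum(eta[m, m] * p[m] * gam[m] for m in range(4))

# (i) eps-contracted gamma.sigma is proportional to gamma^5 gamma_mu
for mu in range(4):
    T = sum(eps[mu, a, b, c] * gam[a] @ sig[b][c]
            for a in range(4) for b in range(4) for c in range(4))
    B = g5 @ (eta[mu, mu] * gam[mu])
    coef = np.vdot(B, T) / np.vdot(B, B)       # least-squares scalar fit
    assert np.allclose(T, coef * B)

# (ii)-(iv) channel traces with random momenta
rng = np.random.default_rng(0)
p1, p2, m = rng.normal(size=4), rng.normal(size=4), 1.3
P1 = slash(p1) + m * np.eye(4)
P2 = slash(p2) + m * np.eye(4)
dot12 = p1 @ eta @ p2
p1l, p2l = eta @ p1, eta @ p2                  # lowered components
q = p2 - p1
for mu in range(4):
    for nu in range(4):
        v = np.trace(P2 @ gam[mu] @ P1 @ gam[nu])
        assert np.isclose(v, 4 * (m**2 * eta[mu, nu] + p2[mu] * p1[nu]
                                  + p2[nu] * p1[mu] - dot12 * eta[mu, nu]))
        a = np.trace(P2 @ gam[mu] @ P1 @ gam[nu] @ g5)
        assert np.isclose(a, -4j * sum(eps[al, mu, be, nu] * p2l[al] * p1l[be]
                                       for al in range(4) for be in range(4)))
    for la in range(4):
        for rho in range(4):
            t = np.trace(P2 @ gam[mu] @ P1 @ sig[la][rho])
            assert np.isclose(t, 4j * m * (eta[mu, la] * q[rho]
                                           - eta[mu, rho] * q[la]))
print("all identities verified")
\end{verbatim}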
It is important to note that, in the study of the cosmological constant, one should bear in mind that vector torsion contributes to the conformal anomaly (see, for instance, \cite{30}), which may produce qualitatively the same effect on cosmology (anomaly-driven inflation or $\Lambda$CDM dark energy) as a pure cosmological constant. Settling the question of the interplay between gravity models with torsion, space-time dimensionality and the 4-fermion interaction is still complicated, although some claims in the recent literature suggest the contrary \cite{23}. The basic points under discussion are: dimensional compactification in space-times with torsion, the origin of the 4-fermion interaction, and the specific extra-dimensional models that have to be considered. From the viewpoint of the model presented here, the main issue to be addressed is that, in a space-time with dimensionality $>4$, the torsion dual will not be a vector but a higher-rank tensor field. However, the total antisymmetry of the torsion in such a case simplifies the physical analysis. In a forthcoming paper, we will discuss the possible experimental tests of the unified scheme presented here. \section{Acknowledgements} DJCL is very grateful to the people of the Bogoliubov Laboratory of Theoretical Physics (BLTP) and the JINR Directorate for their hospitality and financial support, and also to Professors J.W.F. Valle and F.J. Escrihuela for bringing important references on the subject to his attention.
\section{Introduction} \setlength\abovedisplayskip{1pt} \setlength\belowdisplayskip{1pt} In many edge networks, mobile and IoT devices collecting a huge amount of data are often connected to each other or to a central node wirelessly. The unreliable nature of wireless connectivity, together with constraints in computing resources at edge devices, puts forth a significant challenge for the computation, communication and coordination required to learn an accurate model at the network edge. In this paper, we consider a many-to-one wireless architecture for distributed learning at the network edge, where the edge devices collaboratively train a machine learning model, using local data, in a distributed manner. This departs from conventional approaches which rely heavily on cloud computing to handle high-complexity processing tasks, where one significant challenge is to meet the stringent low-latency requirement. Further, due to privacy concerns, it is highly desirable to derive local learning model updates without sending data to the cloud. In such distributed learning scenarios, the communication between the edge devices and the server can become a bottleneck, in addition to the other challenges in achieving edge intelligence. In this paper, we consider a wireless edge network with $M$ devices and an edge server, where a high-dimensional machine learning model is trained using distributed learning. In such a setting with unreliable and rate-limited communications, local updates at sender devices should be carefully crafted and compressed to make full use of the wireless communication resources available, and should work in concert with the receiver (edge server) so as to learn an accurate model. Notably, lossy wireless communications for edge intelligence present unique challenges and opportunities \cite{Zhu2018a}, subject to bandwidth and power requirements, on top of the employed multiple-access techniques. Since it often suffices to compute a function of the sum of the local updates for training the model, over-the-air computing is a favorable alternative to the standard multiple-access communications for edge learning. More specifically, over-the-air computation \cite{Goldenbaum2013, Abari2016} takes advantage of the superposition property of the wireless multiple-access channel via simultaneous analog transmissions of the local messages, and then computes a function of the messages at the receiver, scaling the signal-to-noise ratio (SNR) well with an increasing number of users. In a nutshell, when multiple edge devices collaboratively train a model, it is plausible to employ distributed learning over-the-air. We seek to answer the following key questions: 1) What is the impact of the wireless communication bandwidth/power on the accuracy and convergence of the edge learning? 2) What coordinates in local gradient signals should be communicated by each edge device to the receiver? 3) How should the coordination be carried out so that multiple sender devices can work in concert with the receiver? 4) What is the optimal way for the receiver to process the received noisy gradient signals to be used for the stochastic gradient descent algorithm? 5) How should each sender device carry out power allocation across subcarriers to transmit its local updates? Intuitively, it is sensible to allocate more power to a coordinate with a larger gradient value to speed up convergence. Further, power allocation should also be channel-aware.
\begin{figure*}[!tbh] \begin{center} \centerline{\includegraphics[width=1.5\columnwidth]{figs/newSchematic.pdf}} \caption{A bandlimited coordinate descent algorithm for distributed learning over a wireless multi-access channel} \label{commsmodel} \end{center} \vspace{-7ex} \end{figure*} To answer the above questions, we consider an integrated learning and communication scheme where multiple edge devices send their local gradient updates over multi-carrier communications to the receiver for learning. Let $K$ denote the number of subcarriers for communications, where $K$ is determined by the wireless bandwidth. First, $K$ dimensions of the gradient updates are determined (by the receiver) to be transmitted. Multiple methods can be used for selecting the $K$ coordinates, e.g., selecting the top-$K$ (in absolute value) coordinates of the sum of the gradients or randomized uniform selection. This paper will focus on randomized uniform selection (we elaborate further on this in Section V). During the subsequent communications, the gradient updates are transmitted only in the $K$ selected dimensions via over-the-air computing over the $K$ corresponding subcarriers, each experiencing time-varying channel conditions and hence time-varying transmission errors. The devices are subject to power constraints, giving rise to a key question on how to allocate transmission power across dimensions, at each edge device, based on the gradient update values and channel conditions. Thus, we explore joint optimization of the power allocation and the learning rate to obtain the best estimate of the gradient updates and minimize the impact of the communication error. We investigate a centralized solution to this problem as a benchmark, and then devise sub-optimal distributed solutions amenable to practical implementation. We note that we have also studied the impact of synchronization errors across devices in this setting (we omit the details due to limited space). The main contributions of this paper are summarized as follows: \begin{itemize} \item We take a holistic approach to study federated learning algorithms over wireless multiple-access channels, and the proposed bandlimited coordinate descent (BLCD) algorithm is built on an innovative integration of over-the-air computing, multi-carrier communications, and wireless resource allocation. \item We characterize the impact of communication error and compression, in terms of the resulting gradient bias and mean squared error (MSE), on the convergence performance of the proposed algorithms. Specifically, when the communication error is unbiased, the BLCD algorithm converges to a stationary point under very mild conditions on the loss function. In the case that the bias in the communication error does exist, the iterates of the BLCD algorithm return to a contraction region centered around a scaled version of the bias infinitely often. \item To minimize the impact of the communication error, we study joint optimization of the power allocation at individual devices and the learning rates at the receiver. Observe that since there exist tradeoffs between bias and variance, minimizing the MSE of the communication error does not necessarily amount to minimizing the bias therein. Our findings reveal that the optimal power allocation across different subcarriers should take into account both the gradient values and the channel conditions, thus generalizing the widely used water-filling policy. We also develop sub-optimal distributed solutions amenable to implementation.
In particular, due to the power constraints at individual devices, it is not always feasible to achieve unbiased estimators of the gradient signal across the coordinates. To address this complication, we develop a distributed algorithm which can drive the bias in the communication error to (close to) zero under given power constraints and then reduce the corresponding variance as much as possible. \end{itemize} \section{Related Work} Communication-efficient SGD algorithms are of great interest for reducing the latency caused by the transmission of high-dimensional gradient updates with minimal performance loss. Such algorithms in the ML literature are based on compression via quantization \cite{Alistarh2016, Wen2017, Bernstein2018a, Wu2018}, sparsification \cite{Aji2017, Stich2018, Alistarh2018} and federated learning \cite{Konecny2016} (or local updates \cite{Stich2018a}), where lossless communication is assumed to be provided. At the wireless edge, physical-layer design and communication loss should be taken into consideration for the adoption of communication-efficient algorithms. Power allocation for over-the-air computation is investigated for different scenarios in many other works \cite{Dong2018, Liu2019, Wen2018, Zhu2018b, Cao2019}, including MIMO, reduced-dimensional MIMO, the standard many-to-one channel and different channel models. In related works on ML over wireless channels, \cite{Zhu2018, Yang2019, Zeng2019, Amiri2019, Amiri2019a, Amiri2019c, Ahn2019, Sery2019} consider over-the-air transmissions for training of the ML model. The authors in \cite{Amiri2019} propose sparsification of the updates with compressive sensing for further bandwidth reduction, and the recovered sum of the compressed sparse gradients is used for the update. They also apply a similar framework to federated learning and fading channels in \cite{Amiri2019a}. \cite{Zhu2018} considers broadband aggregation for federated learning with opportunistic scheduling based on the channel coefficients for a set of devices uniformly distributed over a ring. Lastly, \cite{Sery2019} optimizes gradient-descent-based learning over multiple-access fading channels. It is worth noting that the existing approaches for distributed learning in wireless networks do not fully account for the characteristics of lossy wireless channels. It is our hope that the proposed BLCD algorithms can lead to an innovative architecture of distributed edge learning over wireless networks that accounts for computation, power, spectrum constraints and packet losses. \section{Federated Learning over Wireless Multi-access Networks} \subsection{Distributed Edge Learning Model} Consider an edge computing environment with $M$ devices $\mathcal{M}=\{1,\ldots,M\}$ and an edge server. As illustrated in Figure 1, a high-dimensional ML model is trained at the server by using an SGD-based algorithm, where stochastic gradients are calculated at the devices with the data points obtained by the devices, and a (common) subset of the gradient updates is transmitted through different subcarriers via over-the-air computing. The general edge learning problem is as follows: \begin{equation} \min_{w\in\mathbb{R}^d} f(w):=\frac{1}{M} \sum_{m=1}^{M} \mathbb{E}_{\xi_m} [ l(w, \xi_m)] \end{equation} in which $l(\cdot)$ is the loss function, and edge device $m$ has access to inputs $\xi_m$. Such optimization is typically performed iteratively through empirical risk minimization.
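To fix ideas, the following toy sketch (ours: a synthetic least-squares instance with hypothetical names and sizes throughout) instantiates the objective above, with each device computing a stochastic gradient from its own local samples and the server averaging them into a vanilla SGD step, given formally in the next subsection.
\begin{verbatim}
import numpy as np

# Toy instance of the edge learning problem: l(w, xi) = 0.5 (x^T w - y)^2
rng = np.random.default_rng(0)
M, d, n = 8, 20, 100
w_true = rng.normal(size=d)
local_data = []
for _ in range(M):                   # each device holds its own samples
    X = rng.normal(size=(n, d))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    local_data.append((X, y))

def local_grad(w, m, batch=10):
    """Stochastic gradient computed from device m's local data."""
    X, y = local_data[m]
    idx = rng.choice(n, size=batch, replace=False)
    Xb, yb = X[idx], y[idx]
    return Xb.T @ (Xb @ w - yb) / batch

# One round: w <- w - gamma * (1/M) * sum_m g^m(w)
w, gamma = np.zeros(d), 0.05
g = sum(local_grad(w, m) for m in range(M)) / M
w = w - gamma * g
\end{verbatim}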
In the sequel, we let $w_t$ denote the parameter value of the ML model at communication round $t$, and at round $t$ edge device $m$ uses its local data $\xi_{m,t}$ to compute a stochastic gradient $g^m_t (w_t):=\nabla l(w_t,\xi_{m,t})$. Define $g_t(w_t) = \frac{1}{M}\sum_{m=1}^{M} g^m_t (w_t)$. The standard vanilla SGD algorithm is given as \begin{equation} \label{eqn:genericupdate} w_{t+1} = w_t - \gamma g_t(w_t) \end{equation} with $\gamma$ being the learning rate. Nevertheless, different updates can be employed for different SGD algorithms, and this study will focus on communication-error-aware SGD algorithms. \subsection{Bandlimited Coordinate Descent Algorithm} Due to the significant discrepancy between the wireless bandwidth constraint and the high-dimensional nature of the gradient signals, we propose a sparse variant of the SGD algorithm over the wireless multiple-access channel, named bandlimited coordinate descent (BLCD), in which at each iteration only a common set of $K$ coordinates, $I(t)\subset \{1, \ldots, d\}$ (with $K\ll d$), of the gradients is selected to be transmitted through over-the-air computing for the gradient updates. The details of coordinate selection for the BLCD algorithm are relegated to Section \ref{sec:controlphase}. Worth noting is that, due to the unreliable nature of wireless connectivity, the communication is assumed to be lossy, resulting in erroneous estimation of the updates at the receiver. Moreover, gradient correction is performed by keeping the difference between the update made at the receiver and the gradient value at the transmitter for the subsequent rounds, as gradient correction dramatically improves the convergence rate with sparse gradient updates \cite{Stich2018}. For convenience, we first define the gradient sparsification operator as follows. \begin{definition} Define $ C_I : \mathbb{R}^d \rightarrow \mathbb{R}^d$ for a set $I \subseteq \{1,\ldots, d\}$ as follows: for every input $x \in \mathbb{R}^d$, $ \big(C_I (x)\big)_j$ is $(x)_{j} $ for $ j \in I$ and $0$ otherwise. \end{definition} Since the operator $C_I$ compresses a $d$-dimensional vector to a $K$-dimensional one, we will also refer to it as a compression operator in the rest of the paper. \begin{algorithm}[!t] \caption{Bandlimited Coordinate Descent Algorithm}\label{alg_1} \begin{algorithmic}[1] \STATE \textbf{Input:} Sample batches \(\xi_{m,t}\), model parameters \(w_1\), initial learning rate \(\gamma\), sparsification operator \(C_t(.)\), \(\forall m=1,\dots,M; \forall t=1,\dots,T.\) \STATE \textbf{Initialize:} \(r_t^m:=0\). \FOR{$t=1:T$} \FOR{$m=1:M$} \STATE \(g_t^m(w_t):= \text{stochasticGradient}(f(w_t,\xi_{m,t}))\) \STATE \(u_{t}^m := \gamma g_t^m(w_t)+r_t^m \) \STATE \(r_{t+1}^m := u_t^m-C_t(u_t^m)\) \STATE Compute power allocation coefficients \(b_{km}^*,\forall k=1,\dots,K\). \STATE Transmit \(\mathbf{b}^*\odot C_t(u_t^m)\) \ENDFOR \STATE Compute gradient estimator $\hat{G}_t(w_t)$ \STATE \(w_{t+1}:= w_t - \hat{G}_t(w_t) \). \STATE Broadcast \(w_{t+1}\) back to all transmitters. \ENDFOR \end{algorithmic} \end{algorithm} With a bit of abuse of notation, we let $C_t$ denote $C_{I(t)}$ for convenience in the following. Following \cite{Karimireddy2019}, we incorporate the sparsification error made in each iteration (by the compression operator $C_t$) into the next step, to alleviate the possible gradient bias therein and improve the convergence.
Specifically, as in \cite{Karimireddy2019}, one plausible way for compression error correction is to update the gradient correction term as follows: \begin{align} r_{t+1}^m &= u_t^m - C_t(u_t^m), \label{eqn:SGDmemupdatestd}\\ u_t^m &\triangleq \gamma g^m_t(w_t) + r_t^m \end{align} where $ r_{t+1}^m $ keeps the error of the sparsification operator in the memory of user $m$ at round $t$, and $u_t^m $ is the scaled gradient with correction at device $m$, where the scaling factor $\gamma$ is the learning rate in equation~\eqref{eqn:genericupdate}. {(We refer readers to \cite{Karimireddy2019} for more insights on this error-feedback-based compressed SGD.)} Due to the lossy nature of wireless communications, there would be communication errors and the gradient estimators at the receiver would be erroneous. In particular, the gradient estimator at the receiver in the BLCD algorithm can be written as \begin{equation} \label{eqn:SGDupdate} \hat{G}_t(w_t) = \frac{1}{M}\sum_{m=1}^{M} C_t \left(u_t^m \right) + \epsilon_t, \end{equation} where $\epsilon_t$ denotes the random communication error in round $t$. In a nutshell, the bandlimited coordinate descent algorithm is outlined in Algorithm~\ref{alg_1}. Recall that $g_t(w_t) = \frac{1}{M}\sum_{m=1}^{M} g^m_t(w_t)$ and define $r_t \triangleq \frac{1}{M}\sum_{m=1}^{M} r^m_t$. Thanks to the common sparsification operator across devices, the update in the SGD algorithm at communication round $t$ is given by \begin{equation} \label{eqn:updatesimplified} w_{t+1} = w_t - \big[ C_t(\gamma g_t(w_t) +r_t) + \epsilon_t\big]. \end{equation} To quantify the impact of the communication error, we use the corresponding communication-error-free counterpart as the benchmark, defined as follows: \begin{equation} \label{eqn:gcsimplified} \hat{w}_{t+1} = w_t - C_t(\gamma g_t(w_t) +r_t) . \end{equation} It is clear that $w_{t+1}= \hat{w}_{t+1} - \epsilon_t $. For convenience, we define \(\tilde{w}_t \triangleq {w}_{t} - r_t \). It can be shown that \(\tilde{w}_{t+1}= \tilde{w}_{t} - \gamma g_t(w_t) - \epsilon_{t} \). Intuitively, $w_{t+1}$ in (\ref{eqn:updatesimplified}) is a noisy version of the iterate $\hat{w}_{t+1}$ in (\ref{eqn:gcsimplified}), which implies that \(\tilde{w}_{t+1} \) is a noisy version of the compression-error correction of $\hat{w}_{t+1}$ in (\ref{eqn:gcsimplified}), where the ``noisy perturbation'' is incurred by the communication error. \subsection{BLCD Coordinate Transmissions over Multi-Access Channel} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=\columnwidth]{figs/BLCD-MACprotocol2.pdf}}% \vspace{0in} \caption{A multi-access communication protocol for bandlimited coordinate selection and transmission.}\label{flowchart1} \end{center} \vspace{-0.15in} \end{figure} A key step in the BLCD algorithm is to achieve coordinate synchronization of the transmissions among many edge devices. To this end, we introduce a receiver-driven low-complexity multi-access communication protocol, as illustrated in Fig.~\ref{flowchart1}, with the function $C_t(x)$ denoting the compression of $x$ at round $t$. Let $I(t)$ (of size $K$) denote the subset of coordinates chosen for transmission by the receiver at round $t$. Observe that the updates at the receiver are carried out only in the dimensions $I(t)$. Further, the edge receiver can broadcast its updated iterate to participant devices, over the reverse link. This task is quite simple, given the broadcast nature of wireless channels.
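To summarize the receiver-driven round just described --- coordinate selection at the receiver, sparsified error-feedback updates at the devices, aggregation, and broadcast --- the following Python sketch (ours) simulates one BLCD round end-to-end. It is a minimal illustration under simplifying assumptions: a toy gradient oracle stands in for the local stochastic gradients, and the communication error $\epsilon_t$ is modeled as additive Gaussian noise on the selected coordinates rather than the physical-layer model described next.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, K, M = 1000, 32, 10        # illustrative dimensions
gamma, sigma = 0.05, 0.01     # learning rate, noise level
w = np.zeros(d)
r = np.zeros((M, d))          # per-device correction memories r_t^m

def grad_fn(w, m):            # placeholder for g_t^m(w_t)
    return w - (m + 1.0) / M * np.ones(d)

def blcd_round(w, r):
    I_t = rng.choice(d, size=K, replace=False)  # common coordinate set I(t)
    agg = np.zeros(d)
    for m in range(M):
        u = gamma * grad_fn(w, m) + r[m]        # u_t^m = gamma g_t^m + r_t^m
        c = np.zeros(d)
        c[I_t] = u[I_t]                         # C_t(u_t^m)
        r[m] = u - c                            # error feedback r_{t+1}^m
        agg += c
    G_hat = agg / M                             # (1/M) sum_m C_t(u_t^m)
    G_hat[I_t] += sigma * rng.normal(size=K)    # communication error eps_t
    return w - G_hat                            # w_{t+1} = w_t - G_hat

for _ in range(200):
    w = blcd_round(w, r)
\end{verbatim}
In this sketch the receiver-side estimator coefficients are implicitly set to one; the rest of this section describes how the physical layer actually produces $\hat{G}_t(w_t)$.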
In the transmissions, each coordinate of the gradient updates is mapped to a specific subcarrier and then transmitted through the wireless MAC channel, and the coordinates transmitted by different devices over the same subcarrier are received by the edge server in the form of an aggregate sum. {It is worth noting that the above protocol is also applicable to the case when the SGD updates are carried out for multiple rounds at the devices.} When there are many edge devices, over-the-air computation can be used to take advantage of the superposition property of the wireless multiple-access channel via simultaneous analog transmissions of the local updates. More specifically, at round $t$, the received signal in subcarrier $k$ is given by: \begin{equation} \label{eqn:channel} y_k (t) = \sum_{m=1}^{M} b_{km}(t) h_{km}(t) x_{km} (t) + n_k (t) \end{equation} where $b_{km}(t)$ is a power scaling factor, $h_{km}(t)$ is the channel gain, $x_{km}(t)$ is the message of user $m$ on subcarrier $k$, and $n_k (t) \sim \mathcal{N}(0,\sigma^2)$ is the channel noise. To simplify notation, we omit $(t)$ in the following when it is clear from the context. Specifically, the message $x_{km}=(C_t(u^m_t))_{l(k)}$, with the one-to-one mapping $l(k)=(I(t))_k$ indicating the $k$-th element of $I(t)$, is transmitted through the $k$-th subcarrier. The total power that a device can use in the transmission is limited in practical systems. Without loss of generality, we assume that there is a power constraint at each device, given by $\sum_{k=1}^{K} \absq{b_{km} x_{km}} \leq E_m,\ \forall m\in \{ 1, \ldots, M \}$. Note that $b_{km}$ hinges heavily upon both $\bm{h}_m=[h_{1m}, \ldots, h_{Km}]^\top$ and $\bm{x}_{m}=[x_{1m}, \ldots, x_{Km}]^\top$, and a key next step is to optimize $b_{km} (\bm{h}_{m}, \bm{x}_{m})$. In each round, each device optimizes its power allocation for transmitting the selected coordinates of its update signal over the $K$ subcarriers, aiming to minimize the communication error so as to achieve a good estimate of $G_t(w_t)$ (or its scaled version) for the gradient update, where $$G_t(w_t) \triangleq \frac{1}{M}\sum_{m=1}^M C_t(u_t^m).$$ From the learning perspective, based on $\{y_k\}_{k=1}^K$, it is of paramount importance for the receiver to obtain a good estimate of $G_t(w_t)$. Since $n_k(t)$ is Gaussian noise, the optimal estimator is of the form \vspace{0.05in} \begin{equation} \label{eqn:estimator} \big(\widehat{G}_t(w_t)\big)_{k} = \begin{cases} \alpha_{l(k)} y_{l(k)}, & k \in I(t) \\ 0 & \text{otherwise} \end{cases}\vspace{0.05in} \end{equation} where $\{ \alpha_k \}_{k=1}^K$ are the gradient estimator coefficients for the subcarriers. It follows that the communication error (i.e., the gradient estimation error incurred by lossy communications) is given by \begin{equation} \epsilon_t = \widehat{G}_t(w_t) - G_t(w_t) . \label{comm-error} \end{equation} We note that $\{\alpha_k\}_{k=1}^K$ are intimately related to the learning rates for the $K$ coordinates, scaling the learning rate to be $\{\gamma \alpha_k\}_{k=1}^K$. It is interesting to observe that the learning rates in the proposed BLCD algorithm are essentially different across the dimensions, due to the unreliable and dynamically changing channel conditions across different subcarriers.
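Before turning to the analysis, the received-signal model and the estimator above can be exercised with a toy power-control rule. The sketch below (ours) uses truncated channel inversion --- each device aims at $b_{km}h_{km}=1$ and rescales to respect its budget $E_m$ --- which is only an illustrative baseline, not the optimized allocation developed in this paper; note that whenever the truncation kicks in, the resulting estimator becomes biased, which is precisely the bias--variance issue studied below.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
M, K, sigma, E = 10, 32, 0.05, 1.0
x = 0.1 * rng.normal(size=(M, K))         # x_{km}: selected update entries
h = rng.rayleigh(scale=1.0, size=(M, K))  # channel gains h_{km}(t)

y = np.zeros(K)
for m in range(M):
    b = 1.0 / h[m]                        # aim at b_{km} h_{km} = 1
    budget = np.sqrt(E / np.sum((b * x[m]) ** 2))
    b = b * min(1.0, budget)              # enforce sum_k |b x|^2 <= E_m
    y += b * h[m] * x[m]                  # superposed analog transmissions
y += sigma * rng.normal(size=K)           # channel noise n_k

alpha = 1.0 / M                           # estimator coefficients alpha_k
G_hat = alpha * y                         # estimate of (1/M) sum_m x_{km}
eps = G_hat - x.mean(axis=0)              # communication error on I(t)
print("empirical MSE:", float(np.mean(eps ** 2)))
\end{verbatim}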
\section{Impact of Communication Error and Compression on BLCD Algorithm} \label{sec:convergence} Recall that due to the common sparsification operator across devices, the update in the SGD algorithm at communication round $t$ is given by \[ w_{t+1} = w_t - \big[ C_t(\gamma g_t(w_t) +r_t) + \epsilon_t\big]. \] Needless to say, the compression operator $C_t$ plays a critical role in sparse transmissions. In this study, we impose the following standard assumption on the compression rate of the operator. \begin{assumption} \label{asmpt:compression} For the set of random compression operators $\{C_t\}_{t=1}^T$ and any $x\in \mathbb{R}^d$, it holds that \begin{equation} \E \normsq{x - C_t(x)} \leq (1-\delta) \normsq{x} \end{equation} for some $\delta \in (0,1]$. \end{assumption} We impose the following standard assumptions on the non-convex objective function $f(\cdot)$ and the corresponding stochastic gradients $g^m_t (w_t)$ computed with the data samples of device $m$ in round $t$. (We assume that the data samples $\{\xi_{m,t}\}$ are i.i.d.~across the devices and time.) \begin{assumption} \label{asmpt:smoothness} (Smoothness) A function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is $L$-smooth if for all ${x},{y}\in \mathbb{R}^d$, it holds that \begin{equation} \abs{f({y})-f({x})-\innp{\nabla f({x})}{{y}-{x}}} \leq \frac{L}{2} \normsq{{y}-{x}}. \end{equation} \end{assumption} \begin{assumption} \label{asmpt:boundedmoment} For any $x\in \mathbb{R}^d$ and for any $m=1, \ldots, M$, a stochastic gradient $g_t^m(x), \forall t$, satisfies \begin{equation} \E[g_t^m(x)] = \nabla f (x), \textrm{ } \E \normsq{g_t^m(x)} \leq G^2 \end{equation} where $G>0$ is a constant. \end{assumption} \begin{table*}[t] \centering \begin{minipage}{1\textwidth} \begin{align} \mathbb{E}_t [ f(\tilde{w}_{t+1}) ] \leq& f(\tilde{w}_{t}) + \langle\nabla f(\tilde{w}_{t}),\mathbb{E}_t[\tilde{w}_{t+1} - \tilde{w}_{t}]\rangle + \frac{L}{2}\mathbb{E}_t[\lVert \tilde{w}_{t+1}-\tilde{w}_{t} \rVert^2] \nonumber\\ &\hspace{-0.5in}= f(\tilde{w}_{t}) - \langle\nabla f(\tilde{w}_{t}),\gamma \mathbb{E}_t[g_t (w_t)] + \mathbb{E}_t[\epsilon_t] \rangle + \frac{L}{2}\mathbb{E}_t[\lVert\gamma g_t(w_t) \rVert^2] + \frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert^2] + L\mathbb{E}_t[\langle \gamma g_t(w_t) ,\epsilon_t\rangle]\nonumber\\ &\hspace{-0.5in}= f(\tilde{w}_{t}) - \langle\nabla f({w}_{t}),\gamma \mathbb{E}_t[g_t(w_t)] + \mathbb{E}_t[\epsilon_t]\rangle - \langle\nabla f(\tilde{w}_{t}) - \nabla f({w}_{t}), \gamma \mathbb{E}_t[g_t(w_t)] + \mathbb{E}_t[\epsilon_t]\rangle + \frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2] + L\mathbb{E}_t[\langle\gamma g_t(w_t),\epsilon_t\rangle] + \frac{L}{2}\mathbb{E}_t[\lVert\gamma g_t(w_t) \rVert^2] \nonumber\\ &\hspace{-0.5in}\leq f(\tilde{w}_{t}) - \gamma \lVert \nabla f(w_t) \rVert_2^2 - \langle \nabla f(w_t), \mathbb{E}_t[\epsilon_t] \rangle + \frac{\rho}{2} \lVert \gamma \nabla f(w_t) + \mathbb{E}_t[\epsilon_t] \rVert_2^2 + \frac{L^2}{2\rho}\mathbb{E}_t[\lVert r_t \rVert_2^2] + \frac{L}{2}\mathbb{E}_t[\lVert \epsilon_t \rVert_2^2] + L\langle \nabla f(w_t), \mathbb{E}_t[\epsilon_t] \rangle + \frac{L\gamma^2}{2} \mathbb{E}_t \lVert g_t(w_t) \rVert_2^2 \nonumber\\ &\hspace{-0.5in}\leq f(\tilde{w}_{t}) - \gamma \lVert \nabla f(w_t) \rVert_2^2 + (L-1) \lVert \nabla f(w_t) \rVert\lVert \mathbb{E}_t[\epsilon_t] \rVert + \frac{\rho}{2}\left( \gamma^2\lVert \nabla f(w_t) \rVert_2^2 + \lVert \mathbb{E}_t[\epsilon_t] \rVert_2^2 + 2\gamma \langle \nabla f(w_t), \mathbb{E}_t[\epsilon_t] \rangle \right) + \frac{L^2}{2\rho} \mathbb{E}_t[\lVert r_t \rVert_2^2] + \frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2] + \frac{L\gamma^2}{2} G^2 \nonumber\\ &\hspace{-0.5in}\leq f(\tilde{w}_{t}) - \gamma \lVert \nabla f(w_t) \rVert_2^2 + (L-1+ 2 \gamma) \lVert \nabla f(w_t) \rVert\lVert \mathbb{E}_t[\epsilon_t] \rVert + \frac{\gamma^2 \rho }{2}\lVert \nabla f(w_t) \rVert_2^2 + \frac{L^2}{2\rho} \mathbb{E}_t[\lVert r_t \rVert_2^2] + \lVert \mathbb{E}_t[\epsilon_t] \rVert_2^2 +\frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2] + \frac{L\gamma^2}{2} G^2 \nonumber\\ &\hspace{-0.5in}= f(\tilde{w}_{t}) -\gamma \left[ 1-\frac{\rho}{2}\gamma \right] \lVert \nabla f(w_t) \rVert_2^2 + (L-1+2\gamma) \lVert \nabla f(w_t) \rVert \lVert \mathbb{E}_t[\epsilon_t] \rVert + \frac{L^2}{2\rho} \mathbb{E}_t[\lVert r_t \rVert_2^2] + \lVert \mathbb{E}_t[\epsilon_t] \rVert_2^2 +\frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2] + \frac{L\gamma^2}{2} G^2 \label{gradient-bound} \end{align} \hrule \end{minipage}\vspace{-0.2in} \end{table*} It follows directly from \cite{Karimireddy2019} that $ \mathbb{E} [\lVert r_t \rVert_2^2]\leq \frac{4(1-\delta)}{\delta^2}\gamma^2 G^2. $ Recall that \(\tilde{w}_{t+1}= \tilde{w}_{t} - \gamma g_t(w_t) - \epsilon_{t} \) and that \(\tilde{w}_{t+1} \) can be viewed as a noisy version of the compression-error correction of $\hat{w}_{t+1}$ in (\ref{eqn:gcsimplified}), where the ``noisy perturbation'' is incurred by the communication error. For convenience, let $ \mathbb{E}_t[\epsilon_t]$ denote the gradient bias incurred by the communication error and $ \mathbb{E}_t [\lVert \epsilon_t \rVert_2^2 ]$ be the corresponding mean square error (MSE), where $ \mathbb{E}_t$ is taken with respect to the channel noise. Let $\eta= \frac{L-1+2\gamma}{\gamma (2-\rho\gamma)} $ with $0<\rho<2$, and let $f^*$ denote the global minimum value of $f$.
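Before proceeding, a quick numerical sanity check of Assumption~\ref{asmpt:compression} (not part of the analysis): for the uniform coordinate-selection operator used by BLCD, each coordinate survives with probability $K/d$, so the bound holds with $\delta = K/d$, in fact with equality in expectation. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, trials = 1000, 64, 20000
x = rng.standard_normal(d)

# C_I(x): keep the K uniformly selected coordinates, zero out the rest.
gap = 0.0
for _ in range(trials):
    I = rng.choice(d, size=K, replace=False)
    Cx = np.zeros(d)
    Cx[I] = x[I]
    gap += np.sum((x - Cx) ** 2)
gap /= trials                                # estimate of E||x - C_t(x)||^2

delta = K / d
print(gap, (1 - delta) * np.sum(x ** 2))     # the two values should nearly coincide
```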
We have the following main result on the iterates in the BLCD algorithm. \begin{theorem} \label{thm:convergence} Under Assumptions \ref{asmpt:compression}, \ref{asmpt:smoothness} and \ref{asmpt:boundedmoment}, the iterates $\{w_t\}$ in the BLCD algorithm satisfy \begin{align} &\frac{1}{T\hspace{-0.03in}+\hspace{-0.03in}1}\sum_{t=0}^T \left(\lVert\nabla f(w_t)\rVert_2 \hspace{-0.03in}-\hspace{-0.03in} \eta \lVert \underbrace{\mathbb{E}_t[\epsilon_t]}_{\mbox{bias}} \rVert_2\right)^2 \nonumber\\ &\hspace{0.1in}\leq\hspace{-0.03in}\frac{1}{T\hspace{-0.03in}+\hspace{-0.03in}1}\sum_{t=0}^T \left[\frac{L\eta }{ L \hspace{-0.03in}-\hspace{-0.03in}1 \hspace{-0.03in}+\hspace{-0.03in} 2 \gamma}\underbrace{\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2]}_{\mbox{MSE}} \hspace{-0.03in}+\hspace{-0.03in} \left(1\hspace{-0.03in}+\hspace{-0.03in} \eta^2 \right)\hspace{-0.03in} \lVert \underbrace{\mathbb{E}_t[\epsilon_t]}_{\mbox{bias}} \rVert_2^2 \right]\nonumber\\ &\hspace{0.1in}+\hspace{-0.03in}\frac{2}{T\hspace{-0.03in}+\hspace{-0.03in}1}\frac{f(w_0)\hspace{-0.03in}-\hspace{-0.03in}f^*}{\gamma(2\hspace{-0.03in}-\hspace{-0.03in}\rho\gamma)}\hspace{-0.03in}+\hspace{-0.03in} \left(\frac{L}{\rho} \frac{2(1\hspace{-0.03in}-\hspace{-0.03in} \delta)}{\delta^2} \hspace{-0.03in}+\hspace{-0.03in} \frac{1}{2} \right)\hspace{-0.03in} \frac{2L \gamma G^2}{ 2-\rho\gamma}. \label{main-result} \end{align} \end{theorem} \vspace{-0ex} \begin{proof} Due to the limited space, we outline only a few main steps of the proof. Recall that \(\tilde{w}_t= {w}_{t} - r_t \). It can be shown that \(\tilde{w}_{t+1}= \tilde{w}_{t} - \gamma g_t(w_t) - \epsilon_{t} \). As shown in (\ref{gradient-bound}), using the properties of the iterates in the BLCD algorithm and the smoothness of the objective function $f$, we can establish an upper bound on $\mathbb{E}_t [ f(\tilde{w}_{t+1}) ] $ in terms of \(f(\tilde{w}_{t}) \), the corresponding gradient \( \nabla f(w_t) \), and the gradient bias and MSE due to the communication error. Then, (\ref{main-result}) can be obtained after some further algebraic manipulation. \end{proof} {\bf Remarks.} Based on Theorem~\ref{thm:convergence}, a few observations are in order. \begin{itemize} \item We first examine the four terms on the right hand side of (\ref{main-result}): the first two terms capture the impact on the gradient of the time average of the bias in the communication error $\epsilon_t$ and that of the corresponding mean square error (MSE); these two terms go to zero if the bias and the MSE diminish; the third term is a scaled version of $ f(w_0) - f^* $ and goes to zero as long as $\gamma = O(T^{-\beta}) $ with $\beta < 1$; and the fourth term is proportional to $\gamma$ and goes to zero when $\gamma \rightarrow 0$. \item If the right hand side of (\ref{main-result}) diminishes as $T \rightarrow \infty$, the iterates in the BLCD algorithm would ``converge'' to a neighborhood around $\eta \lVert \mathbb{E}_t[\epsilon_t] \rVert_2$, which is a scaled version of the bias in the communication error. For convenience, let $ \bar{\epsilon}= \limsup_t \lVert \mathbb{E}_t[\epsilon_t] \rVert_2$, and define a contraction region as follows: \[ A_{\gamma} = \left\{ w_t: \lVert\nabla f(w_t)\rVert_2 \leq (\eta + \Delta) \bar{\epsilon} \right\}, \] where $\Delta >0$ is an arbitrarily small positive number. It then follows that the iterates in the BLCD algorithm would ``converge'' to the contraction region $A_{\gamma}$, in the sense that the iterates return to $A_{\gamma}$ infinitely often.
Note that $f$ is assumed to be any nonconvex smooth function, and there can be many contraction regions, each corresponding to a stationary point. \item When the communication error is unbiased, the gradients diminish to $0$ and hence the BLCD algorithm converges to a stationary point. In the case where the bias in the communication error does exist, there exists an intrinsic tradeoff between the size of the contraction region and $\eta \lVert \mathbb{E}_t[\epsilon_t] \rVert_2$. When the learning rate $\gamma$ is small, the right hand side of (\ref{main-result}) would be small, but $\eta$ can be large, and vice versa. It makes sense to choose a fixed learning rate that makes $\eta$ small. In this way, the gradients in the BLCD algorithm would ``concentrate'' around a (small) scaled version of the bias. \item Finally, the impact of gradient sparsification is captured by $\delta$. For instance, when (randomly) uniform selection is used, $\delta=\frac{K}{d}$. We will elaborate on this in Section \ref{sec:controlphase}. \end{itemize} Further, we have the following corollary. \begin{corollary} \label{thm:convergencezeromean} Under Assumptions \ref{asmpt:compression}, \ref{asmpt:smoothness}, and \ref{asmpt:boundedmoment}, if $\E_t[\epsilon_t]=0$ and \(\gamma = \frac{1}{\sqrt{T+1}} \), then the BLCD algorithm converges to a stationary point and satisfies \begin{align} &\frac{1}{T\hspace{-0.03in}+\hspace{-0.03in}1}\hspace{-0.03in}\sum_{t=0}^T \lVert\nabla f(w_t)\rVert_2^2 \nonumber\\ &\leq\hspace{-0.03in} \frac{1} {2 - \frac{\rho}{\sqrt{T+1}} } \left\{ \frac{2 (f(w_0)\hspace{-0.03in}-\hspace{-0.03in}f^*) }{\sqrt{T+1}} \hspace{-0.03in}+\hspace{-0.03in} \frac{2 L G^2}{\sqrt{T\hspace{-0.03in}+\hspace{-0.03in}1}} \hspace{-0.03in}\left(\frac{L}{\rho} \frac{2(1\hspace{-0.03in}-\hspace{-0.03in}\delta)}{\delta^2} \hspace{-0.03in}+\hspace{-0.03in} \frac{1}{2} \right)\hspace{-0.03in} \right. \nonumber\\ &\left. \hspace{0.5in}+\ \frac{L}{T+1} \sum_{t=0}^T\underbrace{\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2]}_{\text{MSE}} \right\} \end{align} \end{corollary} \vspace{-2ex} \section{Communication Error Minimization via Joint Optimization of Power Allocation and Learning Rates} \label{sec:update} Theorem~\ref{thm:convergence} reveals that the communication error has a significant impact on the convergence behavior of the BLCD algorithm. In this section, we turn our attention to minimizing the communication error (in terms of MSE and bias) via joint optimization of power allocation and learning rates. Without loss of generality, we focus on iteration $t$ (with abuse of notation, we omit $t$ in the notation for simplicity). Recall that the coordinate updates in the BLCD algorithm, sent by different devices over the same subcarrier, are received by the edge server as an aggregate sum, which is used to estimate the gradient value in that specific dimension. We denote the power coefficients and estimator coefficients as $\bm{b} \triangleq [b_{11}, b_{12}, \ldots, b_{1M}, b_{21}, \ldots, b_{KM} ]$ and $\bm{\alpha} \triangleq [{\alpha}_{1}, \ldots, {\alpha}_{K}]$. In each round, each sender device optimizes its power allocation for transmitting the selected coordinates of its update over the $K$ subcarriers, aiming to achieve the best convergence rate. We assume that perfect channel state information is available at the corresponding transmitter only, i.e., $\bm{h}_m=[h_{1m}, \ldots, h_{Km}]^\top$ is available only at sender $m$.
Based on (\ref{comm-error}), the mean squared error of the communication error in iteration $t$ is given by \begin{equation} \mathbb{E}_t [\lVert \epsilon_t \rVert_2^2 ] = \E\bigg[\normsq{\widehat{G}_t(w_t) - G_t(w_t)}\bigg] \end{equation} where the expectation is taken over the channel noise. For convenience, we denote $\mathbb{E}_t [\lVert \epsilon_t \rVert_2^2 ] $ by $\mbox{\textrm{MSE}}_1$; after some algebra, it can be rewritten as the sum of the variance and the square of the bias: \begin{align} \mbox{\textrm{MSE}}_1(\bm{\alpha}, \bm{b}) = \sum_{k=1}^K \bigg[ \underbrace{\sum_{m=1}^M \left( \alpha_k b_{km} h_{km} - \frac{1}{M} \right) x_{km} }_{\mbox{bias in $k$th coordinate}} \bigg]^2 + \underbrace{\sum_{k=1}^K \sigma^2 \alpha^2_k}_{\mbox{variance}} \label{bias-var} \end{align} Recall that $\{\alpha_k\}_{k=1}^K$ are intimately related to the learning rates for the $K$ coordinates, making the effective learning rate $\{\gamma \alpha_k\}_{k=1}^K$. \subsection{Centralized Solutions to Minimizing MSE (Scheme 1)} In light of the above, we can cast the MSE minimization problem as a learning-driven joint power allocation and learning rate problem, given by \begin{align} \textbf{P1:\ } \min_{\bm{\alpha}, \bm{b}} \quad & \mbox{\textrm{MSE}}_1(\bm{\alpha}, \bm{b}) \\ \textrm{s.t.} \quad & \sum_{k=1}^{K} \absq{ b_{km} x_{km} } \leq E_m, \quad \forall m \\ & b_{km}\geq 0, \ \alpha_k \geq 0 \quad \forall k,m \end{align} which minimizes the MSE in every round. The problem formulated above is non-convex because the objective function involves products of the variables. Nevertheless, it is biconvex, i.e., when one of the variables is fixed, the problem is convex in the other. In general, we can solve the above bi-convex optimization problem in the same spirit as the EM algorithm, by taking the following two steps, each optimizing over a single variable, iteratively: \begin{equation*} \begin{aligned} \textbf{P1-a:} \ \min_{\bm{\alpha}} \quad & \mbox{\textrm{MSE}}_1(\bm{\alpha}, \bm{b})\quad \textrm{s.t.} \quad \alpha_k \geq 0, \quad \forall k \\ \textbf{P1-b:} \min_{\bm{b}} \quad & \mbox{\textrm{MSE}}_{1}(\bm{\alpha}, \bm{b}) \\ \textrm{s.t.} \quad & \sum_{k=1}^{K} \absq{ b_{km} x_{km} } \leq E_m \ \ \forall m, \quad b_{km}\geq 0 \ \ \forall k,m. \end{aligned} \end{equation*} For given $\{ b_{km} \} $, (\textbf{P1-a}) decouples across the coordinates, and its optimal solution is given by \begin{align} \alpha^*_k \! = \! \max \left\{ \frac{\big(\sum_{m=1}^M x_{km}\big)\big(\sum_{m=1}^M b_{km} h_{km} x_{km} \big)}{M \big[\sigma^2 + \big(\sum_{m=1}^M b_{km} h_{km} x_{km} \big)^2\big]}, 0\right\}. \end{align} Then, we can solve (\textbf{P1-b}) by optimizing $\bm{b}$ only. Solving the sub-problems (\textbf{P1-a}) and (\textbf{P1-b}) iteratively leads to a local minimum, though not necessarily to the global solution. Observe that the above solution requires the global knowledge of the $x_{km}$'s and $h_{km}$'s of all devices, which is difficult to implement in practice. We will treat it as a {\em benchmark} only.
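To illustrate the alternating scheme on a toy instance, the sketch below iterates the closed-form $\bm{\alpha}$-step and a numerical $\bm{b}$-step (solved here with SciPy's generic SLSQP solver on random data); it is a simplified illustration rather than the implementation used in the experiments.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
M, K, sigma = 4, 8, 1.0
E = np.full(M, 1.0)                                  # power budgets E_m
x = rng.standard_normal((M, K))                      # x_{km}, indexed [m, k]
h = rng.rayleigh(scale=np.sqrt(2 / np.pi), size=(M, K))

def mse1(alpha, b):
    """MSE_1: squared bias per coordinate plus noise variance, cf. (bias-var)."""
    bias = np.sum((alpha * b * h - 1.0 / M) * x, axis=0)
    return np.sum(bias ** 2) + sigma ** 2 * np.sum(alpha ** 2)

def alpha_step(b):
    """Closed-form solution of P1-a for fixed b (clipped at alpha_k >= 0)."""
    s = np.sum(b * h * x, axis=0)
    return np.maximum(np.sum(x, axis=0) * s / (M * (sigma ** 2 + s ** 2)), 0.0)

def b_step(alpha, b0):
    """Numerical solution of P1-b for fixed alpha (convex in b)."""
    cons = [{"type": "ineq",
             "fun": lambda bf, m=m: E[m] - np.sum((bf.reshape(M, K)[m] * x[m]) ** 2)}
            for m in range(M)]
    res = minimize(lambda bf: mse1(alpha, bf.reshape(M, K)), b0.ravel(),
                   method="SLSQP", bounds=[(0.0, None)] * (M * K), constraints=cons)
    return res.x.reshape(M, K)

b = 0.1 * np.ones((M, K))
for it in range(5):                                  # alternate P1-a and P1-b
    alpha = alpha_step(b)
    b = b_step(alpha, b)
    print(f"iter {it}: MSE_1 = {mse1(alpha, b):.4f}")
```

Since neither step can increase $\mbox{\textrm{MSE}}_1$, the printed objective is non-increasing (up to solver tolerance), illustrating the convergence to a local minimum noted above.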
Next, we turn our attention to developing distributed sub-optimal solutions. \vspace{-0.05in} \subsection{Distributed Solutions towards Zero Bias and Variance Reduction (Scheme 2)} As noted above, the centralized solution to ({\bf P1}) requires the global knowledge of the $x_{km}$'s and $h_{km}$'s and hence is not amenable to implementation. Further, minimizing the MSE of the communication error does not necessarily amount to minimizing the bias therein, since there exist tradeoffs between bias and variance. Thus motivated, we next focus on devising distributed sub-optimal solutions which can drive the bias in the communication error to (close to) zero, and then reduce the corresponding variance as much as possible. Specifically, observe from (\ref{bias-var}) that minimizing the MSE cost does not necessarily ensure that \(\hat{G}\) is an unbiased estimator, due to the intrinsic tradeoff between bias and variance. To this end, we take a sub-optimal approach where the optimization problem is decomposed into two subproblems. In the subproblem at the transmitters, each device \(m\) utilizes its available power and local gradient/channel information to compute a power allocation policy \(\{b_{1m},b_{2m},\dots,b_{Km}\}\). In the subproblem at the receiver, the receiver finds the best possible \(\alpha_k\) for all \(k=1,\dots,K\). Another complication is that, due to the power constraints at individual devices, it is not always feasible to achieve unbiased estimators of the gradient signal across the coordinates. Nevertheless, for given power constraints, one can achieve unbiased estimators of a scaled-down version of the coordinates of the gradient signal. In light of this, we formulate the optimization problem at each device (transmitter) $m$ to ensure an unbiased estimator of a scaled version $\zeta_m$ of the transmitted coordinates, as follows: \begin{align}\label{prob2a} \mbox{\bf Device~m:} \ &\underset{\{b_{km}\}_{k=1:K}}{\max} ~ \zeta_m\\ \text{s.t.}\quad &\sum_{k=1}^K b_{km}^2 x_{km}^2\leq E_m, \ \ b_{km} \geq 0, \\ &\zeta_m x_{km} - b_{km}h_{km}x_{km} = 0, \quad \forall k=1,\dots,K,\label{q56} \end{align} where maximizing $\zeta_m$ amounts to maximizing the corresponding SNR (and hence improving the gradient estimation accuracy). The first constraint above is the power constraint, and the second constraint is imposed to ensure that the transmitted signals of user $m$ arrive with the same scaling across all dimensions, so that no bias is introduced across the coordinates. The power allocation solution can be found using the Karush-Kuhn-Tucker (KKT) conditions, as follows: \begin{align}\label{q57} \Aboxed{\zeta_m^* = \sqrt{\frac{E_m}{\sum_{k=1}^K \frac{x_{km}^2}{h_{km}^2}}},~ b_{km}^*=\frac{\zeta^*_{m}}{h_{km}}, ~\forall k.} \end{align} Observe that under the power allocation policy in (\ref{q57}), all $K$ transmitted coordinates of device $m$ have the same scaling factor $\zeta_m$.
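The closed form \eqref{q57} is easy to verify numerically; the short sketch below (with randomly drawn data, for illustration only) checks that the power budget is met with equality and that every coordinate of device $m$ arrives scaled by the same factor $\zeta_m^*$.

```python
import numpy as np

rng = np.random.default_rng(3)
M, K = 4, 8
E = rng.uniform(0.5, 2.0, size=M)                    # power budgets E_m
x = rng.standard_normal((M, K))
h = rng.rayleigh(scale=np.sqrt(2 / np.pi), size=(M, K))

zeta = np.sqrt(E / np.sum(x ** 2 / h ** 2, axis=1))  # zeta_m^*
b = zeta[:, None] / h                                # b_{km}^* = zeta_m^* / h_{km}

print(np.sum((b * x) ** 2, axis=1) - E)              # ~0: budgets used exactly
print(np.max(np.abs(b * h * x - zeta[:, None] * x))) # ~0: common scaling zeta_m
```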
Next, we ensure zero bias by choosing the right \(\boldsymbol{\alpha}\) for gradient estimation at the receiver, which can be obtained by solving the following optimization problem, since all transmitted gradient signals are superimposed via the over-the-air transmission: \begin{align} \mbox{\bf Receiver side:} \ \underset{\{\alpha_k\}}{\min}~ &\sum_{k=1}^K {\nu}_k^2(\alpha_k, \{b_{km}^*\})\label{q58}\\ \text{s.t.}~ e_k(\alpha_k, \{b_{km}^*\}) = 0, &~~~\alpha_k\geq 0, \forall k=1,\dots,K, \label{q59} \end{align} where \(e_k\) and \( \nu_k^2 \) denote the bias and variance components, given as follows: \begin{align} e_k(\alpha_k, \{b_{km}^*\}) &= \alpha_k\left( \sum_{m=1}^M \zeta_m^* x_{km} \right) - \frac{1}{M}\sum_{m=1}^M x_{km},\nonumber\\ \nu_k^2(\alpha_k, \{b_{km}^*\}) &= \alpha_k^2\sigma^2, \end{align} for all \(k = 1,\dots,K\). For given $\{\zeta_m^*\}$, it is easy to see that \begin{align} \Aboxed{ \alpha_k^* = \frac{\frac{1}{M}\sum_{m=1}^M x_{km}}{\sum_{m=1}^M \zeta_m^*x_{km}} \simeq \frac{1}{\sum_{m=1}^M \zeta_m^*}, \ \ \forall k.} \end{align} We note that, from an implementation point of view, since $\{x_{km} \}$ is not available at the receiver, it is sensible to set $ \alpha_k^\dagger \simeq \frac{1}{\sum_{m=1}^M \zeta_m^*}$. Further, \( \{\zeta_m^*\} \) is not readily available at the receiver either. Nevertheless, since there is only one parameter $\zeta_m^*$ per sender $m$, the sum \(\sum_{m=1}^M \zeta_m^*\) can be sent over a control channel to the receiver to compute \(\alpha_k^\dagger\). It is worth noting that, in general, the bias exists even if $E_m$ is the same for all senders. Next, we take a closer look at the case where the number of subchannels \(K\) is large (which is often the case in practice). Suppose that \(\{x_{km}\}\) are i.i.d.~across subchannels and users, and so are \(\{ h_{km} \}\). We can then simplify \(\zeta_m^*\) further. For ease of exposition, we denote \(\mathbb{E}[x_{km}^2] = \varphi^2+\bar{x}^2\) and \(\mathbb{E}\left[ \frac{1}{h_{km}^2} \right] = \varpi^2\). When \(K\) is large, for every user \(m\) we have \begin{align} &\zeta_m^* = \frac{\sqrt{E_m}}{\sqrt{\sum_{k=1}^K \frac{x_{km}^2}{h_{km}^2}}} \underset{\substack{\text{when $K$}\\\text{is large}}}{\Longrightarrow} \zeta_m^* \approx \frac{\sqrt{E_m}}{\sqrt{K (\varphi^2+\bar{x}^2) \varpi^2}}. \end{align} As a result, the bias and variance in each dimension $k$ can be written as \begin{align} e_k(\alpha_k^*, \{b_{km}^*\}) &= \sum_{m=1}^M \left[\frac{ \sqrt{E_m}}{\sum_{m=1}^M \sqrt{E_m}} - \frac{1}{M}\right] x_{km}, \forall k, \label{eq:ek=0}\\ {\nu}_k^2 & = \frac{K\varpi^2(\varphi^2+\bar{x}^2)}{\left(\sum_{m=1}^M\sqrt{E_m}\right)^2}\sigma^2, \forall k. \label{eq82} \end{align} {\em Observe that when $E_m$ is the same across the senders, the bias term \( \mathbb{E}_t [\epsilon_t] = \mathbf{0}\) in the above setting, according to \eqref{eq:ek=0}. }
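The following sketch (again with synthetic data) combines \eqref{q57} with the receiver coefficient $\alpha_k^\dagger = 1/\sum_m \zeta_m^*$ and measures the residual per-coordinate bias in the large-$K$ regime; consistent with \eqref{eq:ek=0}, the bias is negligible for equal power budgets and persists for unequal ones.

```python
import numpy as np

rng = np.random.default_rng(4)
M, K = 4, 2048                                       # large number of subchannels
x = rng.standard_normal((M, K))
h = rng.rayleigh(scale=np.sqrt(2 / np.pi), size=(M, K))

def rms_bias(E):
    zeta = np.sqrt(E / np.sum(x ** 2 / h ** 2, axis=1))      # zeta_m^*
    alpha = 1.0 / np.sum(zeta)                               # alpha_k^dagger
    e = alpha * np.sum(zeta[:, None] * x, axis=0) - x.mean(axis=0)
    return np.sqrt(np.mean(e ** 2))                          # RMS of e_k over k

print(rms_bias(np.full(M, 1.0)))                     # equal E_m: ~0
print(rms_bias(np.array([0.1, 0.5, 1.0, 4.0])))      # unequal E_m: clearly nonzero
```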
\subsection{A User-centric Approach Using the Single-User Solution (Scheme 3)} In this section, we consider a suboptimal user-centric approach, which provides insight into the power allocation across the subcarriers from a single-device perspective. We formulate the single-device (say user $m$) problem as \begin{equation*} \begin{aligned} \textbf{P2:\ }&\min_{\{b_{km}\}, \{\alpha_k\}}~ \sum_{k=1}^K \bigg[ \big( \alpha_k b_{km} h_{km} - 1 \big) x_{km}\bigg]^2 + \sigma^2 \sum_{k=1}^K \alpha^2_k \\ &\textrm{s.t.}~ \sum_{k=1}^{K} \absq{ b_{km} x_{km} } \leq E_m; \ b_{km}\geq 0, \ \alpha_k \geq 0, \forall k. \end{aligned} \end{equation*} \vspace{-2ex} \begin{theorem} \label{thm:beta} The optimal solution $\{b_{km}^*, \alpha_k^*\} $ to (\textbf{P2}) is given by \begin{equation} \label{eqn:singleuser} (b^*_{km})^2 = \bigg[ \sqrt{\frac{ \sigma^2} {\lambda_m x_{km}^2 h_{km}^2} } - \frac{\sigma^2}{h_{km}^2 x_{km}^2} \bigg]^+, \forall k, \end{equation} \begin{align} \alpha^*_k=\frac{b^*_{km} h_{km} x^2_{km} }{ \sigma^2 + (b^*_{km})^2 h^2_{km} x^2_{km}}, ~~~~\forall k, \end{align} where $\lambda_m$ is a key parameter determining the waterfilling level: \begin{equation} \sum_{k=1}^K \bigg[ \sqrt{\frac{1}{\lambda_m}} \sqrt{\frac{x_{km}^2 \sigma^2} { h_{km}^2} } - \frac{\sigma^2}{h_{km}^2 } \bigg]^+ = E_m. \end{equation} \end{theorem} The proof of Theorem~\ref{thm:beta} is omitted due to the space limitation. Observe that Theorem~\ref{thm:beta} reveals that, in general, the larger the gradient value (and the smaller the channel gain) on a subcarrier, the more power should be allocated to it, and that $ \{x_{km}/h_{km} \}$ can be used to compute the water level for applying the waterfilling policy. Based on the above result, in the multi-user setting each device can adopt the single-user power allocation solution given in Theorem \ref{thm:beta}. This solution can be applied individually, without requiring any coordination between devices. Next, we take a closer look at the case where the number of subchannels \(K\) is large. Let $\bar{E}_m$ denote the average power constraint per subcarrier. When \(K\) is large, after some algebra, the optimization problem \textbf{P2} can be further approximated as follows: \begin{align} \textbf{P3: } \underset{{b}_{km}}{\min}& ~\mathbb{E} \left[ \frac{{{x}^2_{km}} \sigma^2} {{{b}^2_{km}} {{h}^2_{km}} {{x}^2_{km}} +\sigma^2} \right] \nonumber\\ \text{s.t. }& ~ \mathbb{E}\left[ {b}^2_{km} {{x}^2_{km}}\right]\leq \bar{E}_m,~ {b}_{km}\geq 0, \end{align} where the expectation is taken with respect to $\{{h}_{km}\}$ and $\{{x}_{km}\}$. The solution for \(k=1,\dots,K\) is obtained as follows: \begin{align} & b_{km}^* = \sqrt{\left[ \frac{\sigma \lvert x_{km}\rvert^{-1}}{h_{km}\sqrt{\lambda_m}} - \frac{\sigma^2}{x_{km}^2h_{km}^2} \right]^+}\\ & \lambda_m < \frac{h_{km}^2x_{km}^2}{\sigma^2} \Rightarrow b^*_{km} > 0. \label{q82} \end{align} We can compute the bias and the variance accordingly.
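As an illustration of Theorem~\ref{thm:beta}, the water level $\lambda_m$ can be computed by a simple bisection, since the total power consumed by \eqref{eqn:singleuser} is monotonically decreasing in $\lambda_m$. A sketch for a single device on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(5)
K, sigma, E_m = 8, 1.0, 2.0
x = rng.standard_normal(K)                           # x_{km} for one device m
h = rng.rayleigh(scale=np.sqrt(2 / np.pi), size=K)

def power_used(lam):
    """Total power sum_k (b_km^*)^2 x_km^2 for a given water level lambda_m."""
    w = np.sqrt(x ** 2 * sigma ** 2 / h ** 2) / np.sqrt(lam) - sigma ** 2 / h ** 2
    return np.sum(np.maximum(w, 0.0))

lo, hi = 1e-9, 1e9                                   # power_used is decreasing in lam
for _ in range(200):                                 # geometric bisection
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if power_used(mid) > E_m else (lo, mid)
lam = np.sqrt(lo * hi)

b2 = np.maximum(np.sqrt(sigma ** 2 / (lam * x ** 2 * h ** 2))
                - sigma ** 2 / (h ** 2 * x ** 2), 0.0)   # (b_km^*)^2
print(np.sum(b2 * x ** 2), "=~", E_m)                # constraint met with equality
```

Note how subcarriers with small $|x_{km}| h_{km}$ (weak gradient or deep fade relative to the water level) receive zero power, mirroring the condition in \eqref{q82}.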
\section{Coordinate Selection for Bandlimited Coordinate Descent Algorithms} \label{sec:controlphase} The selection of which coordinates to operate on is crucial to the performance of sparsified SGD algorithms. It is not hard to see that selecting the top-$k$ (in absolute value) coordinates of the sum of the gradients provides the best performance. However, in practice it may not always be feasible to obtain the top-$k$ coordinates of the sum of the gradients, and in fact there are different solutions for selecting $k$ dimensions with large absolute values; see e.g., \cite{Amiri2019a, Ivkin2019}. Note that each device individually transmitting the top-$k$ coordinates of its local gradient is not applicable to the scenario of over-the-air communications considered here. Sequential device-to-device transmissions provide an alternative approach \cite{shi2019distributed}, but these techniques are likely to require more bandwidth over wireless connections. Another approach is the use of compression and/or sketching of the gradients to be transmitted. For instance, in \cite{Amiri2019a}, a system that updates SGD by decompressing compressed gradients transmitted through over-the-air communication is examined. To the best of our knowledge, such techniques do not come with rigorous convergence guarantees. A similar approach is taken in \cite{Ivkin2019}, where sketched gradients are transmitted through an error-free medium and are then used to obtain the top-$k$ coordinates; the devices next simply transmit the selected coordinates. Such an approach can be combined with over-the-air computing, since only the summation of the sketched gradients is necessary; however, it requires the transmission of $\mathcal{O}(k\log d)$ dimensions, and to provide guarantees with such an approach, $\mathcal{O}(k\log d + k)$ up-link transmissions are needed. Alternatively, uniformly selected $\mathcal{O}(k\log d + k)$ coordinates can be transmitted with similar bandwidth and energy requirements. For practical learning models with non-sparse updates, uniform coordinate selection tends to perform better. Moreover, the common $K$ dimensions can be selected uniformly via synchronized pseudo-random number generators without any information transfer. To summarize, uniform selection of the coordinates is more attractive than methods aiming to recover the top-$k$ coordinates, based on energy, bandwidth and implementation considerations; indeed, this is the approach we adopt. \section{Experimental Results} In this section, we evaluate the accuracy and convergence performance of the BLCD algorithm when using one of the following three schemes for power allocation and learning rate selection (aiming to minimize the impact of the communication error): 1) the bi-convex program based solution (Scheme 1); 2) the distributed solution towards zero bias in Section~\ref{sec:update} (Scheme 2); 3) the single-user solution (Scheme 3). We use the communication-error-free scheme as the baseline to evaluate the performance degradation. We also consider a naive scheme (Scheme 4) using equal power allocation for all dimensions, i.e., $b_{km}=\sqrt{E/ {\sum_{k=1}^K x^2_{km}}}$. In our first experiment, we consider a simple neural network trained on the MNIST dataset. The network {consists of two 2-D convolutional layers with filter size \(5\times 5\) followed by a single fully connected layer, and it} has 7840 parameters. $K=64$ dimensions are uniformly selected as the support of the sparse gradient transmissions. For convenience, we define $E_{avg}$ as the average sum of the energy (of all devices) per dimension, normalized by the channel noise variance, i.e., $E_{avg}= E M \E[h_{km}^2]/ (K \sigma^2).$ Without loss of generality, we take the variance of the channel noise as $\sigma^2=1$, and $\{h_{km}\}$ are independent and identically distributed Rayleigh random variables with mean $1$. Changes in $E_{avg}$ then simply amount to different SNR values. In Fig. \ref{fig:comparison1}, we take $K=64$, $M=8$, a batch size of $4$ to calculate each gradient, and the learning rate $\gamma=0.01$.
In the second experiment, we expand the model to {a more sophisticated \(5\)-layer neural network and an \(18\)-layer ResNet \cite{resnetpaper} with \(61706\) and \(11175370\) parameters, respectively}. {The \(5\)-layer network consists of two 2-D convolutional layers with filter size \(5\times 5\) followed by three fully connected layers. In all experiments, we have used a learning rate of \(0.01\). The local dataset of each worker is randomly selected from the entire MNIST dataset.} We use \(10\) workers with varying batch sizes, and we utilize \(K=1024\) sub-channels for the sparse gradient signal transmissions. It can be seen from Fig.~\ref{fig:comparison1} that in the presence of the communication error, the centralized solution (Scheme 1) based on bi-convex programming converges quickly and performs the best, and it can achieve accuracy close to the ideal error-free scenario. Further, the distributed solution (Scheme 2) can eventually approach the performance of Scheme 1, but the single-user solution (Scheme 3) performs poorly, as does the naive scheme using equal power allocation (Scheme 4). Clearly, there exists a significant gap between the accuracy of Scheme 3 and that of the error-free case, because the bias in Scheme 3 is more significant. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{figs/Figure_2.pdf} \vspace{-4ex} \caption{Testing accuracy over training iterations for $\alpha_k=1/8$, $E_{avg}=0.1$ and a batch size of \(4\). The training model is a small neural network with 7840 differentiable parameters.} \label{fig:comparison1} \vspace{2ex} \includegraphics[width=\columnwidth]{figs/Figure_1.pdf} \vspace{-4ex} \caption{Testing accuracy over training iterations for $10$ workers and a batch size of \(256\). The training model is a \(5\)-layer deep neural network with 61706 differentiable parameters.}\label{fig:comparison2} \vspace{-0ex} \end{center} \end{figure} Next, Figures~\ref{fig:comparison2}, \ref{fig:comparison3} {and \ref{fig:resnet}} depict the results of the second experiment using much larger-scale deep neural networks. It can be observed from Figs.~\ref{fig:comparison2}, \ref{fig:comparison3} {and \ref{fig:resnet}} that the SNR can have a significant impact on the final accuracy. As expected, {the convergence of the ResNet network is slower in comparison to the other DNNs, due to the huge number of parameters and the small batch size. Nevertheless, it is clear that the learning accuracy improves significantly at high SNR. (The solution of the distributed algorithm for \(E_{avg}=10\) is omitted in Fig. \ref{fig:resnet}, since it is indistinguishably close to the error-free solution.}) It is interesting to observe that when the SNR increases, the distributed solution (Scheme 2) can achieve accuracy close to the ideal error-free case, but the single-user solution (Scheme 3) cannot. It is worth noting that, due to the computational complexity of bi-convex programming in this large-scale case, Scheme 1 could not be solved effectively (so we do not present it here). Further, the batch size at each worker can impact the convergence rate, but does not impact the final accuracy. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{figs/Figure_3.pdf} \vspace{-4ex} \caption{Testing accuracy over training iterations for $10$ workers and a batch size of \(4\).
The training model is a \(5\)-layer deep neural network with 61706 differentiable parameters.} \label{fig:comparison3} \vspace{2ex} \includegraphics[width=\columnwidth]{figs/resnet1.pdf} \vspace{-4ex} \caption{{Testing accuracy over training iterations for $10$ devices and a batch size of \(4\). The training model is an \(18\)-layer ResNet network with more than 11 million differentiable parameters.}}\label{fig:resnet} \end{center}\vspace{0ex} \end{figure} \section{Conclusions} In this paper, we consider a many-to-one wireless architecture for distributed learning at the network edge, where multiple edge devices collaboratively train a machine learning model, using local data, through a wireless channel. Observing the unreliable nature of wireless connectivity, we design an integrated communication and learning scheme, where the local updates at the edge devices are carefully crafted and compressed to match the wireless communication resources available. Specifically, we propose SGD-based bandlimited coordinate descent algorithms employing over-the-air computing, in which a subset of $k$ coordinates of the gradient updates across the edge devices is selected by the receiver in each iteration and then transmitted simultaneously over $k$ sub-carriers. We analyze the convergence of the proposed algorithms and characterize the effect of the communication error. Further, we study the joint optimization of power allocation and learning rates therein to maximize the convergence rate. Our findings reveal that optimal power allocation across different sub-carriers should take into account both the gradient values and the channel conditions. We then develop sub-optimal solutions amenable to implementation and verify our findings through numerical experiments. \section*{Acknowledgements} The authors thank Gautam Dasarathy for stimulating discussions in the early stage of this work. This work is supported in part by NSF Grants CNS-2003081, CNS-2003111, CPS-1739344 and ONR YIP N00014-19-1-2217. \normalem \bibliographystyle{IEEEtran}
In such a setting with unreliable and rate-limited communications, local updates at sender devices should be carefully crafted and compressed to make full use of the wireless communication resources available and should work in concert with the receiver (edge server) so as to learn an accurate model. Notably, lossy wireless communications for edge intelligence presents unique challenges and opportunities \cite{Zhu2018a}, subject to bandwidth and power requirements, on top of the employed multiple access techniques. Since it often suffices to compute a function of the sum of the local updates for training the model, over-the-air computing is a favorable alternative to the standard multiple-access communications for edge learning. More specifically, over-the-air computation \cite{Goldenbaum2013, Abari2016} takes advantage of the superposition property of wireless multiple-access channel via simultaneous analog transmissions of the local messages, and then computes a function of the messages at the receiver, scaling signal-to-noise ratio (SNR) well with an increasing number of users. In a nutshell, when multiple edge devices collaboratively train a model, it is plausible to employ distributed learning over-the-air. We seek to answer the following key questions: 1) What is the impact of the wireless communication bandwidth/power on the accuracy and convergence of the edge learning? 2) What coordinates in local gradient signals should be communicated by each edge device to the receiver? 3) How should the coordination be carried out so that multiple sender devices can work in concert with the receiver? 4) What is the optimal way for the receiver to process the received noisy gradient signals to be used for the stochastic gradient descent algorithm? 5) How should each sender device carry out power allocation across subcarriers to transmit its local updates? Intuitively, it is sensible to allocate more power to a coordinate with larger gradient value to speed up the convergence. Further, power allocation should also be channel-aware. \begin{figure*}[!tbh] \begin{center} \centerline{\includegraphics[width=1.5\columnwidth]{figs/newSchematic.pdf}} \caption{A bandlimited coordinate descent algorithm for distributed learning over wireless multi-access channel} \label{commsmodel} \end{center} \vspace{-7ex} \end{figure*} To answer the above questions, we consider an integrated learning and communication scheme where multiple edge devices send their local gradient updates over multi-carrier communications to the receiver for learning. Let $K$ denote the number of subcarriers for communications, where $K$ is determined by the wireless bandwidth. First, $K$ dimensions of the gradient updates are determined (by the receiver) to be transmitted. Multiple methods can be used for selecting $K$ coordinates, e.g., selecting the top-$k$ (in absolute value) coordinates of the sum of the gradients or randomized uniform selection. This paper will focus on randomly uniform selection (we elaborate further on this in Section V). During the subsequent communications, the gradient updates are transmitted only in the $K$-selected dimensions via over-the-air computing over $K$ corresponding sub-carriers, each experiencing time-varying channel conditions and hence time-varying transmission errors. The devices are subject to power constraints, giving rise to a key question on how to allocate transmission power across dimension, at each edge device, based on the gradient update values and channel conditions. 
Thus, we explore joint optimization of the power allocation and the learning rate to obtain the best estimate of the gradient updates and minimize the impact of the communication error. We investigate a centralized solution to this problem as a benchmark, and then devise sub-optimal distributed solutions amenable to practical implementation. We note that we have also studied the impact of errors of synchronization across devices in this setting (we omit the details due to limited space). The main contributions of this paper are summarized as follows: \begin{itemize} \item We take a holistic approach to study federated learning algorithms over wireless MAC channels, and the proposed bandlimited coordinated descent(BLCD) algorithm is built on innovative integration of computing in the air, multi-carrier communications, and wireless resource allocation. \item We characterize the impact of communication error and compression, in terms of its resulting gradient bias and mean squared error (MSE), on the convergence performance of the proposed algorithms. Specifically, when the communication error is unbiased, the BLCD algorithm would converge to a stationary point under very mild conditions on the loss function. In the case the bias in the communication error does exist, the iterates of the BLCD algorithm would return to a contraction region centered around a scaled version of the bias infinitely often. \item To minimize the impact of the communication error, we study joint optimization of power allocation at individual devices and learning rates at the receiver. Observe that since there exists tradeoffs between bias and variance, minimizing the MSE of the communication error does not necessarily amount to minimizing the bias therein. Our findings reveal that optimal power allocation across different sub-carriers should take into account both the gradient values and channel conditions, thus generalizing the widely used water-filling policy. We also develop sub-optimal distributed solutions amenable to implementation. In particular, due to the power constraints at individual devices, it is not always feasible to achieve unbiased estimators of the gradient signal across the coordinates. To address this complication, we develop a distributed algorithm which can drive the bias in the communication error to (close to) zero under given power constraints and then reduce the corresponding variance as much as possible. \end{itemize} \section{Related Work} Communication-efficient SGD algorithms are of great interest to reduce latency caused by the transmission of the high dimensional gradient updates with minimal performance loss. Such algorithms in the ML literature are based on compression via quantization \cite{Alistarh2016, Wen2017, Bernstein2018a, Wu2018}, sparsification \cite{Aji2017, Stich2018, Alistarh2018} and federated learning \cite{Konecny2016} (or local updates \cite{Stich2018a}), where lossless communication is assumed to be provided. At the wireless edge, physical-layer design and communication loss should be taken into consideration for the adoption of the communication-efficient algorithms. Power allocation for over-the-air computation is investigated for different scenarios in many other works \cite{Dong2018, Liu2019, Wen2018, Zhu2018b, Cao2019} including MIMO, reduced dimensional MIMO, standard many to one channel and different channel models. 
In related works on ML over wireless channels, \cite{Zhu2018, Yang2019, Zeng2019, Amiri2019, Amiri2019a, Amiri2019c, Ahn2019, Sery2019} consider over-the-air transmissions for training of the ML model. The authors in \cite{Amiri2019} propose sparsification of the updates with compressive sensing for further bandwidth reduction, and recovered sum of the compressed sparse gradients is used for the update. They also apply a similar framework for federated learning and fading channels in \cite{Amiri2019a}. \cite{Zhu2018} considers a broadband aggregation for federated learning with opportunistic scheduling based on the channel coefficients for a set of devices uniformly distributed over a ring. Lastly, \cite{Sery2019} optimize the gradient descent based learning over multiple access fading channels. It is worth noting that the existing approaches for distributed learning in wireless networks do not fully account for the characteristics of lossy wireless channels. It is our hope that the proposed BLCD algorithms can lead to an innovative architecture of distributed edge learning over wireless networks that accounts for computation, power, spectrum constraints and packet losses. \section{Federated Learning over Wireless Multi-access Networks} \subsection{Distributed Edge Learning Model} Consider an edge computing environment with $M$ devices $\mathcal{M}=\{1,\ldots,M\}$ and an edge server. As illustrated in Figure 1, a high-dimensional ML model is trained at the server by using an SGD based algorithm, where stochastic gradients are calculated at the devices with the data points obtained by the devices and a (common) subset of the gradient updates are transmitted through different subcarriers via over-the-air. The general edge learning problem is as follows: \begin{equation} \min_{w\in\mathbb{R}^d} f(w):=\frac{1}{M} \sum_{m=1}^{M} \mathbb{E}_{\xi_m} [ l(w, \xi_m)] \end{equation} in which $l(\cdot)$ is the loss function, and edge device $m$ has access to inputs $\xi_m $. Such optimization is typically performed through empirical risk minimization iteratively. In the sequel, we let $w_t$ denote the parameter value of the ML model at communication round $t$, and at round $t$ edge device $m$ uses its local data $\xi_{m,t}$ to compute a stochastic gradient $g^m_t (w_t):=\nabla l(w_t,\xi_{m,t})$. Define $g_t(w_t) = \frac{1}{M}\sum_{m=1}^{M} g^m_t (w_t)$. The standard vanilla SGD algorithms is given as \begin{equation} \label{eqn:genericupdate} w_{t+1} = w_t - \gamma g_t(w_t) \end{equation} with $\gamma$ being the learning rate. Nevertheless, different updates can be employed for different SGD algorithms, and this study will focus on communication-error-aware SGD algorithms. \subsection{Bandlimited Coordinate Descent Algorithm} Due to the significant discrepancy between the wireless bandwidth constraint and the high-dimensional nature of the gradient signals, we propose a sparse variant of the SGD algorithm over wireless multiple-access channel, named as bandlimited coordinate descent (BLCD), in which at each iteration only a common set of $K$ coordinates, $I(t)\subset \{1, \ldots, d\}$ (with $K\ll d$), of the gradients are selected to be transmitted through over-the-air computing for the gradient updates. The details of coordinate selection for the BLCD algorithm are relegated to Section \ref{sec:controlphase}. Worth noting is that due to the unreliable nature of wireless connectivity, the communication is assumed to be lossy, resulting in erroneous estimation of the updates at the receiver. 
Moreover, gradient correction is performed by keeping the difference between the update made at the receiver and the gradient value at the transmitter for the subsequent rounds, as gradient correction dramatically improves the convergence rate with sparse gradient updates \cite{Stich2018}. For convenience, we first define the gradient sparsification operator as follows. \begin{definition} $ C_I : \mathbb{R}^d \rightarrow \mathbb{R}^d$ for a set $I \subseteq \{1,\ldots, d\}$ as follows: for every input $x \in \mathbb{R}^d$, $ \big(C_I (x)\big)_j$ is $(x)_{j} $ for $ j \in I$ and $0$ otherwise. \end{definition} Since this operator $C_I$ compress a $d$-dimensional vector to a $k$-dimension one, we will also refer this operator as compression operator in the rest of the paper. \begin{algorithm}[!t] \caption{Bandlimited Coordinate Descent Algorithm}\label{alg_1} \begin{algorithmic}[1] \STATE \textbf{Input:} Sample batches \(\xi_{m,t}\), model parameters \(w_1\), initial learning rate \(\gamma\), sparsification operator \(C_t(.)\), \(\forall m=1,\dots,M; \forall t=1,\dots,T.\) \STATE \textbf{Initialize:} \(r_t^m:=0\). \FOR{$t=1:T$} \FOR{$m=1:M$} \STATE \(g_t^m(w_t):= \text{stochasticGradient}(f(w_t,\xi_{m,t}))\) \STATE \(u_{t}^m := \gamma g_t^m(w_t)+r_t^m \) \STATE \(r_{t+1}^m := u_t^m-C_t(u_t^m)\) \STATE Compute power allocation coefficients \(b_{km}^*,\forall k=1,\dots,K\). \STATE Transmit \(\mathbf{b}^*\odot C_t(u_t^m)\) \ENDFOR \STATE Compute gradient estimator $\hat{G}_t(w_t)$ \STATE \(w_{t+1}:= w_t - \hat{G}_t(w_t) \). \STATE Broadcast \(w_{t+1}\) back to all transmitters. \ENDFOR \end{algorithmic} \end{algorithm} With a bit abuse of notation, we let $C_t$ denote $C_{I(t)}$ for convenience in the following. Following \cite{Karimireddy2019}, we incorporate the sparsification error made in each iteration (by the compression operator $C_t$) into the next step to alleviate the possible gradient bias therein and improve the convergence possible. Specifically, as in \cite{Karimireddy2019}, one plausible way for compression error correction is to update the gradient correction term as follows: \begin{align} r_{t+1}^m &= u_t^m - C_t(u_t^m), \label{eqn:SGDmemupdatestd}\\ u_t^m &\triangleq \gamma g^m_t(w_t) + r_t^m \end{align} which $ r_{t+1}^m $ keeps the error in the sparsification operator that is in the memory of user $m$ at around $t$, and $u_t^m $ is the scaled gradient with correction at device $m$ where the scaling factor $\gamma$ is the learning rate in equation~\eqref{eqn:genericupdate}. {(We refer readers to \cite{Karimireddy2019} for more insights of this error-feedback based compression SGD.)} Due to the lossy nature of wireless communications, there would be communication errors and the gradient estimators at the receiver would be erroneous. In particular, the gradient estimator at the receiver in the BLCD can be written as \begin{equation} \label{eqn:SGDupdate} \hat{G}_t(w_t) = \frac{1}{M}\sum_{m=1}^{M} C_t \left(u_t^m \right) + \epsilon_t, \end{equation} where $\epsilon_t$ denotes the random communication error in round $t$. In a nutshell, the bandlimited coordinate descent algorithm is outlined in Algorithm~\ref{alg_1}. Recall that $g_t(w_t) = \frac{1}{M}\sum_{m=1}^{M} g^m_t(w_t)$ and define $r_t \triangleq \frac{1}{M}\sum_{m=1}^{M} r^m_t$. Thanks to the common sparsification operator across devices, the update in the SGD algorithm at communicatioon round $t$ is given by \begin{equation} \label{eqn:updatesimplified} w_{t+1} = w_t - \big[ C_t(\gamma g_t(w_t) +r_t) + \epsilon_t\big]. 
\end{equation} To quantify the impact of the communication error, we use the corresponding communication-error free counterpart as the benchmark, defined as follows: \begin{equation} \label{eqn:gcsimplified} \hat{w}_{t+1} = w_t - C_t(\gamma g_t(w_t) +r_t) . \end{equation} It is clear that $w_{t+1}= \hat{w}_{t+1} - \epsilon_t $. For convenience, we define \(\tilde{w}_t \triangleq {w}_{t} - r_t \). It can be shown that \(\tilde{w}_{t+1}= \tilde{w}_{t} - \gamma g_t(w_t) - \epsilon_{t} \). Intuitively, $w_{t+1}$ in (\ref{eqn:updatesimplified}) is a noisy version of the iterate $\hat{w}_{t+1}$ in (\ref{eqn:gcsimplified}), which implies that \(\tilde{w}_{t+1} \) is a noisy version of the compression-error correction of $\hat{w}_{t+1}$ in (\ref{eqn:gcsimplified}), where the ``noisy perturbation'' is incurred by the communication error. \subsection{BLCD Coordinate Transmissions over Multi-Access Channel} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=\columnwidth]{figs/BLCD-MACprotocol2.pdf}}% \vspace{0in} \caption{A multi-access communication protocol for bandlimited coordinate selection and transmission.}\label{flowchart1} \end{center} \vspace{-0.15in} \end{figure} A key step in the BLCD algorithm is to achieve coordinate synchronization of the transmissions among many edge devices. To this end, we introduce a receiver-driven low-complexity multi-access communication protocol, as illustrated in Fig.~\ref{flowchart1}, with the function $C_t(x)$ denoting the compression of $x$ at round $t$. Let $I(t)$ (of size $K$) denote the subset of coordinates chosen for transmission by the receiver at round $t$. Observe that the updates at the receiver are carried out only in the dimensions $I(t)$. Further, the edge receiver can broadcast its updated iterate to participant devices, over the reverse link. This task is quite simple, given the broadcast nature of wireless channels. In the transmissions, each coordinate of the gradient updates is mapped to a specific subcarrier and then transmitted through the wireless MAC channel, and the coordinates transmitted by different devices over the same subcarrier are received by the edge server in the form of an aggregate sum. {It is worth noting that the above protocol is also applicable to the case when the SGD updates are carried out for multiple rounds at the devices.} When there are many edge devices, over-the-air computation can be used to take advantage of superposition property of wireless multiple-access channel via simultaneous analog transmissions of the local updates. More specifically, at round t, the received signal in subcarrier $k$ is given by: \begin{equation} \label{eqn:channel} y_k (t) = \sum_{m=1}^{M} b_{km}(t) h_{km}(t) x_{km} (t) + n_k (t) \end{equation} where $b_{km}(t)$ is a power scaling factor, $h_{km}(t)$ is the channel gain, and $x_{km}(t)$ is the message of user $m$ through the subcarrier $k$, respectively, and $n_k (t) \sim \mathcal{N}(0,\sigma^2)$ is the channel noise. To simplify notation, we omit $(t)$ when it is clear from the context in the following. Specifically, the message $x_{km}=(C_t(u^m_t))_{l(k)}$, with a one-to-one mapping $l(k)=(I(t))_k$, which indicates the $k$-th element of $I(t)$, transmitted through the $k$-th subcarrier. The total power that a device can use in the transmission is limited in practical systems. Without loss of generality, we assume that there is a power constraint at each device, given by $\sum_{k=1}^{K} \absq{b_{km} x_{km}} \leq E_m,\ \forall m\in \{ 1, \ldots, M \}$. 
Note that $b_{km}$ hinges heavily upon both $\bm{h}_m=[h_{1m}, \ldots, h_{Km}]^\top$ and $\bm{x}_{m}=[x_{1m}, \ldots, x_{Km}]^\top$, and a key next step is to optimize $b_{km} (\bm{h}_{m}, \bm{x}_{m})$. In each round, each device optimizes its power allocation for transmitting the selected coordinates of its update signal over the $K$ subcarriers, aiming to minimize the communication error so as to achieve a good estimation of $G_t(w_t)$ (or its scaled version) for the gradient update, where $$G_t(w_t) \triangleq \frac{1}{M}\sum_{m=1}^M C_t(u_t^m).$$ From the learning perspective, based on $\{y_k\}_{k=1}^K$, it is of paramount importance for the receiver to get a good estimate of $G_t(w_t)$. Since $n_k(t)$ is Gaussian noise, the optimal estimator is in the form of \vspace{0.05in} \begin{equation} \label{eqn:estimator} \big(\widehat{G}_t(w_t)\big)_{k} = \begin{cases} \alpha_{l(k)} y_{l(k)}, & k \in I(t) \\ 0 & \text{otherwise} \end{cases}\vspace{0.05in} \end{equation} where $\{ \alpha_k \}_{k=1}^K$ are gradient estimator coefficients for subcarriers. It follows that the communication error (i.e., the gradient estimation error incurred by lossy communications) is given by \begin{equation} \epsilon_t = \widehat{G}_t(w_t) - G_t(w_t) . \label{comm-error} \end{equation} We note that $\{\alpha_k\}_{k=1}^K$ are intimately related to the learning rates for the $K$ coordinates, scaling the learning rate to be $\{\gamma \alpha_k\}_{k=1}^K$. It is interesting to observe that the learning rates in the proposed BLCD algorithm are essentially different across the dimensions, due to the unreliable and dynamically changing channel conditions across different subcarriers. \section{Impact of Communication Error and Compression on BLCD Algorithm} \label{sec:convergence} Recall that due to the common sparsification operator across devices, the update in the SGD algorithm at communication round $t$ is given by \[ w_{t+1} = w_t - \big[ C_t(\gamma g_t(w_t) +r_t) + \epsilon_t\big]. \] Needless to say, the compression operator $C_t$ plays a critical role in sparse transmissions. In this study, we impose the following standard assumption on the compression rate of the operator. \begin{assumption} \label{asmpt:compression} For a set of the random compression operators $\{C_t\}_{t=1}^T$ and any $x\in \mathbb{R}^d$, it holds \begin{equation} \E \normsq{x - C_t(x)} \leq (1-\delta) \normsq{x} \end{equation} for some $\delta \in (0,1]$. \end{assumption} We impose the following standard assumptions on the non-convex objective function $f(\cdot)$ and the corresponding stochastic gradients $g^m_t (w_t)$ computed with the data samples of device $m$ in round $t$. (We assume that the data samples $\{\xi_{m,t}\}$ are i.i.d.~across the devices and time.) \begin{assumption} \label{asmpt:smoothness} (Smoothness) A function $f:\mathbb{R}^d \rightarrow \mathbb{R}$ is L-smooth if for all ${x},{y}\in \mathbb{R}^d$, it holds \begin{equation} \abs{f({y})-f({x})-\innp{\nabla f({x})}{{y}-{x}}} \leq \frac{L}{2} \normsq{{y}-{x}}. \end{equation} \end{assumption} \begin{assumption} \label{asmpt:boundedmoment} For any $x\in \mathbb{R}^d$ and for any $m=1, \ldots, M$, a stochastic gradient $g_t^m(x), \forall t$, satisfies \begin{equation} \E[g_t^m(x)] = \nabla f (x), \textrm{ } \E \normsq{g_t^m(x)} \leq G^2 \end{equation} where $G>0$ is a constant. 
\end{assumption} \begin{table*}[t] \centering \begin{minipage}{1\textwidth} \begin{align} \mathbb{E}_t [ f(\tilde{w}_{t+1}) ] \hspace{-0.03in}\leq& f(\tilde{w}_{t} )\hspace{-0.03in}+\hspace{-0.03in}\langle\nabla f(\tilde{w}_{t}),\mathbb{E}_t[\tilde{w}_{t+1}\hspace{-0.03in}-\hspace{-0.03in}\tilde{w}_{t}]\rangle \hspace{-0.03in}+\hspace{-0.03in}\frac{L}{2}\mathbb{E}_t[\lVert \tilde{w}_{t+1}-\tilde{w}_{t} \rVert^2] \nonumber\\ &\hspace{-0.5in}=f(\tilde{w}_{t})\hspace{-0.03in}-\hspace{-0.03in} \langle\nabla f(\tilde{w}_{t}),\gamma \mathbb{E}_t[g_t (w_t)] \hspace{-0.03in}+\hspace{-0.03in} \mathbb{E}_t[\epsilon_t] \rangle \hspace{-0.03in}+\hspace{-0.03in} \frac{L}{2}\mathbb{E}_t[\lVert\gamma g_t(w_t) \rVert^2]\hspace{-0.03in}+\hspace{-0.03in}\frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert^2] \hspace{-0.03in}+\hspace{-0.03in} L\mathbb{E}_t[\langle \gamma g_t(w_t) ,\epsilon_t\rangle]\nonumber\\ &\hspace{-0.5in}= f(\tilde{w}_{t}) \hspace{-0.03in}-\hspace{-0.03in}\langle\nabla f({w}_{t}),\gamma \mathbb{E}_t[g_t(w_t)] \hspace{-0.03in}+\hspace{-0.03in} \mathbb{E}_t[\epsilon_t]\rangle \hspace{-0.03in}-\hspace{-0.03in} \langle\nabla f(\tilde{w}_{t})\hspace{-0.03in}-\hspace{-0.03in}\nabla f({w}_{t}), \gamma \mathbb{E}_t[g_t(w_t)] \hspace{-0.03in}+\hspace{-0.03in} \mathbb{E}_t[\epsilon_t]\rangle \hspace{-0.03in}+\hspace{-0.03in} \frac{L}{2}\mathbb{E}_t[\Vert\epsilon_t\rVert_2^2] \hspace{-0.03in}+\hspace{-0.03in} L\mathbb{E}_t[\langle\gamma g_t(w_t),\epsilon_t\rangle] \hspace{-0.03in}+\hspace{-0.03in} \frac{L}{2}\mathbb{E}_t[\Vert\gamma g_t(w_t) \rVert^2 \nonumber\\ &\hspace{-0.5in}\leq f(\tilde{w}_{t}) \hspace{-0.03in}-\hspace{-0.03in} \gamma \lVert \nabla f(w_t) \rVert_2^2 \hspace{-0.03in}-\hspace{-0.03in} \langle \nabla f(w_t), \mathbb{E}_t[\epsilon_t] \rangle \hspace{-0.03in}+\hspace{-0.03in} \frac{\rho}{2} \lVert \gamma \nabla f(w_t)\hspace{-0.03in}+\hspace{-0.03in}\mathbb{E}_t[\epsilon_t] \rVert_2^2\hspace{-0.03in}+\hspace{-0.03in}\frac{L^2}{2\rho}\mathbb{E}_t[\lVert r_t \rVert_2^2] \hspace{-0.03in}+\hspace{-0.03in} \frac{L}{2}\mathbb{E}_t[\lVert \epsilon_t \rVert_2^2] \hspace{-0.03in}+\hspace{-0.03in} L\langle \nabla f(w_t), \mathbb{E}_t[\epsilon_t] \rangle \hspace{-0.03in}+\hspace{-0.03in} \frac{L\gamma^2}{2} \mathbb{E}_t \lVert g_t(w_t) \rVert_2^2 \nonumber\\ &\hspace{-0.5in}\leq f(\tilde{w}_{t}) \hspace{-0.03in}-\hspace{-0.03in} \gamma \lVert \nabla f(w_t) \rVert_2^2 \hspace{-0.03in}+\hspace{-0.03in} (L-1) \lVert \nabla f(w_t) \rVert\lVert \mathbb{E}_t[\epsilon_t] \rVert \hspace{-0.03in}+\hspace{-0.03in} \frac{\rho}{2}\left( \gamma^2\lVert \nabla f(w_t) \rVert_2^2 \hspace{-0.03in}+\hspace{-0.03in} \lVert \mathbb{E}_t[\epsilon_t] \rVert_2^2\hspace{-0.03in}+\hspace{-0.03in}2\gamma \langle \nabla f(w_t), \mathbb{E}_t[\epsilon_t] \rangle \right) \hspace{-0.03in}+\hspace{-0.03in} \frac{L^2}{2\rho} \mathbb{E}_t[\lVert r_t \rVert_2^2] \hspace{-0.03in}+\hspace{-0.03in} \frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2] \hspace{-0.03in}+\hspace{-0.03in} \frac{L\gamma^2}{2} G^2 \nonumber\\ &\hspace{-0.5in}\leq f(\tilde{w}_{t}) - \gamma \lVert \nabla f(w_t) \rVert_2^2 + (L-1+ 2 \gamma) \lVert \nabla f(w_t) \rVert\lVert \mathbb{E}_t[\epsilon_t] \rVert + \frac{\gamma^2 \rho }{2}\lVert \nabla f(w_t) \rVert_2^2 + \frac{L^2}{2\rho} \mathbb{E}_t[\lVert r_t \rVert_2^2] + \lVert \mathbb{E}_t[\epsilon_t] \rVert_2^2 +\frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2] + \frac{L\gamma^2}{2} G^2 \nonumber\\ &\hspace{-0.5in}= f(\tilde{w}_{t}) -\gamma \left[ 1-\frac{\rho}{2}\gamma \right] \lVert \nabla f(w_t) \rVert_2^2 
+ (L-1+2\gamma) \lVert \nabla f(w_t) \rVert \lVert \mathbb{E}_t[\epsilon_t] \rVert + \frac{L^2}{2\rho} \mathbb{E}_t[\lVert r_t \rVert_2^2] + \lVert \mathbb{E}_t[\epsilon_t] \rVert_2^2 +\frac{L}{2}\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2] + \frac{L\gamma^2}{2} G^2 \label{gradient-bound} \end{align} \hrule \end{minipage}\vspace{-0.2in} \end{table*} It follows directly from \cite{Karimireddy2019} that $ \mathbb{E} [\lVert r_t \rVert_2^2]\leq \frac{4(1-\delta)}{\delta^2}\gamma^2 G^2. $ Recall that \(\tilde{w}_{t+1}= \tilde{w}_{t} - \gamma g_t(w_t) - \epsilon_{t} \) and that \(\tilde{w}_{t+1} \) can be viewed as a noisy version of the compression-error correction of $\hat{w}_{t+1}$ in (\ref{eqn:gcsimplified}), where the ``noisy perturbation'' is incurred by the communication error. For convenience, let $ \mathbb{E}_t[\epsilon_t]$ denote the gradient bias incurred by the communication error and $ \mathbb{E}_t [\lVert \epsilon_t \rVert_2^2 ]$ be the corresponding mean square error, where $ \mathbb{E}_t$ is taken with respect to the channel noise. Let $\eta= \frac{L-1+2\gamma}{\gamma (2-\rho\gamma)} $ with $0<\rho<2$, and let $f^*$ denote the global minimum value of $f$. We have the following main result on the iterates of the BLCD algorithm. \begin{theorem} \label{thm:convergence} Under Assumptions \ref{asmpt:compression}, \ref{asmpt:smoothness} and \ref{asmpt:boundedmoment}, the iterates $\{w_t\}$ of the BLCD algorithm satisfy \begin{align} &\frac{1}{T\hspace{-0.03in}+\hspace{-0.03in}1}\sum_{t=0}^T \left(\lVert\nabla f(w_t)\rVert_2 \hspace{-0.03in}-\hspace{-0.03in} \eta \lVert \underbrace{\mathbb{E}_t[\epsilon_t]}_{\mbox{bias}} \rVert_2\right)^2 \nonumber\\ &\hspace{0.1in}\leq\hspace{-0.03in}\frac{1}{T\hspace{-0.03in}+\hspace{-0.03in}1}\sum_{t=0}^T \left[\frac{L\eta }{ L \hspace{-0.03in}-\hspace{-0.03in}1 \hspace{-0.03in}+\hspace{-0.03in} 2 \gamma}\underbrace{\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2]}_{\mbox{MSE}} \hspace{-0.03in}+\hspace{-0.03in} \left(1\hspace{-0.03in}+\hspace{-0.03in} \eta^2 \right)\hspace{-0.03in} \lVert \underbrace{\mathbb{E}_t[\epsilon_t]}_{\mbox{bias}} \rVert_2^2 \right]\nonumber\\ &\hspace{0.1in}+\hspace{-0.03in}\frac{2}{T\hspace{-0.03in}+\hspace{-0.03in}1}\frac{f(w_0)\hspace{-0.03in}-\hspace{-0.03in}f^*}{\gamma(2\hspace{-0.03in}-\hspace{-0.03in}\rho\gamma)}\hspace{-0.03in}+\hspace{-0.03in} \left(\frac{L}{\rho} \frac{2(1\hspace{-0.03in}-\hspace{-0.03in} \delta)}{\delta^2} \hspace{-0.03in}+\hspace{-0.03in} \frac{1}{2} \right)\hspace{-0.03in} \frac{2L \gamma G^2}{ 2-\rho\gamma}. \label{main-result} \end{align} \end{theorem} \vspace{-0ex} \begin{proof} Due to space limitations, we outline only the main steps of the proof. Recall that \(\tilde{w}_t= {w}_{t} - r_t \). It can be shown that \(\tilde{w}_{t+1}= \tilde{w}_{t} - \gamma g_t(w_t) - \epsilon_{t} \). As shown in (\ref{gradient-bound}), using the properties of the iterates in the BLCD algorithm and the smoothness of the objective function $f$, we can establish an upper bound on $\mathbb{E}_t [ f(\tilde{w}_{t+1}) ] $ in terms of \(f(\tilde{w}_{t})\), the gradient \( \nabla f(w_t) \), and the gradient bias and MSE due to the communication error. Then, (\ref{main-result}) can be obtained after some further algebraic manipulation. \end{proof} {\bf Remarks.} Based on Theorem~\ref{thm:convergence}, a few observations are in order.
\begin{itemize} \item We first examine the four terms on the right-hand side of (\ref{main-result}): The first two terms capture the impact on the gradient of the time average of the bias in the communication error $\epsilon_t$ and that of the corresponding mean square error (MSE); these two terms would go to zero if the bias and the MSE diminish; the third term is a scaled version of $ f(w_0) - f^* $ and would go to zero as long as $\gamma = O(T^{-\beta}) $ with $\beta < 1$; and the fourth term is proportional to $\gamma$ and would go to zero when $\gamma \rightarrow 0$. \item If the right-hand side of (\ref{main-result}) diminishes as $T \rightarrow \infty$, the gradient norms $\lVert\nabla f(w_t)\rVert_2$ would ``converge'' to a neighborhood around $\eta \lVert \mathbb{E}_t[\epsilon_t] \rVert_2$, which is a scaled version of the bias in the communication error. For convenience, let $ \bar{\epsilon}= \limsup_t \lVert \mathbb{E}_t[\epsilon_t] \rVert_2$, and define a contraction region as follows: \[ A_{\gamma} = \left\{ w_t: \lVert\nabla f(w_t)\rVert_2 \leq (\eta + \Delta) \bar{\epsilon} \right\}, \] where $\Delta >0$ is an arbitrarily small positive number. It then follows that the iterates in the BLCD algorithm would ``converge'' to the contraction region $A_{\gamma}$, in the sense that the iterates return to $A_{\gamma}$ infinitely often. Note that $f$ is assumed to be an arbitrary nonconvex smooth function, and there can be many contraction regions, each corresponding to a stationary point. \item When the communication error is unbiased, the gradients would diminish to $0$ and hence the BLCD algorithm would converge to a stationary point. When the bias in the communication error does exist, there is an intrinsic tradeoff between the size of the contraction region and $\eta \lVert \mathbb{E}_t[\epsilon_t] \rVert_2$: when the learning rate $\gamma$ is small, the right-hand side of (\ref{main-result}) would be small, but $\eta$ can be large, and vice versa. It therefore makes sense to choose a fixed learning rate that makes $\eta$ small. In this way, the gradients in the BLCD algorithm would ``concentrate'' around a (small) scaled version of the bias. \item Finally, the impact of gradient sparsification is captured by $\delta$. For instance, when (randomly) uniform selection is used, $\delta=\frac{k}{d}$. We will elaborate on this in Section \ref{sec:controlphase}. \end{itemize} Further, we have the following corollary. \begin{corollary} \label{thm:convergencezeromean} Under Assumptions \ref{asmpt:compression}, \ref{asmpt:smoothness}, and \ref{asmpt:boundedmoment}, if $\E_t[\epsilon_t]=0$ and \(\gamma = \frac{1}{\sqrt{T+1}} \), then the BLCD algorithm converges to a stationary point and satisfies \begin{align} &\frac{1}{T\hspace{-0.03in}+\hspace{-0.03in}1}\hspace{-0.03in}\sum_{t=0}^T \lVert\nabla f(w_t)\rVert_2^2 \nonumber\\ &\leq\hspace{-0.03in} \frac{1} {2 - \frac{\rho}{\sqrt{T+1}} } \left\{ \frac{2 (f(w_0)\hspace{-0.03in}-\hspace{-0.03in}f^*) }{\sqrt{T+1}} \hspace{-0.03in}+\hspace{-0.03in} \frac{2 L G^2}{\sqrt{T\hspace{-0.03in}+\hspace{-0.03in}1}} \hspace{-0.03in}\left(\frac{L}{\rho} \frac{2(1\hspace{-0.03in}-\hspace{-0.03in}\delta)}{\delta^2} \hspace{-0.03in}+\hspace{-0.03in} \frac{1}{2} \right)\hspace{-0.03in} \right. \nonumber\\ &\left.
\hspace{0.5in}+\ \frac{L}{T+1} \sum_{t=0}^T\underbrace{\mathbb{E}_t[\lVert\epsilon_t\rVert_2^2]}_{\text{MSE}} \right\} \end{align} \end{corollary} \vspace{-2ex} \section{Communication Error Minimization via Joint Optimization of Power Allocation and Learning Rates} \label{sec:update} Theorem~\ref{thm:convergence} reveals that the communication error has a significant impact on the convergence behavior of the BLCD algorithm. In this section, we turn our attention to minimizing the communication error (in terms of its MSE and bias) via joint optimization of power allocation and learning rates. Without loss of generality, we focus on iteration $t$ (with a slight abuse of notation, we omit $t$ for simplicity). Recall that the coordinate updates in the BLCD algorithm, sent by different devices over the same subcarrier, are received by the edge server as an aggregate sum, which is used to estimate the gradient value in that specific dimension. We denote the power coefficients and estimator weights as $\bm{b} \triangleq [b_{11}, b_{12}, \ldots, b_{1M}, b_{21}, \ldots, b_{KM} ]$ and $\bm{\alpha} \triangleq [\alpha_{1}, \ldots, \alpha_{K}]$. In each round, each sender device optimizes its power allocation for transmitting the selected coordinates of its update over the $K$ subcarriers, aiming to achieve the best convergence rate. We assume that perfect channel state information is available at the corresponding transmitter, i.e., $\bm{h}_m=[h_{1m}, \ldots, h_{Km}]^\top$ is available at sender $m$ only. Based on (\ref{comm-error}), the mean squared error of the communication error in iteration $t$ is given by \begin{equation} \mathbb{E}_t [\lVert \epsilon_t \rVert_2^2 ] = \E\bigg[\normsq{\widehat{G}_t(w_t) - G_t(w_t)}\bigg] \end{equation} where the expectation is taken over the channel noise. For convenience, we denote $\mathbb{E}_t [\lVert \epsilon_t \rVert_2^2 ] $ as $\mbox{\textrm{MSE}}_1$; after some algebra, it can be rewritten as the sum of the variance and the square of the bias: \begin{align} \hspace{-0.08in} \mbox{\textrm{MSE}}_1(\bm{\alpha}, \bm{b})\hspace{-0.03in}=\hspace{-0.05in} \sum_{k=1}^K\hspace{-0.04in} \bigg[\hspace{-0.04in} \underbrace{\sum_{m=1}^M\hspace{-0.04in} \left(\hspace{-0.03in} \alpha_k b_{km} h_{km} \hspace{-0.04in} -\hspace{-0.04in} \frac{1}{M}\hspace{-0.04in} \right)\hspace{-0.04in} x_{km}\hspace{-0.04in} }_{\mbox{bias in $k$th coordinate}} \bigg]^2\hspace{-0.11in} +\hspace{-0.04in} \underbrace{\sum_{k=1}^K \hspace{-0.04in} \sigma^2 \alpha^2_k}_{\mbox{variance\ } } \label{bias-var} \end{align} Recall that $\{\alpha_k\}_{k=1}^K$ are intimately related to the learning rates for the $K$ coordinates, making the effective learning rates $\{\gamma \alpha_k\}_{k=1}^K$. \subsection{Centralized Solutions to Minimizing MSE (Scheme 1)} In light of the above, we can cast the MSE minimization problem as a learning-driven joint power allocation and learning rate problem: \begin{align} \textbf{P1:\ } \min_{\bm{\alpha}, \bm{b}} \quad & \mbox{\textrm{MSE}}_1(\bm{\alpha}, \bm{b}) \\ \textrm{s.t.} \quad & \sum_{k=1}^{K} \absq{ b_{km} x_{km} } \leq E_m, \quad \forall m \\ & b_{km}\geq 0, \ \alpha_k \geq 0 \quad \forall k,m \end{align} which minimizes the MSE in every round. The formulated problem is non-convex because the objective function involves products of the variables. Nevertheless, it is biconvex, i.e., with either one of the variables fixed, the problem is convex in the other.
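To make the structure of (\ref{bias-var}) concrete, the following minimal Python sketch (illustrative only; the array shapes, variable names, and random test data are our own assumptions, not part of the system model) evaluates $\mbox{\textrm{MSE}}_1(\bm{\alpha}, \bm{b})$:
\begin{verbatim}
import numpy as np

def mse1(alpha, b, x, h, sigma2, M):
    """MSE_1(alpha, b) as in (bias-var): squared bias plus variance.

    alpha: (K,) receiver weights; b, x, h: (K, M) power coefficients,
    gradient coordinates, and channel gains; sigma2: noise variance.
    """
    # bias in coordinate k: sum_m (alpha_k * b_km * h_km - 1/M) * x_km
    bias = np.sum((alpha[:, None] * b * h - 1.0 / M) * x, axis=1)
    variance = sigma2 * np.sum(alpha**2)
    return np.sum(bias**2) + variance

# illustrative usage with random data
K, M = 4, 3
rng = np.random.default_rng(0)
x, h = rng.normal(size=(K, M)), rng.rayleigh(size=(K, M))
print(mse1(np.ones(K), np.ones((K, M)), x, h, 1.0, M))
\end{verbatim}
For fixed $\bm{b}$, the function above is a convex quadratic in $\bm{\alpha}$ (and, for fixed $\bm{\alpha}$, it is convex in $\bm{b}$), which is precisely the biconvex structure exploited below.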
In general, we can solve the above bi-convex optimization problem in the same spirit as the EM algorithm, by iterating the following two steps, each optimizing over a single block of variables: \begin{equation*} \begin{aligned} \textbf{P1-a:} \ \min_{\bm{\alpha}} \quad & \mbox{\textrm{MSE}}_1(\bm{\alpha}, \bm{b})\quad \textrm{s.t.} \quad \alpha_k \geq 0, \quad \forall k \\ \textbf{P1-b:} \min_{\bm{b}} \quad & \mbox{\textrm{MSE}}_{1}(\bm{\alpha}, \bm{b}) \\ \textrm{s.t.} \quad & \sum_{k=1}^{K} \absq{ b_{km} x_{km} } \leq E_m \ \ \forall m, \quad b_{km}\geq 0 \ \ \forall k,m. \end{aligned} \end{equation*} Since (\textbf{P1-a}) is unconstrained, for given $\{ b_{km} \}$, the optimal solution to (\textbf{P1-a}) is given by \begin{align} \alpha^*_k \! = \! \max \left\{ \frac{\big(\sum_{m=1}^M x_{km}\big)\big(\sum_{m=1}^M b_{km} h_{km} x_{km} \big)}{M \big[\sigma^2 + \big(\sum_{m=1}^M b_{km} h_{km} x_{km} \big)^2\big]}, 0\right\}. \end{align} Then, we can solve (\textbf{P1-b}) by optimizing $\bm{b}$ only. Solving the sub-problems (\textbf{P1-a}) and (\textbf{P1-b}) iteratively leads to a local minimum, but not necessarily to the global optimum. Observe that the above solution requires global knowledge of the $x_{km}$'s and $h_{km}$'s of all devices, which is difficult to implement in practice. We will treat it as a {\em benchmark} only. Next, we turn our attention to developing distributed sub-optimal solutions. \vspace{-0.05in} \subsection{Distributed Solutions towards Zero Bias and Variance Reduction (Scheme 2)} As noted above, the centralized solution to ({\bf P1}) requires global knowledge of the $x_{km}$'s and $h_{km}$'s and hence is not amenable to implementation. Further, minimizing the MSE of the communication error does not necessarily amount to minimizing the bias therein, since there is a tradeoff between bias and variance. Thus motivated, we next focus on devising distributed sub-optimal solutions which can drive the bias in the communication error to (close to) zero, and then reduce the corresponding variance as much as possible. Specifically, observe from (\ref{bias-var}) that minimizing the MSE cost does not necessarily ensure that \(\widehat{G}\) is an unbiased estimator, due to the intrinsic tradeoff between bias and variance. To this end, we take a sub-optimal approach where the optimization problem is decomposed into two subproblems. In the subproblem at the transmitters, each device \(m\) utilizes its available power and local gradient/channel information to compute a power allocation policy in terms of \(\{b_{1m},b_{2m},\dots,b_{Km}\}\). In the subproblem at the receiver, the receiver finds the best possible \(\alpha_k\) for all \(k=1,\dots,K\). Another complication is that, due to the power constraints at individual devices, it is not always feasible to achieve unbiased estimators of the gradient signal across the coordinates. Nevertheless, for given power constraints, one can achieve unbiased estimators of a scaled-down version of the coordinates of the gradient signal.
In light of this, we formulate the optimization problem at each device (transmitter) $m$ to ensure an unbiased estimator of a scaled version $\zeta_m$ of the transmitted coordinates, as follows: \begin{align}\label{prob2a} \mbox{\bf Device~m:} \ &\underset{\{b_{km}\}_{k=1:K}}{\max} ~ \zeta_m\\ \text{s.t.}\hspace{0.1in}\sum_{k=1}^K b_{km}^2&x_{km}^2\leq E_m, \ \ b_{km}\hspace{-0.03in}\geq\hspace{-0.03in} 0, \\ \zeta_mx_{km}&-b_{km}h_{km}x_{km}= 0, & \forall k=1,\dots,K,\label{q56} \end{align} where maximizing $\zeta_m$ amounts to maximizing the corresponding SNR (and hence improving the gradient estimation accuracy). The first constraint in the above is the power constraint, and the second constraint is imposed to ensure that all transmitted coordinates of user $m$ carry the same scaling factor $\zeta_m$, so that no dimension-dependent bias is introduced. The power allocation solution can be found using the Karush-Kuhn-Tucker (KKT) conditions as follows: \begin{align}\label{q57} \Aboxed{\zeta_m^* = \sqrt{\frac{E_m}{\sum_{k=1}^K \frac{x_{km}^2}{h_{km}^2}}},~ b_{km}^*=\frac{\zeta^*_{m}}{h_{km}}, ~\forall k.} \end{align} Observe that using the obtained power allocation policy in (\ref{q57}), all $K$ transmitted coordinates for device $m$ have the same scaling factor $\zeta_m$. Next, we will ensure zero bias by choosing the right \(\boldsymbol{\alpha}\) for gradient estimation at the receiver, which can be obtained by solving the following optimization problem, since all transmitted gradient signals are superimposed via the over-the-air transmission: \begin{align} \mbox{\bf Receiver side:} \ \underset{\{\alpha_k\}}{\min}~ &\sum_{k=1}^K {\nu}_k^2(\alpha_k, \{b_{km}^*\})\label{q58}\\ \text{s.t.}~ e_k(\alpha_k, \{b_{km}^*\}) = 0, &~~~\alpha_k\geq 0, \forall k=1,\dots,K, \label{q59} \end{align} where \(e_k\) and \( \nu_k^2 \) denote the bias and variance components, given as follows: \begin{align} e_k(\alpha_k, \{b_{km}^*\}) &= \alpha_k\left( \sum_{m=1}^M \zeta_m^* x_{km} \right) - \frac{1}{M}\sum_{m=1}^M x_{km},\nonumber\\ \nu_k^2(\alpha_k, \{b_{km}^*\}) &= \alpha_k^2\sigma^2, \end{align} for all \(k = 1,\dots,K\). For given $\{\zeta_m^*\}$, it is easy to see that \begin{align} \Aboxed{ \alpha_k^* = \frac{\frac{1}{M}\sum_{m=1}^M x_{km}}{\sum_{m=1}^M \zeta_m^*x_{km}} \simeq \frac{1}{\sum_{m=1}^M \zeta_m^*}, \ \ \forall k.} \end{align} We note that, from an implementation point of view, since $\{x_{km} \}$ is not available at the receiver, it is sensible to set $ \alpha_k^\dagger \simeq \frac{1}{\sum_{m=1}^M \zeta_m^*}$. Further, \( \{\zeta_m^*\} \) is not readily available at the receiver either. Nevertheless, since there is only one parameter $\zeta_m^*$ from each sender $m$, the sum \(\sum_{m=1}^M \zeta_m^*\) can be sent over a control channel to the receiver to compute \(\alpha_k^\dagger\). It is worth noting that in general the bias exists even if $E_m$ is the same for all senders. Next, we take a closer look at the case when the number of subchannels \(K\) is large (which is often the case in practice). Suppose that \(\{x_{km}\}\) are i.i.d.~across subchannels and users, and so are \(\{ h_{km} \}\). We can then simplify \(\zeta_m^*\) further. For ease of exposition, we denote \(\mathbb{E}[x_{km}^2] = \varphi^2+\bar{x}^2\) and \(\mathbb{E}\left[ \frac{1}{h_{km}^2} \right] = \varpi^2\).
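For concreteness, the Scheme 2 computations in (\ref{q57}) and the receiver weights above admit a direct implementation; the following minimal Python sketch is illustrative (the array shapes, variable names, and the assumption that $\sum_m \zeta_m^*$ is delivered over a control channel follow the discussion above):
\begin{verbatim}
import numpy as np

def scheme2(x, h, E):
    """Scheme 2: per-device scaling (q57) and receiver weights.

    x, h: (K, M) gradient coordinates and channel gains; E: (M,)
    power budgets. Only column m of x, h is needed at device m.
    """
    # device side, eq. (q57): zeta_m* and b_km* = zeta_m* / h_km
    zeta = np.sqrt(E / np.sum(x**2 / h**2, axis=0))  # (M,)
    b = zeta[None, :] / h                            # (K, M)
    # receiver side: alpha_k ~ 1 / sum_m zeta_m*, with the scalar
    # sum assumed to be shared over a control channel
    alpha = np.full(x.shape[0], 1.0 / np.sum(zeta))
    return zeta, b, alpha
\end{verbatim}
One can verify that the power constraint is met with equality, $\sum_{k}(b_{km}^* x_{km})^2 = E_m$, and that all coordinates of device $m$ arrive with the common scaling factor $\zeta_m^*$.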
When \(K\) is large, for every user \(m\) we have that: \begin{align} &\zeta_m^* = \frac{\sqrt{E_m}}{\sqrt{\sum_{k=1}^K \frac{x_{km}^2}{h_{km}^2}}} \underset{\substack{\text{when $K$}\\\text{is large}}}{\Longrightarrow} \zeta_m^* \approx \frac{\sqrt{E_m}}{\sqrt{K (\varphi^2+\bar{x}^2) \varpi^2}} \end{align} As a result, the bias and variance for each dimension $k$ can be written as \begin{align} e_k(\alpha_k^*, \{b_{km}^*\}) &= \hspace{-0.03in}\sum_{m=1}^M\hspace{-0.03in} \left[\frac{ \sqrt{E_m}}{\sum_{m=1}^M \sqrt{E_m}} \hspace{-0.03in}-\hspace{-0.03in} \frac{1}{M}\right] x_{km}, \forall k. \label{eq:ek=0}\\ {\nu}_k^2 & = \hspace{-0.03in}\frac{K\varpi^2(\varphi^2+\bar{x}^2)}{\left(\sum_{m=1}^M\sqrt{E_m}\right)^2}\sigma^2, \forall k. \label{eq82} \end{align} {\em Observe that when $E_m$ is the same across the senders, the bias term satisfies \( \mathbb{E}_t [\epsilon_t] = \mathbf{0}\) in the above setting, according to \eqref{eq:ek=0}. } \subsection{A User-centric Approach Using Single-User Solution (Scheme 3)} In this subsection, we consider a suboptimal user-centric approach, which provides insight into the power allocation across the subcarriers from a single-device perspective. We formulate the single-device (say user $m$) problem as \begin{equation*} \begin{aligned} \textbf{P2:\ }&\min_{\{b_{km}\}, \{\alpha_k\}}~ \sum_{k=1}^K \bigg[ \big( \alpha_k b_{km} h_{km} - 1 \big) x_{km}\bigg]^2 + \sigma^2 \sum_{k=1}^K \alpha^2_k \\ &\textrm{s.t.}~ \sum_{k=1}^{K} \absq{ b_{km} x_{km} } \leq E_m; \ b_{km}\geq 0, \ \alpha_k \geq 0, \forall k. \end{aligned} \end{equation*} \vspace{-2ex} \begin{theorem} \label{thm:beta} The optimal solution $\{b_{km}^*, \alpha_k^*\} $ to (\textbf{P2}) is given by \begin{equation} \label{eqn:singleuser} (b^*_{km})^2 = \bigg[ \sqrt{\frac{ \sigma^2} {\lambda_m x_{km}^2 h_{km}^2} } - \frac{\sigma^2}{h_{km}^2 x_{km}^2} \bigg]^+, \forall k, \end{equation} \begin{align} \alpha^*_k=\frac{b^*_{km} h_{km} x^2_{km} }{ \sigma^2 + (b^*_{km})^2 h^2_{km} x^2_{km}}, ~~~~\forall k, \end{align} where $\lambda_m$ is a key parameter determining the water-filling level: \begin{equation} \sum_{k=1}^K \bigg[ \sqrt{\frac{1}{\lambda_m}} \sqrt{\frac{x_{km}^2 \sigma^2} { h_{km}^2} } - \frac{\sigma^2}{h_{km}^2 } \bigg]^+ = E_m. \end{equation} \end{theorem} The proof of Theorem~\ref{thm:beta} is omitted due to space limitations. Observe that Theorem~\ref{thm:beta} reveals that the larger the gradient value (and the smaller the channel gain) on a subcarrier, the more power should be allocated to it in general, and that $\{x_{km}/h_{km}\}$ can be used to compute the water level for applying the water-filling policy. Based on the above result, in the multi-user setting, each device can adopt the single-user power allocation solution given in Theorem \ref{thm:beta}. This solution can be applied individually, without requiring any coordination between devices. Next, we take a closer look at the case when the number of subchannels \(K\) is large. Let $\bar{E}_m$ denote the average power constraint per subcarrier. When \(K\) is large, after some algebra, the optimization problem \textbf{P2} can be further approximated as follows: \begin{align} \textbf{P3: } \underset{{b}_{km}}{\min}& ~\mathbb{E} \left[ \frac{{{x}^2_{km}} \sigma^2} {{{b}^2_{km}} {{h}^2_{km}} {{x}^2_{km}} +\sigma^2} \right] \nonumber\\ \text{s.t. }& ~ \mathbb{E}\left[ {b}^2_{km} {{x}^2_{km}}\right]\leq \bar{E}_m,~ {b}_{km}\geq 0, \end{align} where the expectation is taken with respect to $\{{h}_{km}\}$ and $\{{x}_{km}\}$.
The solution for \(k=1,\dots,K\) is obtained as follows:\vspace{-0.01in} \begin{align} & b_{km}^* \hspace{-0.04in}=\hspace{-0.04in} \sqrt{\hspace{-0.04in}\left[\hspace{-0.03in} \frac{\sigma \lvert x_{km}\rvert^{-1}}{h_{km}\sqrt{\lambda_m}}\hspace{-0.03in}-\hspace{-0.03in} \frac{\sigma^2}{x_{km}^2h_{km}^2} \hspace{-0.03in}\right]^+}\\ & \lambda_m \hspace{-0.03in}<\hspace{-0.03in}\frac{h_{km}^2x_{km}^2}{\sigma^2}\hspace{-0.03in}\Rightarrow\hspace{-0.03in} b^*_{km}\hspace{-0.03in}>\hspace{-0.03in}0\label{q82} \end{align} We can compute the bias and the variance accordingly. \section{Coordinate Selection for Bandlimited Coordinate Descent Algorithms} \label{sec:controlphase} The selection of which coordinates to operate on is crucial to the performance of sparsified SGD algorithms. It is not hard to see that selecting the top-$k$ (in absolute value) coordinates of the sum of the gradients provides the best performance. However, in practice it may not always be feasible to obtain the top-$k$ coordinates of the sum of the gradients, and in fact there are different solutions for selecting $k$ dimensions with large absolute values; see, e.g., \cite{Amiri2019a, Ivkin2019}. Note that each device individually transmitting the top-$k$ coordinates of its local gradient is not applicable to the scenario of over-the-air communications considered here. Sequential device-to-device transmissions provide an alternative approach \cite{shi2019distributed}, but such techniques are likely to require more bandwidth over wireless connections. Another approach is to use compression and/or sketching for the gradients to be transmitted. For instance, in \cite{Amiri2019a}, a system that updates SGD by decompressing the compressed gradients transmitted through over-the-air communication is examined. To the best of our knowledge, such techniques do not come with rigorous convergence guarantees. A similar approach is taken in \cite{Ivkin2019}, where the sketched gradients are transmitted through an error-free medium and are then used to obtain the top-$k$ coordinates; the devices next simply transmit the selected coordinates. Such an approach can be taken with over-the-air computing, since only the summation of the sketched gradients is needed; however, it requires the transmission of $\mathcal{O}(k\log d)$ dimensions. To provide guarantees with such an approach, $\mathcal{O}(k\log d + k)$ uplink transmissions are needed. Alternatively, uniformly selected $\mathcal{O}(k\log d + k)$ coordinates can be transmitted with similar bandwidth and energy requirements. For practical learning models with non-sparse updates, uniform coordinate selection tends to perform better. Moreover, the common $K$ dimensions can be selected uniformly via synchronized pseudo-random number generators without any information transfer. To summarize, uniform selection of the coordinates is more attractive based on energy, bandwidth, and implementation considerations compared to the methods aiming to recover the top-$k$ coordinates; indeed, this is the approach we adopt. \section{Experimental Results} In this section, we evaluate the accuracy and convergence performance of the BLCD algorithm, when using one of the following three schemes for power allocation and learning rate selection (aiming to minimize the impact of the communication error): 1) the bi-convex programming based solution (Scheme 1); 2) the distributed solution towards zero bias in Section~\ref{sec:update} (Scheme 2); and 3) the single-user solution (Scheme 3).
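For concreteness, Scheme 3 can be implemented locally at each device by a bisection on the water level $\lambda_m$ of Theorem~\ref{thm:beta}; the following Python sketch is illustrative only (the function name, bracketing interval, and iteration count are our own assumptions, not the authors' code):
\begin{verbatim}
import numpy as np

def scheme3_waterfilling(x, h, sigma2, E_m, iters=100):
    """Water-filling of Theorem 2 for one device via bisection.

    x, h: (K,) nonzero gradient coordinates and channel gains;
    sigma2: noise variance; E_m: power budget of device m.
    """
    def spent(lam):  # energy sum_k b_km^2 x_km^2 at water level lam
        return np.sum(np.maximum(
            np.sqrt(x**2 * sigma2 / (lam * h**2)) - sigma2 / h**2, 0.0))

    lo, hi = 1e-12, 1e12  # spent() is decreasing in lam
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if spent(mid) > E_m else (lo, mid)
    lam = np.sqrt(lo * hi)
    b2 = np.maximum(np.sqrt(sigma2 / (lam * x**2 * h**2))
                    - sigma2 / (x**2 * h**2), 0.0)
    b = np.sqrt(b2)
    alpha = b * h * x**2 / (sigma2 + b2 * h**2 * x**2)
    return b, alpha
\end{verbatim}
Consistent with the user-centric design, this computation uses only the local $x_{km}$'s and $h_{km}$'s of device $m$ and requires no coordination with other devices.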
We use the communication-error-free scheme as the baseline to evaluate the performance degradation. We also consider a naive scheme (Scheme 4) using equal power allocation for all dimensions, i.e., $b_{km}=\sqrt{E/ {\sum_{k=1}^K x^2_{km}}}$. In our first experiment, we consider a simple single layer neural network trained on the MNIST dataset. The network {consists of two 2-D convolutional layers with filter size \(5\times 5\) followed by a single fully connected layer and it} has 7840 parameters. $K=64$ dimensions are uniformly selected as the support of the sparse gradient transmissions. For convenience, we define $E_{avg}$ as the average sum of the energy (of all devices) per dimension normalized by the channel noise variance, i.e., $E_{avg}= E M \E[h_{km}^2]/ (K \sigma^2).$ Without loss of generality, we take the variance of the channel noise as $\sigma^2=1$; $\{h_{km}\}$ are independent and identically distributed Rayleigh random variables with mean $1$. Changes in $E_{avg}$ simply amount to different SNR values. In Fig. \ref{fig:comparison1}, we take $K=64$, $M=8$, a batch size of $4$ to calculate each gradient, and a learning rate $\gamma=0.01$. In the second experiment, we expand the model to {a more sophisticated \(5\)-layer neural network and an \(18\)-layer ResNet \cite{resnetpaper} with \(61706\) and \(11175370\) parameters, respectively}. {The \(5\)-layer network consists of two 2-D convolutional layers with filter size \(5\times 5\) followed by three fully connected layers. In all experiments, we have used a learning rate of \(0.01\). The local dataset of each worker is randomly selected from the entire MNIST dataset.} We use \(10\) workers with varying batch sizes, and we utilize \(K=1024\) sub-channels for sparse gradient signal transmission. It can be seen from Fig.~\ref{fig:comparison1} that, in the presence of the communication error, the centralized solution (Scheme 1) based on bi-convex programming converges quickly and performs the best, and it can achieve accuracy close to the ideal error-free scenario. Further, the distributed solution (Scheme 2) can eventually approach the performance of Scheme 1, but the single-user solution (Scheme 3) performs poorly, as does the naive scheme using equal power allocation (Scheme 4). Clearly, there exists a significant gap between its resulting accuracy and that in the error-free case, because the bias in Scheme 3 is more significant. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{figs/Figure_2.pdf} \vspace{-4ex} \caption{Testing accuracy over training iterations for $\alpha_k=1/8$, $E_{avg}=0.1$ and a batch size of \(4\). Training model consists of a single layer neural network with 7840 differentiable parameters.} \label{fig:comparison1} \vspace{2ex} \includegraphics[width=\columnwidth]{figs/Figure_1.pdf} \vspace{-4ex} \caption{Testing accuracy over training iterations for $10$ workers and a batch size of \(256\). Training model consists of a \(5\)-layer deep neural network with 61706 differentiable parameters.}\label{fig:comparison2} \vspace{-0ex} \end{center} \end{figure} Next, Figures~\ref{fig:comparison2}, \ref{fig:comparison3} {and \ref{fig:resnet}} depict the results of the second experiment using much larger-scale deep neural networks. It can be observed from Figs.~\ref{fig:comparison2}, \ref{fig:comparison3} {and \ref{fig:resnet}} that the SNR can have a significant impact on the final accuracy.
As expected, {the convergence on the ResNet network is slower in comparison to the other DNNs due to the huge number of parameters and the small batch size. Nevertheless, it is clear that the learning accuracy improves significantly at high SNR. (The solution of the distributed algorithm for \(E_{avg}=10\) is omitted in Fig. \ref{fig:resnet}, since it is indistinguishably close to the error-free solution.}) It is interesting to observe that when the SNR increases, the distributed solution (Scheme 2) can achieve accuracy close to the ideal error-free case, but the single-user solution (Scheme 3) does not. It is worth noting that, due to the computational complexity of bi-convex programming in this large-scale case, Scheme 1 could not be solved effectively (and hence is not presented here). Further, the batch size at each worker can impact the convergence rate, but does not impact the final accuracy. \begin{figure}[!t] \begin{center} \includegraphics[width=\columnwidth]{figs/Figure_3.pdf} \vspace{-4ex} \caption{Testing accuracy over training iterations for $10$ workers and a batch size of \(4\). Training model consists of a \(5\)-layer deep neural network with 61706 differentiable parameters.} \label{fig:comparison3} \vspace{2ex} \includegraphics[width=\columnwidth]{figs/resnet1.pdf} \vspace{-4ex} \caption{{Testing accuracy over training iterations for $10$ devices and a batch size of \(4\). Training model consists of an \(18\)-layer ResNet network with more than 11 million differentiable parameters.}}\label{fig:resnet} \end{center}\vspace{0ex} \end{figure} \section{Conclusions} In this paper, we consider a many-to-one wireless architecture for distributed learning at the network edge, where multiple edge devices collaboratively train a machine learning model, using local data, through a wireless channel. Given the unreliable nature of wireless connectivity, we design an integrated communication and learning scheme, where the local updates at edge devices are carefully crafted and compressed to match the wireless communication resources available. Specifically, we propose SGD-based bandlimited coordinate descent algorithms employing over-the-air computing, in which a subset of $k$ coordinates of the gradient updates across edge devices are selected by the receiver in each iteration and then transmitted simultaneously over $k$ sub-carriers. We analyze the convergence of the proposed algorithms and characterize the effect of the communication error. Further, we study joint optimization of power allocation and learning rates therein to maximize the convergence rate. Our findings reveal that optimal power allocation across different sub-carriers should take into account both the gradient values and the channel conditions. We then develop sub-optimal solutions amenable to implementation and verify our findings through numerical experiments. \section*{Acknowledgements} The authors thank Gautam Dasarathy for stimulating discussions in the early stage of this work. This work is supported in part by NSF Grants CNS-2003081, CNS-2003111, CPS-1739344 and ONR YIP N00014-19-1-2217. \normalem \bibliographystyle{IEEEtran}
\section{Introduction} The discourse on wave-particle duality has attracted attention since the early days of quantum mechanics. It is believed to lie at the heart of quantum mechanics \cite{feynman}. It was understood from the beginning that a quantum object exhibits both wave and particle natures. Objects showing both wave and particle natures are often called quantons \cite{bunge}. It was Bohr who first pointed out that the two properties are mutually exclusive, and formulated this as the principle of complementarity \cite{bohr}. Wootters and Zurek \cite{wootters} revisited Bohr's complementarity principle from an information-theoretic approach, looking at two-slit interference in the presence of a path detector, and found that simultaneous observation of both natures is possible, with the proviso that the more one nature is observed, the more the other is obscured. Later, Greenberger and Yasin \cite{greenberger} formulated a quantitative bound in terms of the predictability and fringe visibility. The \emph{predictability} was defined as \emph{a priori} information, i.e., it quantifies the difference between the probabilities of the quanton taking the different paths. Englert \cite{englert} proposed a stronger path quantifier which was based on \emph{a posteriori} path information acquired using a path detector, and derived a bound on the path \emph{distinguishability} and fringe visibility, ${\mathcal D}^2 + {\mathcal V}^2 \le 1$. This relation, generally called the wave-particle duality relation, is understood to be a quantitative statement of Bohr's principle. Of late, the concept of wave-particle duality has been generalized to multipath interference \cite{3slit,cd15,nduality,bagan,roy}. \begin{figure} \centering \includegraphics[width=7.0 cm]{qcsetup.pdf} \caption{Schematic diagram to illustrate a typical interference experiment with a quantum which-path device BS2. The beam-splitter BS2 is in a superposition of being present in the path of the photon and being away from it.} \label{qcsetup} \end{figure} In a Mach-Zehnder interferometer, it is understood that in the balanced mode, only one of the detectors registers all the photons, and no photons arrive at the other detector due to destructive interference. In this situation, it is logical to believe that the photon follows both paths, which later interfere. If the second beam-splitter is removed, photons from one path can only reach a particular detector. So it is logical to assume that each photon detected by any detector came from only one path and not both. Thus, the presence of the second beam-splitter makes the photons behave as a wave, following both paths, and in its absence they behave like particles, following only one path at a time. Wheeler introduced the idea that if the choice of removing or retaining the beam-splitter is made after the photon has traversed most of its path, one can affect the past of the particle, in the sense of making sure, even after a delay, that the photons behave like a wave or like a particle \cite{wheeler}. This ``delayed choice'' idea has been a subject of debate for a long time. Some years back, a proposal was made by Ionicioiu and Terno \cite{terno} suggesting that the second beam-splitter could be a quantum beam-splitter (QBS), such that it is in a quantum superposition of being present and absent (see Fig. \ref{qcsetup}). The idea was that this would force the photon to be in a superposition of wave and particle natures.
This ``quantum delayed choice" experiment, with a quantum beam-splitter immediately became a subject of attention, and many experimental and theoretical studies were carried out \cite{celeri,tang,peruzzo,kaiser,tangexpt,zheng}. Apart from the obvious relevance of this new class of experiments to Wheeler's delayed choice idea, there have been speculations that the superposition of wave and particle natures might violate complementarity. In particular, some claims of exceeding the bound set by the two-path duality relation of the kind $\mathcal{D}^2+\mathcal{V}^2 \leq 1$ have been made \cite{tang}. In this paper, we investigate the issue of wave particle duality in the more general scenario of $n$-path interference, where the path detector is in a quantum superposition of being present and absent. \section{Wave-particle duality in multipath interference} \label{Preliminaries} \subsection{Duality relation for pure quanton and quantum path detector} Consider an $n$-path interference experiment (see Fig. \ref{nslit}) with pure initial quanton state \begin{equation} |\psi_{in}\rangle=\sum_{i=1}^n\sqrt{p_i}\, {|\psi_i\rangle}, \end{equation} where ${p_i}$ is the probability of acquiring the $i$th path and $|\psi_i\rangle$ forms an orthonormal basis. We use a quantum path detector (QPD) to detect the path acquired by a quanton. There are two degrees of freedom associated with it. One is its location, which is assumed to have two states, $|Y\rangle$ corresponding to it being \emph{present} in the paths of the quantum and $|N\rangle$ corresponding to be being \emph{absent} from the path. The other degree of freedom is its internal state denoted by $|d_i\rangle$, which corresponds to it detecting the path of the quanton. Initially, the QPD is assumed to be in the state $|d_0\rangle$, and if the quanton goes through the $k$th path, the QPD state changes to $|d_k\rangle$. So the full initial detector state is given by \begin{equation} |\phi_0\rangle = |d_0\rangle \left(c_1\, |Y\rangle + c_2\, |N\rangle \right) , \label{phi0} \end{equation} where ${c_1}$ is the amplitude of QPD presence and $c_2$ the amplitude of its absence; $c_1^2+c_2^2 =1$. The state represents the QPD being in a superposition of the two locations. Initially, the joint state of quanton and QPD is given by \begin{eqnarray}\label{rhin} |\Psi_{in}\rangle = |\psi_{in}\rangle|\phi_0\rangle = \sum_{i=1}^n\sqrt{p_i}\, {|\psi_i\rangle} |d_0\rangle \left(c_1\, |Y\rangle + c_2\, |N\rangle \right),~~~~~ \end{eqnarray} which denotes a pure state of the quanton with amplitude $\sqrt{p_k}$ to go through the $k$th path, being in the state $|\psi_k\rangle$, and the QPD in a superposition of being present and absent. The interaction can be represented by a controlled unitary operation, ${U}$. The combined state of quanton and QPD, after the quanton has traversed the paths and interacted with the QPD, can be written as \begin{eqnarray}\label{rhstate} |\Psi\rangle &=& c_1\big[\sum_{i=1}^n\sqrt{p_i}\,|\psi_i\rangle|d_i\rangle\big] |Y\rangle + c_2\big[\sum_{i=1}^n\sqrt{p_i}\,|\psi_i\rangle\big] |d_0\rangle|N\rangle .\nonumber\\ \end{eqnarray} The first term in the above equation represents the quanton states entangled with the internal states of the QPD, when the QPD is present in the path of the quanton, i.e., it is in the state $|Y\rangle$. Here path information of the quanton is encoded in the $|d_i\rangle$ states of the QPD, and the quanton behaves as a particle. 
The second term represents the pure state of the quanton in a superposition of $n$ paths, acting like a wave, with the QPD away from its path, in the state $|N\rangle$. The state (\ref{rhstate}) can be written as $c_1|\text{particle}\rangle|Y\rangle + c_2|\text{wave}\rangle|N\rangle$, and represents a superposition of particle nature and wave nature, with amplitudes $c_1$ and $c_2$, respectively. It is the most natural generalization of the wave and particle superposition states studied earlier (without a QPD) \cite{celeri,tang,peruzzo,kaiser,tangexpt,zheng} to the case where a real QPD is present. A similar state has also been used in a very recent work using a QPD \cite{wang}. It may be convenient to use the density operator formalism if one wants to generalize the analysis to mixed states. The density operator for the state (\ref{rhstate}) is given by \begin{equation}\label{rh} \rho_{\text{QD}}=\sum_{i,j=1}^n\sqrt{p_i p_j}\, |\psi_i\rangle \langle \psi_j|\otimes U_i |\phi_0\rangle \langle \phi_0| U_j^\dag , \end{equation} where $U_i |\phi_0\rangle= c_1\, |d_i\rangle|Y\rangle + c_2\, |d_0\rangle|N\rangle $. The above interaction creates entanglement between the quanton and the path detector. Thus, for gaining knowledge of the path of the quanton, it is sufficient to perform a measurement on the states $|d_i\rangle$ of the QPD. Here we will use the unambiguous quantum state discrimination (UQSD) method for gaining the path information \cite{3slit,cd15}. For wave information we will use the $l_1$ norm measure of coherence \cite{cd15,coherence,tqcoherence}. Let us now look at the path distinguishability and the measure of coherence. \begin{figure} \centering \includegraphics[width=7.0 cm]{nslitv2.pdf} \caption{Schematic drawing of an $n$-path interference experiment with a quantum which-path detector. The path-detector is in a superposition of being present and absent in the path of the photon.} \label{nslit} \end{figure} \textit{{Path distinguishability}}: Based on UQSD, the path distinguishability for $n$-path interference \cite{3slit,cd15} is given by \begin{eqnarray} \mathcal{D}_Q &:=& 1 - {1\over n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, |\langle \phi_0| U_j^\dag U_i |\phi_0\rangle| \nonumber\\&=& 1-{1\over n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, \left( c_1^2\, |\langle{d_j} |{d_i}\rangle|+c_2^2 \right). \label{DQ} \end{eqnarray} It is essentially the maximum probability with which the states $U_i|\phi_0\rangle$ can be \emph{unambiguously} distinguished from each other. \textit{{Quantum coherence}}: Quantum coherence \cite{coherence,cd15,tqcoherence} quantifies the wave nature of a quanton, and is given by \begin{equation} \label{coh} {\mathcal C}(\rho) := {1\over n-1}\sum_{i\neq j} \abs{\rho_{ij}} , \end{equation} where $n$ is the dimensionality of the Hilbert space. The reduced density matrix of the quanton can be obtained by tracing out all the states of the QPD: \begin{eqnarray} \label{rdm} \rho_Q &=& \sum_{i,j=1}^n \sqrt{p_i p_j}\, \mbox{Tr}\left( U_i |\phi_0\rangle \langle \phi_0| U_j^\dag \right) |\psi_i\rangle\langle\psi_j|. \end{eqnarray} The set $\{ |\psi_i\rangle\}$ forms a complete basis for the $n$-path setup. Thus, the coherence can be obtained using the reduced density matrix: \begin{eqnarray} \label{cohr} {\mathcal C} &=& {1\over n-1}\sum_{i\neq j} \abs{\langle\psi_i|\rho_Q|\psi_j\rangle} \nonumber\\ &=& {1\over n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, \abs{ \mbox{Tr}\left( U_i |\phi_0\rangle \langle \phi_0| U_j^\dag \right) }. \end{eqnarray} Using Eq.
(\ref{phi0}), we get the following form: \begin{eqnarray} \label{cohfn} {\mathcal C}={1\over n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, \left( c_1^2\, |\langle{d_j} |{d_i}\rangle|+c_2^2 \right). \end{eqnarray} Combining Eqs. (\ref{DQ}) and (\ref{cohfn}), we get \begin{equation} {\mathcal D}_Q + {\mathcal C} = 1. \label{duality} \end{equation} This is the tight wave-particle duality relation derived earlier for $n$-path interference \cite{cd15}. Thus, the relation continues to hold even in the case of a QPD. \textit{{Two-path experiment:}} For $n=2$ and $p_1=p_2=\tfrac{1}{2}$, the path distinguishability (\ref{DQ}) and coherence (\ref{cohfn}) become \begin{equation} {\mathcal D}_Q = c_1^2 \left(1- \abs{\langle{d_1} |{d_2}\rangle} \right) \end{equation} \begin{equation} {\mathcal C}= 1- c_1^2 + c_1^2 \abs{\langle{d_1} |{d_2}\rangle}. \end{equation} Our result reproduces the earlier result \cite{qtwist} for a two-path experiment in the presence of a QBS, while recognizing that for two paths, the coherence $\mathcal{C}$ is identical to the traditional visibility $\mathcal{V}$ \cite{tqcoherence}. It also satisfies Eq. (\ref{duality}) in the same way. \subsection{Superposition of wave and particle natures} The preceding analysis is for the behavior of the quanton irrespective of the \emph{location} state of the QPD. One might argue that one would get the same result if the QPD were not in the superposition state (\ref{phi0}), but in a mixed state of being present and absent. To really see the effect of the QPD being in a superposition, one should look at the behavior of the quanton conditioned on obtaining a superposition location state of the QPD. To this end, let us assume the QPD location is measured in a certain basis and collapses to \begin{equation}\label{bsp} |\phi_{\alpha}\rangle= \cos{\alpha}\, |Y\rangle+ \sin{\alpha}\, |N\rangle , \end{equation} which is the state of the location degree of freedom of the QPD alone. The interaction can again be represented by a controlled unitary operation, $U$. The combined state of quanton and QPD can be written as \begin{equation}\label{rh1} \rho_{\text{QD}}=\sum_{i,j=1}^n\sqrt{p_i p_j}\, |\psi_i\rangle \langle \psi_j|\otimes |d_i'\rangle \langle d_j'|, \end{equation} where $ |d_i'\rangle \equiv \langle \phi_{\alpha}| U_i |\phi_0\rangle = c_1 \cos{\alpha}\, |d_i\rangle + c_2 \sin{\alpha} \, |d_0\rangle$, with normalization condition $c_1^2 \cos^2{\alpha}+ c_2^2 \sin^2{\alpha}=1.$ The above interaction creates entanglement between the quanton and the path detector, with the location degree of freedom of the QPD now projected out. Following the earlier procedure, we will use the UQSD method for gaining the path information, and coherence for the wave information. Based on UQSD, the path distinguishability for $n$-path interference is given by \begin{eqnarray} {\mathcal D}_Q &=& 1-\tfrac{1}{n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, |\big( c_1^2 \cos^2{\alpha}\,\langle{d_j} |{d_i}\rangle+c_2^2 \sin^2{\alpha} \nonumber\\&& + \tfrac{c_1 c_2}{2} \sin{2 \alpha} \left( \langle d_j|d_0 \rangle+\langle d_0|d_i \rangle\right) \big)| . \label{DQ1} \end{eqnarray} The reduced density matrix of the quanton can be obtained by tracing out the detector states: \begin{eqnarray} \label{rdm1} \rho_Q &=& \sum_{i,j=1}^n \sqrt{p_i p_j}\, \mbox{Tr}\left(|d_i'\rangle|\langle d_j'| \right) |\psi_i\rangle\langle\psi_j|. \end{eqnarray} The set $\{ |\psi_i\rangle\}$ forms a complete incoherent basis for the $n$-path setup.
Thus, the coherence can be obtained using the reduced density matrix: \begin{eqnarray} \label{cohr1} {\mathcal C} &=& \tfrac{1}{n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, \abs{\langle d_j'|d_i'\rangle}. \end{eqnarray} Using Eq. (\ref{phi0}), we get the following form: \begin{eqnarray} \label{cohf1} {\mathcal C}&=&\tfrac{1}{n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, |\big( c_1^2 \cos^2{\alpha}\,\langle{d_j} |{d_i}\rangle+c_2^2 \sin^2{\alpha} \nonumber\\&& + \tfrac{c_1 c_2}{2} \sin{2 \alpha} \left( \langle d_j|d_0 \rangle+\langle d_0|d_i \rangle\right) \big)|. \end{eqnarray} Combining Eqs. (\ref{DQ1}) and (\ref{cohf1}), we get \begin{equation} {\mathcal D}_Q + {\mathcal C} = 1. \label{duality1} \end{equation} Thus, even when the quanton is forced to be in a superposition of wave and particle natures, the usual wave-particle duality relation continues to hold. This is at variance with earlier claims suggesting that wave-particle duality relations are violated in such a situation. \subsection{Perspectives} At this stage, it may be useful to analyze these results in light of various earlier works. It is widely believed that the superposition of wave and particle natures may lead to a violation of complementarity. However, most experiments that have been carried out do not involve a path-detecting device. Rather, the beam-splitter BS2 (see Fig. \ref{qcsetup}) is in a superposition of being present and absent. So, in the situation where BS2 is in a superposition, there is no way of knowing if a particular photon received at (say) D1 followed one path or both paths. In such a situation, one can only talk of the probability of taking one path or the other; the duality relation that is meaningful is the one derived by Greenberger and Yasin \cite{greenberger}. The duality relation pertaining to \emph{detecting} which path the quanton followed, derived by Englert \cite{englert}, is not applicable in such scenarios. The analysis carried out in the previous subsections shows that complementarity is always respected in the multipath interference experiment which has a path-detecting device in a superposition of being present and absent. Equation (\ref{DQ}) has a nice interpretation: the path-detecting states $|d_i\rangle$ are present with probability $c_1^2$ and absent with probability $c_2^2$. This leads to the exact duality relation (\ref{duality}). However, if one naively uses the same definition, which appears reasonable, for the case where the quanton is really forced to be in a superposition of wave and particle behaviors, one will run into a problem. With that reasoning, one would imagine that the path-detecting states $|d_i\rangle$ are present with probability $c_1^2\cos^2\alpha$ and absent with probability $c_2^2\sin^2\alpha$. The distinguishability will then come out to be ${\mathcal D}_Q = 1-\tfrac{1}{n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, |\big( c_1^2 \cos^2{\alpha}\,\langle{d_j} |{d_i}\rangle+c_2^2 \sin^2{\alpha})|$. But the coherence in this situation will be ${\mathcal C}=\tfrac{1}{n-1}\sum_{i\neq j} \sqrt{p_i p_j}\, |\big( c_1^2 \cos^2{\alpha}\,\langle{d_j} |{d_i}\rangle+c_2^2 \sin^2{\alpha} + \tfrac{c_1 c_2}{2} \sin{2 \alpha} \left( \langle d_j|d_0 \rangle+\langle d_0|d_i \rangle\right) \big)|$. It is easy to see that the sum $\mathcal{D}_Q+\mathcal{C}$ may exceed $1$ because of the term $\tfrac{c_1 c_2}{2} \sin{2 \alpha} (\langle d_j|d_0 \rangle+\langle d_0|d_i \rangle)$, which is a signature of interference between the wave and particle natures.
One may naively interpret this as a violation of complementarity. However, recognizing that the paths of the quanton are correlated with $ |d_i'\rangle \equiv \langle \phi_{\alpha}| U_i |\phi_0\rangle = c_1 \cos{\alpha}\, |d_i\rangle + c_2 \sin{\alpha} \, |d_0\rangle$, and not just with $|d_i\rangle$, one can see that the unambiguous discrimination of the $|d_i'\rangle$ is what yields the correct distinguishability (\ref{DQ1}). This distinguishability leads to the correct duality relation (\ref{duality1}). So we see that even in the scenario where there is interference between the wave and particle natures, quantum complementarity is fully respected, governed by the wave-particle duality relation (\ref{duality1}). In experiments where there is no real path-detector in place, one is all the more likely to come to an erroneous conclusion regarding the violation of complementarity. \subsection{Generalized duality relation} We extend our analysis to a noisy scenario. The mixed quanton state can be taken as $\rho_{in}=\sum_{ij} \rho_{ij}|\psi_i \rangle \langle \psi_j|$. The initial joint state of the quanton and the detector system can be written as $\rho'^{\rm(in)}_{\text{QD}}=\rho_{in}\otimes \rho^{(0)}_\phi$. The effect of noise on the QPD can be represented as \begin{equation} \label{rhon} \rho_\phi^{(0)} \longrightarrow \widetilde\rho_\phi^{(0)} =\sum_{i}K_{i}\rho_\phi^{(0)} K^{\dagger }_{i}, \end{equation} with completeness relation $\sum_{i}K^{\dagger }_{i}K_{i}=\mathcal{I}$. The spectral decomposition of the transformed QPD state can then be written as \begin{equation} \label{spectral} \widetilde\rho_\phi^{(0)} =\sum_{k} {r_{k}} |\phi_k \rangle \langle \phi_k|, \end{equation} where $\sum_{k} r_k = 1$, $r_k\ge0$, and $ \langle \phi_k|\phi_l\rangle =\delta_{kl}$. The combined quanton-QPD state, when the QPD location is projected onto the state (\ref{bsp}), can be written as \begin{equation} \label{rhoqd} \rho_{\text{QD}}'=\sum_{i,j=1}^n \rho_{ij} |\psi_i\rangle \langle \psi_j|\otimes \sum_{k} {r_{k}} |d'_{ki}\rangle \langle d'_{kj}|\, \end{equation} where $ |d_{ki}'\rangle \equiv \langle \phi_{\alpha}| U_i |\phi_k\rangle = c_1 \cos{\alpha}\, |d_{ki}\rangle +c_2 \sin{\alpha} |d_k\rangle$. The path distinguishability for the mixed QPD state (\ref{spectral}) can be calculated using \begin{equation} \mathcal{D}_Q^\prime =1-\frac{1}{n-1} \sum_k r_k \sum_{i\neq j} \sqrt{\rho_{ii}\rho_{jj}} |\langle d'_{kj} | d'_{ki} \rangle |. \label{mixD} \end{equation} To find the measure of coherence, let us first calculate the reduced density matrix of the quanton, given by \begin{eqnarray} \label{6} \rho_Q' &=& \sum_{i,j=1}^n \rho_{ij} \mbox{Tr}\left( \sum_k r_k |d'_{ki}\rangle \langle d'_{kj}| \right) |\psi_i\rangle\langle\psi_j|. \end{eqnarray} The coherence comes out to be \begin{eqnarray} \label{mixC} {\mathcal C}'&=&\tfrac{1}{n-1} \sum_{i\neq j} \left|\rho_{ij} \ \sum_k r_k \langle d'_{kj} | d'_{ki} \rangle \right| \nonumber \\ &\leqslant& \tfrac{1}{n-1} \sum_k r_k \sum_{i\neq j} |\rho_{ij}| |\langle d'_{kj} | d'_{ki} \rangle| . \end{eqnarray} Combining Eq. (\ref{mixD}) and Eq. (\ref{mixC}), we get \begin{equation} {\mathcal D}_Q'+{\mathcal C}' + \tfrac{1}{n-1} \sum_k r_k \sum_{i\neq j} (\sqrt{\rho_{ii}\rho_{jj}} - |\rho_{ij}|)|\langle d'_{kj}|d'_{ki}\rangle| = 1. \label{cdq} \end{equation} Every principal $2\times 2$ submatrix of (\ref{rhoqd}) is positive semi-definite \cite{horn}; thus we have \begin{equation} \sqrt{\rho_{ii}\rho_{jj}} - |\rho_{ij}| \ge 0. \label{eq:se} \end{equation} Therefore, we find that Eq.
(\ref{cdq}) reduces to \begin{equation} {\mathcal D}_Q'+{\mathcal C}' \le 1, \label{rel} \end{equation} where the inequality is saturated for pure initial quanton states. \section{Are experiments with a quantum device really unique?} Two-path interference experiments with a quantum device have attracted a lot of attention. But are these experiments really unique? In this section, we try to answer this question. Let us consider the setup shown in Fig. \ref{qcsetup}. Since it does not use a path-detector, the duality relations derived in the previous section are not directly applicable here. For simplicity, let us consider the QBS to be in an equal superposition state $|\phi\rangle = \tfrac{1}{\sqrt{2}} (|Y\rangle + |N\rangle)$, where $|Y\rangle$ represents the situation when BS2 is in the path and $|N\rangle$ when it is not. Let the quanton in the two paths also be in an equal superposition state $|\psi\rangle = \tfrac{1}{\sqrt{2}} (e^{i\theta}|\psi_1\rangle + |\psi_2\rangle)$, $\theta$ being an arbitrary phase difference between the two paths. The effect of BS2 is to map the path states $|\psi_1\rangle,|\psi_2\rangle$ to superpositions of $|D_1\rangle, |D_2\rangle$, the detector states of the two detectors $D_1$ and $D_2$, respectively. The transformation can be written as $U_Y|\psi_1\rangle = \tfrac{1}{\sqrt{2}} (|D_1\rangle + |D_2\rangle)$ and $U_Y|\psi_2\rangle = \tfrac{1}{\sqrt{2}} (|D_1\rangle - |D_2\rangle)$. If BS2 is absent, the transformation is as follows: $U_N|\psi_1\rangle = |D_2\rangle$ and $U_N|\psi_2\rangle = |D_1\rangle$. The action of the QBS can be represented by a unitary operator $U_{\text{QBS}} = U_Y\otimes|Y\rangle\langle Y| + U_N\otimes|N\rangle\langle N|$. Using this, the effect of the QBS on the quanton can be written as follows: \begin{eqnarray} U_{\text{QBS}}|\psi\rangle\otimes|\phi\rangle &=& \tfrac{1}{2}\Big[U_Y(e^{i\theta}|\psi_1\rangle+|\psi_2\rangle)|Y\rangle\nonumber\\ && + U_N(e^{i\theta}|\psi_1\rangle+|\psi_2\rangle)|N\rangle\Big] \nonumber\\ &=& \Big(\tfrac{|N\rangle}{2}+e^{\tfrac{i\theta}{2}}\cos\tfrac{\theta}{2}\tfrac{|Y\rangle}{\sqrt{2}}\Big)|D_1\rangle \nonumber\\ && + e^{\tfrac{i\theta}{2}}\Big(e^{\tfrac{i\theta}{2}}\tfrac{|N\rangle}{2}+i\sin\tfrac{\theta}{2}\tfrac{|Y\rangle}{\sqrt{2}}\Big)|D_2\rangle~~~~~~ \label{qbs} \end{eqnarray} The above relation implies that detectors $D_1$ and $D_2$ click with probabilities $\tfrac{1}{2}+\tfrac{1}{4}\cos\theta$ and $\tfrac{1}{2}-\tfrac{1}{4}\cos\theta$, respectively. Let us consider a setup similar to the one shown in Fig. \ref{qcsetup}, except that the second beam-splitter BS2 is not a quantum device but a classical \emph{biased beam-splitter} with reflection and transmission coefficients given by $|r|^2$ and $|t|^2$, respectively, such that $|r|^2+|t|^2=1$. The action of a biased beam-splitter can be described by the operator $U_{\text{BBS}}=(r|D_1\rangle + t|D_2\rangle)\langle\psi_1| + (t|D_1\rangle - r|D_2\rangle)\langle\psi_2|$. It transforms the incoming state $|\psi\rangle$ as \begin{eqnarray} U_{\text{BBS}}|\psi\rangle &=& \tfrac{1}{\sqrt{2}}\Big[(e^{i\theta}r+t)|D_1\rangle + (e^{i\theta}t-r)|D_2\rangle\Big] . \label{bbs} \end{eqnarray} One can verify that if $\theta=0$ and $r=t=\tfrac{1}{\sqrt{2}}$, the quanton will always land at the detector $D_1$. The state (\ref{bbs}) implies that detectors $D_1$ and $D_2$ click with probabilities $\tfrac{1}{2}+rt\cos\theta$ and $\tfrac{1}{2}-rt\cos\theta$, respectively.
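The two click statistics can be compared directly. The following minimal numerical sketch (illustrative Python, not taken from the cited works; the choice of $r$ is our own) evaluates the $D_1$ probabilities implied by (\ref{qbs}) and (\ref{bbs}):
\begin{verbatim}
import numpy as np

def p1_qbs(theta):
    # |D1| amplitude in (qbs): |1/2|^2 + |cos(theta/2)/sqrt(2)|^2
    return 0.25 + 0.5 * np.cos(theta / 2)**2   # = 1/2 + (cos theta)/4

def p1_bbs(theta, r):
    t = np.sqrt(1 - r**2)
    # |(exp(i theta) r + t)/sqrt(2)|^2 = 1/2 + r t cos(theta)
    return 0.5 * np.abs(np.exp(1j * theta) * r + t)**2

theta = np.linspace(0, 2 * np.pi, 9)
r = np.sqrt(0.5 - np.sqrt(3) / 4)  # a solution of r*sqrt(1-r^2) = 1/4
print(np.allclose(p1_qbs(theta), p1_bbs(theta, r)))  # True
\end{verbatim}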
For $rt=\tfrac{1}{4}$, one cannot experimentally distinguish between this situation and the previous one, described by (\ref{qbs}), involving a QBS. The original proposal claimed that one can correlate the detected quantons with the $|Y\rangle$ and $|N\rangle$ states, and get wave or particle natures \cite{terno}. But even in doing that, at any given time one sees either the wave nature or the particle nature. A similar effect can be achieved by randomly removing BS2 from the quanton path. Recognizing the fact that correlating with the $|Y\rangle$ and $|N\rangle$ states was like a statistical effect, some authors referred to it as a \emph{classical mixture} of wave and particle natures, and suggested that, to get a true superposition, the quanton be observed conditioned on detection of the state $|\phi_{\alpha}\rangle = \cos\alpha|Y\rangle + \sin\alpha|N\rangle$ \cite{tang,kaiser,zheng}. For the interesting case of $\alpha=\pi/4$, the (unnormalized) state of the quanton in that situation will be \begin{eqnarray} \langle\phi_{\alpha}|U_{\text{QBS}}|\psi\rangle &=& \tfrac{1}{2}\Big(\tfrac{1}{\sqrt{2}}+e^{\tfrac{i\theta}{2}}\cos\tfrac{\theta}{2}\Big)|D_1\rangle \nonumber\\ && + \tfrac{1}{2}e^{\tfrac{i\theta}{2}}\Big(e^{\tfrac{i\theta}{2}}\tfrac{1}{\sqrt{2}}+i\sin\tfrac{\theta}{2}\Big)|D_2\rangle . \label{qbs1} \end{eqnarray} This state is indeed different from (\ref{qbs}), and the two will yield different results. However, the state for a \emph{classical} biased beam-splitter, given by (\ref{bbs}), may be rewritten as \begin{eqnarray} U_{\text{BBS}}|\psi\rangle &=& \sqrt{2}r\Big(\tfrac{t-r}{2r}+e^{\tfrac{i\theta}{2}}\cos\tfrac{\theta}{2}\Big)|D_1\rangle \nonumber\\ && + \sqrt{2}re^{\tfrac{i\theta}{2}}\Big(e^{\tfrac{i\theta}{2}}\tfrac{t-r}{2r}+i\sin\tfrac{\theta}{2}\Big)|D_2\rangle . \label{bbs1} \end{eqnarray} For $\tfrac{t-r}{\sqrt{2}r}=1$, (\ref{bbs1}) is very similar in form to (\ref{qbs1}), and the probability of (say) $D_2$ clicking will show the same behavior with respect to the phase $\theta$. The message from the preceding analysis is that the quantum case of the QBS is different from the classical mixture case of the QBS, as has been experimentally observed earlier \cite{tangexpt}. However, both these situations can also be mimicked by an appropriately biased \emph{classical} beam-splitter. We feel it will be interesting to explore the implications of the connection between a QBS and a biased classical beam-splitter. What about a two-path interference experiment with a real two-state path-detecting device, which is in a superposition of being present and absent, one may ask. In the following, we will show that even this experiment is completely equivalent to a two-path interference experiment with a real two-state path-detecting device which is \emph{always present}, and is not in a superposition in the sense that is being discussed here. Let us consider a two-path interference experiment with a which-way detector whose two states that correlate with the two paths of the quanton are \emph{not} orthogonal to each other. The state of the quanton and path-detector may be written as \begin{equation} |\Psi\rangle = \tfrac{1}{\sqrt{2}} (|\psi_1\rangle|d_1\rangle + |\psi_2\rangle|d_2\rangle), \label{2state} \end{equation} where $\langle d_1|d_2\rangle \neq 0$.
Now it can be shown that the states $|d_1\rangle,|d_2\rangle$ can be represented in terms of an expanded Hilbert space as follows \cite{awpd,neha}: \begin{eqnarray} |d_1\rangle = \gamma|q_1\rangle + \beta|q_3\rangle,\hskip 5mm |d_2\rangle = \gamma|q_2\rangle + \beta|q_3\rangle , \label{d1d2} \end{eqnarray} where $|q_1\rangle, |q_2\rangle, |q_3\rangle$ are orthonormal states, and $\gamma,\beta$ are certain constants which we need not specify for the present purpose. In this basis, the entangled state (\ref{2state}) has the following form \begin{eqnarray} |\Psi\rangle = \tfrac{1}{\sqrt{2}} \gamma[|\psi_1\rangle|q_1\rangle + |\psi_2\rangle|q_2\rangle] + \tfrac{1}{\sqrt{2}}\beta[|\psi_1\rangle + |\psi_2\rangle] |q_3\rangle.\nonumber\\ \label{2staten} \end{eqnarray} This state can be interpreted as a representation of a superposition of wave and particle natures. The quanton state correlated with $|q_3\rangle$ represents a quanton showing wave nature, and the states correlated with $|q_1\rangle, |q_2\rangle$ represent particle nature. If one were to measure an observable $Q$ which has $|q_1\rangle, |q_2\rangle, |q_3\rangle$ as three eigenstates with distinct eigenvalues, the quantons detected in coincidence with $|q_3\rangle$ will show full interference, and those detected in coincidence with $|q_1\rangle, |q_2\rangle$ will show full particle nature. This state will show all the features that the state (\ref{rh}) can show, although it involves only a conventional path-detector and not a quantum device. Such a state can also be produced without expanding the Hilbert space, but by introducing a two-state ancilla system interacting with the path-detector \cite{qwpd}. From this analysis, we conclude that although a lot of research interest was generated by the interference experiments with a quantum device, the effects they show can also be seen in conventional interference experiments. \section{Conclusions} In conclusion, we have theoretically analyzed an $n$-path interference experiment where the path-detector is assumed to exist in a superposition of being present and absent from the interference path. We have shown that the $n$-path wave-particle duality relation derived earlier \cite{cd15} continues to hold even in this case. The duality relation remains tight even in the situation where one might expect interference between the wave and particle natures of the quanton. Thus the various interference experiments with a quantum device may be of interest for a variety of reasons, but they are completely within the realm of complementarity. We have also shown that the effects seen due to a path-detector in a quantum superposition can also be seen in interference experiments with a conventional which-way detector. The effects seen in the quantum delayed-choice experiment, i.e., without a real path-detector, but with a QBS, can also be seen in a conventional Mach-Zehnder setup with a biased beam-splitter. \begin{acknowledgements} M.A.S. acknowledges the National Key R$\&$D Program of China, Grant No. 2018YFA0306703. \end{acknowledgements}
\section{Introduction} \input{src1_intro} \section{System Design} \input{src3_method} \section{Study 1: Creative Adaptation with a Human-in-the-loop Analogical Search Engine} \input{src5_study1} \section{Study 2: Enabling a Fully Automated Analogical Search Engine} \input{src7_study2} \section{Case Studies with Researchers} \label{section:case studies} \input{src8_case_studies} \section{Design Implications} \label{section:design implications} \input{src9_design} \section{Discussion} \input{src10_discussion} \section{Conclusion} \input{src11_conclusions} \begin{acks} We thank our study participants for their valuable insights and feedback. This work was supported by Center for Knowledge Acceleration, National Science Foundation (FW-HTF-RL, grant no. 1928631; IIS, grant no. 1816242; SHF, grant no. 1814826), the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant no. 852686, SIAM) and NSF-BSF grant no. 2017741. This work is also based upon work supported by the Google Cloud Research Credits program with the award GCP19980904. \end{acks} \bibliographystyle{ACM-Reference-Format} \subsection{\new{Summary of contribution}} With the exponential growth of research output and the deepening specialization within different fields, encouraging analogical inspiration for scientific innovation that connects distant domains becomes ever more challenging. Our human-in-the-loop and fully automated analogical search engines represent an approach for supporting such analogical inspirations for challenging scientific problems. We have demonstrated in Study 1 that our human-in-the-loop system finds novel results that participants would be unlikely to encounter from keyword-based search, and that these results lead to high levels of creative adaptation. Through a mediation analysis we also showed that this success was driven by the analogical search engine's ability to find \textit{partial} purpose matches (e.g., matching at the high-level purpose but differing at {the} low-level {details}). We saw the nuanced effects of partial purpose alignment on the results' goodness as analogs for inspiration. Through qualitative observations, we described how certain attributes of analogical mapping were perceived as more salient by participants, and that mismatches on them can have either a positive (i.e., generative insights) or a negative (i.e., critical misalignment) impact on creative adaptation. In contrast, keyword-based search resulted in more \textit{full} purpose matches and a higher level of direct application. The value of keyword-based search and analogy-based search thus may complement each other: while keyword-based search can help researchers find `exactly that', analogy-based search can help researchers switch from a preservative mode (i.e., aiming to find results with maximal similarity to the query) to a generative mode (i.e., aiming to find analogs that are relevant despite the surface dissimilarity) of searching, and ultimately lead them to recognize unusual relations and come up with ways to creatively adapt existing ideas for novel domains. We also demonstrated how improving the sequence-to-sequence purpose and mechanism identification model can remove the need for a human in the loop while maintaining a similar level of accuracy on purpose-match as judged by humans. This improvement enabled us to develop a fully automated analogical search system to use as a probe to study searchers' more natural interaction with analogical results.
Through a series of evaluations we first show that our automated analogical search pipeline can emulate human judgment of purpose match and that it finds partial purpose matches in top-ranked results at a similar rate to the human-in-the-loop system {used} in Study 1. Then through case studies we find generalizable challenges that future analogical search engines may face, such as early rejection of alternative mechanism ideas and the difficulty of abstracting and representing purposes at the right level. From our studies we synthesize design implications for future analogical search engines, such as supporting purpose representations at different levels of abstraction, supporting the iterative process of steering away from critically misaligned analogs and towards a fertile land of generative misalignment, and providing explanations of why certain analogical search results may be relevant. We envision that future studies will shed light on deeper cognitive sources of the challenges identified here. A fruitful avenue of research may be studying how the dual processing theory~\cite{wason1974dual,kahneman2011thinking} underlies or interacts with analogical search interaction. Studying how simplification heuristics~\cite{mintzberg1976structure} may \new{negatively bias} interact{ion} with analogical results \new{and how such bias may be} reduced for expert user populations may also be an interesting future direction~\cite{carol2007slowing,lambe2016dual}. \subsection{\new{Limitations and future work}} \subsubsection{\new{Experimental design and improving its validity}} \label{subsubsection:exp_validity} Our findings have several limitations. First, the design of our studies may be improved to increase the experimental validity. We believe that our coders of the ideation outcomes had a reasonable understanding of participants’ research context from descriptions of current and past research topics, think-alouds with 45 papers, and end-of-experiment discussions, and that the procedure of coding reduced potential biases (e.g., the coders were blind to experimental conditions and relied on participants’ statements of novelty and distance). Despite this, it is possible that they judged ideas differently from domain experts, for example coding more or fewer ideas as creative adaptation, or pre-filtering useful ideas in the human-in-the-loop stage. In addition, other quality dimensions such as potential for impact or domain-expert-judged idea quality are largely inaccessible within the studies presented here. Future research may improve on these limitations by iterating on the experimental design, collecting data for triangulating the results and capturing \new{other} quality dimensions of the generated ideas. \new{Additionally, future work may add ablation studies to quantify the effects of human filtering in Study 1 on the ideation outcome, as well as sensitivity studies to assess how much the increased token-level classification performance of trained models may reduce the burden of human filtering.} \new{Furthermore, additional experiments with baselines other than keyword-based search using the whole abstract will help pinpoint the potential advantages of representing and matching papers using extracted purposes and mechanisms. For example, Chan et al.~\cite{chan2018solvent} found that embedding all words from an abstract (using GloVe embeddings) resulted in retrieval of fewer analogical items than when extracted purposes and mechanisms were used.
Replicating this result with additional approaches such as contextualized word embeddings and pre-trained language models (e.g., ELMo~\cite{elmo}, BERT~\cite{bert}, and SciBERT~\cite{scibert}) will be valuable.} \subsubsection{\new{Potential sampling bias}} \new{The sampling strategy in Study 1 was purposefully unbalanced: analogical papers were sampled twice as often as keyword papers to ensure participants' exposure to sufficiently diverse results. This was crucial for uncovering potential benefits and challenges of our analogical search engine and investigating its viability. This ratio was chosen to balance the statistical power for detecting potentially significant differences between the conditions, while also limiting the number of papers that each participant had to review. Given the cognitive burden of reviewing a paper while thinking aloud, we decided on 45 papers in total with the 2:1 ratio to fit the practical time limits of interviews. However, this may have led to unanticipated effects on ideation outcomes despite having accounted for the difference in sample sizes by measuring the outcomes in ratios. For example, when the results were combined into a single blinded list, the over-representation of analogical results over more purpose-aligned keyword results may have shifted the users' overall perceived value of the list to be more or less positive. Users' perception of diverse results may have been further affected by their relative over-representation. For example, increased cognitive load for processing analogical mapping~\cite{halford1992analogical,sweller1990cognitive,halford1998processing} may suggest that results that fully match on the purpose search query may have been perceived even more favorably than analogical results, due to a negative spillover effect from the rest of the papers in the list, which were less likely to match on the purpose. Investigating whether such factors led to compounding effects beyond our ratio-based measures of usefulness remains an open question for future work.} \subsubsection{\new{Controlling the diversity of search results}} \label{subsubsection:control_diversity} \new{Our work is also limited by the lack of controllability in sampling the search results beyond purpose similarity. As described in \S\ref{subsubsection:stage2-overview}, from pilot tests in our corpus we discovered that even close purpose matches of scientific papers already had high variance in terms of the mechanisms they proposed, which allowed us to base our sampling approach solely on purpose similarity. The simplicity of this approach also means fewer hyperparameters in the sampling mechanism compared to other approaches~\cite{hope_kdd17,hope2021scaling}. However, all approaches thus far, including this work, lack a mechanism for explicitly controlling the diversity of retrieved search results, which remains a fruitful avenue for future work. For example, prior research has uncovered the nuanced effects of distance (e.g., near vs. far sources of inspiration~\cite{chan2017semantically,siangliulue2015providing}), suggesting the benefit of targeting analogs at different distances from the source problem for the right context. Future research may also uncover further complexities in the relationship between novelty and purpose-match.
The result of our mediation analysis (Table~\ref{table:mediation}) showed that the novelty of content among the search results in Study 1 was not a significant factor to the same extent that the three levels of purpose match were. However, the relationship between novelty and purpose match may be more complex than the levels of manipulation presented in this work. For example,~\cite{diedrich2015creative} suggested a greater importance of novelty than usefulness for predicting creativity scores. Future work may design mechanisms to manipulate the variance in content novelty and alignment in the purpose-mechanism schema to uncover dynamics between the two that go beyond the results from mediation analyses presented here (\S\ref{section:purpose-match-mediation}). Furthermore,} challenges \new{with the abstraction of purposes remain open, for example} how core versus peripheral attributes of research purposes may be identified, and how they may be selectively matched at a specific level of the conceptual hierarchy. \new{Finally, not all query formulations are created equal in terms of their suitability for analogical search. We observed in the case studies that participants wanted to express different query intents via reformulation (\S\ref{subsubsection:case_studies_query_reformulation}). While participants could reformulate their search queries and examine the returned results from our analogical search engine in real-time, it was unclear whether and how specific query formulations may lead to more or less diverse results, and how subsequent queries may be updated after reviewing them. As such, systems that assist users in the potentially tedious process of query reformulation~\cite{white2010predicting} (for example, by way of automatic query expansion~\cite{carpineto2012survey}) in the context of analogical search will be important.} \subsubsection{\new{Studying the effect of the larger context of scientific innovation on analogical innovation}} Due to our focus on ideation outcomes, our results do not explain how these ideas may be integrated, developed, and shared across research communities. Studying the lifetime of ideas beyond their inception will deepen our understanding of the factors that currently make analogical innovation such a rare event in sciences (for example, Hofstra et al. suggested that more semantically distant conceptual combinations receive far less uptake~\cite{diversity_innovation_paradox}). Through interviewing our study participants and other colleagues in academia we found recurring themes related to this challenge. Our interviews inform{ed} us that in general the context in which a scientist exists -- such as the scientist's role in a project, the maturity of a project, and the broader academic culture -- can ultimately change how they interact with and seek analogical inspirations. For example a third-year PhD student studying chemical engineering commented ``In the current stage of my project it's more about parameter-tuning -- running multiple experiments at once and comparing which configuration works the best... If I were a first year PhD student maybe I would be in a broader field and exploration.'' In contrast, a PhD in biology who recently defended noted that ``analogical inspirations would perhaps be more useful if you're looking for a postdoc or a faculty position.'' In addition, the underlying career incentive structures in academia may also affect researchers' perception of and openness to analogical inspirations.
One of the study participants commented ``since I'm already a third year PhD student and my project is further along and more firmed up, I'm not really looking for really far inspirations... first we push the specific way we have in mind with many iterations on the experiments until, say, publication.'' In addition to the career-wise incentives there are other types of competitive resourcefulness (e.g., social resources such as the advisors' and colleagues' expertise that participants can easily tap into; physical and other forms of resources such as tangible artifacts like previously developed code packages or experimental processes and setups). These factors can influence scientists' perception of their advantage and lead them to interpret analogical inspirations as more or less useful, feasible, and directly applicable to their research. This observation is further supported by survey results: when asked ``\textit{Could this paper be useful to you?}'', participants rated keyword papers significantly higher than analogy papers, despite having come up with creative adaptation ideas more often with analogy papers. \new{Therefore,} future work \new{that studies incentive structures, the quality and feasibility of ideation outcomes, the differences in research context (e.g., frames of research contribution such as discovery-oriented vs. novel system development-oriented), and the longitudinal variation in such factors will} add \new{significant} depth to our understanding. \subsection{Stage One. Training Seq2Seq models for identifying purpose and mechanism tokens} \subsubsection{Overview of Modeling} In the first stage of the system, purpose and mechanism tokens are identified from paper abstracts (fig.~\ref{fig:system_design}, \cirnum{1}). Research paper abstracts often include descriptions of the most important purpose or \textit{the core problem addressed in a paper} and the proposed mechanism or \textit{the approach taken to address the problem}, making them good candidates for identification and extraction of tokens corresponding to them. For example, for a similar problem of facilitating heat transfer, Paper A may propose an approach that modifies the structure of the material used at the interface between crystalline silicon (semiconductor material) and the substrate, while Paper B may propose a more distant mechanism (due to the mismatch on scale) of fin-based heat sinks commonly used for electronic devices. The goal of this first stage is to automatically identify and extract tokens that correspond to the similar purpose (e.g., `facilitate heat transfer') as well as the mechanisms (e.g., `modifying the structure of the material used at the interface between crystalline silicon' vs. `fin-based heat sinks') from abstracts A and B. One relevant automated approach for identifying purposes and mechanisms from scientific abstracts is DISA~\cite{disa}, which formulates the task as supervised sentence classification. However, we found that many key sentences in abstracts include both purpose and mechanism, breaking the assumptions of a sentence-level classifier (e.g., ``In this paper, [\textit{a wavelet transforms based method}] for [\textit{filtering noise from images}] is presented.'').
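To make the token-level framing concrete, the following minimal PyTorch sketch (our illustration; the toy inputs, dimensions, and names are hypothetical, and the model is a simplified stand-in for the ones described next) tags each token of the example above rather than assigning the whole sentence a single label.
\begin{verbatim}
import torch
import torch.nn as nn

# One tag per token (PP = purpose, MN = mechanism, O = neither),
# instead of a single label for the whole sentence.
tokens = ["a", "wavelet", "transforms", "based", "method", "for",
          "filtering", "noise", "from", "images", "is", "presented"]
tags = ["MN", "MN", "MN", "MN", "MN", "O",
        "PP", "PP", "PP", "PP", "O", "O"]

class Tagger(nn.Module):
    """Minimal BiLSTM token tagger (a simplified stand-in for Model 1)."""
    def __init__(self, vocab_size=50000, emb_dim=300, hidden=256, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # stand-in for GloVe
        self.lstm = nn.LSTM(emb_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        h, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2*hidden)
        return self.out(h)                        # per-token class logits

model = Tagger()
token_ids = torch.randint(0, 50000, (1, len(tokens)))   # toy token ids
logits = model(token_ids)                                # (1, 12, 3)
label_ids = torch.tensor([{"O": 0, "PP": 1, "MN": 2}[t] for t in tags])
loss = nn.CrossEntropyLoss()(logits.squeeze(0), label_ids)
\end{verbatim}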
To overcome the limitations of sentence-level classification we follow~\cite{hope2021scaling} and frame purpose and mechanism identification as a sequence-to-sequence (Seq2Seq) learning task~\cite{Seq2SeqICLR,Seq2SeqNIPS} and develop deep neural networks with inductive biases capable of learning token-level patterns in the training dataset. Our dataset consists of crowdsourced annotations from Chan et al. (the dataset is constructed via application of~\cite{chan2018solvent} to a larger corpus of around \fnum{2000} paper abstracts largely in computer science domains) (table~\ref{table:training_stats}). We train the models to classify input features (tokens or spans of tokens) as either purpose (PP), mechanism (MN), or neither. We train two deep neural networks (Models 1 and 2) with increasing classification accuracy. The first model is based on a Bi-directional LSTM (BiLSTM) architecture for sequence tagging~\cite{huang2015bidirectional,LSTM_Schmidhuber}, in which the forward (from the beginning of the sequence to the end) and backward passes condition each token position in the text with its left and right context, respectively. A main source of improvement of Model 2 over Model 1 is the ability to more selectively attend to informative \new{tokens} in a sentence rather than treating each \new{token in a sequence as} independent of each other \new{(as a hypothetical example, an extremely effective model based on this approach may assign more weights to the tokens `selectively attend to informative tokens', as they represent the core mechanism described in the previous sentence)} and to leverage the regularities of co-occurrence with surrounding words through {the} self-attention \new{mechanism}~\cite{attention_vaswani}. \subsubsection{Seq2Seq Model Implementation Details} We implement the BiLSTM architecture of Model 1 in \textsc{PyTorch}~\cite{pytorch}. We use pre-trained \textsc{GloVe}~\cite{pennington_glove} word embeddings with 300 dimensions, consistent with prior work~\cite{pennington_glove,landauer1997solution,bojanowski2017enriching_subword_info}, to represent each token in the sequence as a 300-dimensional input vector for the model. We train the model with a cross-entropy loss objective for per-token classification in the three (PP, MN, Neither) token classes. For Model 2, we adapt the \textsc{SpanRel}~\cite{spanrel} architecture and implement it on \textsc{AllenNLP}~\cite{allennlp}. We implement a self-attention mechanism that tunes weights for the core word in each span as well as the boundary words that distinguish the context of use, consistent with~\cite{lee-etal-2017-end}. We use the pre-trained \textsc{ELMo 5.5B}~\cite{elmo} embeddings for token representation following the near state-of-the-art performance reported in~\cite{spanrel} on the scientific Wet Lab Protocol dataset. We train the model using a similar procedure as Model 1. We leave detailed training parameters for Models 1 and 2 to the Appendix. \subsubsection{\new{Introducing Human-in-the-loop Filtering for Model 1}} The final classification performance (F1-scores) of Model 1 on the validation set is 0.509 (Purpose), 0.497 (Mechanism), and 0.801 (Neither). We found that this limited accuracy led the system to retrieve irrelevant search results. Because reactions to obviously irrelevant results are not useful, we added a human-in-the-loop~\cite{dow2005wizard} filtering stage. The filtering proceeded as follows: members of the research team entered problem queries received from study participants into the system.
Once the model produced matches, they went through the sorted list from the top and removed only those that were irrelevant to the problem context. They continued filtering until at least 30 papers with reasonable purpose similarity were collected. After Winsorizing at the top and bottom 10\%~\cite{winsorizing}, the human filterers reviewed 45 papers per query (SD: 27.6, min: 6, max: 138) for 5 queries (SD: 2.4, min: 2, max: 9) to collect 33 (SD: 3.5, min: 30, max: 40) purpose-similar papers (about 12/45 = 26\% error rate). In Study 1 we show that the limited retrieval accuracy of Model 1 is sufficient for use as a probe with this additional human-in-the-loop filtering. In Study 2 and case studies, we demonstrate how this filtering can be removed with Model 2 while achieving a similar accuracy. \subsubsection{Scaling Model Inference} In order to have sufficient coverage to return diverse results, we collected an initial corpus of 2.8 million research papers from Springer Nature\footnote{\url{https://dev.springernature.com/}}. After deduplication (based on Digital Object Identifier using BigQuery\footnote{\url{https://cloud.google.com/bigquery}}) and keeping only papers with at least 50 characters in the abstract, we were left with 1.7 million papers in four subjects (Table~\ref{table:corpus_stats}). We stored the resulting corpus in Google Cloud storage buckets\footnote{\url{https://cloud.google.com/storage}}. To scale the classification of the Seq2Seq models we used the Apache Beam API\footnote{\url{https://beam.apache.org/}} on Google Cloud Dataflow\footnote{\url{https://cloud.google.com/dataflow/}} to parallelize the operation. \subsection{Stage Two. Constructing a purpose similarity space} \subsubsection{Overview} \label{subsubsection:stage2-overview} In the second stage, the identified purpose texts are incorporated \new{into the system} to enable search-by-analogy of papers that solve similar problems using different mechanisms, at an interactive speed (fig.~\ref{fig:system_design}, \cirnum{2}). Relevant previous approaches include \citet{hope_kdd17}, which first clusters similar purposes (through $k$-means with pruning) and subsequently samples within each cluster of similar purposes to maximize the diversity of mechanisms (via a GMM approximation algorithm~\cite{ravi1994heuristic}), or \cite{hope2021scaling}, which \new{employs} similarity metrics {to} balance {the} \textit{similarity} to a purpose query and {the} \textit{distance} to a mechanism query (and vice versa). In contrast, from pilot tests in our corpus we discovered that even close purpose matches of scientific papers already had high variance in terms of the mechanisms they proposed. We hypothesize that this may be the case due to the enormous span of possible research topics and the relative sparseness of their coverage in our corpus, and/or due to the emphasis on novelty in scientific research that discourages future papers which might contribute relatively small variations to an existing mechanism. We leave exploration of these hypotheses for future work and simplify our sampling of scientific papers to one based solely on purpose similarity, which is sufficient for ensuring diversity. In order to support fast retrieval (e.g., sub-second response time) of papers with similar purposes at scale (e.g., millions of papers), we pre-train Spotify's \textsc{Annoy}\footnote{\url{https://github.com/spotify/annoy}} indices of nearest neighboring purposes.
\new{\textsc{Annoy} assigns the embedding vector corresponding to each purpose an index in the high-dimensional space that brings it close to other indices of purpose vectors that have similar meaning (see \S\ref{subsubsection:stage2_implementation_details} for details of the metric used for the similarity of meaning).} \textsc{Annoy} uses random projection and tree-building \new{(see~\cite{annoy_readme,random_projection_wiki})} to create read-only, file-based indices. Because it decouples creation of the static index files from lookup, it enables efficient and flexible search by utilizing many parallel processes to quickly load and map indices into memory. \subsubsection{\new{Interactive Speed}} \new{Additionally, \textsc{Annoy} minimizes its memory footprint in the process. This efficiency, critical for real-time applications such as ours, was further validated during our test of the end-to-end latency on the Web, with the average response taking 2.4s (SD = 0.56s)\footnote{We tested with 20 topically varied search queries that have not previously been entered to the engine to test the latency end-users experience and to exclude the effect of caching from it.}. The level of latency we observed was sufficiently low to enable interactive search by end users (both human-in-the-loop filterers in Study 1 and researcher participants in case studies).} \subsubsection{Implementation Details} \label{subsubsection:stage2_implementation_details} To construct the similarity space, we first encode the purpose texts into high-dimensional embedding vectors which then can be used to compute pairwise semantic similarity. Here, the choice of an encoding algorithm depends on three main constraints. First, the pairwise similarity, when computed, should correlate well with the human-judged semantic similarity between the purposes. Second, similarity calculation between varying lengths of texts should be possible because extracted purposes can differ in length. Third, computationally efficient methods are preferred for scaling. To meet these requirements, we chose \textsc{Universal Sentence Encoder} (\textsc{USE})\footnote{\url{https://tfhub.dev/google/universal-sentence-encoder-large/5}} to encode purposes into fixed 512-dimensional vectors. \textsc{Universal Sentence Encoder} trains a transformer architecture~\cite{attention_vaswani} on a large corpus of both unsupervised (e.g., Wikipedia) and supervised (e.g., Stanford Natural Language Inference dataset~\cite{snli}) data to produce a neural network that can encode text into vectors that meaningfully correlate with human judgment (e.g., evaluated on the semantic textual similarity benchmark~\cite{semeval17}). \textsc{USE} can handle texts of varying lengths (e.g., from short phrases to sentences to paragraphs), and with high efficiency~\cite{universal_sentence_encoder}, thereby making it suitable for our system. We pre-compute pairwise similarity of the purpose embeddings and store the indices in neighborhoods of high similarity for fast retrieval of similar purposes. As mentioned before, we train the \textsc{Annoy} indices on Google Cloud AI Platform\footnote{\url{https://cloud.google.com/ai-platform}}.
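Concretely, the encoding and indexing steps might look as follows (a minimal, illustrative sketch; the file name and example texts are hypothetical, and the metric and tree count anticipate the parameters discussed next).
\begin{verbatim}
import tensorflow_hub as hub
from annoy import AnnoyIndex

# Encode extracted purpose texts into 512-dimensional USE vectors.
encoder = hub.load(
    "https://tfhub.dev/google/universal-sentence-encoder-large/5")
purposes = ["facilitate heat transfer in semiconductor materials",
            "filter noise from images"]       # illustrative purposes
vectors = encoder(purposes).numpy()

# Build a read-only, file-based Annoy index over the purpose vectors.
index = AnnoyIndex(512, "angular")            # distance defined below
for i, vec in enumerate(vectors):
    index.add_item(i, vec.tolist())
index.build(100)                              # k = 100 trees (see below)
index.save("purpose_index.ann")               # hypothetical file name

# At query time: encode a problem query and look up nearest purposes.
query_vec = encoder(["improve nanoscale heat transfer"]).numpy()[0]
neighbor_ids = index.get_nns_by_vector(query_vec, 30)
\end{verbatim}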
We use one minus the Euclidean distance of normalized vectors (i.e., given two vectors $\vec{u}$ and $\vec{v}$, $\text{distance}(\vec{u}, \vec{v}) = \sqrt{2\left(1 - \cos\left(\vec{u}, \vec{v}\right)\right)}$) as a similarity metric (using a Euclidean-distance-based metric for nearest neighbor clustering shows good performance; see~\cite{bachrach2014} for a related discussion on the impact of the distance metric on the retrieval performance). We set the hyper-parameter $k$ specifying the number of trees in the forest to 100 (larger $k$'s result in more accurate results but also decrease performance\new{; see~\cite{annoy_readme} for further details}). Empirically, 100 seemed to strike a good balance in the precision-performance trade-off, so we did not experiment with this parameter further. \subsection{Stage Three. Retrieving the results} In the last stage, the front-end interface interacts with end users and receives problem queries. These queries are then relayed to the back-end for retrieval of papers that solve similar problems using different mechanisms. The retrieved papers are presented on the front-end for users to review (fig.~\ref{fig:system_design}, \cirnum{3}). When a user query is received, the back-end first encodes it using the same encoding algorithm used to construct the purpose similarity space (i.e., \textsc{Universal Sentence Encoder}). Using this query embedding, the back-end searches the pre-trained similarity space for papers with similar purposes. The papers with high purpose similarity are then returned to and displayed on the front-end. We describe the actual interfaces used in the studies in the corresponding design sections (\S\ref{subsubsection:apparatus1}, \S\ref{subsubsection:apparatus2}). Together the design of our system enabled what is to our knowledge the first functioning prototype of an interactive analogical search engine for scientific papers at scale. In the following sections we report on how such a search engine can help researchers find analogical papers that facilitate creative ideation. \subsection{Coding ideation outcomes} \label{section:types-of-brainstorming-ideation-outcomes} We are interested in studying whether an analogical search engine provides distinctive and complementary value to other commonly used search approaches that rely on surface similarity. In particular, our focus is on the inspirational value rather than the immediate relevance of search results or the direct usefulness of solutions. The highest value of creative inspiration often comes from creatively adapting ideas to reformulate a problem and recognizing new bridges to previously unknown domains that open up entirely new spaces of ideas. For example, recognizing a connection \new{between} the ancient art form of origami for folding intricate structures with paper and the problem of building sufficiently compact, deployable solar panel arrays and radiation shields led NASA to hire origami experts~\cite{peraza2014origami,zirbel2013origami,origami_open_innovation}. Our approach to measuring ideation outcome is through the use of a quaternary variable categorizing the types of ideation. To capture the inspirational value of analogical search and move beyond the measurements focused on the immediate relevance or the direct usefulness, we distinguish the Creative Adaptation and Direct Application types of ideation.
In our studies these two types corresponded to think-alouds that result\new{ed} in novel ideas whereas the rest (Background and None) corresponded to think-alouds in which no new ideas {were} produced. $\sbullet[.75]$~\textbf{Creative Adaptation:} Novel mechanism ideas that involve substantial adaptation of the information provided in the paper. These ideas are typically associated with a higher uncertainty of success due to the less familiar nature of the domains involved. $\sbullet[.75]$~\textbf{Direct Application:} More directly applicable ideas that involve less adaptation than Creative Adaptation. These ideas are typically associated with a lower uncertainty of success because researchers are more familiar with the domains. $\sbullet[.75]$~\textbf{Background:} The information provided in the paper is good for background reading (e.g., to learn about other domains). $\sbullet[.75]$~\textbf{None:} Did not result in new ideas \new{nor was it useful for background reading.} \begin{wrapfigure}{R}{.5\textwidth} \begin{center} \includegraphics[width=.5\textwidth]{figures/new_ideation_examples.png} \end{center} \caption{Example papers for the purpose of facilitating heat transfer in semiconductors. (Top) A Direct Application paper involves directly applicable ideas and techniques for manipulating the interface material and structure to control thermal conductance. (Bottom) A Creative Adaptation example involves transferring a distant idea (fin-based design for heat sinks) and creatively adapting it into the target problem context (designing nano-scale fins that could absorb heat and convert it to useful energy). Figure credits: contact configurations and interface resistance from~\cite{interface_resistance}, fin-based heat sink from~\cite{heatfins}, nano-fins from~\cite{nanofins}.} \label{fig:ideation_examples} \end{wrapfigure} Creative Adaptation ideas generally involve{d} a substantial amount of adaptation, while Direct Application ideas {were} closer to the source domain and more directly applicable. For example, using the data from one of our participants, applying the techniques for manipulating thermal conductance at solid-solid interfaces {wa}s considered a direct application idea for P1 (fig.~\ref{fig:ideation_examples}, left) because he {wa}s familiar with the concept of controlling the interfacial thermal conductivity given the relevant approaches he developed in his current and past research projects. Thus the connections to the source problem {were} directly recognizable. On the other hand, creating a fin-based wall structure for heat transfer {wa}s an example of a creative adaptation idea (fig.~\ref{fig:ideation_examples}, right) because of its novelty and the participant's unfamiliarity with the relevant domains. The unfamiliarity and uncertainty {were} generally more associated with analogs for creative adaptation than direct application. On the other hand, the unfamiliarity also sometimes act{ed} as a barrier to participants' openness and subsequent ideation. Though challenging, recognizing novel connections to the source problem may require participants to suspend their early rejection of a seemingly foreign idea and its surface-level mismatches, and to engage in deeper processing which could lead to re-imagination and re-formulation of the research problem at hand.
\new{To code the Creative Adaptation and Direct Application types of ideation outcomes, the coders took into consideration different linguistic and contextual aspects of the descriptions of the ideas and their think-aloud process (details in \S\ref{subsubsection:data_and_coding}).} \subsection{Design of the study} \subsubsection{\new{Participants}} We recruited eight graduate researchers (four women) in the fields of sciences and engineering via email advertisement at a private R1 U.S. institution. Four were senior PhD students (3rd year or above, and one recently defended their thesis) and the rest were 2nd year or below. Disciplinary backgrounds of the participants included: Mechanical (3), Biomedical (2), Environmental (1), Civil (1), and Chemical Engineering (1). Once a participant signed up for the study, we asked them to describe their research problems and send the research team search queries they use to look for inspirations on popular search engines such as Google Scholar\footnote{\url{https://scholar.google.com/}}. Members of the research team screened papers with relevant purposes using these queries on the filtering interface (\new{fig.~\ref{fig:study1_interfaces}}, left). Despite our efforts to collect papers over diverse topical areas, the search engine did not contain enough papers for two of the participants who work in relatively novel fields (e.g., ``machine learning methods of 3D bioprinting''). These participants were interviewed on their current practices for reviewing prior works and coming up with new ideas for research and were not included in the subsequent analyses. \subsubsection{\new{Study Procedure and Keyword-search Control}} The rest of the participants were then invited to in-person interviews. To ensure that participants would be exposed to a \new{sufficiently} diverse set of analogical mechanisms and to maximize our power to observe the ideation process, we generated a list of the top 30 results from the analogical search engine \new{using the search queries provided by the study participants}. As a control condition we also included the top 15 results from a keyword-based search engine using the standard \textsc{Okapi BM25} algorithm~\cite{intro_to_IR} ($k_1 = 1.2, b = 0.75$) \new{with the same search queries as the analogical search engine}. The order of results in the list was randomized and participants were blind to condition. \new{To account for the difference in the quantity of exposure in the analysis, we normalized the ideation outcomes by the number of results returned in each condition.} Using this list we employed a think-aloud protocol~\cite{think_aloud_van1994,think_aloud_lewis} in which participants were presented with the title, abstract, and other metadata of papers and asked to think aloud as they read through them with the goal of generating ideas useful for their research using our Web-based interface (fig.~\ref{fig:study1_interfaces}, right). Although time consuming, this approach allowed us to capture rich data on \new{participants' thought process} and how those processes changed and evolved as participants considered how a paper might relate to their own research problems. In addition, we asked the participants to make a judgment on the novelty of each paper on a 3-point Likert scale. After participants finished reviewing the 45 papers, we interviewed them about their overall thoughts on the results' relevance and novelty and whether there were any surprising or unique results.
Each interview lasted about one and a half hours and the participants were compensated \$15/hr for their participation. \subsubsection{Data and Coding} \label{subsubsection:data_and_coding} In total, our data consist{ed} of 267 paper recommendations for six participants and their Likert-scale questionnaire responses measuring the content novelty, after removing 3 within-condition duplicates (these papers included cosmetic changes such as different capitalization in the title or abstract). One participant ran out of time towards the end of the interview and only provided novelty measures for the last 17 paper recommendations in the randomized list. Thus, 250 transcripts of participants' think-aloud ideation after reading each paper {were} used for analyzing ideation outcomes. \new{To code the distance between the Creative Adaptation and Direct Application types of ideation outcomes, the coders took into consideration (1) the verbs used to describe the ideas (e.g., `design', `develop', or `invent' were generally associated more with distant ideas compared to `apply', `use', `adopt'; see Table~\ref{table:brainstorm-ex}); (2) the context of ideas such as participants' expression of unfamiliarity or uncertainty of the domain involved (e.g., ``I'm not really sure'' vs. ``I'm familiar with this domain''); and (3) participants' perceived immediacy of the idea's applicability (i.e., ideas perceived by participants as more immediately applicable were associated with direct application but not creative adaptation ideas). Two of the authors coded a fraction of the data together (13/250, 5.2\%) and then independently coded the rest blind-to-condition, using the four ideation outcome types described in \S\ref{section:types-of-brainstorming-ideation-outcomes} and with the following protocol: The coders first judged the existence of an idea. If there was one, its type was further distinguished between Creative Adaptation and Direct Application using the linguistic and contextual descriptions described above (e.g., Creative Adaptation ideas were more frequently associated with the `design' words, higher unfamiliarity and uncertainty of the domains, and less immediate applicability, compared to Direct Application ideas). In case there was no concrete idea in the data, coders further distinguished between the Background and None cases.} The agreement between coders was high, with Cohen's $\kappa = 0.89$ (near-perfect agreement) for the four categories of ideation outcome. Given the high level of agreement between the coders, any disagreements were resolved via discussion on a case-by-case basis. \begin{figure}[t] \begin{center} \includegraphics[width=\textwidth]{figures/system_interfaces_study_1.pdf} \end{center} \caption{The front-end interfaces. (Left) Human reviewers used this filtering interface to input search queries received from the participants and remove papers with obviously irrelevant purposes. \new{To assist the reviewers' filtering process, model-predicted purpose (e.g., \textit{the noise reduction} and \textit{time}, highlighted in red at the bottom of the filtering interface) and mechanism (highlighted in green) tokens were also provided along with the title and the abstract text.} The background color turned green when the ``Similar'' button was clicked and red when the ``Dissimilar'' button was clicked.
(Right) The ideation task interface {wa}s populated with a list of human-filtered papers for review by the participants in Study 1 (the order of papers was randomized).} \label{fig:study1_interfaces} \end{figure} \subsubsection{\new{Apparatus 1}: the human-in-the-loop filtering interface} \label{subsubsection:apparatus1} In Study 1, members of the research team first received search queries from study participants and reviewed the model-produced purpose matches to filter irrelevant papers using a filtering interface (fig.~\ref{fig:study1_interfaces}, left). This additional step was introduced to ensure that papers with obviously dissimilar purposes were not returned to study participants. Reviewers determined whether each paper contained a clearly irrelevant purpose, in which case it was removed by clicking the \textit{Dissimilar} button at the bottom of the paper. On the other hand, when the \textit{Similar} button was clicked, it turned the background of the paper green in the interface and incremented the count of papers collected so far. Reviewers continued the screening process until at least 30 papers with reasonable purpose similarity were collected. \subsubsection{\new{Apparatus 2}: the ideation task interface} \label{subsubsection:apparatus2} The filtered papers were then displayed as a randomized list of papers to study participants (fig.~\ref{fig:study1_interfaces}, right). In addition to the content and metadata of papers (e.g., authors, publication date, venue, etc.), each paper was presented with a Likert-scale question for measuring content novelty and a text input for ideation. \subsubsection{Limitations} To reduce potential biases, our coders were blind to experimental conditions and relied on participants' statements of \new{ideas' novelty and usefulness} (e.g., ``I've never seen something like this before,'' ``this is not a domain I would've searched if I used Google Scholar''), and achieved a high inter-rater reliability. We believe coders had a reasonable understanding of how participants arrived at specific ideas from descriptions of their current and past research topics, think-alouds, and end-of-experiment discussions. Despite this, \new{we also acknowledge the limitations of this approach and discuss how future research may improve upon it (see \S\ref{subsubsection:exp_validity}).} \subsubsection{On reporting the results} We report the results of our studies below. To denote statistical significance we use the following notations: $^{*} (\alpha = 0.05)$, $^{**} (\alpha = 0.01)$, $^{***} (\alpha = 0.001)$, $^{****} (\alpha = 0.0001)$. Alpha levels {were} adjusted when appropriate in post-hoc analyses using Bonferroni correction. \subsection{Result} \xhdr{Finding novel papers for creative ideas} Our key measure of success is how paper recommendations from the analogy search engine (hereinafter \textit{analogy papers}) help scientists generate creative ideas for their own research problems. To this end, we investigate a) whether analogy papers are novel and complementary to the papers found from the keyword-search baseline (hereinafter \textit{keyword papers}) and b) whether analogy papers resulted in more creative adaptation ideas than direct application ideas in ideation.
\subsubsection{Analogy papers differed from keyword papers and were judged more novel} \begin{wrapfigure}{R}{.5\textwidth} \begin{center} \includegraphics[width=.5\textwidth]{figures/novelty_keyword_overlap.png} \end{center} \vspace{-1em} \caption{(Left) Participants judged analogy papers significantly more novel. The mean response to the question \textit{"Have you seen this paper before?"} was significantly higher in Analogy: 2.7 (SD: 0.48) than in Keyword: 2.3 (SD: 0.55). (Right) There were significantly more overlapping words between search query terms provided by participants and the title and abstract text of papers in Keyword: 4.1 (SD: 1.74) vs. Analogy: 1.6 (SD: 1.42).} \label{fig:novelty_keyword_overlap} \end{wrapfigure} The viability of our approach is based on the assumption that the analogy search pipeline returns a different distribution of results than a keyword-based baseline. This assumption appeared to hold true: the keyword-search and analogy-search conditions resulted in almost completely disjoint sets of paper recommendations. Out of the total 267 papers, only one paper appeared in both the analogy and keyword conditions. Analogy papers appeared to represent a complementary set of results users would be unlikely to encounter through keyword-based search. To further examine this assumption we had participants rate the novelty of the results by asking them ``\textit{have you seen this paper before?}'' on a 3-point Likert scale with response options 1: ``\textit{Yes, I have seen this paper before}'', 2: ``\textit{Yes, not exactly this paper but I have seen similar ideas before}'', and 3: ``\textit{No, I have not seen anything like this before}''. Participants found papers recommended in the analogy condition to contain significantly more novel ideas (2.7, SD: 0.48) compared to the keyword condition (2.3, SD: 0.55) (Welch's two-tailed t-test, $t = -5.53, p = 1.33\times10^{-7}$) (fig.~\ref{fig:novelty_keyword_overlap}, left). Participants thought the ``variance in results is much higher than using other search engines'' (P5) and ``there're a lot of bordering domains... which can be useful if I want to get ideas in them'' (P4). This difference was also reflected in the content of papers, with keyword papers having significantly more overlapping terms with participant-provided query terms (4.1, SD: 1.74) than analogy papers (1.6, SD: 1.42) (Welch's two-tailed t-test, $t(145.27) = 11.70, p = 1.10\times10^{-22}$) (fig.~\ref{fig:novelty_keyword_overlap}, right)\footnote{We measured the term overlap between participants' queries and the content of papers (title and abstract). To preprocess text, we used \textsc{NLTK}~\cite{nltk} to tokenize papers' content, remove stopwords, digits, and symbols, and lemmatize adjectives, verbs, and adverbs. \new{Finally,} using the processed tokens we constructed a set of unique terms \new{for each paper and the query, which were then compared to find overlapping terms}.}. More occurrences of familiar query terms in keyword papers' titles and abstracts may have led participants to perceive them as more familiar. \subsubsection{Analogy papers resulted in more creative adaptation ideas than direct application ideas} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/heatmap_brainstorm_condition.png} \caption{Frequency of the ideation outcome types by condition. Darker colors represent higher rates. Creative adaptation is 5.3 times more frequent among analogy papers (53 in Analogy vs.
10 in Keyword), while most direct application ideas come from keyword papers (3 in Analogy vs. 16 in Keyword). The distributions differed significantly (chi-squared test, $\chi^2(3) = 52.12, p < 1.0\times10^{-10}$ overall and $\chi^2(1) = 28.41, p = 9.84\times10^{-8}$ for the contrast between the rates of creative adaptation and direct application ideas).} \label{fig:ideation-outcome} \end{figure} We found that the distribution of ideation outcome types differed significantly between analogy and keyword papers ($\chi^2(3) = 52.12, p < 1.0\times10^{-10}$). Participants came up with more creative adaptation ideas (N = 53; 32\% of total) than direct application ideas (N = 3; 2\%) using analogy papers. In contrast, keyword papers resulted in more direct application ideas (N = 16; 19\%) than creative adaptation ideas (N = 10; 12\%) (fig.~\ref{fig:ideation-outcome}). The difference between creative adaptation and direct application was significant ($\chi^2(1) = 28.41, p = 9.84\times10^{-8}$). \begin{table*}[t!] \begin{tabular}{c p{3cm} c p{6cm}} \toprule {\textbf{PID}} & {\textbf{Research Problem}} & {\textbf{Type}} & \textbf{Paper Title $\rightarrow$ New Idea} (paraphrased)\\ \midrule \multirow{10}{*}{1} & \multirow{10}{3cm}{Improve nanoscale heat transfer in semiconductor material} & \multirow{5}{*}{Direct Application} & \textit{Experimental investigation of thermal contact conductance for nominally flat metallic contact} $\rightarrow$ Apply the techniques in the paper to manipulate thermal conductance at the solid-solid interface\\[2cm] & & \multirow{4}{*}{Creative Adaptation} & \textit{Investigation on periodically developed heat transfer in a specially enhanced channel} $\rightarrow$ Design nanoscale ``fins'' to absorb heat and convert it to mechanical energy\\ \midrule \multirow{10}{*}{2} & \multirow{10}{3cm}{Grow plants better by optimizing entry of nanoparticle fertilizers into the plant} & \multirow{4}{*}{Direct Application} & \textit{Nanoinformatics: Predicting Toxicity Using Computational Modeling} $\rightarrow$ Apply the computational modeling from the paper for predicting toxicity of candidate nanoparticles\\[2cm] & & \multirow{5}{*}{Creative Adaptation} & \textit{Identification of Plant Using Leaf Image Analysis} $\rightarrow$ Invent a hyperspectral 3D imaging mechanism for plants that optically senses, traces, and images plant cells in 3-dimensional structures\\ \midrule \multirow{12}{*}{3} & \multirow{12}{3cm}{Enhance the evaporation efficiency of thin liquid films in heat pipes and thermosyphons} & \multirow{5}{*}{Direct Application} & \textit{Thin film evaporation effect on heat transport capability in a grooved heat pipe} $\rightarrow$ Adopt the techniques in the paper for manipulating the solid interface's surface properties to balance the film thickness and disjoining pressure\\[3cm] & & \multirow{5}{*}{Creative Adaptation} & \textit{Alkaline treatment kinetics of calcium phosphate by piezoelectric quartz crystal impedance} $\rightarrow$ Design novel liquid film materials for manipulating hydrophobicity to change disjoining pressure\\ \bottomrule \end{tabular} \caption{Examples of Direct Application and Creative Adaptation types for three participants (PID). Each participant's research problem is described in the Research Problem column.
While the topics of research problems vary, Creative Adaptation ideas are more distant in terms of content from the source problem than Direct Application ideas are, and may be characterized by the use of different sets of verbs (\{\textit{design}, \textit{invent}\} in Creative Adaptation ideas versus \{\textit{apply}, \textit{adopt}\} in Direct Application ideas).} \label{table:brainstorm-ex} \end{table*} To illustrate more concretely the divergent patterns of ideation leading to Creative Adaptation and Direct Application ideas, we describe vignettes from three participants (table~\ref{table:brainstorm-ex}). While Direct Application ideas represented closely related techniques and mechanisms directly useful for the source problem (described with verbs such as \textit{apply} and \textit{adopt}), Creative Adaptation ideas were more distant from the source problem and \new{could} be characterized by the use of different verbs associated with significant adaptation (\textit{design} and \textit{invent}). For example, P1's research focused on methods for improving nanoscale heat transfer in semiconductor materials. Previously he developed mechanisms for manipulating the thermal conductivity at solid-solid interfaces, specifically by adjusting the semiconductor wall structures. Thus, a paper reporting experimental results of manipulating thermal conductance on planar metallic contact points was deemed a directly useful paper that might contain helpful techniques. On the other hand an analogy paper which dealt with the heat transfer phenomenon at a macroscale, using fin-based heat sink designs for electronic devices, gave him a new inspiration: to adapt fins for nanoscale heat transfer in semiconductors to not only transfer heat but also convert it into a useful form of mechanical energy. Despite the mismatch on scale ([macroscale] $\nleftrightarrow$ [microscale]), challenging the assumption of the typical size of a fin-based design engendered an idea to creatively adapt it to convert heat into energy through an array of tiny fins, rather than merely dissipating it into space as in the original formulation of the problem. P1 also found another analogy paper focused on thermal resistance at a liquid-solid interface useful for future ideation because despite its surface dissimilarities, there was a potential mapping that may open up a new space of ideas (e.g., [liquid] $\nleftrightarrow$ [polymer substrate], [solid] $\nleftrightarrow$ [germanium], yet the pairwise relation [liquid:solid] $\leftrightarrow$ [polymer substrate:germanium] may be analogous and interesting)\new{:} ``This is liquid... but it's about liquid-solid interface which can be useful... because for the substrate that sits on top of silicon or germanium you use polymers which have liquid-like properties'' (P1). In the case of P2, a paper focused on computational methods for toxicity prediction was deemed directly helpful because ``if certain nanomaterials are toxic to certain microorganisms that eat plants or kill them but safe for the plant, we can target these organisms using the nanomaterials as pesticide. Another way this can be helpful is in predicting the chance of toxicity of the nanoparticles in our fertilizers'' (P2). In contrast, an analogy paper that used image analysis for plant identification reminded her of ``hyperspectral imaging in plants, like a CT scan for plants. So making a hyperspectral 3D model using something like this...
to optically sense and trace plant cells (such that the entry of fertilizer nanoparticles into plant cells can be monitored, a sub-problem of P2's research problem) would be pretty cool.'' As a third example, P6's research focused on recording and simulating electrical activity using microelectrode arrays. To him, an analogy paper about printing sensors for electrocardiogram (ECG) recording seemed to present an interesting idea despite its mismatch in terms of scale ([nanoscale] $\nleftrightarrow$ [macroscale]) \new{and manufacturing mechanism (}[fabrication] $\nleftrightarrow$ [printing]\new{), because} the pairwise relation between [nanoscale:fabrication] $\leftrightarrow$ [macroscale:printing] engendered a reflection on the relative advantages of different methods and future research directions: ``Interesting idea! Instead of nanoscale fabrication, printing can be a good alternative for example for rapid prototyping. But I think the resolution won't be enough (for use) in nanoscale... works for this particular paper's goal, but an idea for future research is whether we can leverage the benefit of both worlds -- rapid printing and precision of nanoscale fabrication'' (P6).

\subsubsection{The level of purpose-match had different effects on the ideation outcome}
\label{section:purpose-match-mediation}
These examples suggest that the ideas in analogy papers maintain a certain kind of distance from the source problem that spurs creative adaptation. We hypothesize that some amount of difference in purpose facilitates creative adaptation. This process may involve a curvilinear relationship between the degree of purpose mismatch and the resulting ideation outcome, with too much or too little deviation leading to little to no benefit or even an adverse effect on the ideation outcome\new{, a pattern that is consistent with findings in the literature on creativity and learning outcomes (e.g., Csikszentmihalyi's optimal difficulty~\cite{csikszentmihalyi1990flow}).} For this analysis, we coded each paper based on three levels of purpose-match to the source problem: $\sbullet[.75]$~\textbf{Full:} Both high- and low-level purposes match $\sbullet[.75]$~\textbf{Part:} Only the high-level abstract purpose matches. Explicit descriptions of the high-level purpose exist in either the title or the abstract of the paper. At the same time, certain low-level aspects of the participant's research problem are mismatched, as evidenced by relevant comments from the participant $\sbullet[.75]$~\textbf{None:} Neither high- nor low-level purposes match

\begin{table*}[t]
\begin{tabular}{c c p{10cm}}
\toprule
\textbf{Purpose-Match} & \textbf{PID} & \textbf{Participant Comment} \\
\midrule
\multirow{3}{*}{Full} & \multirow{3}{*}{2} & ``It's a little bit old (from 2010) but I have read papers from that era. I love this... because the paper mentions everything else and especially one word which is `disjoining pressure' -- if I were to publish my current project that's going to be the core topic.''\\
\midrule
\multirow{3}{*}{Part} & \multirow{3}{*}{1} & ``Though I'm not familiar with GFRP-GFRP... but I can see that they're referring to glass fiber reinforced plastic, so this is something not crystalized material... learning about this kind of materials is interesting.''\\
\midrule
None & 3 & ``I don't know what a lot of words mean. I don't typically work with animals cells.''\\
\bottomrule
\end{tabular}
\caption{Examples of different purpose-match types.
Purpose-Match shows the level of purpose-match between a recommended paper and each participant's research problem (see table~\ref{table:brainstorm-ex} for descriptions of research problems). Fully matching purposes are those that match at both the high (more abstract) and low (specific details) levels. Partial matches only match at the high-level abstraction and differ in details. The Participant Comment column shows relevant excerpts from the participant.}
\label{table:purpose-match-ex}
\end{table*}

Examples of these types of purpose-match are provided in Table~\ref{table:purpose-match-ex}. \new{High-level match can be considered a first-order criterion of purpose match and low-level match a second-order criterion: If the paper's purpose does not have terms overlapping with the user query cast at a high level (e.g., transfer heat, grow plants), then the low-level match does not matter; but if the paper's purpose matches at the high level, its low-level alignment (e.g., specific aspects of the purpose, such as its scale or material phase) will additionally determine full (i.e., aligned in both high- and low-level aspects of the purpose) vs. partial match (i.e., aligned only in the high-level but not low-level aspects of the purpose). Therefore, the coding procedure was analogous to the procedure described for coding the four types of ideation outcome, with the high-level purpose match deciding between \{Full, Part\} and None match types, and the low-level purpose match further distinguishing between Full and Part.} Following this procedure, two independent coders achieved an inter-rater reliability of Cohen's $\kappa = 0.72$ (substantial agreement) and disagreements were resolved \new{with case-by-case discussion}.

We used the \textsc{mediation} package\footnote{\url{https://cran.r-project.org/web/packages/mediation/index.html}}~\cite{tingley2014mediation} to conduct a mediation analysis between the condition, the kind of purpose-match, and the binary Creative Adaptation ideation outcome. The analysis showed that the effect of condition (Keyword vs. Analogy) on the binary outcome of creative adaptation was mediated by the degree of purpose-match\new{, but not by the novelty of content, suggesting that the difference between full vs. partial matching on purpose matters much more than the variance in content novelty. We come back to this result in the discussion (\S\ref{subsubsection:control_diversity}).} Table~\ref{table:mediation} presents the results of the mediation analyses. The regression coefficient between creative adaptation and condition was significant, as was the regression coefficient between the degree of purpose match and creative adaptation. The indirect effect was $(-.42)\times(.21) = -.09$. We tested the significance of this indirect effect using a bootstrapping procedure~\cite{mediation_bootstrapping} ($p < 2\times 10^{-16}$)\new{, computing the} unstandardized indirect effect for each of \fnum{1000} bootstrapped samples as well as the 95\% confidence interval (CI)\footnote{Alternatively, it is possible that the mediating effect of the degree of purpose-match on the likelihood of the creative adaptation outcome is moderated by novelty. However, the result of our analysis showed that this was unlikely: the effect was insignificant using the bootstrapping method ($-.04$, $p = 0.12$, 95\% CI = $[-.09, .01]$).}.
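For readers who want the mechanics of this test, below is a minimal Python sketch of the product-of-coefficients indirect effect ($a \times b$) with a percentile bootstrap. It is an illustration only: the study itself used the R \textsc{mediation} package, the data here are simulated (with generative coefficients echoing the reported $-.42$ and $.21$), and the binary outcome is handled with a simple linear probability model rather than the logit a full analysis would use.

\begin{verbatim}
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# simulated per-paper data (illustrative only): condition
# (0 = Analogy, 1 = Keyword), a purpose-match mediator, and a
# binary creative-adaptation outcome
n = 200
condition = rng.integers(0, 2, n)
mediator = 1.0 - 0.42 * condition + rng.normal(0, 0.5, n)
outcome = (0.21 * mediator + rng.normal(0, 0.3, n) > 0.25).astype(float)

def indirect_effect(c, m, y):
    # a: effect of condition on the mediator
    a = sm.OLS(m, sm.add_constant(c)).fit().params[1]
    # b: unique effect of the mediator on the outcome,
    # controlling for condition (linear probability model)
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, c]))).fit().params[1]
    return a * b

# percentile bootstrap of the indirect effect a*b
boot = np.empty(1000)
for s in range(1000):
    i = rng.integers(0, n, n)   # resample papers with replacement
    boot[s] = indirect_effect(condition[i], mediator[i], outcome[i])

print(indirect_effect(condition, mediator, outcome))
print(np.percentile(boot, [2.5, 97.5]))   # 95% bootstrap CI
\end{verbatim}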
\begin{table*}[t]
\centering
\begin{tabular}{@{}l *{5}{S[input-symbols=(),table-format=-1.3,table-space-text-post=****,table-align-text-post=false]} @{}}
\toprule
& \text{Effect of Condition} & \text{Unique Effect} & \text{Indirect Effect} & \multicolumn{2}{c}{\text{CI 95\%}} \\
\cmidrule(lr){5-6}
\textit{Mediator} & \text{on Mediator (\textit{a})} & \text{of Mediator (\textit{b})} & \text{(\textit{a$\times$b})} & \text{Lower} & \text{Upper} \\
\midrule
\multirow{2}{*}{Purpose-match} & -0.42\textsuperscript{****} & 0.21\textsuperscript{****} & -0.09\textsuperscript{****} & -0.14 & -0.05 \\
& (.08) & (.05) & & & \\
\addlinespace
\rowcolor{lgrey} & 0.40\textsuperscript{****} & -0.06 & -0.02 & -0.07 & 0.02\\
\rowcolor{lgrey} \multirow{-2}{*}{Novelty} & (.07) & (.05) & & & \\
\addlinespace
\multirow{2}{*}{Pid} & -0.02 & 0.03\textsuperscript{*} & -0.001 & -0.02 & 0.02 \\
& (.22) & (.02) & & & \\
\bottomrule
\end{tabular}
\caption{Regression table of three mediation analyses using \textit{Purpose-match}, \textit{Novelty} and \textit{Pid} (Participant ID) as mediators between Condition and the binary Creative Adaptation outcome variable. Purpose-match was the only significant mediator \new{between Condition and Creative Adaptation} (indirect effect=-.09, significant using a bootstrapping method~\cite{mediation_bootstrapping} with \fnum{1000} iterations, $p < 2\times 10^{-16}$).}
\label{table:mediation}
\end{table*}

\begin{figure}[t]
\begin{minipage}{.33\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/creative_adaptation_ratio.png}
\caption{Proportion of creative adaptation ideas among the partial purpose-match papers. Creative Adaptation was significantly more frequent among the analogy papers (47\%) than keyword papers (21\%) (Welch's two-tailed t-test, $p = 9.0\times10^{-4}$).}
\label{fig:chance_creative_adaptation_mean_ranks}
\end{minipage}%
~\qquad
\begin{minipage}{.62\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/stacked_barchart_swapped.png}
\caption{The rate of ideation outcome types in full and partial purpose matches. Among the keyword papers, as the purpose mismatch increases, the rate of creative adaptation also increases from 0\% to 21\% (middle). However, this rate is significantly higher among the analogy papers (47\%) than the keyword papers (21\%). Note that while purpose mismatches led to more creative adaptation among analogy papers, a large portion of them also resulted in no ideation outcome (38\%).}
\label{fig:partial_mismatches_outcome}
\end{minipage}
\end{figure}

Partial purpose matches in both keyword and analogy papers led to creative adaptation, but the rate was significantly higher with analogy papers. As expected, the ratio of direct application decreased from the keyword papers that fully match in purpose (Keyword Full, 68\%) to the keyword papers that partially match in purpose (Keyword Part, 6\%) (fig.~\ref{fig:partial_mismatches_outcome}). At the same time, the rate of creative adaptation increased from the keyword papers that fully match in purpose (Keyword Full, 0\%) to the keyword papers that partially match in purpose (Keyword Part, 21\%). However, the rate of creative adaptation differed significantly between the keyword and analogy papers, with the rate more than doubling among the analogy papers compared to the keyword papers (Analogy Part 47\% vs. Keyword Part 21\%).
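The comparison in the next paragraph treats each partial-match paper as a binary outcome (whether or not it led to a creative adaptation idea) and applies Welch's unequal-variances t-test. A minimal sketch with simulated data follows; the rates mirror the reported 47\% and 21\%, but the sample sizes are illustrative rather than the study's actual counts.

\begin{verbatim}
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# simulated per-paper binary outcomes among partial purpose
# matches (1 = led to a creative adaptation idea); the rates
# mirror the reported 47% vs. 21%, the sample sizes do not
analogy = rng.binomial(1, 0.47, 60)
keyword = rng.binomial(1, 0.21, 60)

# Welch's two-tailed t-test (unequal variances)
t, p = stats.ttest_ind(analogy, keyword, equal_var=False)
print(t, p)
\end{verbatim}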
Homing in on the partial matches, we found that these papers led to creative adaptation ideas significantly more often in analogy search (47\%) than keyword search (21\%) (Welch's two-tailed t-test, $t(112.22) = -3.40, p = 9.0\times10^{-4}$, fig.~\ref{fig:chance_creative_adaptation_mean_ranks}, left). While the partial purpose mismatch was highly associated with creative adaptation ideas, it could be a double-edged sword. Among the analogy papers, 38\% of the partial mismatches resulted in no useful ideation outcome as opposed to the 47\% that resulted in creative adaptation (fig.~\ref{fig:partial_mismatches_outcome}, Analogy Part). Therefore, \textbf{knowing what mismatches are beneficial to creative adaptation} has important implications for facilitating generative misalignment for ideation.

\subsection{Motivation and structure of the study}
The findings of Study 1 suggest potential benefits of an analogical search engine for scientific research, but the limited interactivity of the human-in-the-loop system design prevented its use as a more realistic probe for understanding researchers' natural interaction with analogical results. Specifically, the results of Study 1 are limited by participants' inability to reformulate search queries and by a study design that returned only a fixed number of papers, blending keyword and analogy papers in randomized order. These factors significantly deviate from realistic usage scenarios of a deployed analogical search engine and prevent us from observing the full scope of user interaction. To move beyond these limitations, we first need a fully automated pipeline that removes the need for human-in-the-loop filtering, allowing query reformulation and interaction with the corresponding search results. To achieve this, we improved the model accuracy on extracting purposes and mechanisms from paper abstracts by training a more sophisticated neural network that leverages more nuanced linguistic patterns. Specifically, we implemented an attention mechanism within a span-based sequence-to-sequence model (Model 2) such that it \new{could} learn words that frequently co-occur to describe coherent purposes or mechanisms in paper abstracts\new{ and, as a result, learn more informative words for our task} (see Appendix for details of implementation). Through evaluating the system backed by this improved pipeline, we demonstrate how it can remove the human-in-the-loop while maintaining similar levels of accuracy. In the following sections, we report the evaluation results that show 1) improved token-level prediction accuracy using the span-based Model 2; 2) rankings of the results that align well with human judgments of purpose-match from Study 1; and 3) top-ranked results that maintain a rate of partial purpose matches similar to that of the human-in-the-loop system from Study 1. The interactivity enabled by the automated analogical search pipeline further allows us to observe its use in more realistic scenarios. To probe how researchers would interact with an analogical search engine and what challenges they might face in the process, we ran case studies with six researchers (\S\ref{section:case studies}). \new{From these studies, }we uncover potential challenges (\S\ref{section:case studies}) and synthesize design implications for future analogical search engines (\S\ref{section:design implications}).
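As context for the token-level F1 scores reported in the next subsection, the sketch below shows one way such scores can be computed from per-token gold and predicted tags. It is a simplified illustration with hypothetical label names, not the evaluation code used in the study.

\begin{verbatim}
import numpy as np

def token_f1(gold, pred, label):
    # Token-level F1 for one tag, from parallel per-token
    # gold and predicted label sequences.
    gold, pred = np.asarray(gold), np.asarray(pred)
    tp = np.sum((pred == label) & (gold == label))
    fp = np.sum((pred == label) & (gold != label))
    fn = np.sum((pred != label) & (gold == label))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# toy sequences with hypothetical tags:
# PP = purpose token, MN = mechanism token, O = other
gold = ["PP", "PP", "O", "MN", "MN", "O"]
pred = ["PP", "O",  "O", "MN", "PP", "O"]
print(token_f1(gold, pred, "PP"), token_f1(gold, pred, "MN"))
\end{verbatim}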
\subsection{Result}
\begin{minipage}[t]{\textwidth}
\centering
\begin{minipage}[tc]{.6\textwidth}
\centering
\begin{tabular}{l c c c c}\toprule
\multirow{2}{*}{\textbf{Model}} & \textbf{Embedding} & \multirow{2}{*}{\textbf{All}} & \multirow{2}{*}{\textbf{PP}} & \multirow{2}{*}{\textbf{MN}} \\
& (finetuned) & & & \\
\midrule
1. Model 2~\cite{spanrel} & ELMo (N) & \textbf{\textcolor{blue}{0.65}} & 0.65 & 0.64 \\
2. BiLSTM & ELMo (N) & 0.63 & 0.67 & 0.59 \\
3. BiLSTM & SciBERT (N) & 0.62 & 0.69 & 0.55 \\
4. BiLSTM-CRF~\cite{elmo} & ELMo (N) & 0.58 & 0.59 & 0.57 \\
5. BiLSTM & GloVe (Y) & 0.55 & 0.56 & 0.53 \\
\midrule
6. Model 1 & GloVe (N) & 0.50 & 0.51 & 0.50 \\
\bottomrule
\end{tabular}
\label{table:models}
\captionof{table}{F1 scores of different models, sorted by the overall F1 score of Purpose (PP) and Mechanism (MN) detection. The span-based Model 2 gave the best overall F1 score (blue). In comparison, the average agreement between two experts' and crowdworkers' annotations was $0.68$ (PP) and $0.72$ (MN)~\cite{chan2018solvent}. We used AllenNLP~\cite{allennlp} to implement the baseline models 1 -- 5.}
\end{minipage}
~\qquad
\begin{minipage}[tc]{.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/mean_ranks_SpanRel.png}
\vspace{-1em}
\captionof{figure}{Mean ranks of human-judged high and low purpose match papers from the span-based pipeline. Low matches were ranked significantly lower (the rank number was higher), on average at $465^{\text{th}}$ (SD: 261.92), than high matches at $343^{\text{rd}}$ (SD: 279.48).}
\label{fig:mean_ranks}
\end{minipage}
\end{minipage}

\subsubsection{Improved token-level prediction of a span-based model}
First we compared the span-based Model 2 with five other baselines to evaluate the token-level classification performance (Table~\ref{table:models}). Model 2's overall F1 score was the highest at 0.65 (Purpose (PP): 0.65, Mechanism \new{(MN): 0.64, a 0.14-absolute-point increase from Model 1 for each}) on the validation set, which represents \new{an overall} 0.15-absolute-point increase from \new{Model 1}, used for the initial human-in-the-loop analogical search engine.

\subsubsection{Pipeline with a span-based model reflected human judgment for ranking the results}
The improved token-level prediction performance materialized as an increase in the pipeline's ability to judge the degree of purpose match. For this evaluation, we first recorded every query \new{that the human-in-the-loop filterers} used \new{to search for and filter relevant papers for Study 1 participants}. Then, we simulated the search condition of the filterers for the automated pipeline by providing it the exact queries they used as input. We capped the number of top search results at a sufficiently large \fnum{1000} per query. From these top \fnum{1000} results, we selected papers that also appeared in the human-in-the-loop system and collected the corresponding human-vetted judgments of high or low purpose-match. For each of these papers, we also collected its corresponding rank position on the new (automated) pipeline's list of results. We compared the mean ranks of papers that were judged by human filterers as high purpose matches to those of low purpose matches. The result showed that the new pipeline was indeed able to distinguish between the two groups of papers; low purpose matches (i.e.
papers that were deemed not relevant and subsequently filtered by the judges in Study 1) were placed at significantly lower positions on the list than high purpose matches (i.e. unfiltered papers in Study 1). The mean rank for low purpose matches was 465, while for high purpose matches it was 343 (fig.~\ref{fig:mean_ranks}). This difference was significant ($t(192.49) = 3.29$, $p = 0.0012$; Welch's two-tailed t-test).

\subsubsection{\new{Different model performance on finding papers that fully or partially match on purpose}}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/model_comparison_pm.png}
\vspace{-1.5em}
\caption{Distribution of Full, Part, and None purpose matches among the five sourcing mechanisms: \textit{BiLSTM with filtering} represents the human-in-the-loop system (Study 1); \new{\textit{Model 1} represents a} system based on the BiLSTM model \new{alone,} without human-in-the-loop filtering; \textit{Model 2} \new{represents the fully automated system}; \textit{Random} \new{represents randomly sampled papers}; \textit{Keyword} \new{represents} keyword-based search (\new{Control in} Study 1). \textit{Model 2} and \textit{BiLSTM with filtering} showed a similar distribution of purpose matches, \new{and more partial purpose matches than} \textit{BiLSTM} \new{alone}. Random showed mostly no matches. The \textit{Keyword} condition resulted in the highest number of fully matched papers and the lowest number of no matches, suggesting that keyword-based search may be an effective mechanism \new{for direct search tasks, but potentially less effective for inspirational/exploratory search tasks.}}
\label{fig:model_comparison_pm}
\end{figure}

\xhdr{Data and coding} In addition to the overall rankings reflecting human-vetted judgments, we also found a substantial proportion of partial purpose matches among the top-ranked results. We sourced the top 20 results for each participant's research problem with the automated system (Model 2), using the exact queries and order used by the human-in-the-loop filterers in Study 1. We compared this to four other approaches: 1) the human-in-the-loop system in Study 1 (\textit{BiLSTM with filtering}), 2) a BiLSTM-based system excluding the human-in-the-loop from 1 (\textit{BiLSTM}), 3) randomly sampled papers (\textit{Random}), and 4) keyword-based search results\new{, which were used as the control in} Study 1 (\textit{Keyword}). There were no overlapping papers between Model 2 and the other conditions, except for the Keyword \new{condition}, which had 1 overlapping paper. To code the degree of purpose match, we blended the results of the Model 2, BiLSTM, and Random conditions. Two of the authors coded a fraction of the data together blind-to-condition (7.4\%, $N = 20/270$) following the same procedure used in Study 1. Then they independently coded the rest blind-to-condition, achieving an inter-rater agreement of $\kappa = 0.80$ (substantial agreement). We resolved any disagreement through discussion on a case-by-case basis.

\begin{figure}[t]
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/model_comparison_pm_t_test_including_random_with_keyword.png}
\vspace{-1em}
\caption{The distribution of mean purpose match scores over different conditions (mappings: None $\mapsto$ 0, Part $\mapsto$ 1, and Full $\mapsto$ 2).
The mean purpose-match score of the system backed by Model 2 (0.63, SD: 0.56) is significantly higher than that of the system used in Study 1 without the human-in-the-loop (BiLSTM, $\mu=0.45$, SD: 0.58) (Welch's two-tailed t-test, $t(237.87) = 2.49, p = 0.0135$), similar to that of the system with the human-in-the-loop (BiLSTM with filtering, $\mu=0.62$, SD: 0.52) ($t(244.65) = 0.25, p = 0.80$), and significantly lower than that of the keyword-based search (Keyword, $\mu=1.04$, SD: 0.65) ($t(159.38) = -4.57$, $p < 0.0001$).}
\label{fig:model_comparison_pm_t_test}
\end{minipage}%
~\qquad
\begin{minipage}{.45\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/system_interfaces_case_study.pdf}
\vspace{-1em}
\caption{The search interface used for case studies featured a text input for query reformulation, which participants used to iteratively reformulate their queries.}
\label{fig:case_study_interface}
\end{minipage}
\end{figure}

\xhdr{Result} We found that the Model 2-based system achieved parity with the human-in-the-loop system (Study 1) for finding purpose matches (fig.~\ref{fig:model_comparison_pm}), with more than half of the system's top 20 results judged to be partial purpose matches. In \new{contrast}, when human-in-the-loop filtering was removed from the BiLSTM-based system, the frequency of partial purpose matches \new{decreased} from 58\% to 37\% \new{while} the frequency of no matches \new{increased} from 40\% to 59\%. Random sampling resulted in mostly irrelevant results, with no alignment on purpose with the source problem. An interesting point of comparison is between the keyword-based and Model 2-based search results. Keyword search outperformed the Model 2-based \new{system} by finding full purpose matches at a much higher rate (23\% in keyword search vs. 4\% in the Model 2-based system), \new{with similar rates of} partial purpose matches (58\% vs. 55\%) and \new{significantly fewer} no purpose matches (19\% vs. 41\%). On average, the purpose match score was highest in keyword search, followed by the Model 2-based and the human-in-the-loop systems (fig.~\ref{fig:model_comparison_pm_t_test}). \new{Combined} with the results of Study 1, this suggests the complementary value of analogical search: The higher rate of full matches in keyword search may be good when searchers know what they are looking for, \new{such as in direct search tasks} and foraging from familiar sources of ideas. Nonetheless, because analogy papers were both deemed significantly more novel by the scientists and had little-to-no overlap with keyword-search papers, they augmented keyword-based search results with a complementary set of papers \new{that introduce useful} mismatches in their purposes. This set of papers may open up new domains of ideas that scientists may not have been aware of, \new{and encourage} creative adaptation.

\subsection{Participants and Design}
Participants were asked to formulate purpose queries for their own research problems and interact with the results to find interesting papers. If a paper gave them a new idea relevant to their research project, they were asked to write a short project proposal in a shared Google Doc and explain how the paper helped them to come up with the idea. Interviews were conducted via Zoom and lasted for roughly an hour. Participants were paid \$20 in compensation. One participant was an assistant professor in mechanical engineering at a public R1 U.S.
university and five were PhD researchers in science and engineering fields at a private R1 U.S. university. Two were senior PhD students (3rd year or above) and the rest were 2nd year or below. Disciplinary backgrounds of the participants included Chemical (2), Civil (3), and Mechanical Engineering (1). \new{We note that one participant, whose research focus remained in the same general domain, had previously taken part in Study 1. However, the participant's ideas and the specific papers of interest that led to them did not overlap between the two studies.} Table~\ref{table:case-studies-research-problems} describes participants' research problems.

\begin{table*}[t]
\begin{tabular}{c p{13.25cm}}
\toprule
\textbf{PID} & \textbf{Participants' Description of Research Problem} \\
\midrule
\multirow{1}{*}{1} & Improve heat pipe evaporation\\
\rowcolor{lgrey} \multirow{1}{*}{2} & Computer simulations for fluids in nanoscale and uncovering their heat-transfer properties\\
\multirow{3}{*}{3} & Developing a model to identify complex steps in Nuclear Power Plant (NPP) operation, and understanding what task features and structures cause the complexity and how this influences the operators' performance\\
\rowcolor{lgrey} \multirow{1}{*}{4} & Designing simulators for training bridge inspectors\\
\multirow{2}{*}{5} & Developing algorithms and extensible frameworks for detecting personal protective equipment (PPE) in construction sites to improve the safety of construction workers\\
\rowcolor{lgrey} \multirow{1}{*}{6} & Convergence rates of optimization algorithms under multiple initial starting positions\\
\bottomrule
\end{tabular}
\caption{Case study participants' descriptions of their own research problems}
\label{table:case-studies-research-problems}
\end{table*}

\textit{Apparatus: Search interface}. The improved \new{performance} of Model 2 \new{enabled a} fully automated pipeline without \new{human} filtering. \new{The search interface to this back-end} included a text input for reformulating purpose search queries as well as a \new{list view of search results} that showed a sorted list of papers with similar purposes (fig.~\ref{fig:case_study_interface}).

\subsection{Result}
\subsubsection{Overall impressions}
Overall, participants described their experience with the analogical search engine in a positive light (e.g., ``helps me think at a broad topic or a big picture level'' -- P2; ``find some very interesting and useful ideas, the design is also very simple, good when focusing on key areas of research'' -- P5; and ``very interested now what the future of this engine would look like'' -- P3), but a deeper look suggested that the success of ideation depended on how well searchers were able to engage with analogical results that deviate from their expectations: ``It's surprising that the engine recommends examples like these'' -- P3; ``If I input the same search queries on Google Scholar it'd not normally return these things... this search engine works in a different way'' -- P1.

\subsubsection{``Not the kind of paper I'd look for \textbf{but...}'': The challenge of early rejections}
\label{subsubsection:case_studies_early_rejection}
In contrast to similarity-maximizing search engines, the diversity of analogical search results can lead to premature rejection of alternative mechanism ideas.
\new{One of the factors contributing to premature rejection of alternatives may be the tendency to adhere to a set of existing ideas or concepts, as studied in the literature on design fixation (e.g.,~\cite{jansson1991design}). In our study, the} participants found the variety of domains featured in search results confusing, and it sometimes prevented them from engaging with the ideas therein. For example, P3, whose research studies ways to manage or reduce task complexity for nuclear power plant operators, expected to see results similar to Google Scholar's, which are typically in the domains of operational and managerial sciences, but was surprised by unfamiliar domains represented in search results: ``These (\textit{distributed networked systems design} or \textit{path planning for automated robots}) are not the kinds of fields that I normally read in, if I found them elsewhere I would've probably thought they're irrelevant and skipped'' (P3). Participants saw the diversity of results, ranging from unfamiliar terms (P1, P4, P5) to unfamiliar categories of approaches (e.g., ``Not sure what `Gauss-Newton approach for solving constrained optimization' is'' -- P6) and high-level research directions (e.g., ``this is different from my research direction, people who work on this direction might find it interesting, though'' -- P1), as a challenge for engagement. P1 pointed out a \new{perceived} gap between the expectation of least effort and the cognitive processing required when engaging with analogical ideas and adapting them:
\begin{quote}
\textit{``As I understand it, I think this search engine is trying to present papers from related but different fields to let people make connections. But people expect less friction. (The result is) something interesting but I can't directly write it into a project proposal... I think it would be challenging to make people get interested in investing time to read the papers in depth to come up with connections. I wonder what would happen if this was hosted just as an online website (instead of the study context)''} -- P1
\end{quote}
On the other hand, analogs that did get examined more deeply could ultimately lead to creative adaptation. For example, P3 mapped task scheduling among computer processes to task assignment among the nuclear power plant operators, and came up with an idea to adapt algorithmic scheduling used in real-time distributed systems to a scheduling mechanism that could be useful in her research context. Represented symbolically, this process was akin to ideating what might best fill in the `?' in the relational structure [scheduling algorithm:processes in distributed systems] $\leftrightarrow$ [?:nuclear power plant operators]: ``I think the algorithms proposed in this paper could be useful for calculating the operator task execution time, the power plant system's response time, and the time margin between the execution time and the system response time... so that the next task assignment can factor in these margins and things related to workers' well-being like rest and time required between switching tasks'' (P3). Participants seemed to recognize a small number of core relations as the kernel for creative adaptation. In the example of P3, \textit{scheduling processes} in the distributed systems paper piqued her interest and led her to connect them with similar concepts in the literature she was already familiar with: ``You need to make that connection...
I saw parallels between (distributed systems domain) concepts like [scheduling] and [tasks] and [scheduling tasks for the operators]'' (P3). Similarly, P5 recognized a similarity between [monitoring people's performance] in fitness training and [monitoring whether construction workers are wearing personal protective equipment] in construction sites. He then adapted the idea of tracking heat emission in the fitness context to his own: ``I like the idea of [monitoring heat emissions] in fitness training... maybe I can also detect heat emissions from construction workers to see if they are wearing the safety vests or masks while also monitoring the site conditions and worker efficiency. It also gives me an idea to monitor the $\text{CO}_2$ emissions from workers so as to improve the robustness of detection'' (P5). In this case, \textit{monitoring} and the \textit{physical nature} of the activities involved helped P5 see the connection as useful for creatively adapting the source idea.

\begin{wrapfigure}{R}{.35\textwidth}
\begin{center}
\includegraphics[width=.35\textwidth]{figures/purpose_hierarchy.png}
\vspace{-1em}
\caption{Diagram showing different abstraction levels of purposes and their relations. Node \new{\cirnum{A}} \new{corresponds to a more specific query than its} higher-level representation, denoted as \new{\cirnum{B}}. Similarly, node \new{\cirnum{C}} represents a more specific purpose representation \new{of} \new{\cirnum{A}}, accessible via the \new{\cirnum{A}} $\underset{\text{\new{abstraction}}}{\rightarrow}$ \new{\cirnum{B}} $\underset{\text{\new{specification}}}{\rightarrow}$ \new{\cirnum{C}} path.}
\label{fig:purpose_hierarchy}
\end{center}
\end{wrapfigure}

\subsubsection{``I don't know what to type in'': The challenge of query (re-)formulation}
\label{subsubsection:case_studies_query_reformulation}
Another challenge participants faced was that they were not used to formulating their search queries in terms of the high-level purposes of their research. On average, participants entered 5.2 queries (Min: 1, Max: 18, SD: 5.87), 87\% (27) of which were in the form of a single noun phrase (e.g., ``heat pipe evaporation,'' -- P1, ``task complexity'' -- P3, ``theoretical optimization convergence for non-convex functions'' -- P6) or a comma-separated set of multiple noun phrases (e.g., ``heat transfer, nanoscale, fluid'' -- P2) that represented specific aspects related to research purposes rather than the core purposes themselves. For example, the purpose of `heat pipe evaporation' may be to transfer heat, and the purpose of searching for `theoretical optimization convergence for...' may be to \new{detect} when optimization \new{converges or diverges, or to effectively sample unknown (non-convex) distributions.} One of the reasons why participants formulated search queries in this way may be \new{that they wrongly assumed the search engine used} keyword matching to find results. For example, extensive \new{prior} experience with search engines that highlight matching keywords in abstracts (e.g., Google Scholar) in response to users' search queries can reinforce such assumptions among users. In addition, the domain knowledge participants use to judge which returned papers are relevant may have led them to notice a set of keywords whose inclusion strongly signals a paper's relevance.
In contrast, the analogical search results often did not seem to feature such directly similar terms, which made it difficult to judge whether and how a result was relevant: ``I find these papers not very related to my search query at first. It'd be better if you can use some graph or some pictures to indicate how these papers can relate to my keywords'' (P5); ``I'd not consider... (because) they are totally different, right? They look irrelevant... until I think about it I can realize that it's useful. But if you give me the paper, at first I don't realize that'' (P3). \new{While} it may not feel as compelling or natural to participants, formulating and abstracting queries at a high level may surface more distant results that are analogous at a higher level. For example, by querying ``detect personal protective equipment'' instead of ``personal protective equipment construction,'' P5 found novel mechanisms of detection, such as general image segmentation algorithms or an approach to monitoring heat in the context of fitness training, not specific to construction sites and personal protective equipment but nonetheless useful for creative adaptation. Querying ``scheduling tasks'' instead of ``task complexity'' for P3 resulted in finding scheduling algorithms in distributed computer systems that P3 would not otherwise have encountered, while ``assigning tasks'' led to novel auction mechanisms which made her think about a system in which each power plant operator can bid for a task as opposed to being assigned one. \new{Schematically, fig.~\ref{fig:purpose_hierarchy} shows how} formulating queries at a higher level of abstraction \new{than specifying the problem context in full detail} (\new{\cirnum{A} $\rightarrow$ \cirnum{B}}) may lead to \new{discovering} novel mechanisms that are relevant \new{at the high level of abstraction, and in more} distant ways \new{from the original problem formulation} (\new{\cirnum{B} $\rightarrow$ \cirnum{C}}).

\subsection{Support purpose representation at different levels of abstraction}
Analogical search engines should support users in formulating their purpose queries at different levels of abstraction. Additionally, the search engine may prompt users to consider abstracting or specifying their purpose queries in the first place, and explain how it might help bring new insights into their problems. As seen in the case studies (Section~\ref{subsubsection:case_studies_query_reformulation}), scientists recognized that their purpose queries may be represented at multiple levels, but prior experiences with similarity-maximizing search engines may also have anchored them around pre-existing, rigid formulations of purposes. Prompting users to consider their research problems at multiple levels may work against this rigidity, and providing candidate suggestions at varying levels may further reduce the cognitive demand. Moving up the hierarchy to abstract purpose queries may be possible by removing the parts of the query that correspond to specific constraints, or by replacing them with more general descriptions. For example, two participants of Study 1 had an identical purpose representation at a high level (``facilitate heat transfer'') despite the differences in the material phases targeted in each purpose: solid material and semiconductors for \p{P1}{Study 1} and liquid thin films for \p{P3}{Study 1}. \new{Furthermore}, we also observed that looking only for the exact match of a purpose can lead to missed opportunities.
For example, although ``fins represent a different idea for transferring the heat'' and ``they (fins) don't match in terms of the scale -- macro, not nano,'' it nevertheless made \p{P1}{Study 1} wonder ``what if we could design nanoscale wall structures that act like fins that convert heat to mechanical energy?''. A corollary to this observation is that superposing just the right amount of misalignment can sometimes lead to interesting results. For \p{P4}{Study 1}, a paper presenting experimental techniques for piezoelectric properties was interesting despite misalignments such as \new{[}\textit{simulation}-based\new{]} (source) $\nleftrightarrow$ \new{[}\textit{experimental}\new{]} (analog) and \new{[}\textit{dielectric properties}\new{]} (source) $\nleftrightarrow$ \new{[}\textit{piezoelectric properties}\new{]} (analog): ``Though it's an experimental study, it's very close in terms of the material and phenomenon so likely to be helpful. Because we might be able to pick up some trends like, if we increased the temperature, the dielectric response gets stronger or weaker, inferred from the experimental piezoelectric responses, which can then be used to corroborate simulation results or help configure its parameters'' (\p{P4}{Study 1}). However, too much deviation seemed detrimental to its potential for inspiration: ``\new{[}Molecular dynamic simulation\new{]} is the same tool, but (this paper studies) \new{[}thermal\new{]} (not \new{[}dielectric\new{]}) properties on \new{[}polymer composites\new{]}... \new{[}polymer composites\new{]} are harder to model'' (\p{P4}{Study 1}). In sum, analogical search engines should support not only the capability to `narrow it down' with specific constraints, but also ways to relax them to broaden the search space when suitable, thus making it feasible to hit the sweet spot between too little deviation (i.e. similarity maximization and trivial matches) and too much (i.e. critical misalignment and unusable analogs).

\subsection{Support iterative steering away from critical misalignment and towards generative misalignment}
Analogical search engines should recognize that important constraints may be discovered by users only after seeing misaligned analogs, and support this discovery process by presenting effective examples of misalignment to users. Analogs that deviate on some aspects of the source problem but preserve important relations may be particularly conducive to analogical inspiration that opens up not just individual solutions, but entirely new domains of solutions. At the same time, however, scientists found it challenging to come up with effective search queries because combinations of misalignment can sometimes lead to an unintended intersection of domains: ``I feel like I'm tricking the machine because \new{[}thin film\new{]} is often used with \new{[}solids\new{]}, and the term \new{[}pressure\new{]} also appears a lot in \new{[}manufacturing\new{]}... so combining them gives a subset of papers concerned with heat transfer in solid materials and in manufacturing'' (\p{P3}{Study 1}); ``on Google Scholar also, I get a lot of polymer strings and get (irrelevant) results like \textit{we use an \new{[}electric\new{]} device to study \new{[}vibration and stress\new{]} of \new{[}polymers\new{]}}...
the machine is picking up \new{[}electric\new{]} and \new{[}properties\new{]} such as vibration and stress in the context of studying polymers but what I really want is \new{[}electric properties\new{]} of \new{[}polymers\new{]} \textit{not} \new{[}electronic devices\new{]} to study the \new{[}mechanical properties\new{]} of \new{[}polymers\new{]}'' (\p{P4}{Study 1}). Nonetheless, seeing misaligned analogs can be an effective way of reasoning about salient constraints and reflecting on hidden assumptions. For example, while evaluating papers about designing microelectrode arrays, \p{P6}{Study 1} said: \textit{``Now I think about this (result), I assumed a lot of things when typing that search query... though impedance and topology are my main focus in microelectrode arrays, the coating, size, interface between a cell membrane and electrodes/sensors, biocompatibility, softness of electrodes, fabrication process, material of the platform: silicon or polymer or graphene, form factor: attaching electrodes to a shank-like structure or a broom-like structure, degree of invasiveness, are all part of the possible areas of research and it makes sense that they showed up -- there is no way the machine would have known that from my query.''} This excerpt illustrates that knowing which specifications are necessary, and which constraints should be abstracted to cast a wide enough net to catch interesting ideas, was a difficult task for scientists, especially when they had to recall important attributes rather than simply recognize them from examples of misalignment. Prior work in the cognitive sciences also shows how dissimilarity associated with various factors in analogical mappings~\cite{gentner2012analogical} can pressure working memory~\cite{waltz2000role}, increase cognitive load~\cite{sweller1990cognitive}, and increase the response time taken to produce correct mappings for analogy problems~\cite{keane1994constraints}. Therefore, analogical search engines should help to reduce the cognitive effort required in the process, for example by proactively retrieving results that are `usefully' misaligned such that searchers can better recognize (as opposed to having to recall) salient constraints and refine their problem representation. This process is deeply exploratory~\cite{white2009exploratory,russell1993cost,CiteSense} in nature, and suggests the importance of both providing end-users with a sense of progress over time~\cite{perfect_search_engine} as well as adequate feedback mechanisms for the machine to adjust according to the changing end-user search intent~\cite{schnabel2020doesn,schnabel2019shaping,kelly2003implicit}. For example, while the machine may `correctly' recognize significant analogical relevance at a higher level of purpose representation and recommend \textit{macro}-scale mechanisms to a scientist who studies \textit{nano}-scale phenomena (\p{P1}{Study 1}) or solid and semiconductor-based cooling mechanisms to a scientist working on liquid and evaporative cooling systems (\p{P3}{Study 1}), these analogs may be critically misaligned on the specific constraints of the problem (i.e. the scale or material phase) and thus be considered by end-users useless or even harmful.
\subsection{\new{Support reflection and explanation of analogical relevance}}
\new{Throughout the process of analogical search, human-AI coordination is critical for success, and an important aspect is how deeply the human users can reflect on the retrieved analogs~\cite{hao2016reflection} and recognize how different notions of relevance may exist for their own problem context, despite potential dissimilarity on the surface.} \new{Looking at previous examples of the tools and techniques developed for targeted reflection support may be useful to this end. For example, ImageSense~\cite{koch2020imagesense} provides intelligent support such as automatically generated mood-boards and semantic labels for groups of images to help designers communicate their design intent to others. Another system, Card Mapper, visualizes relative co-occurrences of design concepts using proximity in the design space~\cite{darzentas2019card}. Similarly, representing the space of analogical ideas using a spatial encoding of similarity between two analogs, or designing information that supports getting a sense of the space of search results (e.g., semantic category labels similar to ImageSense's, or the distribution of the domains that analogs are pulled from), may be an avenue for fruitful future research.}

The explanation of relevance is also important, especially when there is a risk of early rejection (\S\ref{subsubsection:case_studies_early_rejection}). Using examples from the case studies, one approach to explaining relevance might be to surface a small number of core common features between an analog and a problem query. Such common features were considered useful by scientists for making analogical connections, and they could creatively adapt them for their own research problem context. When common features are not directly retrieved, generation of more elaborate explanations may be required. \new{We refer to~\cite{bansal2021does,smith2020no,buccinca2021trust,kang2022you} for those interested in future design considerations of automatically generated recommendation explanations.} \new{Further complementing the direct explanation of relevance approach,} techniques such as prompting or reminding \new{the searchers of previously rejected or overlooked ideas} may also trigger deeper reflection and delay premature rejection of the ideas based solely on their surface dissimilarity. Participants from both studies commented that the critical first step towards analogical inspiration may be raising enough attention and interest above the initial `hump' of cognitive demand. Gentle reminders (e.g., ``Ask me later if this would be interesting and also provide a list of items'' -- \p{P1}{Case Studies}) or resurfacing previously rejected papers in light of new information (\p{P1}{Case Studies}, \p{P3}{Case Studies}) may help users cross this barrier.
\section{Introduction}
Active galactic nuclei (AGN), with observed bolometric luminosities of around 10$^{11}$ $-$ 10$^{14}$ L$_{\odot}$ and a class that includes Seyfert galaxies, are believed to be powered by the accretion of matter onto supermassive black holes residing at the centres of galaxies (Lynden-Bell 1969, Rees 1984). According to the standard picture, accretion leads to the formation of an accretion disk that emits black body radiation. The observed ultra-violet (UV)/optical radiation in AGN is well represented by the superposition of several multi-temperature black body components (Frank et al. 2002), and the observed big blue bump (BBB) in AGN spectra is often attributed to the accretion disk.

AGN have been known to show flux variations since their discovery, and variability is now considered one of their defining characteristics (Ulrich et al. 1997, Wagner \& Witzel 1995). Such flux variations are seen on a range of time scales, from a fraction of an hour to years, over the complete electromagnetic spectrum from low energy radio to high energy $\gamma$-rays (Wagner \& Witzel 1995, Ulrich et al. 1997, Zhang et al. 2017, Giveon et al. 1999). In spite of a wealth of monitoring data on large samples of AGN with varying time resolutions, from time domain surveys as well as dedicated monitoring programs, the physical mechanisms that cause AGN flux variations are still not well understood. Though different physical processes contribute to the emission at different wavebands, the UV-optical emission is believed to be emitted from an optically thick and geometrically thin accretion disk (Frank et al. 2002). Therefore, the study of flux variations in the UV/optical bands can enable one to understand the processes happening in the accretion disks of AGN, in particular in the non-blazar category of AGN.

Earlier efforts on the study of UV variations in AGN were by Paltani \& Courvoisier (1994), who carried out a systematic analysis of the UV flux variations of different classes of AGN using data from the International Ultraviolet Explorer (IUE) covering the period 1978 $-$ 1991. The UV variability of blazars has also been studied using IUE data (Edelson et al. 1991, Edelson 1992); according to those analyses, blazars show stronger variability at shorter wavelengths than at longer wavelengths. Subsequent to the work of Paltani \& Courvoisier (1994), Welsh et al. (2011) carried out a systematic study of the UV variability of a large number of AGN using data from the Galaxy Evolution Explorer (GALEX) database. According to Welsh et al. (2011), the UV variability of quasars is much larger than their optical fluctuations and, among the UV bands, the variability observed in the far-UV (FUV; 1344$-$1786 \AA) band is larger than the variability in the near-UV (NUV; 1771$-$2831 \AA) band, which is also similar to what was found by Paltani \& Courvoisier (1994). The analysis of Paltani \& Courvoisier (1994) failed to find any significant differences between the UV properties of radio-loud and radio-quiet quasars, prompting the authors to suggest that the UV emission from AGN is independent of the radio emission properties. Studies of optical flux variations in different categories of AGN indicate that blazars tend to show larger amplitude and higher duty cycle of variability within a night compared to other radio-loud and radio-quiet AGN (Stalin et al. 2004). On year-like time scales, among Seyfert galaxies in the optical band, radio-loud sources are more variable than their radio-quiet counterparts (Rakshit \& Stalin 2017).
Short time scale UV flux variations, of the order of 1000 to 10000 seconds, were found in the Seyfert 1 galaxy NGC 7469 by Welsh et al. (1998) using the Faint Object Spectrograph on the Hubble Space Telescope, as well as in Fairall 9 (Lohfink et al. 2017). Most of the studies on the UV flux variability of AGN (Sakata et al. 2011, Paltani \& Courvoisier 1994), either using spectroscopy or broad band photometry, indicate that the UV flux variability characteristics of AGN can be well described by accretion disk models. Vanden Berk et al. (2004), based on two epochs of observations of a large number of quasars, found a spectral hardening of the UV continuum emission with increasing flux values. Similar results were also found by Wilhite et al. (2005) using spectroscopic observations. Paltani \& Walter (1996), using IUE observations, found that the spectra of Seyfert galaxies vary with time, becoming flatter when the source brightens. To explain the observations, they proposed a two component model wherein the UV flux variations consist of a variable component with a constant spectral shape and a non-variable component from the small blue bump (SBB). There are also reports that claim constancy of the UV spectral shape during flux variations of AGN (Alloin et al. 1995, Rodriguez-Pascual et al. 1997). As studies on the UV flux variability characteristics of AGN on both long and short time scales are limited, it is of great importance to expand the studies of the UV flux variability nature of AGN to a larger sample of sources, with data covering a longer duration than that analysed by Paltani \& Walter (1996). Towards this, we have carried out a statistical analysis of the UV variability of a sample of Seyfert galaxies, a category of AGN for which sufficient data is available, focussing mainly on the FUV (1150$-$2000 \AA) and NUV (1850$-$3200 \AA) flux variations.

\section{Sample and Data}
Our sample of Seyfert galaxies was taken from Dunn et al. (2006), who have provided continuum light curves in different wavebands for a sample of 175 Seyfert galaxies as part of the Program in Extra Galactic Astronomy (PEGA) \footnote{http://www.astro.gsu.edu/PEGA/IUE}. The data for this compilation were taken from observations carried out by IUE between 1978 and 1995. In this database, Dunn et al. (2006) have provided continuum flux measurements at three line-free regions in the spectra of each of the Seyfert galaxies. In IUE spectra, the NUV and FUV cover the wavelength regions 1850$-$3200 \AA ~and 1150$-$2000 \AA, respectively. For most of the sources, flux measurements are available in three NUV passbands (2200, 2400 and 2740 \AA) with bin sizes of 50, 60 and 30~\AA, and three FUV passbands (1355, 1720 and 1810 \AA) with bin sizes of 30, 30 and 50 \AA. For this study, we have downloaded the light curves for all the Seyfert galaxies that are available in the PEGA database and applied the following conditions to select the light curves for further analysis:
\begin{enumerate}
\item The sources must have data from both cameras of IUE, namely the short-wavelength prime (SWP) and the long-wavelength prime (LWP).
\item The total number of points (including all three continuum passbands in FUV and NUV) must be larger than 50.
\end{enumerate}
The above two conditions led us to a final sample of 14 Seyfert galaxies spanning the redshift range 0.002 $ < z < $ 0.08.
Of the 14 selected Seyfert galaxies, one (NGC 1068) belongs to the Seyfert 2 category (having narrow permitted and forbidden lines), while the remaining 13 sources belong to the Seyfert 1 category, with broad permitted lines and narrow forbidden lines. The details of the objects selected for this study are given in Table~\ref{tab:details}. In this table, N$_{Total}$ in column 7 refers to the total number of photometric points for a source in all six passbands together. The entries against $\lambda_1$, $\lambda_2$ and $\lambda_3$ in the SWP and LWP columns refer to the central wavelengths used for the photometry, and N$_{NUV}$ and N$_{FUV}$ give the number of points in each of the NUV and FUV passbands. The observed flux values in all six passbands were corrected for galactic extinction using the $A_V$ values taken from NED\footnote{ned.ipac.caltech.edu}, which uses Schlafly \& Finkbeiner (2011), and the extinction law evaluated in the UV range using the formalism given by Cardelli et al. (1989). The galactic extinction corrected flux values were then subjected to further analysis. We note here that the measured flux values were not corrected for extinction due to the host galaxies of the sources. The light curves in all the FUV and NUV passbands for the sources are given in Fig. \ref{Fig:1} through Fig. \ref{Fig:5}. In these figures, the quoted wavelengths are in the observed frame of the sources.

The sample analysed here has some overlap with that reported by Paltani \& Walter (1996). The sample analysed by Paltani \& Walter (1996) has 15 sources that include Seyfert galaxies as well as radio-loud and radio-quiet quasars, and their analysis was based on IUE data collected up to 1991. Our sample contains 14 Seyfert galaxies, using data from IUE up to 1995. Though there are 10 sources in common between the sample reported here and that of Paltani \& Walter (1996), the data analysed here are more extended in terms of the number of epochs and the total duration (1978$-$1995).

\subsection{Flux variability}
For all the 14 sources selected based on the criteria outlined above, we carried out an analysis to characterise their variability. This was done by calculating their normalized excess variance, defined in Vaughan et al. (2003) as
\begin{equation}
F_{var} = \sqrt{\frac{S^2 - \overline{\sigma_{err}^2}} {\overline{x}^2}}
\end{equation}
where $S^2$ and $\overline{\sigma_{err}^2}$ are the sample variance and the mean square measurement error, defined as
\begin{equation}
S^2 = \frac{1}{N-1} \sum_{i}(x_i - \overline{x})^2
\end{equation}
\begin{equation}
\overline{\sigma_{err}^2} = \frac{1}{N} \sum_{i=1}^N \sigma_{err,i}^2
\end{equation}
The uncertainty in $F_{var}$ was also calculated following Vaughan et al. (2003) and is given by
\begin{equation}\label{eq:ferr}
\sigma_{F_{var}} = \sqrt{\left(\sqrt{\frac{1}{2N}}\frac{\overline{\sigma_{err}^2}}{\overline{x}^2 F_{var}}\right)^2 + \left(\sqrt{\frac{\overline{\sigma_{err}^2}}{N}}\frac{1}{\overline{x}}\right)^2}
\end{equation}
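For concreteness, the estimator above can be evaluated per passband as in the following minimal Python sketch; the function name and the toy light curve are ours, and the sketch assumes detectable variability ($S^2 > \overline{\sigma_{err}^2}$) so that the error propagation in Eq.~\ref{eq:ferr} is well defined.

\begin{verbatim}
import numpy as np

def excess_variance(flux, err):
    # Normalized excess variance F_var and its uncertainty,
    # following Vaughan et al. (2003), Eqs. (1)-(4) above.
    x = np.asarray(flux, dtype=float)
    e = np.asarray(err, dtype=float)
    N, xbar = x.size, x.mean()
    s2 = x.var(ddof=1)            # sample variance S^2
    mse = np.mean(e ** 2)         # mean square measurement error
    fvar = np.sqrt(s2 - mse) / xbar
    term1 = np.sqrt(1.0 / (2 * N)) * mse / (xbar ** 2 * fvar)
    term2 = np.sqrt(mse / N) / xbar
    return fvar, np.sqrt(term1 ** 2 + term2 ** 2)

# toy light curve: hypothetical extinction-corrected fluxes
# and 1-sigma errors in a single passband
flux = [2.1, 2.4, 1.8, 2.9, 2.2, 2.6]
err = [0.10, 0.12, 0.09, 0.11, 0.10, 0.10]
print(excess_variance(flux, err))
\end{verbatim}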
The M$_{BH}$ values are in solar units, and were taken from the database of Bentz \& Katz (2015), except for the sources NGC 1068 and Mrk 478, for which they were taken from Greenhill \& Gwinn (1997) and Wang \& Lu (2001) respectively.}\label{tab:details}
\begin{tabular}{lccclccllcccccc} \hline
Name& RA(2000) &Dec (2000)& Redshift & log (M$_{BH}$) & log L$_{bol}$ & N$_{Total}$ & \multicolumn{4}{c}{SWP} & \multicolumn{4}{c}{LWP} \\ \cline{8-15}
& & & & & (erg/sec) & & $\lambda_1$ & $\lambda_2$ & $\lambda_3$ & N$_{FUV}$ & $\lambda_1$ & $\lambda_2$ & $\lambda_3$ & N$_{NUV}$ \\ \hline
Fairall 9 & 01:23:45.8 & -58:48:20.5 & 0.047 & 8.299 & 44.78 & 693 & 1418 & 1800 & 1895 & 156 & 2303 & 2512 & 2868 & 75\\
NGC 1068 & 02:42:40.7 & -00:00:47.8 & 0.004 & 7.176 & 44.60 & 75 & 1360 & 1726 & 1816 & 14 & 2208 & 2409 & 2750 & 11 \\
3C 120 & 04:33:11.1 & +05:21:15.6 & 0.033 & 7.745 & 43.98 & 186 & 1399 & 1776 & 1869 & 42 & 2272 & 2479 & 2830 & 20 \\
Akn 120 & 05:16:11.4 & -00:08:59.4 & 0.032 & 8.068 & 44.40 & 177 & 1398 & 1775 & 1868 & 36 & 2271 & 2471 & 2828 & 23 \\
NGC 3516 & 11:06:47.5 & +72:34:06.9 & 0.009 & 7.395 & 43.89 & 330 & 1366 & 1735 & 1826 & 85 & 2219 & 2421 & 2764 & 25 \\
NGC 3783 & 11:39:01.8 & -37:44:18.7 & 0.010 & 7.371 & 43.57 & 834 & 1368 & 1736 & 1827 & 164 & 2221 & 2423 & 2726 & 114\\
NGC 4051 & 12:03:09.6 & +44:31:52.8 & 0.002 & 6.130 & 42.38 & 117 & 1358 & 1724 & 1814 & 30 & 2205 & 2405 & 2746 & 9 \\
NGC 4151 & 12:10:32.6 & +39:24:20.6 & 0.003 & 7.555 & 43.30 & 2904 & 1359 & 1725 & 1816 & 542 & 2207 & 2407 & 2749 & 426\\
NGC 4593 & 12:39:39.4 & -05:20:39.3 & 0.009 & 6.882 & 43.45 & 138 & 1367 & 1735 & 1826 & 29 & 2219 & 2421 & 2764 & 17 \\
NGC 5548 & 14:17:59.5 & +25:08:12.4 & 0.017 & 7.718 & 43.92 & 1104 & 1378 & 1749 & 1841 & 214 & 2237 & 2441 & 2787 & 154\\
Mrk 478 & 14:42:07.5 & +35:26:22.9 & 0.079 & 7.330 & 44.95 & 117 & 1462 & 1855 & 1953 & 19 & 2373 & 2589 & 2956 & 20 \\
3C 390.3 & 18:42:09.0 & +79:46:17.1 & 0.056 & 8.638 & 44.32 & 330 & 1431 & 1816 & 1911 & 94 & 2323 & 2534 & 2893 & 16 \\
Mrk 509 & 20:44:09.7 & -10:43:04.5 & 0.034 & 8.049 & 44.78 & 288 & 1401 & 1779 & 1872 & 55 & 2275 & 2482 & 2834 & 41 \\
NGC 7469 & 23:03:15.6 & +08:52:26.4 & 0.016 & 6.956 & 44.42 & 750 & 1377 & 1748 & 1839 & 236 & 2235 & 2439 & 2784 & 14 \\
\hline \end{tabular} \end{table*} \begin{table*} \scriptsize \caption{Results of the analysis of variability. The entries in columns 2, 3 and 4 are for the FUV bands, while those in columns 5, 6 and 7 are for the NUV bands.
Columns 8 and 9 give the mean $\alpha$ in SWP and LWP, and the last column gives the mean $\alpha$ estimated using IUE data covering the range 1150$-$3200 \AA, taken from Paltani \& Walter (1996).}\label{tab:fvar_values}
\begin{tabular}{lcccccclll} \hline
Name & \multicolumn{6}{c}{F$_{var} \pm \sigma_{F_{var}}$} & $\overline{\alpha}$(SWP) & $\overline{\alpha}$(LWP) & $\overline{\alpha}$ \\ \cline{2-7}
& \multicolumn{3}{c}{SWP} & \multicolumn{3}{c}{LWP} & & & \\ \cline{2-7}
& $\lambda_1$ & $\lambda_2$ & $\lambda_3$ & $\lambda_1$ & $\lambda_2$ & $\lambda_3$ & & & \\ \hline
Fairall 9 & 0.586 $\pm$ 0.012 & 0.574 $\pm$ 0.012 & 0.562 $\pm$ 0.006 & 0.515 $\pm$ 0.022 & 0.499 $\pm$ 0.012 & 0.440 $\pm$ 0.009 & 0.92 $\pm$ 0.04 & 1.68 $\pm$ 0.05 & 0.9 \\
NGC 1068 & 0.529 $\pm$ 0.052 & 0.548 $\pm$ 0.036 & 0.522 $\pm$ 0.028 & 0.335 $\pm$ 0.073 & 0.339 $\pm$ 0.086 & 0.351 $\pm$ 0.022 & 1.68 $\pm$ 0.10 & 1.28 $\pm$ 0.15 & --- \\
3C 120 & 0.321 $\pm$ 0.058 & 0.327 $\pm$ 0.030 & 0.297 $\pm$ 0.026 & 0.414 $\pm$ 0.157 & 0.336 $\pm$ 0.052 & 0.282 $\pm$ 0.017 & 0.80 $\pm$ 0.09 & 2.46 $\pm$ 0.20 & 1.9 \\
Akn 120 & 0.174 $\pm$ 0.035 & 0.154 $\pm$ 0.022 & 0.154 $\pm$ 0.022 & 0.109 $\pm$ 0.057 & 0.104 $\pm$ 0.029 & 0.086 $\pm$ 0.017 & 1.41 $\pm$ 0.04 & 1.60 $\pm$ 0.07 & 1.5 \\
NGC 3516 & 0.600 $\pm$ 0.013 & 0.596 $\pm$ 0.012 & 0.590 $\pm$ 0.008 & 0.495 $\pm$ 0.044 & 0.435 $\pm$ 0.019 & 0.461 $\pm$ 0.011 & 1.75 $\pm$ 0.03 & 2.81 $\pm$ 0.10 & 2.2 \\
NGC 3783 & 0.305 $\pm$ 0.013 & 0.278 $\pm$ 0.010 & 0.269 $\pm$ 0.008 & 0.225 $\pm$ 0.035 & 0.217 $\pm$ 0.014 & 0.221 $\pm$ 0.007 & 1.15 $\pm$ 0.02 & 1.67 $\pm$ 0.04 & 1.5 \\
NGC 4051 & 0.240 $\pm$ 0.021 & 0.215 $\pm$ 0.016 & 0.206 $\pm$ 0.012 & 0.109 $\pm$ 0.076 & 0.137 $\pm$ 0.031 & 0.151 $\pm$ 0.011 & 1.91 $\pm$ 0.11 & 2.52 $\pm$ 0.13 & --- \\
NGC 4151 & 0.749 $\pm$ 0.011 & 0.728 $\pm$ 0.008 & 0.708 $\pm$ 0.006 & 0.681 $\pm$ 0.030 & 0.629 $\pm$ 0.016 & 0.650 $\pm$ 0.006 & 0.99 $\pm$ 0.02 & 2.25 $\pm$ 0.03 & 1.2 \\
NGC 4593 & 0.440 $\pm$ 0.016 & 0.462 $\pm$ 0.018 & 0.407 $\pm$ 0.011 & 0.246 $\pm$ 0.048 & 0.254 $\pm$ 0.019 & 0.284 $\pm$ 0.012 & 2.03 $\pm$ 0.08 & 2.53 $\pm$ 0.07 & 2.0 \\
NGC 5548 & 0.362 $\pm$ 0.008 & 0.337 $\pm$ 0.006 & 0.315 $\pm$ 0.004 & 0.253 $\pm$ 0.008 & 0.262 $\pm$ 0.004 & 0.271 $\pm$ 0.006 & 1.22 $\pm$ 0.02 & 1.06 $\pm$ 0.04 & 1.3 \\
Mrk 478 & 0.119 $\pm$ 0.039 & 0.196 $\pm$ 0.018 & 0.153 $\pm$ 0.016 & 0.134 $\pm$ 0.032 & 0.119 $\pm$ 0.019 & 0.226 $\pm$ 0.012 & 1.00 $\pm$ 0.13 & 1.88 $\pm$ 0.08 & --- \\
3C 390.3 & 0.712 $\pm$ 0.027 & 0.780 $\pm$ 0.018 & 0.645 $\pm$ 0.014 & 0.144 $\pm$ 0.056 & 0.367 $\pm$ 0.016 & 0.309 $\pm$ 0.015 & 1.57 $\pm$ 0.15 & 4.43 $\pm$ 0.18 & --- \\
Mrk 509 & 0.234 $\pm$ 0.024 & 0.230 $\pm$ 0.027 & 0.226 $\pm$ 0.021 & 0.209 $\pm$ 0.024 & 0.183 $\pm$ 0.016 & 0.207 $\pm$ 0.018 & 1.13 $\pm$ 0.04 & 0.80 $\pm$ 0.04 & 1.2 \\
NGC 7469 & 0.215 $\pm$ 0.047 & 0.204 $\pm$ 0.029 & 0.183 $\pm$ 0.020 & 0.213 $\pm$ 0.068 & 0.189 $\pm$ 0.037 & 0.195 $\pm$ 0.021 & 1.31 $\pm$ 0.02 & 1.73 $\pm$ 0.16 & 1.4 \\
\hline \end{tabular} \end{table*} \begin{figure*} \centering \hbox{ \resizebox{5.5cm}{8cm}{\includegraphics{3c120.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{3c390.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{ark120.png}} } \caption{FUV and NUV light curves for 3C 120 (left), 3C 390.3 (middle) and Akn 120 (right). For each source, the top three panels show the FUV bands and the bottom three panels the NUV bands. For 3C 120, from top to bottom the wavelengths of the light curves are 1399 \AA, 1776 \AA, 1869 \AA, 2272 \AA, 2479 \AA ~and 2830 \AA.
For 3C 390.3, the wavelengths of the light curves from top to bottom are 1431 \AA, 1816 \AA, 1911 \AA, 2323 \AA, 2534 \AA ~and 2893 \AA. For Akn 120, the light curves shown from top to bottom are at wavelengths of 1390 \AA, 1775 \AA, 1868 \AA, 2271 \AA, 2477 \AA ~and 2828 \AA.} \label{Fig:1} \end{figure*} \begin{figure*} \centering \hbox{ \resizebox{5.5cm}{8cm}{\includegraphics{fairall_9.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{mrk478.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{mrk509.png}} } \caption{Light curves in increasing order of wavelength from top to bottom for Fairall 9 (left), Mrk 478 (middle) and Mrk 509 (right). For Fairall 9, the wavelengths are 1418 \AA, 1800 \AA, 1895 \AA, 2303 \AA, 2512 \AA ~and 2868 \AA. For Mrk 478, the light curves from top to bottom have wavelengths of 1462 \AA, 1855 \AA, 1953 \AA, 2373 \AA, 2589 \AA ~and 2956 \AA. For Mrk 509, from the top to the bottom panels, the light curves have wavelengths of 1401 \AA, 1779 \AA, 1872 \AA, 2275 \AA, 2482 \AA ~and 2834 \AA.} \label{Fig:2} \end{figure*} \begin{table} \caption{Average F$_{var}$ values for the different wavelength bands} \label{tab:afvar} \begin{tabular}{cc} \hline Mean wavelength (\AA) & Mean F$_{var}$ \\ \hline 1389 $\pm$ 30 & 0.399 $\pm$ 0.198 \\ 1762 $\pm$ 38 & 0.402 $\pm$ 0.203 \\ 1855 $\pm$ 40 & 0.374 $\pm$ 0.188 \\ 2255 $\pm$ 49 & 0.292 $\pm$ 0.168 \\ 2460 $\pm$ 53 & 0.291 $\pm$ 0.148 \\ 2806 $\pm$ 64 & 0.295 $\pm$ 0.139 \\ \hline \end{tabular} \end{table} \begin{table*} \small \caption{Results of the linear least squares fits between the spectral indices and the fluxes in the shortest wavelength passbands of the FUV (SWP) and NUV (LWP). The columns r and P give the correlation coefficient and the probability of no correlation respectively.}\label{tab:specvar}
\begin{tabular}{lcccccccc} \hline
Name & \multicolumn{4}{c}{SWP} & \multicolumn{4}{c}{LWP} \\ \cline{2-9}
& Slope & Intercept & r & P & Slope & Intercept & r & P \\ \hline
Fairall 9 & -0.100 $\pm$ 0.012 & 1.425 $\pm$ 0.083 & 0.405 & 0.000 & -0.393 $\pm$ 0.033 & 2.965 $\pm$ 0.105 & 0.601 & 0.000 \\
NGC 1068 & -0.065 $\pm$ 0.038 & 2.188 $\pm$ 0.353 & 0.530 & 0.052 & 0.009 $\pm$ 0.078 & 1.075 $\pm$ 0.489 & -0.109 & 0.751 \\
3C 120 & -0.178 $\pm$ 0.038 & 2.268 $\pm$ 0.326 & 0.455 & 0.044 & -0.622 $\pm$ 0.116 & 4.131 $\pm$ 0.345 & 0.760 & 0.002 \\
Akn 120 & -0.131 $\pm$ 0.026 & 3.097 $\pm$ 0.301 & 0.530 & 0.001 & -0.291 $\pm$ 0.071 & 3.866 $\pm$ 0.556 & 0.653 & 0.001 \\
NGC 3516 & -0.079 $\pm$ 0.007 & 2.157 $\pm$ 0.036 & 0.286 & 0.008 & 0.074 $\pm$ 0.026 & 2.577 $\pm$ 0.098 & -0.101 & 0.637 \\
NGC 3783 & -0.088 $\pm$ 0.006 & 2.204 $\pm$ 0.068 & 0.683 & 0.000 & -0.262 $\pm$ 0.030 & 3.634 $\pm$ 0.217 & 0.329 & 0.000 \\
NGC 4051 & -1.140 $\pm$ 0.223 & 3.653 $\pm$ 0.326 & 0.596 & 0.001 & -1.525 $\pm$ 0.873 & 4.472 $\pm$ 1.199 & 0.285 & 0.457 \\
NGC 4151 & -0.019 $\pm$ 0.001 & 1.310 $\pm$ 0.027 & 0.720 & 0.000 & -0.058 $\pm$ 0.006 & 2.951 $\pm$ 0.090 & 0.380 & 0.000 \\
NGC 4593 & -0.002 $\pm$ 0.007 & 2.302 $\pm$ 0.018 & 0.378 & 0.043 & -1.798 $\pm$ 0.626 & 4.939 $\pm$ 0.802 & 0.010 & 0.971 \\
NGC 5548 & -0.234 $\pm$ 0.019 & 2.280 $\pm$ 0.085 & 0.547 & 0.000 & -0.547 $\pm$ 0.075 & 2.533 $\pm$ 0.250 & 0.179 & 0.050 \\
Mrk 478 & -1.201 $\pm$ 0.621 & 4.474 $\pm$ 1.896 & 0.181 & 0.518 & -5.171 $\pm$ 2.293 & 9.716 $\pm$ 3.427 & 0.177 & 0.483 \\
3C 390.3 & -1.043 $\pm$ 0.142 & 2.160 $\pm$ 0.172 & 0.609 & 0.000 & -12.695 $\pm$ 4.974 & 7.483 $\pm$ 1.222 & 0.445 & 0.084 \\
Mrk 509 & -0.038 $\pm$ 0.017 & 1.653 $\pm$ 0.191 & 0.097 & 0.485 & -0.318 $\pm$ 0.098 & 2.956 $\pm$ 0.722 & 0.388 & 0.031 \\
NGC 7469 & -0.221 $\pm$ 0.009 & 2.982 $\pm$ 0.067 & 0.786 & 0.000 & -0.211 $\pm$ 0.093 & 2.846 $\pm$ 0.460 & 0.360 & 0.206 \\
\hline \end{tabular} \end{table*} \begin{figure*} \centering \hbox{ \resizebox{5.5cm}{8cm}{\includegraphics{ngc1068.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{ngc3516.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{ngc3783.png}} } \caption{Light curves in the FUV and NUV bands for the sources NGC 1068 (left), NGC 3516 (middle) and NGC 3783 (right). For NGC 1068, the light curves from the top to the bottom panels are at wavelengths of 1360 \AA, 1726 \AA, 1816 \AA, 2208 \AA, 2409 \AA ~and 2750 \AA. For NGC 3516, the light curves are in increasing order of wavelength from top to bottom, with wavelengths of 1366 \AA, 1735 \AA, 1826 \AA, 2219 \AA, 2421 \AA ~and 2764 \AA. For NGC 3783, the light curves from the top to the bottom panels have wavelengths of 1368 \AA, 1736 \AA, 1827 \AA, 2221 \AA, 2423 \AA ~and 2766 \AA.} \label{Fig:3} \end{figure*} \begin{figure*} \centering \hbox{ \resizebox{5.5cm}{8cm}{\includegraphics{ngc4051.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{ngc4151.png}} \hspace*{0.5cm}\resizebox{5.5cm}{8cm}{\includegraphics{ngc4593.png}} } \caption{Light curves in the FUV and NUV bands for the sources NGC 4051, NGC 4151 and NGC 4593. For NGC 4051, the light curves in increasing order of wavelength from top to bottom are at wavelengths of 1358 \AA, 1724 \AA, 1814 \AA, 2205 \AA, 2405 \AA ~and 2746 \AA. For NGC 4151, the light curves are at 1359 \AA, 1725 \AA, 1816 \AA, 2207 \AA, 2407 \AA ~and 2749 \AA ~from the top to the bottom panels. For NGC 4593, the light curves shown from top to bottom are for wavelengths of 1367 \AA, 1735 \AA, 1826 \AA, 2219 \AA, 2421 \AA ~and 2764 \AA.} \label{Fig:4} \end{figure*} \begin{figure*} \centering \hbox{ \resizebox{5.5cm}{8cm}{\includegraphics{ngc5548.png}} \hspace*{0.5cm}\resizebox{5.55cm}{8cm}{\includegraphics{ngc7469.png}} } \caption{FUV and NUV light curves for the sources NGC 5548 and NGC 7469. For NGC 5548, the wavelengths of the light curves from top to bottom are 1378 \AA, 1749 \AA, 1841 \AA, 2237 \AA, 2441 \AA ~and 2787 \AA. For NGC 7469, the light curves shown from top to bottom have wavelengths of 1377 \AA, 1748 \AA, 1839 \AA, 2235 \AA, 2439 \AA ~and 2784 \AA.} \label{Fig:5} \end{figure*} The majority of the sources in our sample have overlapping temporal coverage in the FUV and NUV passbands, except for five sources, namely 3C 120, 3C 390.3, NGC 3516, NGC 7469 and NGC 4051. This is evident in the light curves of these sources shown in Figs.~\ref{Fig:1}, \ref{Fig:3} and \ref{Fig:5}. Because of this, for calculating F$_{var}$, we have considered only those durations of the light curves that have overlapping coverage in both the FUV and NUV passbands. The calculated F$_{var}$ values for all the sources in each of the six continuum passbands are given in Table~\ref{tab:fvar_values}. A source is considered variable if its F$_{var}$ is greater than zero and is significant at the one sigma level. In our sample, F$_{var}$ is many times greater than its associated error in all instances except four, where it is less than three times the associated error. Even among these four, in two cases F$_{var}$ is more than twice its associated error, and in the remaining two cases it is between one and two sigma.
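For concreteness, the computation of $F_{var}$ and, via Eq.~(\ref{eq:ferr}), its uncertainty can be reproduced in a few lines of code. The following minimal Python sketch is illustrative only; the arrays \texttt{flux} and \texttt{flux\_err} are hypothetical placeholders for a single-passband light curve and its measurement errors:
\begin{verbatim}
import numpy as np

def excess_variance(flux, flux_err):
    # Normalized excess variance and its error (Vaughan et al. 2003).
    # Assumes S^2 > mean square error; otherwise F_var is undefined.
    n = len(flux)
    mean = np.mean(flux)
    s2 = np.var(flux, ddof=1)             # sample variance S^2
    mse = np.mean(flux_err ** 2)          # mean square error
    fvar = np.sqrt((s2 - mse) / mean**2)  # normalized excess variance
    err = np.sqrt((np.sqrt(1.0 / (2 * n)) * mse / (mean**2 * fvar))**2
                  + (np.sqrt(mse / n) / mean)**2)
    return fvar, err
\end{verbatim}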
We therefore argue that all the sources in the sample analysed here are highly variable in all the six passbands, except in the four instances where the variability is less significant. The mean F$_{var}$ values in all the six passbands for all the sources studied here are shown in Table~\ref{tab:afvar}. There is an indication that the variations at the shorter wavelengths are larger than those at the longer wavelengths, but the large error bars preclude us from drawing any firm conclusion on the differences in variability between different wavelengths. Combining the F$_{var}$ values in the three SWP passbands together as SWP and the three LWP passbands together as LWP, we obtained simple average F$_{var}$ values of 0.392 $\pm$ 0.196 and 0.293 $\pm$ 0.152 for SWP and LWP respectively. There are thus 42 F$_{var}$ values in each of the SWP and LWP bands. The distributions of F$_{var}$ for SWP and LWP and their cumulative distributions are given in Fig.~\ref{Fig:6} and Fig.~\ref{Fig:7} respectively. The average value of F$_{var}$ is larger in SWP than in LWP; however, as the error bars are large, we carried out a two sample Kolmogorov$-$Smirnov (KS) test. The null hypothesis tested was that the two independent sets of F$_{var}$ values, pertaining to SWP and LWP, were drawn from the same distribution. This null hypothesis was accepted, as D was smaller than the critical value D$_{crit}$: we obtained values of 0.286 and 0.356 for D and D$_{crit}$ respectively, at a significance level of 0.01. This statistically points to no difference in the F$_{var}$ values between the SWP and LWP bands. Available studies indicate that in the UV, AGN show wavelength dependent variability, with the shorter wavelengths showing larger amplitudes of variability than the longer wavelengths (Sakata et al. 2011, Vanden Berk et al. 2004, Welsh et al. 2011). The data analysed here also indicate that variations at the shorter wavelengths are larger than those at the longer wavelengths; however, given the quality of the data, the error bars are too large to draw any conclusion on the variation of the amplitude of variability with wavelength. \begin{figure} \centering \hspace*{-0.2cm}\resizebox{9.5cm}{8cm}{\includegraphics{fvar_hist.pdf}} \caption{Distributions of the $F_{var}$ values for the sources studied here in the FUV (top panel) and NUV (bottom panel) bands.}\label{Fig:6} \end{figure} \begin{figure} \centering \hspace*{-0.2cm}\resizebox{9.5cm}{8cm}{\includegraphics{cum.pdf}} \caption{Cumulative distributions of the values of F$_{var}$ in the FUV (in green) and NUV (in blue) bands.}\label{Fig:7} \end{figure} \begin{figure} \centering \hspace*{-0.2cm}\resizebox{9.5cm}{8cm}{\includegraphics{fvar_and_lbol.pdf}} \caption{Variation of F$_{var}$ with bolometric luminosity. Linear least squares fits to the data are shown for the complete data set (solid line) and for the data set excluding the lowest luminosity source in our sample (dashed line).} \label{Fig:8} \end{figure} \begin{figure} \centering \hspace*{-0.2cm}\resizebox{9.5cm}{8cm}{\includegraphics{fvar_and_bhmass.pdf}} \caption{Plot of F$_{var}$ against black hole mass. The solid line is the linear least squares fit to the data.} \label{Fig:9} \end{figure} \section{Correlation between variability and other physical properties} \subsection{F$_{var}$ and L$_{bol}$} To look for the presence of any correlation between F$_{var}$ and the bolometric luminosity (L$_{bol}$), we plot in Fig.~\ref{Fig:8} the variation of F$_{var}$ with L$_{bol}$.
The F$_{var}$ values used in this correlation analysis are those for the NUV passband at 2806 $\pm$ 64 \AA. We used the relation $L_{bol} = 13.2 \times L_{V}$ given by Elvis et al. (1994). Here, $L_V$ is the luminosity in the V-band, derived using the V-band magnitudes of the sources taken from SIMBAD\footnote{http://simbad.u-strasbg.fr/simbad/}, the zero-points taken from Bessell (1979) and the luminosity distances taken from NED\footnote{http://www.astro.ucla.edu/$\sim$wright/CosmoCalc.html}. Using all the F$_{var}$ values, we found an indication of no correlation between F$_{var}$ and $L_{bol}$, with a low correlation coefficient of $-$0.08 and a probability of no correlation of P = 0.79. This apparent lack of correlation between F$_{var}$ and L$_{bol}$ is driven by one low luminosity source, NGC 4051. Neglecting this source and performing a linear least squares fit to the data gave evidence for a mild negative correlation between F$_{var}$ and L$_{bol}$; this fit is shown as a dashed line in Fig.~\ref{Fig:8}. Correlation analysis indicates a mild negative correlation, with a correlation coefficient of $-$0.39 and a probability of no correlation of P = 0.19. This is in agreement with what is known in the literature. Using IUE data, Paltani \& Courvoisier (1997) found an anti-correlation between quasar variability and luminosity, with high luminosity quasars showing low amplitudes of variability. This anti-correlation is also seen in the optical bands (Vanden Berk et al. 2004, Meusinger \& Weiss 2013). Analysing a large sample of quasars for UV variability using data from GALEX, Welsh et al. (2011) found two different correlations between variability and luminosity: for time lags greater than 100 days, variability is negatively correlated with luminosity, while for time lags shorter than 100 days, variability is positively correlated with luminosity. The data analysed here too reveal a negative correlation between UV variability and luminosity. However, quality UV data (with similar time resolution and uniform coverage in both the FUV and NUV) on a larger sample of sources are needed to firmly establish this finding. \subsection{F$_{var}$ and M$_{BH}$} The correlation between variability and black hole (BH) mass has been widely studied in the optical band, with no clear consensus. From an analysis of the long term optical variability of quasars, Wold et al. (2007) found a correlation between variability and BH mass, with sources of large BH mass showing larger amplitudes of variability. Such a correlation was also noticed by Wilhite et al. (2008); however, Meusinger \& Weiss (2013) and Zuo et al. (2012) could not find any correlation between optical variability and BH mass. From the data set analysed here, we looked for the existence of any correlation between UV variability and BH mass. In Fig.~\ref{Fig:9} we show F$_{var}$ against M$_{BH}$, where we found a hint of a positive correlation between the two. Correlation analysis gave a Pearson correlation coefficient of 0.18, with a probability of no correlation of 0.54. The F$_{var}$ values used in this correlation analysis are again those for the NUV passband at 2806 $\pm$ 64 \AA. \section{Spectral variability} To characterise the spectral variability of the sources studied here, we examined the change in the spectral index with the flux of the sources. The optical to UV continuum of an AGN can be well represented by a power law, $F_{\nu} \propto \nu^{-\alpha}$, where $F_{\nu}$ is the observed flux density and $\alpha$ is the spectral index. For each of the sources studied here, we have observations in six UV passbands. We therefore calculated the spectral index by fitting a power law of the form \begin{equation} F_{\lambda} \propto \lambda^{\alpha-2}. \end{equation} For the NUV, $\alpha$ was determined using the above power law fit to the three passband measurements, and for the FUV, again, the three passband measurements were used to derive $\alpha$.
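For illustration, such a power law fit reduces to a straight-line fit in log-log space, since $\ln F_\lambda=(\alpha-2)\ln\lambda+{\rm const}$. A minimal Python sketch follows; the wavelength and flux values below are arbitrary placeholders, not measurements:
\begin{verbatim}
import numpy as np

def spectral_index(wavelengths, fluxes):
    # Fit F_lambda ~ lambda^(alpha - 2):
    # alpha is the log-log slope plus 2.
    slope, _ = np.polyfit(np.log(wavelengths), np.log(fluxes), 1)
    return slope + 2.0

# toy example with made-up fluxes in three FUV passbands
alpha = spectral_index([1378.0, 1749.0, 1841.0],
                       [2.1e-14, 1.8e-14, 1.7e-14])
\end{verbatim}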
The variation of $\alpha$ thus deduced with the flux of the sources in SWP and LWP is shown in Fig.~\ref{Fig:10} and Fig.~\ref{Fig:11} respectively. In both SWP and LWP, the flux was taken to be that in the shortest wavelength passband of the three; this choice serves only to define the flux values. For the analysis of the correlation between $\alpha$ and flux, we considered only those points where the errors in $\alpha$ and in the flux are smaller than the corresponding values themselves. The data were fit with a straight line (i) assigning equal weight to all the points and (ii) taking into account the errors in both $\alpha$ and the flux values. The un-weighted linear least squares fits are shown by dashed lines in Fig.~\ref{Fig:10} and Fig.~\ref{Fig:11}, while the weighted linear least squares fits are shown by solid lines. From the weighted linear least squares fits to the data we find that, for most of the sources, the spectra do not show any significant changes during the flux variations; for a few sources, however, we found clear evidence of a hardening of the spectrum with increasing flux. For some sources, we see structure in the variation of $\alpha$ with flux: the spectrum hardens with increasing flux only up to moderate flux values, beyond which it is nearly steady, showing no change with flux. This is seen in the sources NGC 3783, NGC 4151, NGC 4593 and Fairall 9 in SWP. In LWP, this is evident in the sources NGC 4151 and Fairall 9. The results of the linear least squares fits to the variation of $\alpha$ with flux are given in Table~\ref{tab:specvar}. In the FUV, about 50\% of the sources showed a harder when brighter trend; the remaining sources also showed a harder when brighter behaviour, but the correlation is moderate. The weighted and un-weighted linear least squares fits show similar trends for most of the sources, with the largest mismatch seen in Mrk 478. In the NUV band, there is a moderate correlation between $\alpha$ and the flux, with a trend for a harder when brighter behaviour. Here too, a large discrepancy between the weighted and un-weighted linear least squares fits is seen in sources such as Mrk 478, NGC 4593 and 3C 390.3. These results to a large extent agree with the analysis of the UV continuum emission in AGN by Sakata et al. (2011), who also found a bluer when brighter trend in their sample. A similar conclusion was arrived at by Wilhite et al. (2005) and Vanden Berk et al. (2004) in the optical band. Our results for a majority of the sources are also consistent with the observations of Paltani \& Walter (1996), who found that the UV spectra of Seyfert galaxies become flatter with increasing brightness of the sources. To explain these observations, Paltani \& Walter (1996) proposed the two component model. According to this model, the observed flux is a superposition of two distinct spectral components with constant spectral shapes.
One component is flux variable while the other is stable, and the observed continuum variation is driven by the amplitude of the varying component. For some sources in our sample, such as NGC 3783, NGC 4151, NGC 4593 and Fairall 9, we in fact observed a constancy of the spectral index with increasing flux, though only beyond certain flux levels. This points to the complex nature of the UV flux variations in AGN (cf. Paltani \& Walter 1996). The mean values of $\alpha$ for the sources in SWP and LWP are given in Table~\ref{tab:fvar_values}. Also given in the same table are the mean $\alpha$ values reported by Paltani \& Walter (1996), estimated using IUE spectra covering the wavelength range 1150$-$3200 \AA. For the sources in common between this study and that of Paltani \& Walter (1996), the mean $\alpha$ values are similar, though the data set analysed here is considerably larger than that of Paltani \& Walter (1996). \begin{figure*} \centering \hspace*{-0.2cm}\includegraphics[scale=0.5]{swp_both_fits_to_flux_Vs_alpha.png} \caption{Variation of the spectral index with the flux in the shortest wavelength passband of the FUV. The solid lines are the linear least squares fits to the data that take into account the errors in both $\alpha$ and the fluxes, while the dashed lines are the un-weighted linear least squares fits to the data.} \label{Fig:10} \end{figure*} \begin{figure*} \centering \hspace*{-0.2cm}\includegraphics[scale=0.5]{lwp_both_fits_to_flux_Vs_alpha.png} \caption{Variation of the spectral index with flux in the NUV band. Linear least squares fits to the data that take into account the errors in both the spectral indices and the flux values are shown as solid lines. The un-weighted linear least squares fits to the data are shown with dashed lines.} \label{Fig:11} \end{figure*} \section{Lag between different wavebands} To check for inter-band time lags we used the discrete correlation function (DCF) technique of Edelson \& Krolik (1988). The cross-correlation analysis was done between the light curves at the shortest and longest wavelengths in both FUV and NUV for all the sources. We show in Fig.~\ref{Fig:12} the result of one such correlation analysis for the object Fairall 9, carried out between the light curves at 2303 and 2868 \AA. Here, the filled circles are the values evaluated using the DCF method, and the solid line is that obtained using the interpolated cross correlation function (ICCF), described in detail in Gaskell \& Sparke (1986) and Gaskell \& Peterson (1987). To evaluate the uncertainty in the derived lag, we followed a model independent Monte Carlo approach incorporating both flux randomization (FR) and random subset sampling (RSS), described in Peterson et al. (1998). For each Monte Carlo iteration, in the case of the DCF we found the lag using the centroid of the CCF, utilizing all points within 60\% of the CCF peak; for the ICCF, the peak of the CCF was taken to represent the lag between the light curves. This was repeated 10,000 times, and the distributions of the CCF lags were obtained for both the DCF and ICCF methods. The means of the distributions were taken to represent the lag between the light curves, and the spreads of the distributions were used to estimate the error in the lag. The distributions obtained using the DCF (green histogram) and the ICCF (black histogram) are given in Fig.~\ref{Fig:12} for the source Fairall 9. We found no noticeable time lag between the flux variations in the NUV and FUV bands, though the flux variations in the different NUV and FUV bands were correlated.
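To make the procedure concrete, a stripped-down version of the DCF can be sketched in a few lines of Python. This is a minimal illustration only: the measurement-error corrections of Edelson \& Krolik (1988), the FR/RSS resampling and the centroid determination described above are all omitted, and the variable names are hypothetical (\texttt{lag\_bins} is an array of bin edges):
\begin{verbatim}
import numpy as np

def dcf(t1, f1, t2, f2, lag_bins):
    # Minimal discrete correlation function: correlate all pairs of
    # points, then bin the products by time lag (empty bins give nan).
    a = (f1 - f1.mean()) / f1.std()
    b = (f2 - f2.mean()) / f2.std()
    lags = t2[None, :] - t1[:, None]      # all pairwise lags
    udcf = a[:, None] * b[None, :]        # unbinned correlations
    lo, hi = lag_bins[:-1], lag_bins[1:]
    out = np.array([udcf[(lags >= l) & (lags < h)].mean()
                    for l, h in zip(lo, hi)])
    return 0.5 * (lo + hi), out           # bin centers, binned DCF
\end{verbatim}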
This analysis, repeated for all the sources studied here, yielded no measurable lags in any of them. \section{Conclusion} In the present work, we report on the variability of fourteen Seyfert galaxies in the UV band, using data from IUE acquired over a period of about 17 years. The flux values of the sources studied here in the different NUV and FUV bands were taken from Dunn et al. (2006). Various analyses were performed to characterize the flux variability of the sources. A summary of the work is given below. \begin{figure} \centering \hspace*{-0.2cm}\includegraphics[scale=0.5]{lwp_fairall.pdf} \caption{Cross correlation analysis between the light curves at 2303 \AA ~and 2868 \AA ~for the source Fairall 9, one of the sources in the sample. The red solid line is for the ICCF and the blue filled circles are for the DCF. The green and black histograms show the distributions of centroids for the DCF and ICCF respectively.} \label{Fig:12} \end{figure} \begin{enumerate} \item All sources were found to show flux variations in the UV band. No statistically significant difference in the amplitude of flux variations between shorter and longer wavelengths was noticed. \item No time lag between the flux variations in the different NUV and FUV bands was observed. \item We found a mild negative correlation of variability with bolometric luminosity, with more luminous sources showing lower variability than their less luminous counterparts. Also, a hint of a positive correlation is found between variability and black hole mass. These results are consistent with what is known in the literature. \item The majority of sources showed a bluer when brighter trend in the FUV data; in the NUV band such a trend, if present, is seen only in a minority of the sources, and only moderately. Some sources showed a hardening of the spectrum with flux; however, the spectrum remained non-variable beyond a certain flux level. The observed spectral variations are thus complex. \end{enumerate} \section*{Acknowledgement} We thank the anonymous referee for his/her critical comments that helped to improve the manuscript.
\renewcommand{\thesubsection}{A\arabic{subsection}} \setcounter{subsection}{0} \renewcommand{\theequation}{A\arabic{equation}} \setcounter{equation}{0} \renewcommand{\thefigure}{A\arabic{figure}} \setcounter{figure}{0} \section*{Supplemental material} \subsection{Method of the numerical simulation}\label{ASec:numerical} \begin{figure*} \begin{center} \includegraphics[width=1.0 \linewidth, keepaspectratio]{A1} \end{center} \vspace{-5mm} \caption{ Cross-sectional profiles of elliptic vortices and the three dimensional texture of the pseudo-director field ${\bm g}/|{\bm g}|$ for $q=2^{-n_q}~(n_q=3,5,9,13,17)$. The plotting method is the same as that of Fig.~\ref{Fig_dv} in the main text. The viewing angle of the three dimensional plots at the bottom differs from that in Fig.~\ref{Fig_dv}. } \label{Fig_A1} \end{figure*} Here, we describe the method of numerical simulation used in this work. The numerical solutions are obtained by minimizing the energy functional $$ G''=\int^{R_1}_{-R_1} dx \int^{R_2}_{-R_2} dy \int^{R_3}_{-R_3} dz\, ({\cal G}+V_{\rm trap}n-\Omega l_z) $$ with the trapping potential $V_{\rm trap}$ and the angular momentum density $l_z=\hbar\sum_m\Im[\Phi_m^*(x\partial_y-y\partial_x)\Phi_m]$. The space coordinates $(x,y,z)=(x_1,x_2,x_3)$ are discretized as $x_i(n_i)=-R_i+\Delta x\, n_i$ with $n_i=0,1,2,\ldots,N_i$, so that $x_i(N_i)=R_i$. The spatial derivatives of $\Phi_m$ are computed with the finite difference approximation; e.g., $\partial_x\Phi_m$ and $\partial_x^2\Phi_m$ are computed by central differences of the first and second order, respectively. All solutions were obtained by minimizing the energy functional very carefully. The steepest descent method is performed by solving the imaginary time evolution $\frac{\partial \Phi_m}{\partial \tau}=-\frac{\delta G''}{\delta\Phi_m}$. The imaginary time $\tau$ is discretized as $\tau=\Delta \tau\, n_\tau$ with $n_\tau=0,1,2,\ldots$, and the time evolution is written as $\Phi_m(n_\tau+1)=\Phi_m(n_\tau)-\Delta\tau \frac{\delta G''}{\delta\Phi_m}(n_\tau)$. The evolutions were computed until the difference $G''(n_\tau)-G''(n_\tau-1000)$ becomes non-negative to within double precision, using the Intel$\textsuperscript{\textregistered}$ Fortran Compiler.
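For orientation, the steepest-descent update can be written in a few lines. The sketch below is a schematic Python/NumPy illustration for a single-component (scalar) field on a two-dimensional grid -- a toy analogue of the spin-1 system actually solved, using the functional derivative of a scalar Gross-Pitaevskii energy; all parameter values are placeholders, not those used in this work:
\begin{verbatim}
import numpy as np

# toy parameters (placeholders)
N, dx, dtau = 256, 0.5, 0.0025
hbar2_2M, c0, mu = 0.5, 1.0, 1.0

x = dx * (np.arange(N + 1) - N / 2)
X, Y = np.meshgrid(x, x, indexing='ij')
Vtrap = 20.0 * mu * (np.tanh(np.hypot(X, Y) - 0.45 * N * dx) + 1.0)
phi = np.exp(-(X**2 + Y**2) / (0.2 * N * dx)**2).astype(complex)

def laplacian(f):
    # second-order central differences, f = 0 on the boundary
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[2:, 1:-1] + f[:-2, 1:-1] + f[1:-1, 2:]
                       + f[1:-1, :-2] - 4.0 * f[1:-1, 1:-1]) / dx**2
    return lap

for _ in range(5000):   # iterate until the energy stops decreasing
    dG = (-hbar2_2M * laplacian(phi)
          + (Vtrap - mu + c0 * np.abs(phi)**2) * phi)
    phi -= dtau * dG    # imaginary-time (steepest-descent) step
\end{verbatim}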
The solutions for a uniform system are approximately obtained in a cylindrical box potential $V_{\rm trap}=V_0[\tanh(r-R)+1]$ with $V_0/\mu=20$, $R=0.95R_\bot$, and $R_1=R_2=R_\bot$. Here, we solve the two-dimensional equations by assuming that the wave functions are homogeneous along the $z$ axis and thus independent of $z$. The trap depth $V_0$ is taken to be so large that the order parameter damps quickly outside the cylinder and almost vanishes near the system boundary. The boundary effect becomes significant only when the distance $l_{\rm spin}$ between the spin spots becomes $\gtrsim R_\bot/2$. The system size is set large enough that the boundary effect is negligible for the results in the main text. For the non-rotating case of $\Omega=0$ (the results of Figs.~\ref{Fig_dv} and \ref{Fig_dtexture}), the numerical simulation was done with $2R_\bot=1024.5\xi_n$, $N_1=N_2=2048$, $\Delta x=0.5\xi_n$, and $\Delta \tau =0.0025$. It was confirmed that our results do not change essentially for $\Delta x=0.3\xi_n$ and $\Delta x=0.4\xi_n$, except for the finite-size effect, which is of no interest for our main subject. The finite-size effect becomes important only for $\frac{q}{\mu}\leq 2^{-19}\approx 1.9 \times 10^{-6}$ at $\Delta x=0.5\xi_n$. For very small values of $\frac{q}{\mu}$, the width of the elliptic vortex becomes of the order of, or larger than, the system size, and we could not obtain the vortex state. The vortex solutions were obtained for $q/\mu=2^{-n_q}~(n_q=0,1,2,\ldots)$ as shown in Fig.~\ref{Fig_A1}. The vortex has a normal core with $\Phi_{\pm 1}=0$ for $n_q<3$ (not shown). The protocol of the numerical simulation is as follows. First, the solution for $n_q=3$ is obtained, with the initial state of the imaginary time evolution set as $\Phi_0=f_0(r)e^{i\varphi}$ and $\Phi_{\pm 1} =f_{\pm 1}$ with $f_0(r)=\sqrt{\max(0,n'_{\rm TF})}$, $n'_{\rm TF}=c_0^{-1}(\mu-V_{\rm trap}-\frac{\hbar^2}{2Mr^2})$ and $f_{\pm 1}(r)=\pm \sqrt{\frac{n_{\rm P}}{2}}e^{-r^2/\xi_n^2}$. The vortex can be stabilized in the central region even for the non-rotating case of $\Omega=0$, since the spatial gradient of $V_{\rm trap}$ is negligibly small there. The solution for $n_q+1$ is then obtained by using the solution for $n_q$ as the initial state. The rotating case of Fig.~\ref{Fig_rot}~(a) is obtained with $2R_\bot=410\xi_n$, $N_1=N_2=1048$ and $\Delta x=0.4\xi_n$. The protocol is the same as in the non-rotating case. For the three dimensional simulation in the harmonic trap of Fig.~\ref{Fig_rot}~(b), the system size is $2R_\bot=192.5\xi_n$ and $2R_z=128.5\xi_n$ with $N_1=N_2=384$, $N_3=256$ and $\Delta x=0.5\xi_n$. In the local density approximation, the effective chemical potential is written as $\mu'=\mu-V_{\rm trap}$. According to Fig.~\ref{Fig_dtexture}(b), the spin density decreases with $q/\mu'$. This is why the spin poles become thinner as they move away from the trap center. The vortex core becomes thicker as the local healing length $\frac{\hbar}{\sqrt{M\mu'}}$ becomes larger for large $|z|$. \subsection{Computation of the hydrodynamic potential}\label{ASec:HydP} The velocity field ${\bm v}=(u,v)^T$ of a two dimensional potential flow is represented as $u=\partial_y \Psi=\partial_x \Phi$ and $v=-\partial_x \Psi=\partial_y \Phi$, with the stream function $\Psi$ and the velocity potential $\Phi$. The complex velocity potential $W=\Phi+i\Psi$ of a point vortex with circulation $\Gamma$ in the complex plane $(x,y)$ is written as $ W=-i\frac{\Gamma}{2\pi} \log~z $. The Joukowski transformation $ z=\zeta+\frac{a^2}{\zeta} $ with $\zeta=a e^{\phi +i\psi}$ reads $x=2a\cosh \phi \cos \psi$ and $y=2a \sinh \phi \sin \psi$. This transformation maps a circle of radius $ae^{\phi}$ in the $\zeta$ plane to an ellipse of major radius $2a\cosh \phi$ and minor radius $2a \sinh\phi$ ($\phi \geq 0$) in the $xy$ plane. The ellipse reduces to a segment of length $4a$ along the $x$ axis for $\phi=0$. The segment lies along the $y$ axis if $a$ is replaced by $ia$ in the formula for $\zeta$; we use the formula $\zeta=a e^{\phi +i\psi}$ in the following computation without loss of generality. For a quantized vortex in a scalar superfluid, the velocity field diverges at the center of the vortex core, where the amplitude of the order parameter vanishes, and the density increases to the bulk value far from the core. The region within a circle of radius $\xi_n$ around the center, where the density is small, is called the core region. To evaluate the energy of a quantized vortex per unit length, the contributions from the core region ($r<\xi_n$) and from the outer region ($r>\xi_n$) are computed separately. Similarly, for the elliptic vortex, there exists a core region of elliptic form around the band-shaped singularity, and the energy is again computed separately.
To compute the energy analytically, we neglect the so-called quantum pressure term in the Thomas-Fermi (TF) approximation \cite{pethick2008bose}. Then, the density far from the vortex core can be written as $$ n \approx n_{\rm TF}=n_{\rm bulk}\left(1-\frac{M}{2\mu}{\bm v}^2\right), $$ where $n_{\rm bulk}=n_{\rm P}$ is the bulk density. In this approximation, one obtains the contribution to the energy functional $G$ from the outer region of area $S_{\rm out}$, up to the order ${\cal O}\left(\frac{M}{2\mu}{\bm v}^2\right)$, \begin{eqnarray} E_{\rm out} &=&\int_{S_{\rm out}}d^2x\, {\cal G} \nonumber \\ &\approx&\int_{S_{\rm out}}d^2x\left[\frac{Mn_{\rm TF}}{2}{\bm v}^{2} +{\cal U} (n_{\rm TF})\right] \nonumber \\ &=& \int_{S_{\rm out}}d^2x \left[\frac{Mn_{\rm P}}{2}{\bm v}^{2} +{\cal U}_{\rm P}\right]. \end{eqnarray} Here, ${\cal U} (n_{\rm TF})$ is the energy density ${\cal U}$ evaluated in the TF approximation, which for the bulk P phase reduces to $$ {\cal U}_{\rm bulk}={\cal U}_{\rm P}=-\frac{\mu^2}{2c_0}. $$ A local state different from the P state appears in the core region, where the $m=0$ component vanishes. The contribution from the core region is written as $$ E_{\rm core}={\cal U}_{\rm core}S_{\rm core} $$ with the energy density ${\cal U}_{\rm core}$ and the area $S_{\rm core}$ of the core region. The vortex energy $E_{\rm vortex}$ is defined as the excess energy in the presence of the vortex, i.e.\ the difference between the total energy with a vortex and the energy $E_{\rm bulk}={\cal U}_{\rm bulk}(S_{\rm core} +S_{\rm out})$ in its absence; \begin{eqnarray} E_{\rm vortex} &=& E_{\rm out}+E_{\rm core}-E_{\rm bulk} \nonumber \\ &=& U_{\rm out}+U_{\rm core} \end{eqnarray} with $U_{\rm core(out)}=E_{\rm core(out)}-{\cal U}_{\rm P}S_{\rm core(out)}$. The potentials of the outer and core regions are rewritten as \begin{eqnarray} U_{\rm out} &=& \frac{Mn_{\rm bulk}}{2}\int _{S_{\rm out}} d^2x\,{\bm v}^{2}, \nonumber \\ U_{\rm core} &=& \delta \mu n_{\rm bulk} S_{\rm core} \end{eqnarray} with $\displaystyle \delta =\frac{{\cal U}_{\mathrm{core}}}{\mu n_{\mathrm{bulk}}} +\frac{1}{2}$. The potential $U_{\rm out}$, which reduces to the hydrodynamic potential $U_{\rm hyd}$ as shown later, is evaluated by computing the integral \begin{eqnarray} I&=& \int _{S_{\rm out}} dxdy\, {\bm v}^{2} \nonumber \\ &=& \left(\frac{\Gamma }{2\pi }\right)^{2}\int _{S_{\rm out}} dxdy\left| \frac{1}{\sqrt{z^{2} -4a^{2}}}\right| ^{2} \nonumber\\ &=& \left(\frac{\Gamma }{2\pi }\right)^{2}\int _{S_{\rm out}} dxdy\frac{1}{\sqrt{\left( x^{2} -y^{2} -4a^{2}\right)^{2} +4x^{2} y^{2}}}. \nonumber \end{eqnarray} Here, we used $$ |u|= \frac{\Gamma }{2\pi } \left|\Im \left[\frac{\partial _{x} \zeta }{\zeta }\right] \right| = \frac{\Gamma }{2\pi }\left| \Im \left[\frac{1}{\sqrt{z^{2} -4a^{2}}}\right] \right| $$ and $$ |v|=\frac{\Gamma }{2\pi }\left| \Im \left[\frac{\partial _{y} \zeta }{\zeta }\right] \right|= \frac{\Gamma }{2\pi }\left| \Re \left[\frac{1}{\sqrt{z^{2} -4a^{2}}}\right]\right|. $$ According to the transformation (for the ellipse along the $x$ axis) $$ (x,y)=(2a\cosh \phi \cos \psi ,\,2a \sinh \phi \sin \psi ), $$ the determinant of the Jacobian matrix is $$ \left| \frac{\partial ( x,y)}{\partial ( \phi ,\psi )}\right| = 2a^{2}(\cosh 2\phi -\cos 2\psi ). $$
The integral $I$ is then computed as \begin{eqnarray} I &=& \left(\frac{\Gamma }{2\pi }\right)^{2}\int ^{\phi _{R}}_{\phi _{\rm core}} d\phi \int ^{\pi }_{-\pi } d\psi \frac{1}{2a^{2}(\cosh 2\phi -\cos 2\psi )}\left| \frac{\partial ( x,y)}{\partial ( \phi ,\psi )}\right| \nonumber \\ &=& \frac{\Gamma ^{2}}{2\pi }\int ^{\phi _{R}}_{\phi _{\rm core}} d\phi \nonumber\\ &=&\frac{\Gamma ^{2}}{2\pi }( \phi _{R} -\phi _{\rm core}) \nonumber\\ &=&\frac{\Gamma ^{2}}{2\pi }\ln\frac{\rho _{R}}{\rho _{\rm core}} \nonumber \end{eqnarray} with the radius of the system boundary in the $\zeta$ plane $$ \rho _{R} =ae^{\phi _{R}} \quad{\rm or}\quad \phi _{R} =\ln\frac{\rho _{R}}{a} $$ and the cutoff radius for the core region $$ \rho _{\rm core} =ae^{\phi _{\rm core}} \quad{\rm or}\quad \phi _{\rm core} =\ln\frac{\rho _{\rm core}}{a} \quad ( \rho _{\rm core} >a). $$ The system boundary and the cutoff circle in the $\zeta$ plane are mapped onto ellipses in the $xy$ plane, whose major and minor radii are \begin{eqnarray} && A_{R,{\rm core}} =2a\cosh \phi _{R,{\rm core}} =\frac{\rho ^{2}_{R,{\rm core}} +a^{2}}{\rho _{R,{\rm core}}}, \nonumber\\ && B_{R,{\rm core}} =2a\sinh \phi _{R,{\rm core}} =\frac{\rho ^{2}_{R,{\rm core}} -a^{2}}{\rho _{R,{\rm core}}}, \end{eqnarray} and they satisfy the relation $$ A_{R,{\rm core}} +B_{R,{\rm core}} =2\rho _{R,{\rm core}}. $$ Here, $A_{\rm core}$ and $B_{\rm core}$ correspond to $R_{+}$ and $R_{-}$ in the main text, respectively. In the limit $\frac{\rho_R}{a} \to \infty$, we have $$ A_R=B_R \to \rho_R =R $$ with the radius $R$ of the system boundary in the $xy$ plane. The size of the core region is parametrized by the two parameters $a$ and $$ r_{\rm core}\equiv \rho_{\rm core}-a. $$ Then, we have $$ U_{\rm out}\approx U_{\rm hyd}=\frac{Mn_{\rm P}}{2}\frac{\Gamma^2}{2\pi}\ln \frac{R}{a+r_{\rm core}}. $$ The area $S_{\rm core}$ of the core region is $$ S_{\rm core}=\pi A_{\rm core} B_{\rm core} =\pi \frac{( a+r_{\rm core})^{4} -a^{4}}{( a+r_{\rm core})^{2}}. $$ The oblateness $f$ of the core region is defined as $$ f=1-\frac{B_{\rm core}}{A_{\rm core}} =\frac{2}{1+( 1+r_{\rm core} /a)^{2}}. $$ For $\frac{r_{\rm core}}{a}\ll1$, asymptotic to the limit of maximum oblateness ($f=1$), we have $$ A_{\rm core} \approx 2a,~~B_{\rm core} \approx 2 r_{\rm core}, $$ and for $\frac{r_{\rm core}}{a}\gg 1$, asymptotic to the limit of minimum oblateness ($f=0$), $$ A_{\rm core} = B_{\rm core} \approx r_{\rm core}. $$ The former limit corresponds to a vortex with a band-shaped core region in three dimensions, called a vortex band. The latter corresponds to a conventional vortex filament with a cylindrical core region.
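As a consistency check on this result, the integral $I$ can also be evaluated directly on a Cartesian grid and compared with the closed form $\frac{\Gamma^2}{2\pi}\ln\frac{\rho_R}{\rho_{\rm core}}$. A minimal Python sketch (all parameter values are arbitrary placeholders):
\begin{verbatim}
import numpy as np

a, Gamma = 1.0, 1.0
phi_core, phi_R = 0.3, 3.0     # core cutoff and system boundary

def inside(x, y, phi):
    # inside the ellipse that is the image of the circle rho = a*e^phi
    A, B = 2 * a * np.cosh(phi), 2 * a * np.sinh(phi)
    return (x / A)**2 + (y / B)**2 <= 1.0

L = 2 * a * np.cosh(phi_R)     # semi-major axis of the outer ellipse
x = np.linspace(-L, L, 2001)
X, Y = np.meshgrid(x, x, indexing='ij')
v2 = (Gamma / (2 * np.pi))**2 / np.abs((X + 1j * Y)**2 - 4 * a**2)
mask = inside(X, Y, phi_R) & ~inside(X, Y, phi_core)
I_num = v2[mask].sum() * (x[1] - x[0])**2
I_exact = Gamma**2 / (2 * np.pi) * (phi_R - phi_core)
# I_num approaches I_exact (to about a per cent here)
# as the grid is refined
\end{verbatim}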
\subsection{Computation of the spin interaction}\label{ASec:Espin} The local BA state, which emerges around the edge of the elliptic vortex, accompanies the local AF state at the vortex center. Therefore, the magnetization can be associated with the density at the origin, $n_{\rm core}=n({\bm r}=0)$. According to the mean-field approach of previous studies \cite{liu2020phase,underwood2020properties}, a continuous phase transition occurs at a critical point ($q=q_{\rm C}$) in the core of a topological defect. This approach is also well applicable to our case, and we obtain the similar behaviors $n_{\rm core} \propto 1-\frac{q}{q_{\rm C}}$ and $s_y^{\rm max}=\max s_y\propto \sqrt{1-\frac{q}{q_{\rm C}}}$. Since the density is asymptotic to $n_{\rm AF}$ far from the critical point for $q \ll q_{\rm C}$, a quantitative estimate is given by $$ n_{\rm core}\sim n_{\rm AF}\left(1-\frac{q}{q_{\rm C}}\right), $$ which agrees quantitatively with the numerical result [Fig.~\ref{Fig_dtexture}(b)]. The magnetization is also well described by this approach. The magnetization occurs in the two spin spots with $\arg \Phi_0=\pm \pi$ along the $y$ axis. The local spin density is then written as $F_y\sim \pm 2\sqrt{2n_1 |\Phi_0|^2}$ with $\Phi_{+1}=-\Phi_{-1}=\sqrt{n_1}\propto\sqrt{n_{\rm core}}$. The density is almost constant, $n\approx n_{\rm P}$, everywhere for small $q$, and the maximum value is then estimated, via the relation between the arithmetic and geometric means with $2n_1=\frac{n}{2}\approx \frac{n_{\rm core}}{2}$ and $|\Phi_0|^2=\frac{n}{2}\approx \frac{n_{\rm P}}{2}$, as $$ s_y^{\rm max} \sim \frac{\sqrt{n_{\rm core}n_{\rm P}}}{1+\frac{c_2}{c_0}}, $$ where the factor in the denominator comes from the spin interaction in the presence of spin density. The size $r_{\rm spin}$ of the magnetic spot around the edges grows with the size of the AF core, $\sim\xi_q$. The spin interaction becomes more important as the magnetic spot grows, and the spot size finally reaches the spin healing length, estimated as $$ \xi_s =\frac{\hbar}{\sqrt{M\sigma}} $$ with $\sigma= c_2n_{\rm P}$. This crossover behavior of the spot size is described by the simple formula $$ r_{\rm spin}=\frac{1}{\xi_s^{-1}+C_{\rm spin}\xi_q^{-1}}, $$ with a constant $C_{\rm spin}\sim {\cal O}(1)$. Finally, the spin interaction energy is evaluated as $$ E_{\rm spin}\sim \frac{1}{2}c_2(s_y^{\rm max})^2\pi r_{\rm spin}^2 \sim \frac{\pi}{2}\frac{c_2}{c_0}\frac{n_{\rm core}}{n_{\rm P}}\left(\frac{r_{\rm spin}}{\xi_n}\right)^2\mu n_{\rm P}\xi_n^2. $$ This formula is consistent with the numerical result for $C_{\rm spin}=0.8$. \end{document}
\section{Introduction} In granular media, the grains roll and slide, in addition to being compressed and sheared; yet only the latter, the deformation of the grains, leads to reversible energy storage that sustains a static, elastic stress, while rolling and sliding heat up the system. The granular strain field $\varepsilon_{ij}$, therefore, has two contributions, an elastic one $u_{ij}$ accounting for the deformation of the grains, and a plastic one $p_{ij}$ for the rest, where $\varepsilon_{ij}=u_{ij}+p_{ij}$. The elastic energy $w(u_{ij})$ is a function of $u_{ij}$, not of $\varepsilon_{ij}$, and the elastic contribution to the stress $\sigma_{ij}$ is given as $\pi_{ij}(u_{ij})\equiv-\partial w/\partial u_{ij}$. With $\sigma_{ij}=\pi_{ij}$ in statics, the stress balance $\nabla_j\sigma_{ij}=0$ may be closed with $\pi_{ij}=\pi_{ij}(u_{ij})$ and uniquely solved employing appropriate boundary conditions~\cite{J-L,ge}. Because the plastic part of the strain needed for arriving at a given stress state is quite irrelevant for its determination, one may with some justification consider static granular media, say a sand pile at rest, as fully elastic. If this sand pile is perturbed by periodic tapping, circumstances change qualitatively: its conic form will degrade until the surface becomes flat. This is because part of the grains in the pile lose contact with each other temporarily, during which their deformation decreases. This implies a relaxing elastic strain $u_{ij}$ and, correspondingly, smaller elastic energy $w(u_{ij})$ and static stress $\pi_{ij}(u_{ij})$. Since the sand pile is no longer able to sustain static stresses, it is now a transiently elastic system, the same as polymers -- though the respective microscopic mechanisms are of course very different: temporary unjamming and rearrangement of the grains versus slow disentanglement of polymer strands. Note that flattening a sand pile implies sizable granular rearrangement, requiring a considerable portion of plastic strain $p_{ij}$. Quantifying the random motion of the grains by a granular temperature $T_g$, we may take the relaxation time $\tau$ of the elastic strain $u_{ij}$ to be a function of $T_g$, with $\tau(T_g)\to\infty$ for $T_g\to0$. For vanishing $T_g$, there is no strain relaxation, the deformation of the grains is maintained, the sand pile keeps its conic shape, and the system is elastic. For finite $T_g$, the elasticity turns transient, with $u_{ij}$, $\pi_{ij}(u_{ij})$ and $w(u_{ij})$ relaxing. When granular media are being slowly sheared, circumstances are similar. In addition to moving with the large scale velocity $v_i$, the grains also move and slip in deviation from it. This allows temporary, partial unjamming, and implies a finite $T_g$; both again lead to transient elasticity. Since $T_g$ is not always an externally imposed parameter, as with tapping, but is frequently internally produced, especially by shear flows, it is an independent variable of the granular hydrodynamic theory, to be accounted for by its own equation of motion. More specifically, the production of $T_g$ by shear flows should have great similarities to viscous heat production in normal fluids. Granular media have different phases that, depending on the grains' ratio of deformation to kinetic energy, may loosely be referred to as gaseous, liquid and solid. Moving fast and being free most of the time, the grains in the gaseous phase have much kinetic, but next to no elastic, energy~\cite{Haff}.
In the denser liquid phase, say in chute flows, there is less kinetic energy, more deformation, and a rich rheology that has been scrutinized recently~\cite{chute}. In granular statics, with the grains deformed but stationary, the energy is all elastic. This state is legitimately referred to as solid because static shear stresses are sustained. If a granular solid is slowly sheared, the predominant part of the energy remains elastic. As discussed, the system is then transiently elastic, or quasi-solid. In this paper, we focus on the last two cases, and for simplicity refer to both as the solid granular phase. The transition between permanent and transient elasticity is a crucial key to understanding granular solids. And remarkably, it is, as input, quite sufficient for a formal and cogent derivation of the framework of granular solid hydrodynamics -- if one takes careful notice of all general principles of physics, especially symmetry and thermodynamic considerations. This is the first part of the present paper. The second part deals with a concrete expression for the granular elastic energy, and with how this expression is supported by extensive experimental data from granular statics. This is important because general principles only confine the {\em structure} of the hydrodynamic theory -- they yield a framework into which many different theories fit. The three sets of differential equations given below need the input of specific expressions for the thermodynamic energy and the transport coefficients. Only when their functional dependence on the thermodynamic variables is given do the theories attain predictive power. In the following, we first recall the hydrodynamic theory of permanent and transient elasticity, in \S~\ref{pe-GSH} and \S~\ref{te-GSH}; we then merge both to form granular hydrodynamics, in \S~\ref{ge-GSH}. All equations in these three subsections are valid irrespective of the form the energy $w$ takes. A specific energy density suitable for granular media is then reviewed and further discussed in \S~\ref{energy-GSH}. In an accompanying paper~\cite{JL3}, we compare hypoplasticity~\cite{Kolym}, a state of the art engineering model for the behavior of granular solids, with granular solid hydrodynamics as derived here, and as specified using the elastic energy of \S~\ref{energy-GSH}. \section{Elasticity Theory} \subsection{Permanent Elasticity\label{pe-GSH}} The conserved, thermodynamic energy density $w$ of solids is a function of the symmetric strain field $u_{ij}=u_{ji}$, and of the densities of entropy $s$, mass $\rho$, and momentum $\boldsymbol g$. So we write (neglecting gravity) \begin{equation} \label{1-GSH} {\rm d}w = T{\rm d} s + \mu {\rm d}\rho + v_{i} {\rm d} g_{i} - \pi_{ij} {\rm d} u_{ij}, \end{equation} denoting $T(s,\rho,u_{ij})\equiv\partial w/\partial s$, $\mu(s,\rho,g_i,u_{ij})\equiv\partial w/\partial\rho$, $v_i\equiv\partial w/\partial g_i=g_i/\rho$, $\pi_{ij}(s,\rho,u_{ij})\equiv-\partial w/\partial u_{ij}$.
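To illustrate these definitions with a minimal worked example -- standard isotropic linear elasticity, used here purely as an illustration of the definition of $\pi_{ij}$, and {\em not} the granular energy discussed in \S~\ref{energy-GSH} -- take $$ w_{\rm el}=\tfrac12\lambda\, u_{\ell\ell}^2+\mu_1\, u_{ij}u_{ij}, $$ with Lam\'e constants $\lambda,\mu_1$ (the shear modulus is subscripted here to avoid confusion with the chemical potential $\mu$). The conjugate stress then follows directly, $$ \pi_{ij}\equiv-\frac{\partial w_{\rm el}}{\partial u_{ij}}=-\lambda u_{\ell\ell}\,\delta_{ij}-2\mu_1 u_{ij}. $$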
The equations of motion for the energy and its variables are \begin{eqnarray} \label{2-GSH} {\textstyle\frac\partial{\partial t}} w+\nabla_iQ_i=0, \qquad {\textstyle\frac\partial{\partial t}} s+\nabla_if_i=R/T, \\ \label{3-GSH} {\textstyle\frac\partial{\partial t}} \rho+\nabla_ij_i=0, \qquad {\textstyle\frac\partial{\partial t}} g_i+\nabla_j\sigma_{ij}=0, \\ \label{4-GSH} {\textstyle\frac{\rm d}{{\rm d} t}}u_{ij}-v_{ij}+[{\textstyle\frac12}\nabla_iy_j +u_{ik}\nabla_{j}v_k+i\leftrightarrow j]=0, \end{eqnarray} where ${\textstyle\frac{\rm d}{{\rm d} t}}={\textstyle\frac\partial{\partial t}}+v_k\nabla_k$ and $v_{ij}\equiv\frac12(\nabla_iv_j+\nabla_jv_i)$. Expressing conservation laws and entropy production, the first four equations are quite general and shared by all hydrodynamic theories. Alone, they describe normal fluids and represent the simplest hydrodynamic theory. The fifth equation is characteristic of elastic systems, especially ones that break translational symmetry spontaneously. (More on why Eq~(\ref{4-GSH}) must have the above form is given in~\cite{temmen}.) Inserting Eqs~(\ref{2-GSH}-\ref{4-GSH}) into the temporal derivative of Eq~(\ref{1-GSH}), \begin{equation}\label{5-GSH} \textstyle\frac\partial{\partial t}w = T\frac\partial{\partial t} s + \mu \frac\partial{\partial t}\rho + v_{i}\frac\partial{\partial t} g_{i} - \pi_{ij} \frac\partial{\partial t} u_{ij}, \end{equation} and introducing the quantities $f^D_i$, $\sigma^D_{ij}$ via \begin{eqnarray}\label{6-GSH} f_i&\equiv& sv_i-f^D_i,\\ \sigma_{ij}&\equiv& \pi_{ij}-\pi_{ik}u_{jk}-\pi_{jk}u_{ik}\nonumber\\&&\quad +(Ts+\mu\rho+v_kg_k-w)\,\delta_{ij}+g_iv_j-\sigma^D_{ij}, \label{7-GSH} \end{eqnarray} we obtain \begin{eqnarray}\label{8-GSH} \nabla_iQ_i=\nabla_i(Tf_i+\mu j_i +v_j\sigma_{ij}-y_j\pi_{ij})\\\nonumber +f_i^D\nabla_iT+\sigma_{ij}^Dv_{ij} +y_i\nabla_j\pi_{ij}-R. \end{eqnarray} Clearly, one can write the left hand side of Eq~(\ref{5-GSH}) as the divergence of something, plus something else that vanishes in equilibrium (because the so-called thermodynamic forces, $\nabla_iT$, $v_{ij}$ and $\nabla_j\pi_{ij}$, do). An inviting possibility is therefore to identify the first with the energy flux $Q_i$, and the second with the entropy production $R$, a quantity that also vanishes in equilibrium, \begin{eqnarray}\label{9-GSH} Q_i=Tf_i+\mu j_i +v_j\sigma_{ij}-y_j\pi_{ij},\\\label{10-GSH} R=f^D_i\nabla_iT+\sigma^D_{ij}v_{ij}+y_i\nabla_j\pi_{ij}. \end{eqnarray} This identification is in fact unique: it is easy to verify that, as long as the energy $w$ remains general and unspecified, there is no other way to write the left hand side of Eq~(\ref{5-GSH}) as the sum of a divergence and an expression that vanishes in equilibrium. Taking in Eq~(\ref{10-GSH}) $(\nabla_iT, v_{ij}, \nabla_j\pi_{ij})$ as the thermodynamic forces and $(f^D_i, \sigma^D_{ij}, y_i)$ as the fluxes, and forming each into a 12-component vector, $\vec Y$ and $\vec Z$, the Onsager force-flux relation gives their linear connection as \begin{equation}\label{10a-GSH} \vec Z=\hat c\cdot\vec Y, \end{equation} where $\hat c$ is the transport matrix, with diagonal elements that are positive, and off-diagonal ones that satisfy the Onsager reciprocity relation. The simplest example for $\hat c$ has only diagonal elements, all positive scalars, \begin{eqnarray} \label{11-GSH}f^D_i&=&\kappa\nabla_iT, \\ \label{12-GSH}\sigma_{ij}^D&=&\zeta v_{\ell\ell}\delta_{ij}+\eta v^0_{ij}, \\ \label{13-GSH} y_i&=&\beta^{P}\nabla_j\pi_{ij}.
\end{eqnarray} Accounting for heat conduction and viscous stresses, the first two equations are shared by all hydrodynamic theories. (The superscript $^0$, here and below, denotes the traceless part of a tensor, e.g.\ $v^0_{ij}\equiv v_{ij}-\frac13\delta_{ij}v_{\ell\ell}$.) The third accounts for permeation and defect motion, and is specific to elastic media~\cite{perm}; see \S~\ref{SolidCreep}. All elements of the matrix $\hat c$, usually referred to as transport coefficients, are functions of the thermodynamic variables, $s, \rho, u_{ij}$, or alternatively, of the conjugate variables, $T, \mu, \pi_{ij}$. In the generally accepted, and above employed, linear version of the Onsager relation, they do not depend on the thermodynamic forces, $\nabla_iT, v_{ij}, \nabla_j\pi_{ij}$. So we may take the coefficients $\kappa, \eta, \zeta$ and $\beta^P$ to depend on the temperature, the pressure, and scalar combinations of the stress, such as $\pi_{\ell\ell}$ and $\pi_s^2\equiv\pi_{ij}^0\pi_{ij}^0$. \subsubsection{Solid Creep Motion\label{SolidCreep}} Enforcing a steady velocity at the surface of a granular solid, the velocity field is observed to penetrate rather deep into the bulk of the granular medium, with a magnitude that decays exponentially with depth~\cite{creep}. The usual collective velocity modes in hydrodynamic theories of elastic media are such that they reduce to a constant velocity when stationary (sound), or to one that varies linearly in space (shear diffusion). But there is also a less well-known one that decays exponentially, a consequence of Eq~(\ref{13-GSH}) and the less studied permeation coefficient $\beta^P$. We shall refer to this mode as ``solid creep motion.'' Linearized with respect to the velocity, Eq~(\ref{4-GSH}) reduces, for the stationary case $\partial u_{ij}/\partial t=0$, to \begin{equation} v_{ij}={\textstyle\frac12} \beta^P\nabla_k(\nabla_i\pi_{jk}+\nabla_j\pi_{ik}), \end{equation} implying that mass and shear flows are possible without any change in the elastic strain field, or equivalently, in the elastic stress and elastic energy. Similarly, momentum conservation, Eq~(\ref{3-GSH}), linearized and in steady flow, $\partial(\rho v_i)/\partial t=0$, reduces to \begin{equation} \nabla_j(D\delta_{ij}+\pi_{ij}-\eta v^0_{ij})=0, \end{equation} where $D\delta_{ij}$ stands for the diagonal terms that do not concern us here. Now, consider a half space $y>0$ filled with solid, with its surface at $y=0$ moving at a given velocity along $x$. Permitting only a $y$-dependence in this one-dimensional geometry, we have \begin{equation} v_{xy}={\textstyle\frac12}\beta^P\nabla^2_y\pi_{xy},\quad \nabla_y(\pi_{xy}-\eta v^0_{xy})=0. \end{equation} The second equation gives $\delta\pi_{xy}=\eta v^0_{xy}$ for perturbations that vanish at infinity; eliminating $v_{xy}$ then yields $\delta\pi_{xy}={\textstyle\frac12}\eta\beta^P\nabla^2_y\,\delta\pi_{xy}$, i.e.\ an exponentially decaying velocity $v_x$ and change of the elastic stress $\delta\pi_{xy}$, \begin{equation}\label{alpha} v_x, \delta\pi_{xy}\sim\exp\frac{-y}{\sqrt{{\textstyle\frac12}\eta\,\beta^P}}. \end{equation} In a granular medium, this behavior will be modified, because the elasticity there is transient rather than permanent. But should solid creep motion retain its qualitative behavior under certain circumstances, Eq~(\ref{alpha}) would constitute a natural explanation of granular creep flow.
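The decay mode of Eq~(\ref{alpha}) is easily verified numerically. The following minimal Python sketch (the transport coefficients are placeholder values) solves $\delta\pi_{xy}={\textstyle\frac12}\eta\beta^P\nabla_y^2\,\delta\pi_{xy}$ as a two-point boundary value problem and recovers the exponential profile:
\begin{verbatim}
import numpy as np

eta, betaP = 1.0, 2.0              # placeholder transport coefficients
ell = np.sqrt(0.5 * eta * betaP)   # decay length of the creep mode

N = 1000
y = np.linspace(0.0, 20.0 * ell, N)
h = y[1] - y[0]

# finite differences for f'' = f / ell^2 with f(0) = 1, f(L) = 0
A = np.zeros((N, N))
b = np.zeros(N)
A[0, 0] = A[-1, -1] = 1.0
b[0] = 1.0
for i in range(1, N - 1):
    A[i, i - 1] = A[i, i + 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - 1.0 / ell**2

f = np.linalg.solve(A, b)
assert np.allclose(f, np.exp(-y / ell), atol=1e-3)  # exponential decay
\end{verbatim}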
In these, elasticity arises from entanglement of polymer strands, which are stretched and sheared if not given enough time to disentangle. But if given enough time, the deformation, and with it the associated energy and stress, relaxes. So the system is to be accounted for by a set of equations which reduce to those of the last section for small time spans, but allow the deformation $u_{ij}$ to relax for longer time spans. The independent variables remain the same, as do the conservation laws. So Eqs~(\ref{1-GSH},\ref{2-GSH},\ref{3-GSH}) are unchanged, but Eq~(\ref{4-GSH}) is modified to allow for a relaxation term $X_{ij}$, \begin{equation}\label{15-GSH} {\textstyle\frac{\rm d}{{\rm d} t}} u_{ij}-v_{ij}+[{\textstyle\frac12}\nabla_iy_j +u_{ik}\nabla_{j}v_k+i\leftrightarrow j]=X_{ij}. \end{equation} The same calculation of Eq~(\ref{5-GSH}), with the same notation of Eqs~(\ref{6-GSH},\ref{7-GSH}), then leads to the same energy flux $Q_i$, but a modified entropy production, \begin{equation}\label{16-GSH} R=f^D_i\nabla_iT+\sigma^D_{ij}v_{ij}+y_i\nabla_j\pi_{ij} +X_{ij}\pi_{ij}. \end{equation} This implies $\pi_{ij}$ is now not only a conjugate variable, but also a thermodynamic force, increasing the dimension of the 12-component vector $\vec Y$ in Eq~(\ref{10a-GSH}) by another 6 components. Similarly, the vector $\vec Z$ is also increased by the 6 components of $X_{ij}$, and $\hat c$ is now an $18\times18$ matrix. Other than that, Eq~(\ref{10a-GSH}) still holds. The simplest, diagonal and scalar example is again given by Eqs~(\ref{11-GSH}, \ref{12-GSH}, \ref{13-GSH}), in addition to \begin{equation}\label{17-GSH} X_{ij}=\beta \pi_{ij}^0+\beta_1\pi_{\ell\ell}\,\delta_{ij}, \end{equation} a term that permits $u_{ij}$ to relax as long as $\pi_{ij}$ is nonzero. As discussed in the last paragraph of \S~\ref{pe-GSH}, the transport coefficients $\beta, \beta_1$ are functions of the thermodynamic variable $u_{ij}$, or equivalently, of $\pi_{ij}=\pi_{ij}(u_{ij})$. This remains true even though $\pi_{ij}$ is now also part of $R$, Eq~(\ref{16-GSH}), and hence an additional thermodynamic force. A point worth clarifying concerns the plastic strain $p_{ij}$: The total strain $\varepsilon_{ij}=u_{ij}+ p_{ij}$, a purely kinematic quantity, obeys the equation \begin{equation} {\textstyle\frac{\rm d}{{\rm d} t}}\,\varepsilon_{ij} +[\varepsilon_{ik}\nabla_{j}v_k+i\leftrightarrow j]=v_{ij}. \end{equation} So, as a result of Eq~(\ref{15-GSH}), the plastic strain is determined by \begin{equation} {\textstyle\frac{\rm d}{{\rm d} t}}\, p_{ij}+[-{\textstyle\frac12}\nabla_iy_j+p_{ik}\nabla_{j}v_k+i\leftrightarrow j] =-X_{ij}. \end{equation} If a transiently elastic medium is quickly and uniformly deformed, such that there is no time for relaxation, $X_{ij}\approx0$, we have $\varepsilon_{ij}=u_{ij}$ after the deformation. Holding it for a while, $v_{ij}=0$, the elastic deformation $u_{ij}$ relaxes to zero, while the plastic one $p_{ij}$ grows accordingly, until one replaces the other completely, and we have $p_{ij}=\varepsilon_{ij}$. The system now stays where it is, and the initial displacement is referred to as ``plastic,'' rather than elastic, because it does not have the tendency to return to the original position.
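For a minimal quantitative illustration of this exchange between $u_{ij}$ and $p_{ij}$, assume (as a simplification of ours, not part of the general theory) a linear stress-strain relation with a constant shear modulus $G$, i.e.\ $\pi^0_{ij}=-2G\,u^0_{ij}$. For a uniform sample held at rest after the deformation, $v_{ij}=0$, Eqs~(\ref{15-GSH},\ref{17-GSH}) reduce to \begin{equation} {\textstyle\frac\partial{\partial t}}u^0_{ij}=X^0_{ij}=\beta\pi^0_{ij}=-u^0_{ij}/\tau, \qquad 1/\tau\equiv2\beta G, \end{equation} hence $u^0_{ij}(t)=\varepsilon^0_{ij}\,e^{-t/\tau}$ and $p^0_{ij}(t)=\varepsilon^0_{ij}(1-e^{-t/\tau})$: the elastic strain and the stress decay on the time scale $\tau$, while the plastic strain grows to take over the total one, a behavior of the Maxwell type.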
Essentially this set of equations, as specified in this section, was recently shown to be well capable of accounting for the full range of polymers' non-Newtonian behavior, including shear-thinning, elongational strain-hardening, rod climbing (the Weissenberg effect), and various empirical rules such as the Cox-Merz rule and the first Gleissle mirror rule~\cite{temmen,om}. \subsection{Granular Elasticity\label{ge-GSH}} As discussed, sand and other granular media display both elastic and transiently elastic behavior -- depending on whether the granular temperature $T_g$ vanishes or not. Including the density of granular entropy $s_g$ as an additional, independent thermodynamic variable, the Gibbs relation of Eq~(\ref{1-GSH}) now reads \begin{equation} \label{20-GSH} {\rm d}w = T{\rm d} s +T_g{\rm d}s_g + \mu {\rm d}\rho + v_{i} {\rm d} g_{i} - \pi_{ij} {\rm d} u_{ij}. \end{equation} Granular temperature is not a new concept. Haff, and also Jenkins and Savage~\cite{Haff}, were probably the first to introduce it, in the context of granular gases, using it to denote the average kinetic energy of the grains. Hence $T_g\sim\epsilon_k$, where $\epsilon_k$ is the kinetic energy density. Nowadays, this $T_g$ is routinely used in considering granular gases and liquids~\cite{Lub}. Note that given this interpretation of $T_g$, we have $T_g=\partial \epsilon_k/\partial s_g\sim\partial T_g/\partial s_g$, and the granular entropy is uniquely determined, $s_g\sim \ln T_g$. More recently, there has been much discussion of a configurational entropy $S_c$ in the literature. The original concept by Edwards was to approximate grains as infinitely rigid and all configurations as having identical energy~\cite{Edw}, so $S_c$ is a function only of the system's volume. When relaxing the rigidity approximation and allowing the elastic energy to vary, $S_c$ is again a function of energy and volume, $S_c=S_c(E,V)$, and a configurational temperature is naturally given as $T_c^{-1}=\partial S_c/\partial E$ (see~\cite{Nic} for a review). In thermodynamics, the energy change ${\rm d}w$ from all microscopic, implicit variables is subsumed as $T{\rm d}s$, with $s$ the entropy and $T\equiv\partial w/\partial s$ its conjugate variable. From this, we divide out the kinetic energy of granular random motion, executed by the grains in deviation from the ordered, large-scale motion, denoting it as $T_g{\rm d}s_g$, and calling $s_g$ and $T_g\equiv\partial w/\partial s_g$ granular entropy and temperature, respectively. In other words, we consider two heat reservoirs, the first containing the energy of granular random motion, the second the rest of all microscopic degrees of freedom, especially phonons. In equilibrium, $T_g=T$, and $s_g$ is part of $s$. (In fact, we may simply forget $s_g$, since it has far fewer degrees of freedom.) But when the granular system is being tapped or sheared, and $T_g$ is many orders of magnitude larger than $T$, this leaky, intermediary heat reservoir can no longer be ignored. $s_g$ then serves as a nonhydrodynamic, macroscopically slow degree of freedom, with $T_g$ its conjugate variable. Taking $s_g$ as the part of the entropy accounting for the granular kinetic energy, our definition is fairly close to the entropy of granular gases discussed above, as given by Haff, though its functional dependence will probably be modified, because it must be evaluated taking into consideration the effect of excluded volumes -- an overwhelming one in the dense solid phase [see Eq~(9) of the third of~\cite{Lub}].
The concept of configurational entropy, on the other hand, is closer to our second heat reservoir, the true entropy $s$; see section 6 of the first, and section 10 of the third, reference of~\cite{ge} for a discussion of their relationship. The functional dependence of $s_g(T_g)$, more precisely, the equation of state $T_g=T_g(s,s_g,\rho,u_{ij})$, is given once the energy $w$ is known. Although all equations of this and the last two sections remain valid irrespective of what special expression one chooses for $w$, concrete predictions certainly depend on it. Since it appears difficult, at least at present, to evaluate $w$ microscopically, one may alternatively employ experimental data in conjunction with general considerations to narrow down the possibilities. We shall examine $w$'s dependence on $u_{ij}$ and $\rho$ in the next section, but defer that on $s_g$ to a future publication. Taking the balance equation for $s_g$, in the uniform case, as ${\frac\partial {\partial t}}s_g =R_g/T_g$, we first of all need $R_g$ to contain the term $-\gamma (T_g-T)^2$. This is because, being a slow, nonhydrodynamic variable, the equation of motion for $s_g$ should have the usual relaxation form, ${\frac\partial {\partial t}}s_g =-\gamma (T_g-T)$, pushing $T_g$ towards the ambient temperature $T$. (Since any random motion of the grains implies such an improbably high $T_g$, neglecting $T$ in this expression is always an excellent approximation. We shall therefore from here on always write ${\frac\partial {\partial t}}s_g =-\gamma T_g$.) Second, with the heat bath divided into two parts, viscous heat production should fill both baths simultaneously. Therefore, we keep the term $\sigma^D_{ij}v_{ij}$ in $R$, with $\sigma^D_{ij}=\eta v_{ij}^0+\zeta v_{\ell\ell}\delta_{ij}$, and write the analogous one, $\Sigma^D_{ij}v_{ij}$, into $R_g$, with $\Sigma^D_{ij}=\eta_g v_{ij}^0+\zeta_g v_{\ell\ell}\delta_{ij}$ denoting the viscous stress contribution from exciting granular random motion. The magnitudes of the four viscosities depend on microscopic details and cannot be decided on general principles. For instance, while $\eta$ is probably a small quantity compared to $\eta_g$ in dry sand, because macroscopic shear flows excite granular random motion first, $\eta$ should be quite a bit larger in wet sand: A macroscopic shear flow implies much stronger microscopic shear flows in the fluid layers between grains, and the energy dissipated in these layers should predominantly go to $s$, rather than to $s_g$ first. Third, granular entropy production $R_g$ should have the term $\kappa_g(\nabla_iT_g)^2$, from an inhomogeneous granular temperature, in exact analogy to the term $\kappa(\nabla_iT)^2$ in $R$. So the final expression should be $R_g=\Sigma^D_{ij}v_{ij}+ \kappa_g(\nabla_iT_g)^2 -\gamma T_g^2$. A direct and desirable consequence of this expression is that for stationarity, ${\frac\partial {\partial t}}s_g=R_g/T_g =0$, and a constant $T_g$, any shear flow excites a granular temperature given by $\gamma T_g^2=\eta_g v_{ij}^0v_{ij}^0+\zeta_g v_{\ell\ell}^2$, which is (as discussed) what renders granular elasticity transient. We do not have good reasons for ruling out a term in $R_g$ analogous to $y_i\nabla_j\pi_{ij}$, or one $\sim(\nabla_iT_g)^2$ in $R$. But neither is there any experimental evidence demanding their existence. So although both are allowed in the general case, they are left out here for simplicity of display.
On the other hand, a term in $R_g$ analogous to $X_{ij}\pi_{ij}$ cannot exist, because we would then have $\gamma T_g^2=X_{ij}\pi_{ij}$ for granular statics, implying a finite $T_g$ and decaying sand piles. Given the above considerations specifying $R_g$, we may embark on the derivation of the equations of motion for granular elasticity, in the same way as above. We start from the following equations, \begin{eqnarray} \label{2y-GSH} {\textstyle\frac\partial{\partial t}} w+\nabla_iQ_i=0, \quad {\textstyle\frac\partial{\partial t}} \rho+\nabla_ij_i=0,\qquad \\ \label{3y-GSH} {\textstyle\frac\partial{\partial t}} s+\nabla_if_i=R/T,\quad {\textstyle\frac\partial{\partial t}} s_g+\nabla_iF_i=R_g/T_g, \\ {\textstyle\frac\partial{\partial t}} g_i+\nabla_j \sigma_{ij}=0,\qquad\qquad\qquad\label{3yA-GSH} \\ \label{4y-GSH} {\textstyle\frac{\rm d}{{\rm d} t}} u_{ij}-v_{ij}+[{\textstyle\frac12}\nabla_iy_j +u_{ik}\nabla_{j}v_k+i\leftrightarrow j]=X_{ij}. \end{eqnarray} Inserting these into Eq~(\ref{20-GSH}), \begin{equation}\label{5y-GSH} \textstyle\frac\partial{\partial t}w = T\frac\partial{\partial t} s +T_g\frac\partial{\partial t} s_g + \mu \frac\partial{\partial t}\rho + v_{i}\frac\partial{\partial t} g_{i} - \pi_{ij} \frac\partial{\partial t} u_{ij}, \end{equation} using the notations \begin{eqnarray}\label{6y-GSH} f_i\equiv sv_i-f^D_i,\quad F_i\equiv s_gv_i-F^D_i,\qquad\qquad\quad \\\nonumber \sigma_{ij}\equiv(-w+Ts+v_ig_i+\mu\rho+T_gs_g) \delta_{ij}\qquad\qquad\\ +\pi_{ij}-\pi_{ik}u_{jk}-\pi_{jk}u_{ik}+g_iv_j-\sigma^D_{ij}-\Sigma^D_{ij}, \label{7y-GSH} \end{eqnarray} we obtain \begin{eqnarray}\label{8y-GSH} \nabla_iQ_i=\nabla_i(Tf_i+T_gF_i+\mu j_i +v_j\sigma_{ij}-y_j\pi_{ij})\qquad\quad \\\nonumber -R+f_i^D\nabla_iT +y_i\nabla_j\pi_{ij} +\sigma_{ij}^Dv_{ij}+X_{ij}\pi_{ij}+\gamma T_g^2 \\\nonumber -R_g+\Sigma_{ij}^Dv_{ij}+F_i^D\nabla_iT_g -\gamma T_g^2 \end{eqnarray} and deduce \begin{eqnarray}\label{9y-GSH} Q_i&=&Tf_i+T_gF_i+\mu j_i +v_j\sigma_{ij}-y_j\pi_{ij}, \\\nonumber R&=&f_i^D\nabla_iT +y_i\nabla_j\pi_{ij}\\ &&\qquad\quad+\sigma_{ij}^Dv_{ij}+X_{ij}\pi_{ij}+\gamma T_g^2,\label{10y-GSH}\\\label{11y-GSH} R_g&=&\Sigma_{ij}^Dv_{ij}+F_i^D\nabla_iT_g -\gamma T_g^2. \end{eqnarray} Given the expression for $R$, we may take the flux vector as $\vec Z=(f^D_i, y_i, \sigma^D_{ij}, X_{ij})$, the force vector as $\vec Y=(\nabla_iT, \nabla_j\pi_{ij}, v_{ij}, \pi_{ij})$, and again formulate the Onsager force-flux relation as $\vec Z=\hat c\cdot\vec Y$. Analogously, given $R_g$, we have $\vec Z_g=\hat c_g\cdot\vec Y_g$, where $\vec Z_g=(F^D_i, \Sigma^D_{ij})$ and $\vec Y_g=(\nabla_iT_g, v_{ij})$. In addition, we require \begin{equation} X_{ij}\to0\quad \text{for}\quad T_g\to0, \end{equation} to ensure permanent elasticity in granular statics. This completes the derivation and presentation of the structure of a hydrodynamics with permanent elasticity at $T_g=0$, and transient elasticity at finite $T_g$. To arrive at granular solid hydrodynamics, we still need to specify the energy $w$ and the functional dependence of the transport matrices, $\hat c, \hat c_g$. Instead of a microscopic derivation of these quantities starting from some specific interaction, we employ general considerations (such as requiring $w$ to have a positive curvature where the system is stable, see \S~\ref{yield}) and experimental data to narrow down the possibilities. Here, $w$ may be determined by static data alone, but $\hat c, \hat c_g$ must be constrained using data from granular dynamics.
The simplest example is again given by $\hat c, \hat c_g$ being both diagonal, \begin{eqnarray} \label{11y-GSHy}f^D_i=\kappa\nabla_iT,\qquad F^D_i=\kappa_g\nabla_iT_g,\quad y_i=\beta^{P}\nabla_j\pi_{ij}, \\ \label{12y-GSHy} \Sigma_{ij}^D=\zeta_g v_{\ell\ell}\delta_{ij}+\eta_g v^0_{ij},\quad \sigma_{ij}^D=\zeta v_{\ell\ell}\delta_{ij}+\eta v^0_{ij}, \\X_{ij}=\beta\pi_{ij}^0 +\beta_1\delta_{ij}\pi_{\ell\ell}.\qquad\qquad \end{eqnarray} In the next section, \S~\ref{energy-GSH}, an energy expression appropriate for granular media is presented, and shown to account for important features of granular statics. For the homogeneous case, with $\nabla_iT, \nabla_iT_g, \nabla_j\pi_{ij}=0$, we propose to combine this $w$ with the following transport structure, diagonal except for the two terms preceded by $\alpha$, \begin{eqnarray}\sigma_{ij}^D+ \Sigma_{ij}^D&=&(\zeta+\zeta_g) v_{\ell\ell}\delta_{ij}+(\eta+\eta_g) v^0_{ij}+\alpha\pi_{ij},\quad \label{13y-GSH} \\\label{14y-GSH} X_{ij}&=&-\alpha v_{ij}-\frac{u_{ij}^0}\tau -\frac{u_{\ell\ell}\,\delta_{ij}}{\tau_1}. \end{eqnarray} The first equation is simply the sum of the two dissipative stress contributions. The second equation uses the specific form of $w$, a result of which is \begin{equation} \pi_{ij}\equiv-\frac{\partial w}{\partial u_{ij}} =\sqrt\Delta({\cal B}\Delta\,\delta _{ij}-2{\cal A}\, u_{ij}^0) +{\cal A} \frac{u_s^2}{2\sqrt\Delta}\delta _{ij}, \end{equation} see Eq~(\ref{8}) below. So the relaxation times are given as \begin{equation}\label{GSH1} \frac1\tau\equiv2\beta{\cal A}\sqrt\Delta, \quad \frac1{\tau_1}\equiv3\beta_1\sqrt\Delta\left({\cal B}+\frac{{\cal A}u_s^2}{2\Delta^2}\right). \end{equation} Obviously, a simplification is given by taking either $\beta$ and $\beta_1$, or $\tau$ and $\tau_1$, as independent of $u_{ij}$. Choosing the second possibility, and taking $\tau,\tau_1$ as proportional to $T_g$, all other coefficients (i.e., $\zeta,\zeta_g, \eta,\eta_g,\alpha$) as constant, gives us a complete and well specified theory. As will be shown in an accompanying paper~\cite{JL3}, this choice leads to a surprisingly good agreement with hypoplasticity~\cite{Kolym}, a modern engineering theory widely employed to model solid granular behavior, especially triaxial experiments. \subsubsection{Granular Gas} Since we are considering a hydrodynamic theory, we should expect the equations given above to connect easily to those of granular gases, such as given by Haff~\cite{Haff}. Taking the elastic strain to relax infinitely fast, $\tau,\tau_1\to0$, essentially eliminates $u_{ij}$ as an independent variable. As a result, we have $w=w(T,T_g,\rho)$ in the rest frame, and only Eqs~(\ref{2y-GSH},\ref{3y-GSH},\ref{3yA-GSH}) remain as equations of motion, with the dissipative currents given by the second of Eqs~(\ref{11y-GSHy}) and the first of Eqs~(\ref{12y-GSHy}). Following Haff, we may take $w\sim T_g$, $s_g\sim\ln T_g$, and the term $(T_gs_g+\mu\rho-w)\,\delta_{ij}$ as the main contribution to the pressure [see Eq~(\ref{7y-GSH})]; also \begin{equation} \zeta_g,\,\eta_g,\, \kappa_gT_g,\, \gamma T_g \sim\,\rho \sqrt{T_g}. \end{equation} [Because $s$ is not included as an independent variable, the first of Eqs~(\ref{3y-GSH}) is ignored in~\cite{Haff}, as are $\kappa, \eta,\zeta$. Moreover, $\Sigma_{ij}^D$ is included only in $R_g$, not in the stress flux $\sigma_{ij}$, which is perhaps not quite consistent. The general gist, however, is certainly the same.]
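As a rough consistency check (our sketch, combining the ingredients quoted above), consider a uniformly agitated, force-free granular gas that is left to itself, $v_{ij}=0$ and $\nabla_iT_g=0$. The balance equation then reads ${\frac\partial{\partial t}}s_g=R_g/T_g=-\gamma T_g$; with Haff's $s_g\sim\ln T_g$, this becomes ${\frac\partial{\partial t}}T_g\sim-\gamma T_g^2$, and with $\gamma T_g\sim\rho\sqrt{T_g}$, i.e.\ $\gamma\sim\rho/\sqrt{T_g}$, \begin{equation} {\textstyle\frac\partial{\partial t}}T_g\sim-\rho\,T_g^{3/2} \quad\Rightarrow\quad T_g(t)=T_g(0)\,(1+t/t_0)^{-2}, \end{equation} with $t_0$ set by the initial conditions. This is Haff's law of free cooling, as it should be.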
\section{A Granular Energy Expression\label{energy-GSH}} Linear elasticity is a simple, consistent and complete theory. It starts with an energy $w$ that depends on the strain, $u_{ij}=\frac12(\nabla _iU_j+\nabla _jU_i)$, with $U_i$ the displacement vector, \begin{equation}\label{1} w=\textstyle\frac12K\Delta^2+\mu u_s^2\quad (\Delta\equiv -u_{\ell\ell},\, u_s\equiv \sqrt{u^0_{ij}u^0_{ij}}), \end{equation} see~\cite{LL7}. $K,\mu>0$ are two material-dependent constants, referred to as the bulk and the shear modulus. ($u_{\ell\ell}$ is the trace of $u_{ij}$, and $u^0_{ij}\equiv u_{ij}-\frac13u_{\ell\ell}\,\delta_{ij}$ its traceless part.) The stress-strain relation is obtained as a derivative, \begin{equation}\label{2} \sigma_{ij}=\pi_{ij}\equiv-\frac{\partial w}{\partial u_{ij}}=K\Delta\,\delta_{ij}-2\mu\, u^0_{ij}, \end{equation} which contains the pressure $P$ and the scalar shear stress $\sigma_s$, \begin{equation}\label{3} P\equiv\textstyle\frac13\sigma_{\ell\ell}=K\Delta,\quad \sigma_s\equiv\sqrt{\sigma^0_{ij}\sigma^0_{ij}}=2\mu u_s, \end{equation} both employed frequently below. Note that as there is no difference between $\sigma_{ij}$ and $\pi_{ij}$ in statics, we shall use them interchangeably here, in \S~\ref{energy-GSH}. Some ramifications of linear elasticity are: (1)~Since the stress $\sigma_{ij}$ is given as a function of three variables, $U_i$, the three components of the force balance $\nabla_j\sigma_{ij}=\rho G_i$ (with $\rho$ the density and $G_i$ the gravitational acceleration) suffice to uniquely determine $U_i$, from which the stress $\sigma_{ij}$ may be calculated for arbitrary geometry. (2)~The inverse compliance tensor, $M_{ijk\ell}$, linking the increments of stress and strain, $\text{d}\sigma_{ij}$ and $\text{d}u_{k\ell}$, is both isotropic and constant, \begin{eqnarray} \label{4} \text{d}\sigma_{ij}=\frac{\partial\sigma_{ij}}{\partial u_{k\ell}}\,\text{d} u_{k\ell}\equiv M_{ijk\ell}\,\text{d} u_{k\ell},\\ M_{ijk\ell}=K\delta_{ij}\delta_{k\ell}- \mu(\delta_{ik}\delta_{j\ell}+\delta_{jk}\delta_{i\ell}). \label{5} \end{eqnarray} (3)~As the pressure $P=K\Delta$ does not depend on the shear $u_s$, there is no volume dilatancy, $(\partial P/\partial u_s)|_\Delta=0$. (4)~Yield is not predicted. [Note that while the points (2), (3), (4) depend on the form of the energy $w$, the statement under (1) is quite general.] These equations account well for ordinary solids, but not for granular systems. Sand displays volume dilatancy, possesses a compliance tensor with significant stress-induced anisotropy, and most importantly, never strays far from yield, displaying significant irreversible, fluid-like, plastic movements in its vicinity. The first attempt to modify linear elasticity, so as to better account for granular behavior, was due to Boussinesq~\cite{Gudehus}. He assumed, around 1874, stress-dependent elastic moduli, $K,\mu\sim \Delta ^{1/2}\sim P^{1/3}$, in Eq~(\ref{2}), \begin{equation}\label{6} \sigma _{ij}\sim\sqrt\Delta\left( \Delta\, \delta _{ij}-\frac{3-6\nu }{1+\nu }\,u_{ij}^0\right),\quad \frac{3-6\nu }{1+\nu }=\frac{2\mu}K, \end{equation} with $\nu$ the constant Poisson ratio. This nonlinear stress-strain relation, sometimes referred to as the ``quasi-elastic model,'' is employed to understand granular compression~\cite{Evesque-de-Gennes} and sound velocity~\cite{Goddard}. Unfortunately, the above failure list of linear elasticity remains partly intact: \textbullet~As $P$ remains a function of $\Delta$ alone, dilatancy vanishes, $\partial P/\partial u_s|_\Delta=0$.
\textbullet~Yield must still be postulated. In addition, Eq~(\ref{6}) contains a basic deficiency: No energy $w$ exists such that $\sigma_{ij}=-\partial w/\partial u_{ij}$ holds, because the associated Maxwell relation is violated, $\partial \sigma _{ij}/\partial u_{\ell k}\not =\partial \sigma _{\ell k}/\partial u_{ij}$. We choose the granular elastic energy to be~\cite{J-L} \begin{equation} w=\sqrt\Delta\left( \textstyle\frac 25{\cal B}\Delta ^2+{\cal A}u_s^2\right), \label{7} \end{equation} with ${\cal A,B}>0$ denoting two material constants. The associated stress is \begin{equation} \sigma _{ij}=\sqrt\Delta({\cal B}\Delta\,\delta _{ij}-2{\cal A}\, u_{ij}^0) +{\cal A} \frac {u_s^2}{2\sqrt\Delta}\delta _{ij}. \label{8} \end{equation} As compared to Eq~(\ref{6}), the only difference is the last term $\sim u_s^2/\sqrt\Delta$. This is, however, remarkably useful in accounting for granular behavior. As we shall see, it yields volume dilatancy, shear-induced anisotropy, and above all, predicts yield at the Coulomb condition, \begin{equation} \sigma _s/P=\sqrt{2{\cal A/B}}. \label{9} \end{equation} In granular materials, there is a regime in which dissipation is insignificant and elastic responses are dominant: small-amplitude perturbations around given points in stress space. This was shown experimentally by Kuwano and Jardine~\cite{Kuwano-Jardine}, who observed that stress increments become reversible if the strain fluctuations are around $10^{-4}$. It is also corroborated by Alonso-Marroquin and Herrmann~\cite{AH} in molecular-dynamics simulations: Reducing elastic strains to $10^{-6}$, the irreversible plastic contributions are found to be around $10^{-14}$, implying a line as the stress-strain response, rather than the usual ellipse at higher amplitudes. This fact is important because it makes a direct verification of Eq~(\ref{8}) possible: Measure $\text{d}\sigma_{ij}= ({\partial\sigma_{ij}}/{\partial u_{k\ell}})\,\text{d} u_{k\ell}$ and $\text{d} u_{k\ell}$ independently, and compare the result to $M_{ijk\ell}\equiv \,{\partial\sigma_{ij}}/{\partial u_{k\ell}}$ as calculated from Eq~(\ref{8}). The data in~\cite{Kuwano-Jardine} are extensive, comprising 36 independent components of $M_{ijk\ell}$, all as functions of pressure, shear and the void ratio $e$. Comparing these data to the calculated $M_{ijk\ell}$ is the main result of this section, and represents an ambitious test of the energy $w$, Eq~(\ref{7}): Energy and stress of Eqs~(\ref{7},\ref{8}) depend only on two material parameters, $\cal A$ and $\cal B$, with their ratio fixed by the yield condition, Eq~(\ref{9}). Since the Ham River sand used in the experiment has a Coulomb yield angle of around $28^\circ$, implying ${\cal B}/{\cal A}=5/3$, only $\cal A$, a scale factor and a measure of the total hardness, is left as an adjustable parameter. Taking ${\cal A}=5100$~MPa, we find satisfactory agreement with their data at all values of pressure and shear, for the void ratio $e=0.66$ --- except close to yield which, due to increased plastic contributions, represents an especially difficult experimental regime. Because Kuwano and Jardine noticed that $e$ only alters the total hardness, by the factor $f\equiv(2.17-e)^2/(1+e)$, taking ${\cal A,B}\sim f$ achieves agreement for other values of $e$ as well. Similar agreement with their data on ballotini (glass beads) was achieved by taking ${\cal A}=4200$~MPa.
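Since Eq~(\ref{8}) and the moduli derived from it are used heavily below, the differentiation leading from Eq~(\ref{7}) to Eq~(\ref{8}) is worth checking by machine. The following short script (our illustration, using the computer algebra package sympy; it is not part of the original derivation) treats all nine components of $u_{ij}$ as independent, so that $-\partial w/\partial u_{ij}$ is the tensor derivative appearing in Eq~(\ref{8}): \begin{verbatim}
import sympy as sp

A, B = sp.symbols('A B', positive=True)
# strain tensor with nine independent components
u = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'u{i}{j}'))
Delta = -u.trace()                        # Delta = -u_ll
u0 = u - (u.trace()/3)*sp.eye(3)          # traceless part
us2 = sum(u0[i, j]**2 for i in range(3) for j in range(3))

w = sp.sqrt(Delta)*(sp.Rational(2, 5)*B*Delta**2 + A*us2)  # Eq (7)
sigma = -sp.Matrix(3, 3, lambda i, j: sp.diff(w, u[i, j])) # -dw/du_ij

target = (sp.sqrt(Delta)*(B*Delta*sp.eye(3) - 2*A*u0)      # Eq (8)
          + A*us2/(2*sp.sqrt(Delta))*sp.eye(3))
print(sp.simplify(sigma - target))   # should print the zero matrix
\end{verbatim} Evaluated at a symmetric strain, this is exactly Eq~(\ref{8}); that such a potential $w$ exists at all is, of course, the point on which the Boussinesq form, Eq~(\ref{6}), fails.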
Summarizing these fits, we take \begin{equation} \mathcal{A}= \mathcal{A}_0\times\frac{(2.17-e) ^2}{1.3736( 1+e)},\quad {\cal\frac BA}=\frac53, \end{equation} with $\mathcal{A}_0=5100$ and 4200~MPa being the values of $\cal A$ for $e=0.66$, for Ham River sand and ballotini, respectively. Given this experimental support for the functional dependence of $\sigma_{ij}$ on $U_k$, we have employed Eq~(\ref{8}) to evaluate static stress distributions in silos, sand piles and under point loads, not surprisingly with rather satisfactory results, see~\cite{ge}. Note that Eq~(\ref{8}) does not contain any fit parameters: the ratio ${\cal B/A}=5/3$ is fixed by the yield angle, while ${\cal A}_0$, as a scale factor, does not enter the stress distribution at all. (Given a solution, one may change the strain by the factor $\alpha$, and ${\cal A}_0$ by $\alpha^{-1.5}$, with the stress unchanged and still a solution, provided the boundary conditions are the usual ones, either given in terms of stresses or requiring that the displacement vanish.) \subsection{Yield and Energetic Instability}\label{yield} A thermodynamic energy must be a convex function of state variables to ensure stability -- this is why compressibility and specific heat are always positive, cf.~\cite{Callen}. Being a quadratic function of $\Delta$ and $u_s$, the energy of linear elasticity, Eq~(\ref{1}), is always convex. In contrast, the granular energy, Eq~(\ref{7}), is convex if and only if \begin{eqnarray} \left( \partial ^2w/\partial \Delta ^2\right) _{u_s} &\geq &0,\ \ \left( \partial ^2w/\partial u_s^2\right) _\Delta \geq 0, \label{10} \\ \left( \partial ^2w/\partial \Delta \partial u_s\right) ^2 &\leq &\left( \partial ^2w/\partial \Delta ^2\right) _{u_s} \left( \partial ^2w/\partial u_s^2\right) _\Delta \label{11} \end{eqnarray} hold. (See appendix on some subtleties in this context.) More explicitly, with $(\partial^2w/\partial u_s^2)_\Delta=2{\cal A}\sqrt\Delta$, $(\partial^2w/\partial\Delta^2)_{u_s}=\frac32{\cal B}\sqrt\Delta-\frac14{\cal A}u_s^2/\Delta^{3/2}$ and $\partial^2w/\partial\Delta\partial u_s={\cal A}u_s/\sqrt\Delta$, condition~(\ref{11}) reduces to $\frac{3{\cal A}}{2\Delta}(2{\cal B}\Delta^2-{\cal A}u_s^2)\geq0$ (the conditions~(\ref{10}) are weaker), which implies \begin{equation} u_s^2/\Delta ^2\leq 2{\cal B/A}, \label{12} \end{equation} drawing the boundary for the region of stable strains. Deriving $4P/\sigma _s=(\Delta /u_s)\times \left( 2{\cal B}/{\cal A}+u_s^2/\Delta ^2\right)$ from Eq~(\ref {8}), and inserting $u_s^2/\Delta ^2= 2{\cal B/A}$ into it, one obtains Eq~(\ref{9}), the Drucker-Prager version of the Coulomb yield condition (cf.\ Schofield \& Wroth, 1968; Huang, 1983). The actual Coulomb yield condition, $\sigma _s/P=(\sqrt{18+6L^2}\sin \varphi _c)/({L\sin \varphi _c+3})$, where $L\equiv\sqrt{3}\tan \left[ \frac 13\arcsin \left( \sqrt{6}\,\sigma _{ij}^0\sigma _{jk}^0\sigma _{ki}^0/\sigma _s^3\right) \right] $ denotes the Lode parameter, would only result if terms $\sim u_{ij}^0u_{jk}^0u_{ki}^0$ were included in Eq~(\ref{7}). In a classic paper, Goddard~\cite{Goddard} started from Hertz contacts between grains, and considered the structure of the energy and stress. He concluded that, if the topology of the grain contacts does not change with stress, the energy is a homogeneous function of degree $5/2$ in the strain $u_{ij}$, of the form $w=\Delta^{2.5}\times g(u_s^2/\Delta^2\!, \, u^0_{ij}u^0_{jk}u^0_{ki}/\Delta^3)$, where $g$ is an arbitrary function. As Eq~(\ref{7}) is clearly a special case of this general energy, we take this as further, microscopically founded support for our starting point. There is an instructive analogy between the granular stress-strain relation, Eq~(\ref{8}), and the van der Waals equation of state for real gases.
Boyle's law is stable everywhere, while the van der Waals equation has a non-physical zone, the liquid-gas instability, in which the compressibility is negative. Similarly, Hooke's law is stable everywhere, but the granular stress-strain relation has a forbidden region, that of yield. Note that \begin{equation}\label{13} \left.\partial P/\partial \Delta\right|_{\sigma _s}\geq 0 \end{equation} is implied by Eqs~(\ref{10},\ref{11}), see appendix, so this forbidden region is also characterized by a negative compressibility. The actual innovation of the van der Waals theory is the fact that the condition for the onset of the liquid-gas transition, instead of being an extra input, is implied by the free energy. Similarly, yield is now a result of elasticity. \subsection{Granular Stress-Strain Relation} The granular stress-strain relation, Eq~(\ref{8}), and the definitions of Eq~(\ref{3}) imply \begin{eqnarray} P &=&\Delta ^{3/2}\left({\cal B}+\textstyle\frac 12{\cal A}u_s^2/\Delta ^2\right), \label{14} \\ \sigma _s &=&2{\cal A}\Delta ^{1/2}u_s . \label{15} \end{eqnarray} Eliminating $\Delta $, we obtain \begin{equation} {\cal B}\sigma _s^4-8{\cal A}^3Pu_s^3\sigma _s+8{\cal A}^5u_s^6=0. \label{16} \end{equation} \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{1.pdf} \end{center} \caption{Shear stress versus shear strain for given pressure: for granular elasticity, linear elasticity (upper inset), and elastoplastic theory (lower inset).} \label{fig1} \end{figure} Fig.~\ref{fig1} plots $\sigma _s$ versus $u_s$ for the fixed pressure of $P=0.1$ MPa. Note how remarkably linear the plot is -- almost until yield, where the curve turns back abruptly. (Dashed lines are used throughout for unstable states.) This behavior is approximated by the elastoplastic model, frequently used in soil mechanics: Linear elasticity followed by yield and flat plastic motion, see the lower inset in Fig.~\ref{fig1}. Nonlinearity is relevant only when yield is close. \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.7]{2} \end{center} \caption{Thick line: Pressure versus compression at fixed shear. Dashed lines represent unstable states. Thin straight line: The same curve for linear elasticity. Inset: The analogous instability in the isothermal curve of the van der Waals equation of state.} \label{fig2} \end{figure} If instead $u_s$ is eliminated from Eqs~(\ref{14},\ref{15}), the expression \begin{equation} \sigma _s^2+8{\cal AB}\Delta ^3-8{\cal A}P\Delta^{3/2}=0 \label{17} \end{equation} allows a plot of pressure $P$ versus compression $\Delta$, at given $\sigma _s=0.1$ MPa, see Fig.~\ref{fig2}. The pressure increases with the compression, implying a positive compressibility, only in the region of large $\Delta $. The compressibility is negative where $\Delta $ is small, and the stability condition, Eq~(\ref{9}) or (\ref{13}), is violated. The van der Waals equation of state, $\left( P+a/v^2\right) (v-b)=RT$, is quite similar, where $1/v$ corresponds to $\Delta$, $R$ is the gas constant and $v$ the molar volume, see e.g.~\cite{Callen}. The system can be either in the dense liquid state or the rarefied gaseous phase, with the zone in between forbidden, see the inset of Fig.~\ref{fig2}. \begin{figure}[bp]% \begin{center} \includegraphics[scale=0.7]{3} \end{center} \caption{Compression $\Delta$ versus shear strain $u_s$, at fixed pressure. The dashed line is again unstable.
In linear elasticity, the same curve is a horizontal straight line.} \label{fig3} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.7]{4} \end{center} \caption{Pressure $P$ versus shear stress $\sigma _s$, at fixed compression. The dashed line is unstable. In linear elasticity, the same curve is a horizontal straight line. } \label{fig4} \end{figure} Alternatively, we may plot $\Delta$ versus $u_s$ at fixed $P$, or $P$ versus $\sigma _s$ at fixed $\Delta$, see Figs.~\ref{fig3} and \ref{fig4}, both showing clear evidence of ``volume dilatancy,'' the fact (first noticed by Reynolds) that granular systems expand with shear, or $\partial\Delta/\partial u_s|_P\not=0$, or $\partial P/\partial\sigma_s |_\Delta\not=0$. For linear elasticity, these plots are simply horizontal, and the derivatives vanish. If the Boussinesq model, Eq~(\ref{6}), were employed, all four plots would be indistinguishable from those of linear elasticity. So the last term of Eq~(\ref{8}) is indeed essential. (Plastic motion, not considered here, contributes additional dilatancy, and may dominate.) \subsection{Shear-Dependence of the Elastic Moduli} Hooke's law, Eq~(\ref{2}), $\sigma _{ij}=K\Delta \delta _{ij}-2\mu u_{ij}^0$, may be written as \begin{equation} u_{ij}=\frac \nu E\sigma _{nn}\delta _{ij}-\frac{\sigma _{ij}}{2\mu }, \label{18} \end{equation} with the Poisson ratio $\nu $ and the Young modulus $E$ given as \begin{equation} E =\frac{9\mu K}{3K+\mu }, \quad \nu =\frac{3K-2\mu }{6K+2\mu }. \label{19} \end{equation} Requiring the granular stress-strain relation Eq~(\ref{8}) to assume these familiar forms, either Eq~(\ref{2}) or (\ref{18}), leads to a strain dependence of $K,\mu$, \begin{eqnarray} K &=&\Delta ^{1/2}\left({\cal B}+\textstyle{\frac 12} {\cal A}u_s^2/\Delta ^2\right), \label{20}\\ \mu &=&{\cal A}\Delta ^{1/2}, \label{21} \end{eqnarray} and via Eq~(\ref{19}) also of $E,\nu$. As this is an intuitive way to characterize nonlinear elastic behavior, we shall consider their shear and pressure dependence more closely here. Using Eqs~(\ref{14},\ref{15}), we write these moduli as \begin{eqnarray} \mu &=&\widetilde{\mu }\xi ^{1/3}, \ \ \ \ \ \ \ \ K=\widetilde{K}\xi ^{-2/3}, \nonumber \\ E &=&\widetilde{E}\frac{3{\cal B}+{\cal A}}{3{\cal B}+{\cal A}\xi }\xi ^{\frac 13},\ \ \nu =\frac{3{\cal B}-2{\cal A}\xi }{6{\cal B}+2{\cal A}\xi }; \label{22} \end{eqnarray} where $\xi$ quantifies shear, \begin{equation} \xi =\textstyle\frac 12\left[ 1\pm \sqrt{1-\left( {\cal B}/2{\cal A}\right) \left( \sigma _s/P\right) ^2}\right], \qquad\qquad\label{24} \end{equation} and $\widetilde{\mu }$, $\widetilde{K}$, $\widetilde{E}$, $\widetilde{\nu}$ denote the respective values without shear, at $\xi=1$, \begin{equation}\widetilde{\mu }={\cal A}\left( \frac P{\cal B}\right)^{\frac 13},\ \ \widetilde{K} ={\cal B}\left( \frac P{\cal B}\right) ^{\frac 13},\ \ \widetilde{E}=\frac{9{\cal AB}}{3{\cal B}+{\cal A}} \left( \frac P{\cal B}\right) ^{\frac 13}, \label{23} \end{equation} see Fig.~\ref{fig5}. (The positive sign in Eq~(\ref{24}) is the stable branch, which meets the unstable branch with the negative sign at yield, where the square root vanishes.) \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.7]{5} \end{center} \caption{Variations of $K,\mu,E,\nu$ with $\sigma_s/P$. The moduli are rescaled by their values at $\sigma _s=0$, denoted respectively with a tilde.
Their variation $\sim P^{1/3}$ is shown in the inset.} \label{fig5} \end{figure} As mentioned in the introduction, the $P^{1/3}$-dependence of the tilded quantities is well known. For typical granular behavior, however, the more relevant dependence is that on shear, which, like yield and dilatancy, derives from the last term of Eq~(\ref{8}). \subsection{The Compliance Tensor} \subsubsection{Theoretical Expressions} Starting from Eq~(\ref{8}), the tensor $M_{ijk\ell}$ of Eq~(\ref{4}) is calculated as \begin{eqnarray} M_{ijkl} ={\cal A}\sqrt\Delta\,[({u_s^2}/{4\Delta ^2}+4/3- {3{\cal B}}/{2{\cal A}}) \delta _{ij}\delta _{kl} \nonumber \\ -\delta _{ik}\delta _{jl}-\delta _{il}\delta _{jk}+(u_{ij}\delta _{kl}+\delta _{ij}u_{kl})/\Delta]. \label{25} \end{eqnarray} The compliance tensor $\lambda _{ijk\ell}$, defined via \begin{equation} \text{d}u_{ij}=\lambda _{ijk\ell}\text{d}\sigma _{k\ell}, \label{26} \end{equation} is obtained by inverting $M_{ijk\ell}$, \begin{eqnarray}\nonumber \lambda _{ijk\ell} &=&\frac{\left[{\cal A}u_s^2+2 \left( {\cal A}-{\cal B}\right) \Delta ^2\right] \delta _{k\ell}\delta _{ij}}{6{\cal A}\Delta ^{1/2} \left( {\cal A}u_s^2-2{\cal B}\Delta ^2\right) }- \frac{\delta _{ik}\delta _{j\ell}+\delta _{i\ell}\delta _{jk}}{4{\cal A}\Delta ^{1/2}} \\ &&+\frac{u_{ij}\Delta \delta _{k\ell}+u_{k\ell}\Delta \delta _{ij}+u_{ij}u_{k\ell}}{ 3\Delta ^{1/2}\left({\cal A}u_s^2-2{\cal B}\Delta ^2\right) },\\ &=&\frac{9{\cal A}^5\sigma _s^2+8\left( 4{\cal A}-9{\cal B}\right) \mu ^6}{54\mu \left( {\cal A}^5\sigma _s^2-8\mu ^6{\cal B}\right) }\delta _{k\ell}\delta _{ij}-\frac{\delta _{ik}\delta _{j\ell}+\delta _{i\ell}\delta _{jk}}{4\mu } \nonumber \\ &&-\frac{4{\cal A}^3\mu ^3\left( \sigma _{ij}^0\delta _{k\ell}+\sigma _{k\ell}^0\delta _{ij}\right) -3{\cal A}^5\sigma _{ij}^0\sigma _{k\ell}^0}{9\mu \left( {\cal A}^5\sigma _s^2-8\mu ^6{\cal B}\right)}. \label{27} \end{eqnarray} In the first expression $\lambda _{ijk\ell}$ is strain-dependent, in the second stress-dependent; the conversion is calculated using $\Delta =\mu ^2/{\cal A}^2$, $u_{ij}^0=-\frac12\sigma _{ij}^0/\mu$, $u_s=\frac12\sigma _s/\mu$, with $\mu ={\cal A}(\xi P/{\cal B})^{1/3}$, cf.\ Eqs~(\ref{22},\ref {23}). The second expression, a surprisingly complicated one if the starting expression for the energy serves as a benchmark, is what may be compared to experiments directly. Before we do this, it is useful to pause and notice that the last term of both Eqs~(\ref{25}) and (\ref{27}) deviates structurally from the isotropic form of Eq~(\ref{5}). More generally, for an isotropic medium and in the presence of pure compression ($\sigma^0_{ij}=0,\, P\not=0$), we may (quite independently of the specific form of the elastic energy) take $\lambda _{ijk\ell}$ to be \begin{equation}\label{lambda0} \lambda^0_{ijk\ell}=\lambda_1\delta_{ij}\delta_{k\ell} +\lambda_2(\delta_{ik}\delta_{j\ell} +\delta_{i\ell}\delta_{jk}), \end{equation} with $\lambda_1,\lambda_2$ arbitrary scalar functions of $\Delta,u_s$, and the Lode parameter $L$. This is because \textbullet~both $\sigma_{ij}$ and $u_{k\ell}$ are symmetric, hence $\lambda_{ijk\ell}=\lambda _{jik\ell}=\lambda _{ij\ell k}$; \textbullet~the Maxwell relation holds, $\partial ^2w/\partial u_{ij}\partial u_{lk}=\partial ^2w/\partial u_{lk}\partial u_{ij}$, hence $\lambda _{ijk\ell}=\lambda _{k\ell ij}$. In the presence of shear, $\sigma^0_{ij}\not=0$, $\lambda _{ijk\ell}$ can take on many more terms.
To linear order in $\sigma^0_{ij}$, these are \[\lambda_3(\sigma^0_{ij}\delta_{k\ell}+\delta_{ij}\sigma^0_{k\ell})+ \lambda_4(\sigma^0_{ik}\delta_{j\ell}+\sigma^0_{i\ell}\delta_{jk} +\sigma^0_{j\ell}\delta_{ik}+\sigma^0_{jk}\delta_{i\ell}).\] To second order, we may substitute all above $\sigma^0_{ij}$ with $\sigma^0_{ik}\sigma^0_{kj}$, and also add the terms $\sigma^0_{ij}\sigma^0_{k\ell}$ and $\sigma^0_{ik}\sigma^0_{j\ell}+ \sigma^0_{jk}\sigma^0_{i\ell}$. We shall refer to $\lambda^0_{ijk\ell}$ as being isotropic, and to the $\sigma^0_{ij}$-dependent terms as displaying ``shear-induced anisotropy.'' If the medium were inherently anisotropic, say because the grains are pressed into some quasi-periodic array, leading to a preferred direction $\bf{n}$, the above expression would be more complicated, because $\delta_{ij}$ in Eq~(\ref{lambda0}) may then be substituted by three different tensors: $\delta_{ij}-n_in_j$, $n_in_j$, and $\epsilon_{ijk}n_k$. For triclinic symmetry and without the Maxwell relation, all 36 elements of $\lambda_{ijk\ell}$ are independent -- even in the absence of shear. As mentioned, this ``fabric anisotropy'' is not included in the present consideration, because the starting expression for the energy, Eq~(\ref{7}), is isotropic. \subsubsection{Comparison with Experiments} Because $\sigma _{ij}$ and $u_{ij}$ are symmetric, each characterized by six independent components, Eq~(\ref{26}) may be written as a vector equation, $\text{d} \vec u=\hat\lambda\text{d}\vec\sigma$, with $\hat\lambda$ a $6\times6$ matrix, and $\text{d}u, \text{d}\sigma$ given as in Eq~(\ref{28}). In the so-called ``principal system'' of coordinates, in which $\sigma _{ij}$ is diagonal (but not $\text{d}\sigma _{ij}$), Kuwano and Jardine take this vector equation to be given as~\cite{Kuwano-Jardine} \begin{equation} \left( \begin{array}{l} du_{11} \\ du_{22} \\ du_{33} \\ 2du_{23} \\ 2du_{13} \\ 2du_{12} \end{array} \!\!\right) =\left( \begin{array}{cccccc} & & & 0 & 0 & 0 \\ & \hat{C} & & 0 & 0 & 0 \\ & & & 0 & 0 & 0 \\ 0 & 0 & 0 & G_{23}^{-1} & 0 & 0 \\ 0 & 0 & 0 & 0 & G_{13}^{-1} & 0 \\ 0 & 0 & 0 & 0 & 0 & 2G_{12}^{-1} \end{array} \!\!\right)\!\! \left( \begin{array}{l} d\sigma _{11} \\ d\sigma _{22} \\ d\sigma _{33} \\ -d\sigma _{23} \\ -d\sigma _{13} \\ -d\sigma _{12} \end{array} \!\!\right) \label{28} \end{equation} with \begin{eqnarray} \hat{C}=\left( \begin{array}{ccc} {-1}/{E_1}\, & {\nu _{12}}/{E_2}\, & {\nu _{13}}/{E_3} \\ {\nu _{21}}/{E_1}\, & {-1}/{E_2}\, & {\nu _{23}}/{E_3} \\ {\nu _{31}}/{E_1}\, & {\nu _{32}}/{E_2}\, & {-1}/{E_3} \end{array} \right).\label{29} \end{eqnarray} $G_{ij}$ is referred to as the shear modulus in the $i$-$j$ plane, $E_i$ the Young modulus along $i$, and $\nu_{ij}$ the Poisson ratio for ``the effect of the $i$-strain on the $j$-strain.''
Identifying these moduli with components of the $\lambda_{ijk\ell}$ tensor, \begin{eqnarray}\nonumber G_{ij}={-1}/{4\lambda _{ijij}},\\\nonumber E_i={-1}/{\lambda _{iiii}},\\ \nu _{ij}=-{\lambda _{iijj}}/{\lambda _{jjjj}}\label{30} \end{eqnarray} (for $i\neq j$ and without summation over $i$ or $j$), we may employ Eq~(\ref{27}) to obtain \begin{eqnarray} G_{13}=G_{23}=G_{12}=\mu, \qquad\qquad\quad \label{31} \\ E_i =\frac{27\mu \left( {\cal A}^5\sigma _s^2-8\mu ^6{\cal B}\right)}{9{\cal A}^5\sigma _s^2-72\mu ^6{\cal B}-{\cal A}s_i^2},\qquad \label{32} \\ \nu _{ij} =\frac 12\frac{9{\cal A}^5\sigma _s^2-72\mu ^6{\cal B}+2{\cal A}s_is_j}{9{\cal A}^5\sigma _s^2-72\mu ^6{\cal B}-{\cal A}s_j^2}, \label{33} \end{eqnarray} with $\mu ={\cal A}(\xi P/{\cal B})^{1/3}$, $s_i\equiv 3{\cal A}^2\sigma _i^0-4\mu ^3$, $\sigma _i^0\equiv\sigma _i-P$, and $\sigma_i$ denoting the three diagonal components of $\sigma_{ij}$ in the principal system. Before embarking on a comparison, we shall first establish a few qualitative features from theory: \textbullet~Without shear, $\sigma _i^0\rightarrow 0$, all $E_i$ are equal, \begin{equation} E_i\rightarrow E_{\sec }= \frac{27{\cal A}{\cal B}}{2{\cal A}+9{\cal B}}\left( \frac P{\cal B}\right) ^{\frac 13}, \label{34} \end{equation} where $E_{\sec }$ is called the secant Young modulus. The same holds for the Poisson ratios, \begin{equation} \nu _{ij}\rightarrow \widetilde{\nu }^{*}= \frac 12\frac{9{\cal B}-4{\cal A}}{9{\cal B}+2{\cal A}}. \label{35} \end{equation} (Note that $\widetilde{ \nu }^{*}$ differs from $\widetilde{\nu }$, and $E_{\sec }$ from $\widetilde{E}$, by a constant factor.) \textbullet~ Because of Eq~(\ref{lambda0}), and irrespective of the energy specified, we have $E_1=E_2=E_3$, $\nu_{12}=\nu_{13}=\nu_{23}$, and $G_{12}=G_{13}=G_{23}$ in the absence of shear, $\sigma^0_{ij}=0$. Any discrepancy with experiment therefore implies fabric anisotropy. \textbullet~Finite shear will split $E_i$ and $\nu _{ij}$, but not $G_{ij}$, cf.\ Eq~(\ref{31}) --- though this is an energy-related feature. \textbullet~Because of the Maxwell relation, the matrix $\hat\lambda$ of Eq~(\ref {28}) is symmetric, implying especially (no summation) \begin{equation} \nu _{ij}E_i=\nu _{ji}E_j. \label{36} \end{equation} This symmetry was noted by Love (1927) and adopted by Kuwano and Jardine in interpreting their data~\cite{Kuwano-Jardine}. \textbullet~The moduli $E, \mu, \nu $ are related as $E=2\mu \left( \nu +1\right) $, see Eq~(\ref{19}). A similar relation holds for $\mu$, $E_i$, $\nu _{ik}$ [no summation, see Eqs~(\ref{32},\ref{33})], \begin{equation} E_i\left( 6\mu \nu _{ij}-E_j\right) ^2=4E_j\left( 3\mu -E_i\right) \left(3\mu -E_j\right). \label{37} \end{equation} \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.35]{6} \end{center} \caption{Variation with pressure $P$ of the shear moduli $G_{vh}, G_{hh}$, Young moduli $E_{v}, E_{h}$ and Poisson ratios $\nu_{vh}$, $\nu_{hh}$ (inset), at $\sigma _h/\sigma _v=0.45$. Symbols are the data on Ham River sand, at a void ratio of 0.66, by Kuwano \& Jardine (2002).} \label{fig6} \end{figure} It is important to realize that all formulas of this section hold not only for Cartesian coordinates, $i\to x,y,z$, but also for cylindrical ones, $i\to z,\rho,\varphi$. Taking $\Delta=-(u_{\varphi\varphi}+u_{\rho\rho}+u_{zz})$, and similarly for $u_s$, we may again start from the same energy, Eq~(\ref{7}), and derive all the results here. [Spatial differentiation is what mars the similarity.
Yet once the strain components $u_{\rho\rho}, u_{\rho\varphi}\dots$ are given, no spatial differentiation is needed.] The one difference is that for any constant $\sigma_{ij}$ in Cartesian coordinates, there is always a principal system. In cylindrical coordinates, this holds only if the stress is also cylindrically symmetric. In other words, only if the stress is uniaxially diagonal, $\sigma_{ij}=\text{diag} (\sigma_1, \sigma_2, \sigma_3)$ with $\sigma_2=\sigma_1$ in Cartesian coordinates, will it be diagonal cylindrically. Because Kuwano and Jardine~\cite{Kuwano-Jardine} used an axisymmetric device for their measurements, the stress they apply is indeed cylindrically symmetric, with $G_{\rho z}=G_{\varphi z}$, $E_\rho=E_\varphi$, $\nu _{\rho z}=\nu _{\varphi z}$, $\nu _{z\rho}=\nu _{z\varphi}$, cf.\ Eqs~(\ref{31}-\ref{33}), noting $s_\rho=s_\varphi$. In addition, Eq~(\ref{36}) leads to $\nu _{\rho\varphi}=\nu _{\varphi\rho}$. Following them, we refer to the response coefficients being measured as $G_{hh}\equiv G_{\rho\varphi}$, $G_{vh}\equiv G_{\rho z}=G_{\varphi z}$, $E_h\equiv E_\rho=E_\varphi$, $E_v\equiv E_z$, $\nu_{hh}\equiv \nu _{\rho\varphi}=\nu _{\varphi\rho}$, $\nu _{hv}\equiv\nu _{\rho z}=\nu _{\varphi z}$, $\nu _{vh}\equiv \nu _{z\rho}=\nu _{z\varphi}$, where $h$ denotes the horizontal directions, either $\rho$ or $\varphi$, and $v$ the vertical direction $z$, see the cylinder of Fig.~\ref{fig6}. The main plots of Fig.~\ref{fig6} compare the theoretical curves [calculated by taking $\sigma _\rho=\sigma _\varphi=\sigma _h$ and $\sigma _z=\sigma _v$ in Eqs~(\ref{31}-\ref{33})] and the experimental data [measured with Ham River sand] for $E_h$, $E_v$, $G_{vh}$, $G_{hh}$, as functions of $P$, for $\sigma _h=0.45\, \sigma _v$. The inset shows the same comparison for $\nu _{vh},\nu _{hh}$. We especially note that theory and experiment agree on the ordering of the induced anisotropy, i.e., $\nu _{vh}>\nu _{hh}$, $E_v>E_h$ and $G_{hh}\approx G_{vh}$, which are pairwise equal in linear elasticity and the Boussinesq model. (The slight difference between $G_{hh}$ and $G_{vh}$ is, as mentioned, the result of fabric anisotropy present in the sample.) For a theory without any useful fit parameter, the agreement must be considered a convincing verification of the elastic approach which, instead of postulating the stress-dependence of 21 (or even 36) independent components of $\lambda_{ijkl}$ directly, looks for one appropriate scalar expression for the energy $w$. Even if it is heavily simplified, a large number of geometric correlations is preserved by the mere fact that $\lambda_{ijkl}$ is obtained via a double differentiation. This must be the main reason why the calculated $\lambda_{ijkl}$ stands up so surprisingly well when compared to the extensive data of~\cite{Kuwano-Jardine}. \begin{figure}[tbp] \begin{center} \includegraphics[scale=0.7]{7} \end{center} \caption{Variation of Young and shear moduli, $E_{sec}$ and $\mu$, with pressure $P$, for the case of vanishing shear, $\sigma _v=\sigma _h$. The dotted lines are the empirical formulas of Kuwano \& Jardine (2002), for the Ham River sand at the void ratio $e=0.66$.
The split is proof of fabric anisotropy.} \label{fig7} \end{figure} Kuwano and Jardine~\cite{Kuwano-Jardine} employ the following empirical formulas (in MPa) for the Ham River sand, \begin{eqnarray} E_v &=&204f\left( \sigma _v/P_a\right) ^{0.52} \label{Ev-exp} \\ E_h &=&174f\left( \sigma _h/P_a\right) ^{0.53} \label{Eh-exp} \\ G_{vh} &=&72f\left( \sigma _v/P_a\right) ^{0.32}\left( \sigma _h/P_a\right) ^{0.2} \label{Gvh-exp} \\ G_{hh} &=&81f\left( \sigma _v/P_a\right) ^{-0.04}\left( \sigma _h/P_a\right) ^{0.53} \label{Ghh-exp} \end{eqnarray} where $P_a=0.1013$~MPa is the atmospheric pressure and $f=(2.17-e)^2/(1+e)$. ($f=1.3736$ for the void ratio $e=0.66$.) Fig.~\ref{fig7} shows the theoretical and experimental values for $E_h$, $E_v$, $G_{vh}$ and $G_{hh}$, as functions of $P$ for the isotropic case $\sigma _h=\sigma _v$. The fact that $E_h$, $E_v$ and $G_{vh}$, $G_{hh}$ are pairwise different indicates (as discussed above) fabric anisotropy. Moreover, the theoretical curves are $\sim P^{1/3}$, while the experimental ones seem to favor a larger power, $\sim P^{1/2}$. As discussed, \textbullet~this is a known contradiction between Hertz contacts and sound data, with possible explanations provided by Goddard~\cite{Goddard} and de Gennes~\cite{deGennes96}, \textbullet~and a question of simplicity versus accuracy in the present approach. \begin{figure}[t] \begin{center} \includegraphics[scale=0.53]{8} \end{center} \caption{Upper, middle, and lower figures show the Young moduli, shear moduli and Poisson ratios as functions of $\sigma_s/P$. The dotted lines present the empirical formulas of Kuwano \& Jardine (2002) for the Ham River sand, at the void ratio $e=0.66$.} \label{fig8} \end{figure} Fig.~\ref{fig8} displays the effect of shear on the different moduli, with $\sigma _h\not=\sigma _v$. The upper, middle and lower figures respectively plot the Young moduli $E_i$, the shear modulus $\mu $ (both scaled by their isotropic values, $E_{\rm sec}, \tilde\mu$), and the Poisson ratios $\nu_{ij}$. In agreement with the empirical formulas Eqs~(\ref{Ev-exp}-\ref{Ghh-exp}), $E_v$ increases with $\sigma _s/P$, while $E_h$ decreases, in the region away from yield. As yield is approached, both drop quickly to zero. This critical, pre-yield behavior is clearly absent from the empirical formulas and is of interest for future experiments. In theory, $G_{vh},G_{hh}$ are equal, decreasing moderately with $\sigma _s/P$, by less than 20\%. In experiments, the shear moduli are split, with one increasing and the other decreasing. The discrepancy between theory and experiment stays within 20\% in the range $\sigma _s/P=0$ to $0.6$. This need not be a result of fabric anisotropy; a more complicated energy expression could also account for it. The variation of the Poisson ratios $\nu _{vh},\nu _{hv},\nu _{hh}$ is given by Eq~(\ref{33}). As depicted, $\nu _{vh}$ and $\nu _{hv}$ increase, while $\nu _{hh}$ decreases, with $\sigma _s/P$, all being divergent at yield. No empirical formulas for the ratios are given in~\cite{Kuwano-Jardine}; the two circles in the plot simply depict the values from the inset of Fig~\ref{fig6}. However, $\nu _{hh}=E_h/(2G_{hh})-1$ was assumed to hold by the authors, and interestingly, it may be derived by taking $i=h$, $j=h$ in Eq~(\ref{37}), yielding $\nu _{hh}=E_h/(2\mu )-1$. Assuming that both coefficients $\cal A,B$ of Eq~(\ref{7}) are proportional to $f$ of Eqs~(\ref{Ev-exp}-\ref{Ghh-exp}), agreement between experiment and theory is extended to all values of the void ratio.
Comparison was also made to Kuwano and Jardine's data gained using glass ballotini~\cite{Kuwano-Jardine}. Taking ${\cal A}=4200$~MPa, ${\cal B}=\frac53{\cal A}=7000$~MPa, we find similar agreement. \subsection{The Elastic Part of Flow Rules} The increment relation, Eq~(\ref{4}), may also be written in the matrix form $\text{d}\vec{\sigma }=\hat{M}\text{d}\vec{u}$, with $\hat{M}$ a symmetric $6\times 6$ matrix, and $\text{d}\vec{\sigma }, \text{d}\vec{u}$ still given as in Eq~(\ref{28}). The determinant, $\det \hat{M}=9{\cal A}^5\left( 2{\cal B}\Delta ^2-{\cal A}u_s^2\right) \Delta$, calculated from Eq~(\ref{25}), vanishes at the yield surface, ${\cal A}u_s^2=2{\cal B}\Delta ^2$, because an eigenvalue, call it $m_1$, also does. (This is not a coincidence, as $\hat{M}$ is, up to a sign, the Hessian matrix of the energy function, which is positive definite only in the stable region. It may be of interest to note that the determinant of the Boussinesq model, $\det\hat{M}=9{\cal A}^5 \left(3{\cal B}+4{\cal A}\right) \Delta^3$, never vanishes.) The associated eigenvector $\vec{m}_1$ points along the direction in which a finite deformation $\text{d}\vec{u}\neq 0$ may take place under constant stress, $\text{d}\sigma _{ij}=0$. We refer to $\vec{m}_1$ as the elastic flow direction, since $\text{d}\vec{u}\,\|\,\vec{m}_1$ is only the elastic contribution to the strain. Setting $\text{d}\sigma_{ij}=0$ in Eq~(\ref{4}) and using ${\cal A}u_s^2=2{\cal B}\Delta ^2$, we obtain \[\text{d}u_{ij}=-\frac 12\left( \delta _{ij}+ \frac{u_{ij}}\Delta \right)\text{d}\Delta =\left( \sqrt{\frac {\cal B}{2{\cal A}}}\frac{\sigma _{ij}^0}{\sigma _s}-\frac{\delta _{ij} }3\right)\text{d}\Delta.\] The calculated $\text{d}u_{ij}$, rewritten as $\text{d}\vec u$, is the eigenvector $\vec m_1$. Remarkably, one can rewrite this equation as $\text{d}u_{ij}/\text{d}\Delta =\partial g/\partial \sigma _{ij}$, or \begin{equation} \vec{m}_1\parallel \partial g/\partial \vec{\sigma}, \quad \text{with}\quad g=\sqrt{{\cal B}/2{\cal A}}\sigma _s-P, \label{flow-potential} \end{equation} implying that the elastic flow direction is perpendicular to the yield surface, as defined by the equation $g=0$. If the plastic contribution to the strain field may be neglected for some reason, this property is referred to as the \textit{associated flow rule}; see~\cite{wroth,Huang}.
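The above chain of statements, i.e., the stress of Eq~(\ref{8}), the yield ratio of Eq~(\ref{9}), and the elastic flow direction normal to $g=0$, may also be checked numerically. The following sketch (our illustration; the material constants are hypothetical, and only their ratio matters here) places a strain on the yield surface and verifies that the stress is stationary along $\vec m_1$: \begin{verbatim}
import numpy as np

A, B = 1.0, 5.0/3.0            # hypothetical constants with B/A = 5/3

def stress(u):                 # Eq (8)
    D = -np.trace(u)
    u0 = u - np.trace(u)/3.0*np.eye(3)
    us2 = np.sum(u0*u0)
    return (np.sqrt(D)*(B*D*np.eye(3) - 2.0*A*u0)
            + A*us2/(2.0*np.sqrt(D))*np.eye(3))

# a strain on the yield surface A u_s^2 = 2 B Delta^2, cf. Eq (12)
D = 0.01
us = np.sqrt(2.0*B/A)*D
u0 = np.zeros((3, 3)); u0[0, 1] = u0[1, 0] = us/np.sqrt(2.0)
u = u0 - D/3.0*np.eye(3)       # so that u_ll = -Delta

sig = stress(u)
P = np.trace(sig)/3.0
s0 = sig - P*np.eye(3)
sig_s = np.sqrt(np.sum(s0*s0))
print(sig_s/P, np.sqrt(2.0*A/B))   # both 1.0954...: Eq (9)

du = -0.5*(np.eye(3) + u/D)        # elastic flow direction
dgds = np.sqrt(B/(2.0*A))*s0/sig_s - np.eye(3)/3.0
print(np.max(np.abs(du - dgds)))   # ~ 0: du parallel to dg/dsigma

eps = 1e-7                         # directional stress increment
dsig = (stress(u + eps*du) - stress(u))/eps
print(np.max(np.abs(dsig)))        # ~ O(eps): M.du = 0 at yield
\end{verbatim} Moving the strain slightly into the stable region, Eq~(\ref{12}), the last number becomes finite, in accordance with $\det\hat M$ vanishing only at yield.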
\section{Introduction} Let $G$ be a finitely presented group with presentation $P := \langle x_1,\ldots,x_n \ | \ R_1,\ldots,R_s \rangle$. Assume every relation $R_j$ is of the form $x_k x_i x_k^{-1} = x_j$ with $x_i \neq x_j$. We associate a graph $\Gamma(P)$ to $P$ in the following way: \begin{itemize} \item We define the vertex set as $V(\Gamma(P)) = \{x_1,\ldots,x_n\}$, i.e., we introduce one vertex for every generator. \item For every relation $x_k x_i x_k^{-1} = x_j$ we add a directed edge from $x_i$ to $x_j$. \item We define a labeling function $l: E(\Gamma(P)) \to V(\Gamma(P))$. Namely, if $e$ is the edge from $x_i$ to $x_j$ induced by the relation $x_k x_i x_k^{-1} = x_j$, then we set $l(e) := x_k$. \end{itemize} We say that $\Gamma(P)$ is a \textit{labeled oriented graph (LOG)}. If $\Gamma(P)$ is a tree, then we call $\Gamma(P)$ a \textit{labeled oriented tree (LOT)}. If the context is clear, then we simply write $\Gamma$. See Figure \ref{Fig:LOTExample} for an example. Note that, analogously, every LOG $\Gamma$ yields a \textit{LOG presentation} $P(\Gamma)$, i.e., a group presentation in the above sense. Accordingly, we call each $x_j$ simultaneously a \textit{vertex} and a \textit{generator}, and each $e_{ij}$ simultaneously an \textit{edge} and a \textit{relation}. Following the literature \cite{Harlander:Rosebrock}, we call a LOG $\Gamma$ \textit{interior reduced} if the label $l(e_{ij})$ of every edge $e_{ij} = (x_i,x_j)$ satisfies $l(e_{ij}) \notin \{x_i,x_j\}$. See \cite{Howie:Whitehead,Howie:RibbonDiscs, Rosebrock:WhiteheadOverview, Rosebrock:LOTComplexity} for more information. \begin{figure} \begin{tikzpicture} \draw [->] (2,0) -- (2,1); \draw (2,1) -- (2,2); \draw [->] (0,2) -- (1,1); \draw (1,1) -- (2,0); \draw [->] (2,0) -- (3,1); \draw (3,1) -- (4,2); \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (0,2) circle [radius=0.1]; \draw [fill] (4,2) circle [radius=0.1]; \draw [fill] (2,2) circle [radius=0.1]; \node [above] at (0,2.2) {$a$}; \node [above] at (2,2.2) {$b$}; \node [above] at (4,2.2) {$c$}; \node [below] at (2,-0.3) {$d$}; \node [right] at (1.1,1) {$c$}; \node [right] at (2,1) {$c$}; \node [right] at (3.1,1) {$a$}; \end{tikzpicture} \caption{The LOT corresponding to the presentation $\langle a,b,c,d \ | \ cac^{-1} = d, cdc^{-1} = b, ada^{-1} = c\rangle$.} \label{Fig:LOTExample} \end{figure} It was shown by Howie \cite{Howie:Whitehead, Howie:RibbonDiscs} that LOTs are decisively connected to the Whitehead Conjecture \cite{Whitehead:Conjecture}, which claims that every subcomplex of an aspherical 2-complex is aspherical. Recall in this context that a 2-complex is aspherical if its second homotopy group is trivial. In particular, Howie showed that if every 2-complex corresponding to a LOT presentation (see Section \ref{Sec:Preliminaries} for details on the associated 2-complex) is aspherical, then the Andrews-Curtis Conjecture \cite{Andrews:Curtis:Conjecture} implies the finite Whitehead Conjecture (see Section \ref{Sec:Whitehead} for details).\\ Following a similar construction of Ivanov \cite{Ivanov}, the \textit{complexity} $\cp(\Gamma)$ of a LOG $\Gamma$ was introduced by Rosebrock in \cite{Rosebrock:LOTComplexity} as an invariant carrying information about the second homotopy group of the 2-complex associated to $\Gamma$. In particular, Rosebrock showed as a main result in \cite{Rosebrock:LOTComplexity} that every LOT $\Gamma$ satisfying $\cp(\Gamma) = 2$ has an aspherical 2-complex.
Essentially, the complexity $\cp(\Gamma)$ is the minimal cardinality of a subset $S \subseteq V(\Gamma)$ of the vertices of $\Gamma$, which allows one to reach every vertex of $\Gamma$ by \begin{enumerate} \item starting at the vertices in $S$ and \item only passing edges that are labeled with vertices in $S$ or with formerly visited vertices. \end{enumerate} We give a formal definition in Section \ref{Sec:Complexity} and a short computational sketch in Section \ref{Sec:Preliminaries}. It was conjectured by Rosebrock\footnote{We remark that in the final, printed version of the article \cite{Rosebrock:LOTComplexity} the conjecture is already announced as solved by the first author in his master thesis (Diplomarbeit) \cite{Christmann:Diplomarbeit}.} in \cite{Rosebrock:LOTComplexity} (and shown for LOTs with vertices of valency at most two) that the complexity $\cp(\Gamma)$ of an (interior reduced) LOT $\Gamma$ with $m$ vertices is bounded from above by $\frac{m + 1}{2}$. In the first part of the paper, we prove the conjecture. We do this in a constructive way, i.e., we give an algorithm that yields for every $\Gamma$ a set $S \subset V(\Gamma)$ such that every vertex is reachable from $S$ and $|S| \leq \frac{m + 1}{2}$. \begin{thm}[Rosebrock's Conjecture] Let $\Gamma$ be an interior reduced, connected LOG with $m$ vertices. Then the complexity is bounded from above by \begin{eqnarray} \cp(\Gamma) & \leq & \frac{m + 1}{2} \, . \label{Equ:ComplexityLOTs} \end{eqnarray} \label{Thm:UpperBound} \end{thm} In the second part of this article, we explicitly describe the LOTs for which the upper bound given in Theorem \ref{Thm:UpperBound} is satisfied with equality; see Theorem \ref{Thm:MaximalComplexity}. In particular, we show that the 2-complexes corresponding to such LOTs are always aspherical. Thus, LOTs of maximal complexity cannot disprove the Whitehead Conjecture. \begin{thm} The bound given in Theorem \ref{Thm:UpperBound} is sharp. In particular, if a LOT $\Gamma$ satisfies \eqref{Equ:ComplexityLOTs} with equality, then its corresponding 2-complex is aspherical. \label{Thm:Aspherical} \end{thm} The article is organized as follows. In Section \ref{Sec:Preliminaries}, we fix some notation. In Section \ref{Sec:Whitehead}, we recall the connection of LOTs to the Whitehead Conjecture. In Section \ref{Sec:Complexity}, we prove Theorems \ref{Thm:UpperBound} and \ref{Thm:Aspherical}. Excluding some cited technical statements, all proofs are purely combinatorial.\\ The content of this article was part of the master thesis (Diplomarbeit) \cite{Christmann:Diplomarbeit} of the first author. \section*{Acknowledgements} We thank Wolfgang Metzler for his support on the development of this article. We thank Martina Juhnke-Kubitzke, Chris O'Neill and Stephan Rosebrock for their many helpful comments. The second author was partially supported by DFG grant TH 1333/2-1 and DFG grant MA 4797/3-2. \section{Preliminaries} \label{Sec:Preliminaries} In this section we fix some notation. We begin with graphs. For additional background on graph theory see \cite{Diestel}. For a given graph $\Gamma$ we denote its \textit{vertex set} as $V(\Gamma)$ and its \textit{edge set} as $E(\Gamma)$. The cardinalities of these sets are denoted as $|V(\Gamma)|$ and $|E(\Gamma)|$, respectively. Although the graphs we investigate come with an orientation, it will not be needed for our purposes. Thus, we will restrict ourselves to the undirected version of a given graph in Section \ref{Sec:Complexity}.
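Indeed, the reachability condition behind the complexity, conditions (1) and (2) above, does not refer to the orientation at all, and it is easily made operational. The following Python sketch (our illustration; the data structures are ad hoc and not part of the formal development) tests whether a given set $S \subseteq V(\Gamma)$ reaches every vertex of a LOG: \begin{verbatim}
def reaches_all(vertices, edges, labels, S):
    # edges: list of frozensets {x, y}; labels: edge -> labeling vertex.
    # Starting from S, an edge may be passed (in either direction) as
    # soon as its label has been visited; a visited endpoint then makes
    # the other endpoint visited as well.
    visited = set(S)
    changed = True
    while changed:
        changed = False
        for e in edges:
            x, y = tuple(e)
            if labels[e] in visited:
                for a, b in ((x, y), (y, x)):
                    if a in visited and b not in visited:
                        visited.add(b)
                        changed = True
    return visited == set(vertices)

# the LOT of Figure 1: cac^{-1} = d, cdc^{-1} = b, ada^{-1} = c
V = {'a', 'b', 'c', 'd'}
E = [frozenset(p) for p in (('a', 'd'), ('d', 'b'), ('d', 'c'))]
L = {E[0]: 'c', E[1]: 'c', E[2]: 'a'}
print(reaches_all(V, E, L, {'a', 'c'}))  # True
print(reaches_all(V, E, L, {'a'}))       # False
\end{verbatim} For the LOT of Figure \ref{Fig:LOTExample} no single vertex suffices, so its complexity is $2$, in accordance with the bound $\cp(\Gamma) \leq \frac{4+1}{2}$ of Theorem \ref{Thm:UpperBound}.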
If $e \in E(\Gamma)$, then $v(e) = \{x,y\} \subset V(\Gamma)$ denotes the set of vertices incident to $e$ and we write $e = (x,y)$. Furthermore, we assume our graph is \textit{simple}, i.e., we always assume that no vertex is adjacent to itself and that two vertices are adjacent via at most one edge. If two vertices $x$ and $y$ are adjacent, we also denote the corresponding edge as $(x,y)$. For our purposes, a \textit{labeling} of a graph $\Gamma$ is a function $l: E(\Gamma) \to V(\Gamma)$. For a path $\gamma = (V(\gamma),E(\gamma)) \subset (V(\Gamma),E(\Gamma))$ given by a tuple of vertices $V(\gamma)$ and a tuple of edges $E(\gamma)$ we want to talk about labelings as tuples, not as sets. Thus, for us, paths are always directed. With slight abuse of notation we denote $v(\gamma) \in V(\Gamma) \times V(\Gamma)$ as the pair of start- and endpoint of $\gamma$ and $l(\gamma) \in V(\Gamma)^{|E(\gamma)|}$ as the tuple of length $\# \gamma := |E(\gamma)|$ containing the label of the $j$-th edge of $\gamma$ as $j$-th entry.\\ Let $G$ be a finitely presented group with presentation $P := \langle x_1,\ldots,x_n \ | \ R_1,\ldots,R_s \rangle$. For additional background on combinatorial group theory, see \cite{Stillwell}. If every relation $R_j$ is of the form $x_k x_i x_k^{-1} = x_j$, then $P$ induces a \textit{labeled oriented graph (LOG)} $\Gamma(P)$ as shown in the introduction. If the context is clear, we simply write $\Gamma$. Analogously, every LOT $\Gamma$ induces a LOT presentation $P(\Gamma)$ in the above sense. Recall that every presentation $P$ induces a CW-complex, more specifically a standard 2-complex, $C(P)$, where, besides one $0$-cell, every generator $x_i$ corresponds to a $1$-cell and every relation $R_j$ given by a word $x_{j_1} \cdots x_{j_k} = 1$ corresponds to a $2$-cell $D_j^2$ satisfying $\partial D_j^2 = x_{j_1} \cdots x_{j_k}$. Thus, every LOT $\Gamma$ has an associated standard 2-complex $C(P(\Gamma))$. Recall that a CW-complex $C$ is called \textit{aspherical} if its second homotopy group $\pi_2(C)$ is trivial, i.e., every continuous map $S^2 \to C$ is homotopic to a constant map $S^2 \to \{p\}$ with $p \in C$. If a 2-complex $C(P(\Gamma))$ associated to a LOT $\Gamma$ is aspherical, then, for convenience, we also say the LOT $\Gamma$ is aspherical. For more information about CW-complexes and 2-complexes respectively, see \cite{Hatcher,Stillwell}. \section{Labeled Oriented Trees and the Whitehead Conjecture} \label{Sec:Whitehead} In this section we briefly recall the connection between LOTs and the Whitehead Conjecture. It does not contain new results, but motivates ours (particularly Corollary \ref{Cor:MaxComplLOTAspherical}) and serves as a brief survey of the literature. For a more detailed overview, see the survey \cite{Rosebrock:WhiteheadOverview}.\\ \noindent We start with the Whitehead Conjecture itself. \begin{conj}(Whitehead \cite{Whitehead:Conjecture}, 1941) Let $L$ be an aspherical 2-complex and let $K$ be a subcomplex of $L$. Then $K$ is aspherical. \label{Conj:Whitehead} \end{conj} In 1983 Howie showed that if the conjecture is false, then it can be falsified in one of two specific ways.
\begin{thm}(Howie \cite[Theorem 3.4.]{Howie:Whitehead}) If Conjecture \ref{Conj:Whitehead} is false, then there exists a counterexample $K \subset L$ satisfying one of the following two conditions: \begin{enumerate} \item $L$ is finite and contractible, and $K = L \setminus e$ for some 2-cell $e$ of $L$ or \item $L$ is the union of an infinite ascending chain of finite, non-aspherical subcomplexes $K = K_0 \subset K_1 \subset \cdots$ such that each inclusion map $K_i \to K_{i+1}$ is null-homotopic. \end{enumerate} \label{Thm:HowieCounterexampeType} \end{thm} It was shown by Luft \cite{Luft} that if there exists a counterexample to the Whitehead Conjecture, then there exists one of Type (2). However, Type (1) is more interesting and more accessible for our purposes. Accordingly, we refer to the statement that no counterexample of Type (1) exists as the \textit{Finite Whitehead Conjecture}. We concentrate on this case in the following. \begin{conj}(Finite Whitehead Conjecture) Let $L$ be a finite, aspherical 2-complex and let $K$ be a subcomplex of $L$. Then $K$ is aspherical. \label{Conj:FiniteWhitehead} \end{conj} A \textit{3-deformation} of a $2$-complex $K$ to a $2$-complex $K'$ is, roughly speaking, given by successively gluing finitely many $3$-balls, each along all but an open $2$-cell of its boundary, to the original $2$-complex, or by performing the inverse operation. On the group theoretical side such $3$-deformations of a $2$-complex associated to a group presentation correspond to particular transformations of this presentation, which are called $Q^{**}$ \textit{transformations}. For convenience of the reader we omit detailed definitions, which are not needed in the remainder of the article. For more information see \cite{Hog-Angeloni:Metzler,Wright}. A priori it is unclear how strong the assumption is that a finite, contractible 2-complex can be 3-deformed to a point. However, the Andrews-Curtis Conjecture claims precisely that this is always possible. \begin{conj}(Andrews, Curtis \cite{Andrews:Curtis:Conjecture}) Let $L$ be a finite, contractible 2-complex. Then $L$ 3-deforms to a single vertex. \end{conj} In particular, it was shown by Howie \cite[Theorem 4.2.]{Howie:Whitehead} that if the Andrews-Curtis Conjecture \cite{Andrews:Curtis:Conjecture} is true, then any 2-complex of the form $K = L \setminus e$, where $L$ is a finite contractible 2-complex and $e$ a 2-cell of $L$, has the simple homotopy type of the complement of a \textit{ribbon disc} (we omit the precise definition; see \cite{Howie:Whitehead} for further information). Thus, on the one hand ribbon discs are closely related to the Finite Whitehead Conjecture due to Theorem \ref{Thm:HowieCounterexampeType}. On the other hand, it was shown by Howie \cite[Propositions 3.1. and 3.2.]{Howie:RibbonDiscs} that ribbon discs are closely related to LOTs. For convenience of the reader we combine Theorem \ref{Thm:HowieCounterexampeType} with Howie's results \cite[Theorem 4.2.]{Howie:Whitehead} and \cite[Propositions 3.1. and 3.2.]{Howie:RibbonDiscs} and obtain the following corollary, which motivates studying the topology of LOT complexes; see also \cite[Corollary 4.2.]{Rosebrock:WhiteheadOverview}. \begin{cor} If the Andrews-Curtis Conjecture is true and all LOTs are aspherical, then the Finite Whitehead Conjecture \ref{Conj:FiniteWhitehead} is true (i.e., no counterexample of Type (1) in Theorem \ref{Thm:HowieCounterexampeType} exists). \end{cor} The set of possible counterexamples has been reduced in the past.
Particularly, in \cite{Rosebrock:LOTComplexity} Rosebrock proved that all LOTs of complexity two are aspherical; see also Theorem \ref{Thm:LOTComplexity2Aspherical}. Furthermore, a LOG is \textit{injective} if its labeling function is injective, i.e., every vertex is assigned as a label to at most one edge. Recently, Harlander and Rosebrock showed in \cite{Harlander:Rosebrock} that every injective LOT is aspherical. A consequence of our results in the following section is that every LOT with maximal complexity is aspherical; see Theorems \ref{Thm:UpperBound} and \ref{Thm:Aspherical}. \section{The Complexity of Labeled Oriented Trees} \label{Sec:Complexity} In this section we prove our main Theorems \ref{Thm:UpperBound} and \ref{Thm:Aspherical}. We begin with a formal definition of the complexity of a labeled oriented graph. Let $\Gamma$ be a LOG and $S \subseteq V(\Gamma)$. We define the set $T_S \subseteq V(\Gamma)$ of \textit{reachable} (Rosebrock calls them, following Ivanov \cite{Ivanov}, ``good'') vertices (from $S$) recursively as follows. \begin{enumerate} \item If $x_i \in S$, then $x_i$ is reachable. \item If $x_i$ is incident to an edge $e = (x_i,x_j)$ and $x_j$ as well as the label $l(e)$ are reachable, then $x_i$ is also reachable (the orientation does not matter). \end{enumerate} If every vertex is reachable from $S$, i.e., $T_S = V(\Gamma)$, then we say $\Gamma$ is \textit{reachable} (from $S$).\\ In \cite{Rosebrock:LOTComplexity} Rosebrock defines the \textit{complexity} $\cp(\Gamma) \in \mathbb{N}$ of $\Gamma$ as the minimum of the cardinalities of all sets $S \subseteq V(\Gamma)$ such that $\Gamma$ is reachable from $S$. See Figure \ref{Fig:Reachable} for an example. Since the orientations of the edges have no impact on the complexity, we omit them from here on and investigate the corresponding undirected graph.
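Both $T_S$ and $\cp(\Gamma)$ are straightforward to compute for small examples directly from this definition. The following Python sketch is purely illustrative (the helper names and the brute-force search are ours): it encodes each edge as a triple $(x, y, \text{label})$, ignores orientations, closes $S$ under rule (2), and minimizes over all subsets $S$. Applied to the LOT of Figure \ref{Fig:Reachable}, it recovers the reachable set of the middle panel and the complexity $3$ attained by the right panel.
\begin{verbatim}
from itertools import combinations

def reachable(edges, S):
    """T_S: close S under rule (2); edges are triples (x, y, label)."""
    T = set(S)
    changed = True
    while changed:
        changed = False
        for x, y, lab in edges:
            if lab in T:
                if x in T and y not in T:
                    T.add(y); changed = True
                elif y in T and x not in T:
                    T.add(x); changed = True
    return T

def complexity(vertices, edges):
    """Minimal |S| with T_S = V(Gamma); exponential, tiny examples only."""
    V = set(vertices)
    for r in range(1, len(V) + 1):
        for S in combinations(sorted(V), r):
            if reachable(edges, S) == V:
                return r

# Vertices x1,...,x6 encoded as 1,...,6; the edges of the example LOT:
E = [(5, 6, 4), (3, 6, 2), (1, 3, 4), (2, 6, 1), (4, 6, 5)]
print(reachable(E, {1, 6}))        # {1, 2, 3, 6}: the middle panel
print(complexity(range(1, 7), E))  # 3, attained e.g. by S = {1, 4, 6}
\end{verbatim}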
\begin{figure} \begin{tikzpicture} \draw (0,0) -- (2,0); \draw (2,0) -- (2,2); \draw [very thick, blue] (2,2) -- (2,4); \draw (0,2) -- (2,0); \draw (2,0) -- (4,2); \draw [fill] (0,0) circle [radius=0.1]; \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (0,2) circle [radius=0.1]; \draw [fill=red] (2,4) circle [radius=0.2]; \draw [fill=red] (4,2) circle [radius=0.2]; \draw [fill=blue] (1.85,1.85) rectangle (2.15,2.15); \node [above, red] at (2,4.3) {$x_1$}; \node [above] at (0,2.3) {$x_2$}; \node [left, above, blue] at (1.7,2.2) {$x_3$}; \node [above, red] at (4,2.3) {$x_4$}; \node [below] at (0,-0.3) {$x_5$}; \node [below] at (2,-0.3) {$x_6$}; \node [right, red] at (2,3) {$x_4$}; \node [right] at (2,1) {$x_2$}; \node [right, above] at (1.1,1.1) {$x_1$}; \node [right, below] at (3.1,0.9) {$x_5$}; \node [below] at (1,0) {$x_4$}; \draw (5,0) -- (7,0); \draw [very thick, blue] (7,0) -- (7,2); \draw (7,2) -- (7,4); \draw [very thick, blue] (5,2) -- (7,0); \draw (7,0) -- (9,2); \draw [fill] (5,0) circle [radius=0.1]; \draw [fill=red] (7,0) circle [radius=0.2]; \draw [fill=blue] (4.85,1.85) rectangle (5.15,2.15); \draw [fill=red] (7,4) circle [radius=0.2]; \draw [fill] (9,2) circle [radius=0.1]; \draw [fill=blue] (6.85,1.85) rectangle (7.15,2.15); \node [above, red] at (7,4.3) {$x_1$}; \node [above, blue] at (5,2.3) {$x_2$}; \node [left, above, blue] at (6.7,2.2) {$x_3$}; \node [above] at (9,2.3) {$x_4$}; \node [below] at (5,-0.3) {$x_5$}; \node [below, red] at (7,-0.3) {$x_6$}; \node [right] at (7,3) {$x_4$}; \node [right, blue] at (7,1) {$x_2$}; \node [right, above, red] at (6.1,1.1) {$x_1$}; \node [right, below] at (8.1,0.9) {$x_5$}; \node [below] at (6,0) {$x_4$}; \draw [very thick, blue] (10,0) -- (12,0); \draw [very thick, blue] (12,0) -- (12,2); \draw [very thick, blue] (12,2) -- (12,4); \draw [very thick, blue] (10,2) -- (12,0); \draw [very thick, blue] (12,0) -- (14,2); \draw [fill=blue] (9.85,-0.15) rectangle (10.15,0.15); \draw [fill=red] (12,0) circle [radius=0.2]; \draw [fill=blue] (9.85,1.85) rectangle (10.15,2.15); \draw [fill=red] (12,4) circle [radius=0.2]; \draw [fill=red] (14,2) circle [radius=0.2]; \draw [fill=blue] (11.85,1.85) rectangle (12.15,2.15); \node [above, red] at (12,4.3) {$x_1$}; \node [above, blue] at (10,2.3) {$x_2$}; \node [left, above, blue] at (11.7,2.2) {$x_3$}; \node [above, red] at (14,2.3) {$x_4$}; \node [below, blue] at (10,-0.3) {$x_5$}; \node [below,red] at (12,-0.3) {$x_6$}; \node [right, red] at (12,3) {$x_4$}; \node [right, blue] at (12,1) {$x_2$}; \node [right, above, blue] at (11.1,1.1) {$x_1$}; \node [right, below, blue] at (13.1,0.9) {$x_5$}; \node [below, red] at (11,0) {$x_4$}; \end{tikzpicture} \caption{A LOT $\Gamma$ with three different sets $S \subset V(\Gamma)$ (big red circles) and their reachable sets $T_S \subseteq V(\Gamma)$ (big red circles plus blue squares). Only for the right set $S$ is the entire LOT $\Gamma$ reachable.} \label{Fig:Reachable} \end{figure} Obviously, the complexity of every interior reduced LOG $\Gamma$ is bounded from below by $2$. Furthermore, it is obviously bounded from above by the minimum of $|V(\Gamma)|$ and $1$ plus the cardinality of the image of the labeling $l$ of $\Gamma$.\\ In order to prove our first main result, we will need the following technical statement. \begin{lemma} Let $\Gamma$ be a connected, interior reduced labeled oriented tree and $S \subset V(\Gamma)$.
For every $x_i \in V(\Gamma)$, if $V(\Gamma) \setminus \{x_i\}$ is reachable from $S$, then $V(\Gamma)$ is reachable from $S$. \label{Lem:AllReachable} \end{lemma} \begin{proof} Let $V(\Gamma) \setminus \{x_i\}$ be reachable from $S$ for some $x_i \in V(\Gamma)$. Since $\Gamma$ is connected, there exists an edge $e = (x_i,x_j)$ with label $l(e) = x_k$ for some $x_j \in V(\Gamma) \setminus \{x_i\}$ and $x_k \in V(\Gamma)$. Since $\Gamma$ is interior reduced, we know $x_k \neq x_i$. Thus, $x_j$ and $x_k$ are reachable, since $V(\Gamma) \setminus \{x_i\}$ is reachable. And since $e$ represents the relation $x_k x_j x_k^{-1} = x_i$ (up to orientation), $x_i$ is reachable. \end{proof} Now, we can prove Theorem \ref{Thm:UpperBound}, stating that the complexity of an interior reduced, connected LOG with $m$ vertices is bounded from above by $\frac{m + 1}{2}$. It suffices to prove the statement for LOTs since every connected graph has a spanning tree and additional edges can only decrease the complexity. The proof is constructive. We give an algorithm that successively constructs a subset $S$ of the generators of $P$ such that in the end $S$ has cardinality at most $\lfloor (m+1)/2 \rfloor$ and such that every vertex $x_j$ in $\Gamma$ can be visited from a vertex $x_i \in S$ along a path only using edges labeled with elements in $S$ or with vertices that were visited before in the successive steps. \begin{proof}(Theorem \ref{Thm:UpperBound}) We construct $S$ successively as $S_1 \subset S_2 \subset \cdots \subset S$, where we set $S_1 := \{x_1\}$ for some arbitrary generator $x_1$. As an outline, the key idea of the proof is to choose in every step a generator which, so to speak, gives at least one additional generator for free; i.e., we choose a generator which allows us to reach at least one additional generator that we had not chosen before. We denote by $T_k \subseteq V(\Gamma)$ the set of generators $x_i \in V(\Gamma)$ reachable from $S_k$. If after $k$ steps $T_k = V(\Gamma)$, then we set $S := S_k$ and stop. Otherwise, we define $S_{k+1} := S_k \cup \{x_i\}$ for some $x_i$ such that \begin{enumerate} \item $x_i \notin T_k$ and \item there exists an edge $e \in E(\Gamma)$ such that \begin{enumerate} \item $l(e) = x_i$ and \item for $v(e) = \{x_j,x_j'\}$ we have $x_j \in T_k$ and $x_j' \in V(\Gamma) \setminus T_k$. \end{enumerate} \end{enumerate} Such an $x_i$ always exists: since $T_k \neq V(\Gamma)$ and $\Gamma$ is connected, we find an edge $e \in E(\Gamma)$ connecting some $x_j \in T_k$ with some $x_j' \in V(\Gamma) \setminus T_k$. But if we had $l(e) \in T_k$, then $x_j'$ would be reachable from $S_k$ by the definition of $T_k$, and thus $x_j' \in T_k$, which is a contradiction. Hence, $l(e) \in V(\Gamma) \setminus T_k$ and we can set $x_i := l(e)$. Therefore, with $S_{k+1} := S_k \cup \{x_i\}$ we obtain $T_{k+1} \supseteq T_k \cup \{x_i,x_j'\}$, where $T_k \cap \{x_i,x_j'\} = \emptyset$. Furthermore, $x_i \neq x_j'$ since $\Gamma$ is interior reduced. This construction ensures that in every step $k \geq 1$, the cardinality of the set $T_k$ increases by at least two while the cardinality of $S_k$ increases only by one. Hence, for $m$ odd $T_{(m+1)/2}$ equals $V(\Gamma)$, and for $m$ even $T_{m/2}$ equals $V(\Gamma)$ by Lemma \ref{Lem:AllReachable}. That is, all vertices are reachable from $S_{(m+1)/2}$ respectively $S_{m/2}$ and, since the cardinality of $S_k$ equals $k$, we have $\cp(\Gamma) \leq (m + 1)/2$.
\end{proof} In order to tackle the second main statement, Theorem \ref{Thm:Aspherical}, which states that the upper bound for the complexity is sharp and LOTs of maximal complexity are aspherical, we need to introduce some more notation. \begin{defn} Let $\Gamma$ and $\Gamma_1,\Gamma_2$ be LOTs. We say that $\Gamma$ is \textit{decomposable} into $\Gamma_1,\Gamma_2$, denoted \begin{eqnarray} \Gamma & = & \Gamma_1 \sqcup \Gamma_2, \label{Equ:Decomposition} \end{eqnarray} if $\Gamma_1$ and $\Gamma_2$ have disjoint vertices and labels and $\Gamma$ is obtained by identifying one vertex of $\Gamma_1$ with one vertex of $\Gamma_2$. If such a decomposition exists, then we say $\Gamma_1$ and $\Gamma_2$ are contained in $\Gamma$. \end{defn} Note that, group theoretically, this means that if $\Gamma_1$ corresponds to a presentation $P_1 = \langle x_1,\ldots,x_n \ | \ R_1,\ldots,R_k \rangle$ and $\Gamma_2$ to a presentation $P_2 = \langle y_1,\ldots,y_m \ | \ S_1,\ldots,S_l \rangle$, then $\Gamma = \Gamma_1 \sqcup \Gamma_2$ has the presentation \begin{eqnarray*} P & = & \langle x_1,\ldots,x_n, y_1,\ldots,y_m \ | \ R_1,\ldots,R_k, S_1,\ldots,S_l, x_i = y_j \rangle, \end{eqnarray*} where $x_i$ and $y_j$ are the generators which are identified. This presentation can obviously be transformed into a LOT presentation \begin{eqnarray*} P' & = & \langle x_1,\ldots,x_n, y_1,\ldots,y_{j-1}, y_{j+1},\ldots,y_m \ | \ R_1,\ldots,R_k, S_1',\ldots,S_l' \rangle, \end{eqnarray*} where each $S_r'$ is obtained from $S_r$ by replacing $y_j$ by $x_i$. We remark that the transformation used here is actually a $Q^{**}$ transformation, which corresponds to a 3-deformation of the associated 2-complex; see for example \cite{Christmann:Diplomarbeit, Hog-Angeloni:Metzler} for further details. In particular, it does not violate the assumptions in Howie's theorems; see Section \ref{Sec:Whitehead}. It is easy to see that the group $G$ presented by $P$ (respectively $P'$) is an amalgam of the groups $G_1$ and $G_2$ presented by $P_1$ and $P_2$, with the free group generated by $x_i$ as the amalgamated subgroup; see \cite{Stillwell}.\\ Similarly, we say that a LOT $\Gamma$ is \textit{decomposable} into LOTs $\Gamma_1,\ldots,\Gamma_s$, denoted \begin{eqnarray} \Gamma & = & \Gamma_1 \sqcup \cdots \sqcup \Gamma_s, \label{Equ:Decomposition2} \end{eqnarray} if $\Gamma$ can be successively decomposed in the sense of \eqref{Equ:Decomposition}, i.e., \begin{eqnarray} \Gamma & = & (\cdots((\Gamma_1 \sqcup \Gamma_2) \sqcup \Gamma_3) \sqcup \cdots \sqcup \Gamma_{s-1}) \sqcup \Gamma_s. \label{Equ:Decomposition3} \end{eqnarray} Note that in this case $G$ is not necessarily an amalgam of $G_1,\ldots,G_s$ anymore, since the process of identifying generators is not transitive. Instead, $G$ is the result of a recursive process of taking amalgams. But it is an immediate consequence of the LOT representation of $\Gamma$ and $\Gamma_1,\ldots,\Gamma_s$ that the process of identifying vertices (respectively generators) is associative. So we can safely leave out parentheses in \eqref{Equ:Decomposition3}, and the notation \eqref{Equ:Decomposition2} is well defined.
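In the edge-triple encoding used in the earlier sketches, identifying a vertex of $\Gamma_2$ with a vertex of $\Gamma_1$ is a one-line substitution. The following hypothetical helper (an illustration only) implements exactly the replacement $y_j \mapsto x_i$ that produces the relations $S_r'$ above.
\begin{verbatim}
# Illustrative sketch: Gamma1 |_| Gamma2, identifying vertex v2 of Gamma2
# with vertex v1 of Gamma1 (vertex and label sets assumed disjoint).
def glue(log1, log2, v1, v2):
    (V1, E1), (V2, E2) = log1, log2
    rename = lambda v: v1 if v == v2 else v    # the substitution y_j -> x_i
    V = V1 | {rename(v) for v in V2}
    E = E1 + [(rename(a), rename(b), rename(l)) for (a, b, l) in E2]
    return V, E
\end{verbatim}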
See Figure \ref{Fig:Decomposable} for a depiction of the decomposition of LOTs.\\ \begin{figure} \begin{tikzpicture}[scale=1] \draw [blue] (0,2) -- (2,2); \draw [blue] (0,0) -- (0,2); \draw [blue] (0,2) -- (0,4); \draw [fill, blue] (2,2) circle [radius=0.1]; \draw [fill, blue] (0,2) circle [radius=0.1]; \draw [fill, blue] (0,0) circle [radius=0.1]; \draw [fill, blue] (0,4) circle [radius=0.1]; \node [left, blue] at (0,0) {$x_1$}; \node [left, blue] at (0,2) {$x_2$}; \node [left, blue] at (0,4) {$x_3$}; \node [above, blue] at (2,2) {$x_4$}; \draw [red] (1.5,0) -- (3,1.5); \draw [red] (4.5,0) -- (3,1.5); \draw [red] (3,1.5) -- (3,5.5); \draw [fill, red] (1.5,0) circle [radius=0.1]; \draw [fill, red] (3,1.5) circle [radius=0.1]; \draw [fill, red] (4.5,0) circle [radius=0.1]; \draw [fill, red] (3,3.5) circle [radius=0.1]; \draw [fill, red] (3,5.5) circle [radius=0.1]; \node [below, red] at (1.5,-0.2) {$x_5$}; \node [right, red] at (3.2,1.5) {$x_6$}; \node [right, red] at (3.2,3.5) {$x_7$}; \node [right, red] at (3.2,5.5) {$x_8$}; \node [below, red] at (4.5,-0.2) {$x_9$}; \draw [DarkGreen] (6,0) -- (6,4); \draw [fill, DarkGreen] (6,0) circle [radius=0.1]; \draw [fill, DarkGreen] (6,2) circle [radius=0.1]; \draw [fill, DarkGreen] (6,4) circle [radius=0.1]; \node [right, DarkGreen] at (6.2,0) {$x_{10}$}; \node [right, DarkGreen] at (6.2,2) {$x_{11}$}; \node [right, DarkGreen] at (6.2,4) {$x_{12}$}; \draw [<->, line width = 3pt] (7.5,2) -- (9,2); \draw [blue] (11,3) -- (12.5,1.5); \draw [blue] (11,1) -- (11,3); \draw [blue] (11,3) -- (11,5); \draw [red] (11,0) -- (12.5,1.5); \draw [red] (14,0) -- (12.5,1.5); \draw [red] (12.5,1.5) -- (12.5,5.5); \draw [DarkGreen] (12.5,3.5) -- (14.5,3.5); \draw [DarkGreen] (14.5,3.5) -- (14.5,1.5); \draw [fill, blue] (11,1) circle [radius=0.1]; \draw [fill, blue] (11,3) circle [radius=0.1]; \draw [fill, blue] (11,5) circle [radius=0.1]; \node [left, blue] at (11,1) {$x_1$}; \node [left, blue] at (11,3) {$x_2$}; \node [left, blue] at (11,5) {$x_3$}; \draw [fill, red] (11,0) circle [radius=0.1]; \draw [fill, MyPurple] (12.5,1.5) circle [radius=0.17]; \draw [fill, red] (14,0) circle [radius=0.1]; \draw [fill, MyYellow] (12.5,3.5) circle [radius=0.17]; \draw [fill, red] (12.5,5.5) circle [radius=0.1]; \node [below, red] at (11,-0.2) {$x_5$}; \node [right, MyPurple] at (12.7,1.5) {$x_4 = x_6$}; \node [right, MyYellow] at (12.7,3.8) {$x_7 = x_{12}$}; \node [right, red] at (12.7,5.5) {$x_8$}; \node [below, red] at (14,-0.2) {$x_9$}; \draw [fill, DarkGreen] (14.5,3.5) circle [radius=0.1]; \draw [fill, DarkGreen] (14.5,1.5) circle [radius=0.1]; \node [right, DarkGreen] at (14.7,3.5) {$x_{11}$}; \node [right, DarkGreen] at (14.7,1.5) {$x_{10}$}; \end{tikzpicture} \caption{A LOT that is decomposable into three LOTs via two identified vertices. The edge labels are left out for clarity of the figure.} \label{Fig:Decomposable} \end{figure} Motivated by constructions of Rosebrock used in \cite{Rosebrock:LOTComplexity} we make the following definition. \begin{defn} We call a LOT \textit{Rosebrock} if it has three vertices and is of the form shown in Figure \ref{Fig:Rosebrock3LOI}.
\label{Defn:RosebrockLOT} \end{defn} \begin{figure} \begin{tikzpicture} \draw (0,0) --(4,0); \draw [fill] (0,0) circle [radius=0.1]; \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \node [below] at (0,-0.3) {$a$}; \node [below] at (2,-0.3) {$b$}; \node [below] at (4,-0.3) {$c$}; \node [above] at (1,0.3) {$c$}; \node [above] at (3,0.3) {$a$}; \end{tikzpicture} \caption{The Rosebrock LOT.} \label{Fig:Rosebrock3LOI} \end{figure} As a main step towards Theorem \ref{Thm:Aspherical} we show the following statement, which characterizes the LOTs that have maximal complexity. \begin{thm} Let $\Gamma$ be a connected, interior reduced LOT with $m$ vertices. Then \begin{eqnarray*} \cp(\Gamma) & = & \frac{m + 1}{2} \end{eqnarray*} if and only if $\Gamma$ is decomposable as \begin{eqnarray} \Gamma & = & \Gamma_1 \sqcup \cdots \sqcup \Gamma_s, \label{Equ:RosebrockLotDecomp} \end{eqnarray} such that every $\Gamma_i$ is a Rosebrock LOT. \label{Thm:MaximalComplexity} \end{thm} Observe that a LOT with a decomposition into $s$ Rosebrock LOTs as in \eqref{Equ:RosebrockLotDecomp} has exactly $2s + 1$ vertices and $2s$ edges. Furthermore, the following lemma holds, which is needed for the proof of Theorem \ref{Thm:MaximalComplexity}. \begin{lemma} Let $\Gamma$ be a connected, interior reduced LOT with $2s + 1$ vertices. If every edge $e$ of $\Gamma$ is an edge of a Rosebrock LOT $\Gamma_i$ contained in $\Gamma$, then there exists a unique decomposition \begin{eqnarray} \Gamma & = & \Gamma_1 \sqcup \cdots \sqcup \Gamma_s, \end{eqnarray} such that every $\Gamma_i$ is a Rosebrock LOT. \label{Lem:UniqueDecomposition} \end{lemma} This means that the existence of a decomposition of the form \eqref{Equ:RosebrockLotDecomp} into Rosebrock LOTs is a local property of the graph. \begin{proof}(Lemma \ref{Lem:UniqueDecomposition}) We prove the statement by strong induction over $s$. If $s = 1$, then $\Gamma$ has two edges and, since $\Gamma$ is interior reduced, it has to be Rosebrock; see Figure \ref{Fig:Rosebrock3LOI}. Assume the statement holds for all interior reduced LOTs with at most $2s$ edges and let $|E(\Gamma)| = 2(s + 1)$. We investigate an edge $e_1$ connecting a leaf $a$ of $\Gamma$ with some vertex $b$. Since $\Gamma$ is interior reduced, we have $l(e_1) = c$ with $c \notin \{a,b\}$. Since $e_1$ is an edge of a Rosebrock LOT $\Gamma_1$ contained in $\Gamma$, it follows from Definition \ref{Defn:RosebrockLOT} that there exists an edge $e_2$ connecting the vertices $b$ and $c$ and satisfying $l(e_2) = a$; see Figure \ref{Fig:Rosebrock3LOI} once more. Thus, we can make a decomposition \begin{eqnarray*} \Gamma & = & \Gamma_1 \sqcup \Gamma_b \sqcup \Gamma_c, \end{eqnarray*} where $\Gamma_b$ and $\Gamma_c$ are the subtrees of $\Gamma$ containing $b$ and $c$, respectively (one tree is possibly just one vertex). We investigate an arbitrary edge $e_3$ in $\Gamma_b$. Since $e_3$ is an edge of $\Gamma$, it is an edge of a Rosebrock LOT $\Gamma_2$. \textbf{Claim}: $\Gamma_2$ is contained in $\Gamma_b$. Namely, $\Gamma_b$ and $\Gamma_c$ are only connected via the edge $e_2$, since $\Gamma$ is a tree. Furthermore, $\Gamma_2$ is a Rosebrock LOT in $\Gamma$. Thus, if $\Gamma_2$ were not contained in $\Gamma_b$, then the only possibility is that the second edge of $\Gamma_2$ is $e_1$ or $e_2$. But then $e_3$ would connect the vertices $b$ and $a$ or $b$ and $c$, which is impossible, since $e_3$ is not contained in $\Gamma_1$ and every vertex appears only once.
Thus, $\Gamma_2$ is contained in $\Gamma_b$. But since $\Gamma_b$ furthermore has at most $2s$ edges and is an interior reduced LOT (since $\Gamma$ is an interior reduced LOT), it follows by the induction hypothesis that there exists a unique decomposition $\Gamma_b = \Gamma_{b_1} \sqcup \cdots \sqcup \Gamma_{b_k}$ such that every $\Gamma_{b_j}$ is a Rosebrock LOT. Analogously for $\Gamma_c$. Hence, we obtain a unique decomposition \begin{eqnarray*} \Gamma & = & \Gamma_1 \sqcup \Gamma_{b_1} \sqcup \cdots \sqcup \Gamma_{b_k} \sqcup \Gamma_{c_1} \sqcup \cdots \sqcup \Gamma_{c_l} \end{eqnarray*} of $\Gamma$ into Rosebrock LOTs. And since every Rosebrock LOT has two edges, we have $2(k + l + 1) = 2(s+1)$, i.e., the decomposition consists of $k + l + 1 = s + 1$ Rosebrock LOTs. \end{proof} Now we can prove Theorem \ref{Thm:MaximalComplexity}. \begin{proof}(Theorem \ref{Thm:MaximalComplexity}) First, we prove that if a LOT is decomposable as stated, then it has the claimed complexity. We make an induction over $s$ with respect to the decomposition described in \eqref{Equ:RosebrockLotDecomp}. Obviously, for $s = 1$, i.e., if $\Gamma$ is a LOT as depicted in Figure \ref{Fig:Rosebrock3LOI}, $\Gamma$ has complexity two. Now, assuming that the claim holds true for $\Gamma = \Gamma_1 \sqcup \cdots \sqcup \Gamma_s$, we prove it holds for $\Gamma' = \Gamma \sqcup \Gamma_{s+1} = \Gamma_1 \sqcup \cdots \sqcup \Gamma_{s+1}$. Assume again without loss of generality that $\Gamma_{s+1}$ is given as in Figure \ref{Fig:Rosebrock3LOI} and that $a$ is the vertex where $\Gamma$ and $\Gamma_{s+1}$ are identified. Then $\Gamma'$ has two more vertices than $\Gamma$, namely $2(s+1) + 1$. But for the complexity of $\Gamma'$ we have $\cp(\Gamma') = \cp(\Gamma) + 1$: On the one hand, since $a$ is reachable in $\Gamma$, both $b$ and $c$ become reachable if we additionally choose one of them. Hence, $\cp(\Gamma') \leq \cp(\Gamma) + 1$. On the other hand, adding either $b$ or $c$ to the set of generators does not lower the complexity of the $\Gamma$ part of $\Gamma'$, since, by assumption, no edge in $\Gamma$ has label $b$ or $c$. Thus, $\cp(\Gamma') \geq \cp(\Gamma) + 1$. But then we are done, since we obtain with the induction hypothesis \begin{eqnarray*} \cp(\Gamma') \ = \ \cp(\Gamma) + 1 \ = \ \frac{(2s + 1) + 1}{2} + 1 \ = \ \frac{(2(s+1) + 1) + 1}{2} \ = \ \frac{m + 1}{2}, \end{eqnarray*} where the last equation holds since $\Gamma'$ has $2(s+1) + 1$ vertices and $m$ denotes the cardinality of the vertex set.\\ Now, assume that $\Gamma$ is a LOT which is not decomposable into Rosebrock LOTs. By Lemma \ref{Lem:UniqueDecomposition} this means in particular that there exists an edge $e \in E(\Gamma)$ which is not part of a Rosebrock LOT. Let $l(e) = z$ and $v(e) = \{x,y\}$. By investigating different cases, we conclude that $\Gamma$ cannot have maximal complexity.\\ \noindent \textbf{Case 1}: $z$ is connected to $x$ or $y$ (without loss of generality: $x$) via an edge: \begin{center} \begin{tikzpicture} \draw [dashed] (0,0) -- (2,0); \draw (2,0) -- (6,0); \draw [dashed] (6,0) -- (8,0); \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \draw [fill] (6,0) circle [radius=0.1]; \node [below] at (2,-0.3) {$z$}; \node [below] at (4,-0.3) {$x$}; \node [below] at (6,-0.3) {$y$}; \node [above] at (3,0.3) {$w \neq y$}; \node [above] at (5,0.3) {$z$}; \end{tikzpicture} \end{center} As depicted, the edge $e'$ connecting $z$ and $x$ with $l(e') = w$ has to satisfy $w \neq y$.
Otherwise, the depicted part of $\Gamma$ would be a Rosebrock LOT (see Figure \ref{Fig:Rosebrock3LOI}), which we excluded. We start the algorithm presented in the proof of Theorem \ref{Thm:UpperBound} at $z$ and choose $w$ in the second step. With this choice we reach at least the vertices $x,y,w,z$. Afterwards, we proceed as in the proof of Theorem \ref{Thm:UpperBound}. Since we do not need to choose an initial vertex, we need to choose at most $\lfloor ((m-4) + 1)/2 \rfloor - 1$ additional vertices in order to reach every vertex of $\Gamma$. Hence, in total \begin{eqnarray} \cp(\Gamma) \ \leq \ \left\lfloor \frac{m+1}{2} \right\rfloor - 1 \ < \ \frac{m+1}{2} \, . \label{Equ:ProofMaximalComplexity1} \end{eqnarray} \noindent \textbf{Case 2}: $z$ is not connected to $x$ or $y$ via an edge: \begin{center} \begin{tikzpicture} \draw [dashed] (0,0) -- (4,0); \draw (4,0) -- (10,0); \draw [dashed] (10,0) -- (12,0); \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \draw [fill] (6,0) circle [radius=0.1]; \draw [fill] (8,0) circle [radius=0.1]; \draw [fill] (10,0) circle [radius=0.1]; \node [below] at (2,-0.3) {$z$}; \node [below] at (4,-0.3) {$\neq z$}; \node [below] at (6,-0.3) {$x$}; \node [below] at (8,-0.3) {$y$}; \node [below] at (10,-0.3) {$\neq z$}; \node [above] at (7,0.3) {$z$}; \end{tikzpicture} \end{center} Since $\Gamma$ is a tree, $z$ is connected to $x$ or $y$ (without loss of generality: $x$) via a path $\gamma$ given by edges $e_1,\ldots,e_k$ such that the edge $e$ is not part of $\gamma$. \textbf{Case 2.1}: First assume that no edge $e' \in E(\gamma) \subset E(\Gamma)$ satisfies $l(e') \in \{x,y\}$, i.e., \begin{center} \begin{tikzpicture} \draw [dashed] (0,0) -- (4,0); \draw (4,0) -- (6,0); \draw [dashed] (6,0) -- (8,0); \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \draw [fill] (6,0) circle [radius=0.1]; \node [below] at (2,-0.3) {$z$}; \node [below] at (4,-0.3) {$x$}; \node [below] at (6,-0.3) {$y$}; \node [above] at (5,0.3) {$z$}; \end{tikzpicture} \end{center} We choose $z$ as the initial vertex in the algorithm in the proof of Theorem \ref{Thm:UpperBound}. In steps $2$ through (at most) $k+1$ we successively choose the vertices $l(e_1),\ldots,l(e_k)$ (skipping those already reachable). Without loss of generality this yields two new reachable vertices in every step (not more, since otherwise the maximal complexity cannot be attained anymore). But in step $k+1$, by our choice of $l(e_k)$, we obtain $x$ as a reachable vertex and, since $z$ is reachable, we also obtain $y$ as reachable. Since $l(e_1),\ldots,l(e_k) \notin \{x,y\}$, after $k+1$ steps we have chosen at most $k+1$ vertices but reach at least $2(k+1)$ vertices. If we proceed as in Case 1, then we obtain \eqref{Equ:ProofMaximalComplexity1}. \textbf{Case 2.2}: Now assume that there exists some edge $e_j$ in the path $\gamma$ with $j \neq k$, $l(e_j) \in \{x,y\}$, and $l(e_k) = w \notin \{x,y\}$.
\begin{center} \begin{tikzpicture} \draw [dashed] (0,0) -- (4,0); \draw (4,0) -- (6,0); \draw [dashed] (6,0) -- (8,0); \draw (8,0) -- (12,0); \draw [dashed] (12,0) -- (14,0); \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \draw [fill] (6,0) circle [radius=0.1]; \draw [fill] (8,0) circle [radius=0.1]; \draw [fill] (10,0) circle [radius=0.1]; \draw [fill] (12,0) circle [radius=0.1]; \node [below] at (2,-0.3) {$z$}; \node [below] at (4,-0.3) {$a_{j-1}$}; \node [below] at (6,-0.3) {$a_j$}; \node [below] at (10,-0.3) {$x$}; \node [below] at (12,-0.3) {$y$}; \node [above] at (5,0.3) {$x$ \, or \, $y$}; \node [above] at (9,0.3) {$w \neq y$}; \node [above] at (11,0.3) {$z$}; \end{tikzpicture} \end{center} Again, we choose $z$ as the initial vertex in our algorithm and $l(e_1),\ldots,l(e_{j-1})$ successively afterwards (skipping those already reachable), which yields two new reachable vertices in every step but the first one. Since $l(e_j) \in \{x,y\}$ and $z$ was already chosen, by choosing $l(e_j)$ in step $j+1$ we get three new reachable vertices, namely $x,y$ and $a_j$. Hence, after $j+1$ steps we have chosen at most $j+1$ vertices but reach at least $2(j+1)$ vertices. If we proceed as in Case 1, then we obtain \eqref{Equ:ProofMaximalComplexity1}. \textbf{Case 2.3}: Finally, assume that the final edge $e_k$ satisfies $v(e_k) = \{a_k,x\}$ and $l(e_k) = y$. \begin{center} \begin{tikzpicture} \draw [dashed] (0,0) -- (4,0); \draw (4,0) -- (8,0); \draw [dashed] (8,0) -- (10,0); \draw [fill] (2,0) circle [radius=0.1]; \draw [fill] (4,0) circle [radius=0.1]; \draw [fill] (6,0) circle [radius=0.1]; \draw [fill] (8,0) circle [radius=0.1]; \node [below] at (2,-0.3) {$z$}; \node [below] at (4,-0.3) {$a_k \neq z$}; \node [below] at (6,-0.3) {$x$}; \node [below] at (8,-0.3) {$y$}; \node [above] at (5,0.3) {$y$}; \node [above] at (7,0.3) {$z$}; \end{tikzpicture} \end{center} In the first two steps of our algorithm, we choose $z$ and one of the vertices $x$ or $y$. After these two steps at least the vertices $x,y,z$ and $a_k$ are reachable. Afterwards, we proceed as in Case 1 and, again, we obtain \eqref{Equ:ProofMaximalComplexity1}. \end{proof} To prove the asphericity of (2-complexes of) LOTs with maximal complexity we use the following lemma by Rosebrock (see \cite{Rosebrock:LOTComplexity}; we adjusted it to our notation). \begin{lemma}[Rosebrock] Let $\Gamma$ be a LOT with a decomposition $\Gamma = \Gamma_1 \sqcup \Gamma_2$. If both 2-complexes corresponding to $\Gamma_1$ and $\Gamma_2$ are aspherical, then also the 2-complex corresponding to $\Gamma$ is aspherical. \label{Lem:Amalgam} \end{lemma} As already mentioned, the main result of Rosebrock in \cite{Rosebrock:LOTComplexity} is the asphericity of LOTs of complexity two. We recall the statement here. \begin{thm}[Rosebrock] Let $\Gamma$ be a LOT of complexity two. Then its corresponding 2-complex is aspherical. \label{Thm:LOTComplexity2Aspherical} \end{thm} Now we have everything needed to finish the proof of our second main theorem, Theorem \ref{Thm:Aspherical}. \begin{cor} Let $\Gamma$ be a LOT with $m$ vertices and maximal complexity, i.e., $\cp(\Gamma) = (m+1)/2$. Then its corresponding 2-complex is aspherical. \label{Cor:MaxComplLOTAspherical} \end{cor} \begin{proof}(Corollary \ref{Cor:MaxComplLOTAspherical} / Theorem \ref{Thm:Aspherical}) Let $\Gamma$ be a LOT of maximal complexity.
Then, by Theorem \ref{Thm:MaximalComplexity}, $\Gamma$ has a decomposition $\Gamma = \Gamma_1 \sqcup \cdots \sqcup \Gamma_{s}$ such that every $\Gamma_j$ is a Rosebrock LOT. Every Rosebrock LOT has complexity 2, so its corresponding 2-complex is aspherical by Theorem \ref{Thm:LOTComplexity2Aspherical}. Therefore, by repeated application of Lemma \ref{Lem:Amalgam}, the 2-complex corresponding to $\Gamma$ is aspherical. \end{proof} \bibliographystyle{amsplain}
\section{Introduction} Let $E$ and $E_*$ be separable Hilbert spaces and recall that the \emph{Schur class $\mathcal{S}_d(E, E_*)$} \nomenclature{$\mathcal{S}_d(E, E_*) $}{$d$ variable Schur Class} is the set of holomorphic functions $\Phi: \D^d \rightarrow \mathcal{L}(E,E_*)$ such that each $\Phi(z): E \rightarrow E_*$ is a linear contraction. In one variable, the structure of these functions is well-understood and they play key roles in many areas of both pure and applied mathematics. For example, they are objects of interest in $H^{\infty}$ control theory, act as scattering functions of single-evolution Lax-Phillips scattering systems, and serve as the transfer functions of one-dimensional dissipative, linear, discrete-time input/state/output (i/s/o) systems \cite{bsv05, he74, hj99}. Moreover, every $\Phi \in \mathcal{S}_1(E, E_*)$ can actually be realized as both a scattering function of a Lax-Phillips scattering system and a transfer function of a dissipative, linear, discrete-time i/s/o system. For simplicity, we omit the discussion of the connection to the interesting topic of von Neumann inequalities; see \cite{ampi, bsv05, gkvw08}. The situation in several variables is more complicated; although the scattering functions of $d$-evolution scattering systems and the transfer functions of $d$-dimensional dissipative, linear, discrete-time i/s/o systems still belong to the Schur class, the converse is not always true; there are functions in $\mathcal{S}_d(E,E_*)$ that cannot be realized as transfer functions of dissipative i/s/o systems. To make this precise, let $\mathcal{M}= \mathcal{M}_1 \oplus \dots \oplus \mathcal{M}_d$ be a separable Hilbert space, and for each $z \in \mathbb{D}^d$, define the multiplication operator $\mathcal{E}_z: = z_1P_{\mathcal{M}_1} + \dots + z_d P_{\mathcal{M}_d},$ where each $P_{\mathcal{M}_r}$ is the projection onto $\mathcal{M}_{r}$. \begin{defn} Let $\Phi \in \mathcal{S}_d(E,E_*).$ A \emph{Transfer Function Realization} (T.F.R.) of $\Phi$ consists of a Hilbert space $\mathcal{M}= \mathcal{M}_1 \oplus \dots \oplus \mathcal{M}_d$ and a contraction $U: \mathcal{M} \oplus E \rightarrow \mathcal{M} \oplus E_*$ such that if $U$ is written as \[ U = \left[ \begin{array}{cc} A & B \\ C & D \end{array} \right] : \left[ \begin{array}{c} \mathcal{M} \\ E \end{array} \right] \rightarrow \left[ \begin{array}{c} \mathcal{M} \\ E_* \end{array} \right], \] then $\Phi(z) = D + C\left( I_{\mathcal{M}} - \mathcal{E}_zA \right)^{-1}\mathcal{E}_z B$. The Hilbert space $\mathcal{M}$ is called the \emph{state space} and the contraction $U$ is called the \emph{colligation}. One can associate a $d$-dimensional dissipative, linear, discrete-time i/s/o system with the pair $(\mathcal{M}, U)$. The transfer function realization is called isometric, coisometric, or unitary whenever $U$ is isometric, coisometric, or unitary.\end{defn} In \cite{ag1,ag90}, J. Agler showed that every function in $\mathcal{S}_2(E,E_*)$ has a T.F.R. and used the realizations to generalize the Pick interpolation theorem to two variables. Since Agler's seminal results, these formulas have been used frequently to both generalize one-variable results and address strictly multivariate questions on the polydisc, as in \cite{agmc_isb, agmc_dv, mcc10a, amy10a, baltre98, kn07b, kn08ua, mcc12}. There is also a simple relationship between transfer function realizations and positive kernels: \begin{thm} \label{thm1} \emph{(Agler \cite{ag90}).} Let $\Phi \in \mathcal{S}_d(E,E_*)$.
Then, $\Phi$ has a transfer function realization if and only if there are positive holomorphic kernels $K_1, \dots, K_d: \mathbb{D}^d \times \mathbb{D}^d \rightarrow \mathcal{L}(E_*)$ such that for all $z,w \in \D^d$ \begin{equation*} I_{E_*} - \Phi(z) \Phi(w)^* = (1-z_1 \bar{w}_1) K_1(z,w) + \dots + (1-z_d \bar{w}_d) K_d(z,w). \end{equation*} \end{thm} This decomposition using positive kernels is called an \emph{Agler decomposition} of $\Phi$. In two variables, it is convenient to reverse the ordering, and throughout this paper, positive kernels $(K_1, K_2)$ are called \emph{Agler kernels of} $\Phi \in \mathcal{S}_2(E,E_*)$ if for all $z,w \in \D^2$ \begin{equation}\label{eqn:agdecomp} I_{E_*} - \Phi(z) \Phi(w)^* = (1 -z_1 \bar{w}_1) K_2(z,w) + (1-z_2\bar{w}_2) K_1(z,w). \end{equation} \nomenclature{$(K_1,K_2)$}{Agler kernels of $\Phi$} For instance, for the scalar function $\Phi(z) = z_1 z_2$, the identity $1 - z_1z_2\bar{w}_1\bar{w}_2 = (1-z_1\bar{w}_1)\cdot 1 + (1-z_2\bar{w}_2)\, z_1\bar{w}_1$ shows that $K_2 \equiv 1$ and $K_1(z,w) = z_1\bar{w}_1$ are Agler kernels of $\Phi$. Agler proved the existence of a pair of Agler kernels for each function in $\SEE$ and then showed this gives a transfer function realization via Theorem \ref{thm1}. It is often easier to go from kernels to realizations because positive kernels immediately bring operator theory and reproducing kernel Hilbert space methods into the picture. We review some of these concepts related to positive kernels below. \begin{rem} Recall that $K : \Omega \times \Omega \rightarrow \mathcal{L}(E)$ is a \emph{positive kernel on $\Omega$} if for each $N \in \NN$ \[ \sum_{i,j =1}^N \LL K(x_i,x_j) \eta_j, \eta_i \RR_{E} \ge 0 \] for all $x_1, \dots, x_N \in \Omega$ and $\eta_1, \dots, \eta_N \in E.$ Similarly, $\mathcal{H}$ is a \emph{reproducing kernel Hilbert space on $\Omega$} if $\mathcal{H}$ is a Hilbert space of functions defined on $\Omega$ such that evaluation at $x$ is a bounded linear operator for each $x \in \Omega.$ Then there is a unique positive kernel $K: \Omega \times \Omega \rightarrow \mathcal{L}(E)$ with \[ \LL f, K(\cdot, y) \eta \RR_{\mathcal{H}} = \LL f(y), \eta \RR_{E} \qquad \forall \ f \in \mathcal{H}, y \in \Omega, \text{ and } \eta \in E. \] Conversely, given any positive kernel $K$ on $\Omega$, there is a reproducing kernel Hilbert space, denoted $\mathcal{H}(K)$, on $\Omega$ with $K$ as its reproducing kernel. For details, see \cite{bv03b}. \end{rem} The kernels $K_1,K_2$ are written in reverse order in \eqref{eqn:agdecomp} because, upon dividing the equation through by $(1-z_1\bar{w}_1)(1-z_2\bar{w}_2)$, an Agler decomposition can be given a much more natural interpretation in terms of de Branges-Rovnyak spaces. \begin{rem} Assume $(K_1, K_2)$ are Agler kernels of $\Phi$ and rewrite (\ref{eqn:agdecomp}) as follows: \begin{equation} \label{eqn:agdecomp2} \frac{I -\Phi(z) \Phi(w)^*}{(1-z_1\bar{w}_1) (1-z_2\bar{w}_2)} = \frac{K_1(z,w)}{1-z_1\bar{w}_1} + \frac{K_2(z,w)}{1-z_2\bar{w}_2}. \end{equation} Each term in (\ref{eqn:agdecomp2}) is a positive kernel and so, we can define the following reproducing kernel Hilbert spaces: \[ \mathcal{H}_{\Phi}:= \mathcal{H} \left( \frac{I -\Phi(z) \Phi(w)^*}{(1-z_1\bar{w}_1) (1-z_2\bar{w}_2)} \right) \ \ \text{ and } \ \ H_j : = \mathcal{H} \left( \frac{K_j(z,w)}{1-z_j\bar{w}_j} \right), \] for $j=1,2.$ The Hilbert space $\mathcal{H}_{\Phi}$ \nomenclature{$\mathcal{H}_{\Phi}$}{two-variable de Branges-Rovnyak space} is \emph{the two-variable de Branges-Rovnyak space associated} to $\Phi$. For $j=1,2,$ define the function $Z_j$ by $Z_j(z):= z_j$. Then the $H_j$ Hilbert spaces have the following properties: \begin{itemize} \item[(1)] $Z_j H_j \subseteq H_j$ and multiplication by $Z_j$ on $H_j$ is a contraction.
\item[(2)] The reproducing kernels of the $H_j$ sum to the kernel of $\mathcal{H}_{\Phi}$. \end{itemize} Basic facts about reproducing kernels imply that if Hilbert spaces $H_1$ and $H_2$ satisfy $(1)$ and $(2)$, then the numerators of their reproducing kernels are Agler kernels of $\Phi.$ \end{rem} Agler used non-constructive methods to obtain Agler kernels, and a major stride was made in this theory when Ball-Sadosky-Vinnikov proved the existence of Agler kernels through constructive Hilbert space geometric methods. Indeed, our analysis is motivated by their work on two-evolution scattering systems and scattering subspaces associated to $\Phi \in \SEE.$ In \cite{bsv05}, they showed that such scattering subspaces have canonical decompositions into subspaces $S_1$ and $S_2$, each invariant under multiplication by $Z_1$ or $Z_2.$ This work was continued in \cite{gkvw08} where a specific scattering subspace associated to $\Phi$, denoted $\Kphi$, was used to show that canonical decompositions of $\Kphi$ yield Agler kernels $(K_1,K_2)$ of $\Phi$. The analysis from \cite{bsv05} was also extended in \cite{bkvsv}; here, many results from \cite{bsv05} are illuminated or extended via the theory of formal reproducing kernel Hilbert spaces. While more explicit, the approaches so far do not shed much light on the actual structure of the Hilbert spaces $\mathcal{H}(K_j)$ and the functions contained therein for general Schur functions. The spaces $\mathcal{H}(K_j)$ have been shown to possess a very rich structure when $\Phi$ is an \emph{inner} function or a \emph{rational inner} function \cite{bic12, bk12, colwer99, knapde}. This has led to applications in the study of two variable matrix monotone functions in \cite{mcc10} and in the study of \emph{three} variable rational inner functions in \cite{bk12}. This structure is also important in the Geronimo-Woerdeman characterizations of bivariate Fej\'er-Riesz factorizations as well as the related bivariate auto-regressive filter problem \cite{gw04}. The theory is much simpler in these cases because Agler kernels can be constructed directly from orthogonal decompositions of $\Hphi$. Therefore, the major goal of this paper is to show directly that the rich Agler kernel structure present when $\Phi$ is inner is still present when $\Phi$ is not an inner function. A direct application of this will be to prove that every function in $\SEE$ possesses a \emph{coisometric} transfer function realization with state space $\mathcal{H}(K_1)\oplus \mathcal{H}(K_2)$ for some pair of Agler kernels $(K_1,K_2)$; this construction answers a question posed by Ball and Bolotnikov in \cite{bb11}. We also generalize classical work of Nagy-Foias connecting regularity of $\Phi \in \mathcal{S}_1(E,E_{*})$ on the boundary to the regularity of functions in its associated de Branges-Rovnyak space. See \cite{sar94} for a discussion. We now outline the rest of the paper. The structure of $\Hphi$ is revealed by embedding an isometric copy into the larger scattering subspace $\Kphi$ alluded to above. The reader need not know anything about scattering theory---the basic facts we need are built from scratch in Section \ref{sect:scattering}. In Section \ref{sect:construction}, canonical orthogonal decompositions of $\Kphi$ are projected down to canonical decompositions of $\Hphi$ and these yield certain pairs of extremal Agler kernels of $\Phi$ denoted \[ (K^{max}_1, K^{min}_2) \ \ \text{ and } \ \ (K^{min}_1, K^{max}_2).
\] These pairs are related by a positive kernel $G:\D^2\times \D^2 \to \mathcal{L}(E_*)$ \[ G(z,w) := \frac{K_1^{max}(z,w) - K_1^{min}(z,w)}{1-z_1\bar{w}_1} = \frac{K_2^{max}(z,w) - K_2^{min}(z,w)}{1-z_2\bar{w}_2}. \] In Section 4, we show that all Agler kernels of $\Phi$ can be characterized in terms of the special kernels $K_1^{min}, K_2^{min}, G$: \begin{theorem*}[ \ref{thm:maxmin}] Let $\Phi \in \mathcal{S}_2(E, E_*)$ and let $K_1, K_2: \D^2 \times \D^2 \rightarrow \mathcal{L}(E_*)$. Then $(K_1, K_2)$ are Agler kernels of $\Phi$ if and only if there are positive kernels $G_1,G_2: \D^2 \times \D^2 \rightarrow \mathcal{L}(E_*)$ such that \[ \begin{aligned} K_1(z,w) =& K_1^{min}(z,w) + (1-z_1 \bar{w}_1) G_1(z,w) \\ K_2(z,w) =& K_2^{min}(z,w) + (1-z_2 \bar{w}_2) G_2(z,w) \end{aligned} \] and $G = G_1 + G_2.$ \end{theorem*} While Ball-Sadosky-Vinnikov \cite{bsv05} proved the existence of analogous maximal and minimal decompositions in the scattering subspace $\Kphi$, our contribution here is to show that many of these extremality properties also hold in the space of interest $\Hphi$. On the path to our regularity result, we obtain explicit characterizations of the spaces $\mathcal{H}(K^{max}_j)$ and $\mathcal{H}(K^{min}_j)$ and use those to show that all $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ are contained inside ``small'', easily-studied subspaces of $\Hphi$. Section \ref{sect:functions} has the details. In Section 5, we consider applications of this Agler kernel analysis. When $\Phi$ is square matrix valued, the containments allow us to characterize when $\Phi$ and the elements of $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ extend analytically past portions of $\partial \D^2$, thus generalizing the regularity result of Nagy-Foias mentioned above. A key point is that $\Hphi$ is too big of a space for these characterizations, and it really is necessary to study Agler kernels to investigate the regularity of $\Phi$. We now state the main regularity theorem found in Section \ref{sect:extensions}. Let $X \subseteq \mathbb{T}^2$ be an open set and define the sets \begin{align*} X_1 & := \left \{ x_1 \in \mathbb{T} : \exists \ x_2 \text{ with } (x_1, x_2) \in X \right \} \\ X_2 & := \left \{ x_2 \in \mathbb{T} : \exists \ x_1 \text{ with } (x_1, x_2) \in X \right \} \end{align*} as well as the sets $\mathbb{E} := \mathbb{C} \setminus \overline{\D}$ and $S := \left \{ 1 / \bar{z}: \det \Phi(z) = 0 \right\}.$ Then, we obtain the following result: \begin{theorem*}[\ref{thm:extension}] Let $\Phi \in \mathcal{S}_2(E, E_*)$ be square matrix valued. Then the following are equivalent: \begin{itemize} \item[$(i)$] $\Phi$ extends continuously to $X$ and $\Phi$ is unitary valued on $X$. \item[$(ii)$] There is some pair $(K_1,K_2)$ of Agler kernels of $\Phi$ such that the elements of $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ extend continuously to $X.$ \item[$(iii)$] There exists a domain $\Omega$ containing \beq \D^2 \cup X \cup (X_1 \times \D) \cup (\D \times X_2) \cup (\mathbb{E}^2 \setminus S ) \eeq such that $\Phi$ and the elements of $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ extend analytically to $\Omega$ for every pair $(K_1, K_2)$ of Agler kernels of $\Phi.$ Moreover, the points in the set $\Omega$ are points of bounded evaluation of every $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2).$ \end{itemize} \end{theorem*} In Section \ref{sect:tfr}, we return to the setting of transfer function realizations.
We use the canonical Agler kernels $(K^{max}_1, K^{min}_2)$ to construct a T.F.R. of $\Phi$ with refined properties. Specifically we prove: \begin{theorem*}[\ref{thm:canonicalcmf}] Let $\Phi \in \mathcal{S}_2(E,E_*)$ and consider its Agler kernels $(K^{max}_1, K^{min}_2).$ Then, $\Phi$ has a coisometric transfer function realization with state space $\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1).$ \end{theorem*} This construction answers a question posed by Ball and Bolotnikov in \cite{bb11}. We also obtain additional information about the block operators $A, B, C,$ and $D$ of the associated coisometric colligation $U$. In Section \ref{sect:opkernels}, we provide an appendix outlining results concerning operator valued reproducing kernels used in the paper. We supply the commonly used symbols and table of contents below for convenience. \newpage \renewcommand{\nomname}{List of Symbols} \printnomenclature[1in] \tableofcontents \section{Decompositions of Scattering Subspaces} \label{sect:scattering} For brevity, this paper only outlines the structure of particular scattering systems defined for $\Phi \in \SEE$. Many details of these scattering systems also appear in \cite{bsv05} and \cite{bkvsv}. For a review of the general theory of one- and multi-evolution scattering systems, see \cite{bsv05}. \subsection{Notation and Operator Ranges} Before proceeding to scattering systems, we require some notation. Let $E$ be a Hilbert space. Then $L^2(E):= L^2(\mathbb{T}^2) \otimes E$, i.e. the space of $E$ valued functions on $\mathbb{T}^2$ with square summable Fourier coefficients. Similarly, $H^2(E) := H^2(\D^2) \otimes E$ denotes the space of $E$ valued holomorphic functions on $\D^2$ whose Taylor coefficients around zero are square summable. Recall that $Z_1, Z_2$ denote the coordinate functions $Z_j(z_1,z_2) = z_j$. We will define some standard subspaces of $L^2(E)$ according to their Fourier series support. Let $\ZZ_+ = \{0,1,2,\dots\}$ and $\ZZ_{-} = \{-1,-2,-3,\dots\}$. If $N\subset \ZZ^2$ and $f \in L^2(E)$, the statement $\text{supp}(\hat{f}) \subset N$ means $\hat{f}(n_1,n_2) = 0$ for $(n_1,n_2) \not \in N$. Now define \[ \begin{aligned} L^2_{++}(E) &:= \{f \in L^2(E): \text{supp}(\hat{f}) \subset \ZZ_+ \times \ZZ_+ \}\\ L^2_{+\bullet}(E) &:= \{f \in L^2(E): \text{supp}(\hat{f}) \subset \ZZ_+ \times \ZZ \} \\ L^2_{-\bullet}(E) &:= \{f \in L^2(E): \text{supp}(\hat{f}) \subset \ZZ_{-} \times \ZZ\} \\ L^2_{+-}(E) &:= \{f \in L^2(E): \text{supp}(\hat{f}) \subset \ZZ_{+} \times \ZZ_{-}\} \\ L^2_{--}(E) &:= \{f \in L^2(E): \text{supp}(\hat{f}) \subset \ZZ_{-} \times \ZZ_{-}\}, \end{aligned} \] and similarly one can define $L^2_{\bullet +}(E),$ $L^2_{\bullet -}(E)$, and $L^2_{-+} (E).$ It is well-known that associating an $H^2(E)$ function $f$ with the $L^2$ function whose Fourier coefficients agree with the Taylor coefficients of $f$ maps $f$ unitarily to its radial boundary value function in $L^2_{++}(E).$ We will denote both the function in $H^2$ and the associated function in $L^2_{++}$ by $f$. We also require the following definition and simple lemma about operator ranges; for more details, see the first chapter of \cite{sar94}. \begin{defn} Let $\mathcal{K}$ be a Hilbert space and let $T: \mathcal{K} \rightarrow \mathcal{K}$ be a bounded linear operator on $\mathcal{K}$. 
Then the \emph{operator range} of T, denoted $\mathcal{M}(T)$, \nomenclature{$\mathcal{M}(T)$}{operator range} is the Hilbert space consisting of elements in the image of $T$ endowed with the inner product \[ \LL T x, Ty \RR_{\mathcal{M}(T)} := \LL P_{(\ker T)^{\perp} } x, y \RR_{\mathcal{K}} \qquad \forall \ x,y \in \mathcal{K}. \] \end{defn} \begin{lem} \label{lem:oprange} Let $\mathcal{K}$ be a Hilbert space and let $T: \mathcal{K} \rightarrow \mathcal{K}$ be a bounded linear self-adjoint operator on $\mathcal{K}$. Then the operator range $\mathcal{M}(T)$ is the closure of the image of $T^2$ in the $\mathcal{M}(T)$ norm and $\LL T x, T^2 y \RR_{\mathcal{M}(T)} = \LL T x, y \RR_{\mathcal{K}},$ for all $x,y \in \mathcal{K}.$ \end{lem} \begin{proof} We show that if $\eta \in \mathcal{M}(T)$ and $\eta \perp T^2 \mathcal{K}$, then $\eta \equiv 0.$ Fix such an $\eta$ and choose $x \in ( \ker T )^{\perp}$ such that $Tx =\eta.$ Then, for each $y \in \mathcal{K}$, \[ 0 = \LL \eta, T^2 y \RR_{\mathcal{M}(T)} = \LL x, T y \RR_{\mathcal{K}} = \LL Tx, y \RR_{\mathcal{K}} =\LL \eta, y \RR_{\mathcal{K}} ,\] which implies $\eta \equiv 0.$ Moreover, for any $x,y \in \mathcal{K}$, \[ \LL T x, T^2 y \RR_{\mathcal{M}(T)} = \LL P_{(ker T)^{\perp}} x, Ty \RR_{\mathcal{K}} = \LL T P_{(ker T)^{\perp}} x, y \RR_{\mathcal{K}} = \LL T x, y \RR_{\mathcal{K}}, \] as desired. \end{proof} \begin{ex} \label{ex:hphi} Let $\Phi \in \SEE$. The two-variable de Branges-Rovnyak space $\Hphi$ is also the operator range of the bounded linear self adjoint operator \[ \Dphi : =( I - \Phi P_{H^2(E)} \Phi^*)^{1/2}: H^2(E_*) \rightarrow H^2(E_*). \nomenclature{$\Dphi$}{The operator $(I - \Phi P_{H^2(E)} \Phi^*)^{1/2}$}\] To see this notice first that by Lemma \ref{lem:oprange}, $\Dphi ^2 H^2(E_*)$ is dense in $\mathcal{M}(\Dphi)$ and \[ \LL \Dphi f, \Dphi^2 g \RR_{\mathcal{M}(\Dphi)} = \LL \Dphi f, g \RR_{H^2(E_*)}\] for all $f,g \in H^2(E_*).$ Let $k_z$ be the Szeg\H{o} kernel on the bidisk. Then, the reproducing kernel of $H^2(E_{*})$ is $k_z\otimes I_{E_*}$. Given $f \in \mathcal{M}(\Dphi)$, $z \in \D^2, v \in E_{*}$, we see that \[ \LL f, \Dphi^2 k_z v \RR_{\mathcal{M}(\Dphi)} = \LL f, k_z v \RR_{H^2(E_{*})} = \LL f(z), v \RR_{E_{*}} \] and therefore the operator range of $\Dphi$ is a reproducing kernel Hilbert space on $\D^2$ with reproducing kernel \[ \frac{I-\Phi(z)\Phi(w)^*}{(1-z_1\bar{w}_1)(1-z_2\bar{w}_2)}. \] Specifically, $\mathcal{M}(\Dphi)$ is equal to the de Branges-Rovnyak space associated to $\Phi$, which is $\Hphi.$ This follows from the standard identity for reproducing kernels $P_{H^2} \Phi^* k_z v = \Phi(z)^* k_z v$ and the computation $\Dphi^2 k_z v = (I-\Phi P_{H^2} \Phi^*) k_z v = k_z v -\Phi \Phi(z)^*k_z v$. \end{ex} The following consequence of Douglas's lemma \cite{do66} is found on page 3 of \cite{sar94}. \begin{lem} \label{lem:douglas} Let $\mathcal{K}$ be a Hilbert space and let $A:\mathcal{K} \to \mathcal{K}, B:\mathcal{K} \to \mathcal{K}$ be bounded linear operators. Then, $\mathcal{M}(A) = \mathcal{M}(B)$ if and only if $AA^* = BB^*$. 
\end{lem} \subsection{The de Branges-Rovnyak Models} Now we proceed to scattering systems: \begin{defn} A \emph{two-evolution scattering system} $\mathcal{S} = (\mathcal{H}, \mathcal{U}_1, \mathcal{U}_2, \mathcal{F}, \mathcal{F}_*)$ consists of a Hilbert space $ \mathcal{H}$, two unitary operators $\mathcal{U}_1$, $\mathcal{U}_2: \mathcal{H} \rightarrow \mathcal{H},$ and two wandering subspaces $\mathcal{F}, \mathcal{F}_* \subseteq \mathcal{H}$ of $\mathcal{U}_1$ and $\mathcal{U}_2$, i.e. \[ \mathcal{F} \perp \U_1^{n_1} \U_2^{n_2} \F \ \ \text{ and } \ \ \ \mathcal{F}_* \perp \U_1^{n_1} \U_2^{n_2} \F_* \ \qquad \forall \ (n_1, n_2) \in \mathbb{Z}^2 \setminus \{(0,0)\}. \] \end{defn} Given any $\Phi \in \SEE$, one can define the de Branges-Rovnyak model for $\Phi$. This is a concrete transcription of the (almost) unique minimal scattering system whose scattering function coincides with $\Phi.$ See \cite{bsv05} for the proof and additional theory. \begin{defn}The \emph{de Branges-Rovnyak model for $\Phi \in \SEE$} consists of the operator range, denoted $\mathcal{H}$, \nomenclature{$\mathcal{H}$}{de Branges-Rovnyak model for $\Phi$} of the following bounded linear self-adjoint operator: \[ \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right]^{1/2} : \left[ \begin{array}{c} L^2(E_*) \\ L^2(E) \end{array} \right] \rightarrow \left[ \begin{array}{c} L^2(E_*) \\ L^2(E) \end{array} \right]. \] Then $\mathcal{H}$ has inner product given by \[ \left \langle \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right]^{1/2} \left[ \begin{array}{c} f \\ g \end{array} \right] , \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right]^{1/2} \left[ \begin{array}{c} f' \\ g' \end{array} \right] \right \rangle_{\mathcal{H}} := \left \langle P_{Q^{\perp}} \left[ \begin{array}{c} f \\ g \end{array} \right], \left[ \begin{array}{c} f' \\ g' \end{array} \right] \right \rangle_{L^2(E_*) \oplus L^2(E)}, \] where $Q = \ker \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right]^{1/2}.$ Lemma \ref{lem:oprange} implies the image of the operator $\begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}$ is dense in $\mathcal{H}$ and that \[ \left \langle \begin{bmatrix} f \\ g \end{bmatrix}, \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} f' \\ g' \end{bmatrix} \right \rangle_{\mathcal{H}} = \left \langle \begin{bmatrix} f \\ g \end{bmatrix}, \begin{bmatrix} f' \\ g' \end{bmatrix} \right \rangle_{L^2(E_*) \oplus L^2(E)}, \qquad \forall \ \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{H}, \ \begin{bmatrix} f' \\ g' \end{bmatrix} \in L^2(E_*) \oplus L^2(E). \] The de Branges-Rovnyak model also contains the following two subspaces of $\mathcal{H}$: \[ \F: = \left[ \begin{array} {c} \Phi \\ I \end{array} \right] E = \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right] \left[ \begin{array}{c} 0 \\ E \end{array} \right] \ \ \text{ and } \ \ \F_*: = \left[ \begin{array} {c} I \\ \Phi^* \end{array} \right] E_* = \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right] \left[ \begin{array}{c} E_* \\ 0 \end{array} \right]\] and the two operators $\U_1, \U_2: \mathcal{H} \rightarrow \mathcal{H}$ defined by \[ \U_j := \left[ \begin{array}{cc} Z_j I_{E_*} & 0 \\ 0 & Z_j I_E \end{array} \right] \qquad \text{ for } j=1,2. \] Each $\U_j$ is onto since \[ \U_j \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}^{1/2} = \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}^{1/2} \U_j \ \ \text{ and } \ \ \U_j \Big( L^2(E_*) \oplus L^2(E) \Big) = L^2(E_*) \oplus L^2(E).
The de Branges-Rovnyak model also contains the following two subspaces of $\mathcal{H}$: \[ \F: = \left[ \begin{array} {c} \Phi \\ I \end{array} \right] E = \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right] \left[ \begin{array}{c} 0 \\ E \end{array} \right] \ \ \text{ and } \ \ \F_*: = \left[ \begin{array} {c} I \\ \Phi^* \end{array} \right] E_* = \left[ \begin{array}{cc} I & \Phi \\ \Phi^*& I \end{array} \right] \left[ \begin{array}{c} E_* \\ 0 \end{array} \right]\] and the two operators $\U_1, \U_2: \mathcal{H} \rightarrow \mathcal{H}$ defined by \[ \U_j := \left[ \begin{array}{cc} Z_j I_{E_*} & 0 \\ 0 & Z_j I_E \end{array} \right] \qquad \text{ for } j=1,2. \] Each $\U_j$ is onto since \[ \U_j \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}^{1/2} = \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}^{1/2} \U_j \ \ \text{ and } \ \ \U_j \Big( L^2(E_*) \oplus L^2(E) \Big) = L^2(E_*) \oplus L^2(E). \] To see that $\U_j$ is isometric, observe that $\U_j$ preserves the $\mathcal{H}$ norm on the image of $\begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}$ since: \begin{align*} \Bigg \| \U_j \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} f \\ g \end{bmatrix} \Bigg \|^2_{\mathcal{H}} & = \LL \U_j \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} f \\ g \end{bmatrix}, \U_j \begin{bmatrix} f \\ g \end{bmatrix} \RR_{L^2(E_*) \oplus L^2(E)} \\ &= \LL \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} Z_j f \\ Z_j g \end{bmatrix} , \begin{bmatrix} Z_j f \\ Z_j g \end{bmatrix} \RR_{L^2(E_*) \oplus L^2(E)} \\ & = \Bigg \| \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} f \\ g \end{bmatrix} \Bigg \|^2_{\mathcal{H}}. \end{align*} Since said image is dense in $\mathcal{H}$, each $\U_j$ is unitary. Observe that $\F$ is \emph{wandering} for $\U_1$ and $\U_2$ since if $\eta, \nu \in E$ and $(n_1,n_2) \ne (0,0)$, then \[ \begin{aligned} \LL \begin{bmatrix} \Phi \\ I \end{bmatrix} \eta, \ \U_1^{n_1} \U_2^{n_2} \begin{bmatrix} \Phi \\ I \end{bmatrix} \nu \RR_{\mathcal{H}} &= \LL \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} 0 \\ \eta \end{bmatrix}, \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} 0 \\ Z_1^{n_1}Z_2^{n_2} \nu \end{bmatrix} \RR_{\mathcal{H}} \\ &= \LL \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} 0 \\ \eta \end{bmatrix}, \begin{bmatrix} 0 \\ Z_1^{n_1}Z_2^{n_2} \nu \end{bmatrix} \RR_{L^2(E_*) \oplus L^2(E)} \\ & = \LL \eta, \ Z_1^{n_1}Z_2^{n_2} \nu \RR_{L^2(E)}, \end{aligned} \] which is zero. Analogous arguments show $\F_*$ is wandering. We will usually just write $\U_j = Z_j$, unless we wish to emphasize the connection to scattering systems. \end{defn}
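Two extreme cases may help orient the reader (a sanity check; neither computation is needed in the sequel). Under the strict bound $\|\Phi\|_\infty < 1$, one has \[ \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \geq (1 - \|\Phi\|_\infty)\, I > 0, \] so the square root is invertible and $\mathcal{H}$ is simply $L^2(E_*) \oplus L^2(E)$ with an equivalent inner product. At the other extreme, if $\Phi$ is inner, then $\begin{bmatrix} -\Phi g \\ g \end{bmatrix}$ lies in the kernel of the operator for every $g \in L^2(E)$, and $\mathcal{H}$ is a genuinely smaller space.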
The following remarks detail additional facts about $\mathcal{H}$. \begin{rem} \label{rem:hspace} \textbf{Alternate Characterization of $\mathcal{H}$.} Define the bounded linear self-adjoint operators \[ \begin{aligned} \Delta: &= (I - \Phi^*\Phi)^{1/2}: L^2(E) \rightarrow L^2(E) \\ \Delta_*: &= (I - \Phi \Phi^*)^{1/2}: L^2(E_*) \rightarrow L^2(E_*). \end{aligned} \nomenclature{$\Delta, \Delta_{*}$}{ $(I-\Phi^*\Phi)^{1/2}, (I-\Phi\Phi^*)^{1/2}$} \] By Lemma \ref{lem:douglas}, the factorizations \[ \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}^{1/2} \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix}^{1/2} = \begin{bmatrix} I & 0 \\ \Phi^* & \Delta \end{bmatrix} \begin{bmatrix} I & \Phi \\ 0 & \Delta \end{bmatrix} = \begin{bmatrix} \Delta_{*} & \Phi \\ 0 & I \end{bmatrix} \begin{bmatrix} \Delta_{*} & 0 \\ \Phi^{*} & I \end{bmatrix} \] show that \begin{align} \mathcal{H} &= \mathcal{M} \left( \begin{bmatrix} I & 0 \\ \Phi^* & \Delta \end{bmatrix} \right) = \left\{\begin{bmatrix} f \\ g \end{bmatrix}: f\in L^2(E_*), g \in L^2(E), g- \Phi^* f \in \Delta L^2(E) \right\} \label{eqn:hphichar} \\ &= \mathcal{M} \left( \begin{bmatrix} \Delta_{*} & \Phi \\ 0 & I \end{bmatrix} \right) = \left\{\begin{bmatrix} f \\ g \end{bmatrix}: f\in L^2(E_*), g \in L^2(E), f- \Phi g \in \Delta_{*} L^2(E_{*}) \right\}. \end{align} Here the equality is on the level of Hilbert spaces, not just as sets. These characterizations of $\mathcal{H}$ can be used to show that the linear maps \beq \begin{bmatrix} f \\ g \end{bmatrix} \mapsto f \text{ and } \begin{bmatrix} f \\ g \end{bmatrix} \mapsto g \eeq are contractive operators from $\mathcal{H}$ onto $L^2(E_*)$ and $L^2(E)$ respectively. To see this, note that for each element in $\mathcal{H}$, there is an $h \in L^2(E)$ such that \[ \begin{bmatrix} f \\ g \end{bmatrix} = \begin{bmatrix} I & 0 \\ \Phi^* & \Delta \end{bmatrix} \begin{bmatrix} f \\ h \end{bmatrix}, \text{ where } \begin{bmatrix} f \\ h \end{bmatrix} \perp \ker \begin{bmatrix} I & 0 \\ \Phi^* & \Delta \end{bmatrix}. \] Since $\mathcal{H}$ and the operator range of $ \begin{bmatrix} I & 0 \\ \Phi^* & \Delta \end{bmatrix}$ coincide as Hilbert spaces, \begin{equation} \label{eqn:fnorm} \left \| \begin{bmatrix} f \\ g \end{bmatrix} \right \|^2_{\mathcal{H}} = \| f \|^2_{L^2(E_*)} + \|h \|^2_{L^2(E)} \ge \| f \|^2_{L^2(E_*)}. \end{equation} Similarly, the equality between $\mathcal{H}$ and the operator range of $\begin{bmatrix} \Delta_{*} & \Phi \\ 0 & I \end{bmatrix}$ shows that for each element of $\mathcal{H}$, \begin{equation} \label{eqn:gnorm} \| g \|_{L^2(E)} \le \left \| \begin{bmatrix} f \\ g \end{bmatrix} \right \|_{\mathcal{H}}. \end{equation} \end{rem} The following remark discusses additional subspaces of $\mathcal{H}$ that are important for the structure of the scattering system: \begin{rem} \label{rem:kphi} \textbf{The Scattering Subspace $\Kphi.$} The incoming subspace $\W_*$ and outgoing subspace $\W$ \nomenclature{$\W_*, \W$}{Incoming and outgoing subspaces} of the de Branges-Rovnyak model are defined as follows: \[ \begin{aligned} \W_*&:= \bigoplus_{n \in \ZZ^2 \setminus \mathbb{Z}^2_+} \U_1^{n_1} \U_2^{n_2} \F_* = \begin{bmatrix} I \\ \Phi^* \end{bmatrix} L^2 \ominus H^2(E_*) \\ \W &: = \bigoplus_{n \in \mathbb{Z}^2_+} \U_1^{n_1} \U_2^{n_2} \ \F = \begin{bmatrix} \Phi \\ I \end{bmatrix} H^2(E). \end{aligned} \] An easy calculation shows $\W \perp \W_*$ in $\mathcal{H}$; indeed, for $g \in H^2(E)$ and $h \in L^2\ominus H^2(E_*)$, the identity recorded in the definition of the model gives \[ \LL \begin{bmatrix} \Phi \\ I \end{bmatrix} g, \begin{bmatrix} I \\ \Phi^* \end{bmatrix} h \RR_{\mathcal{H}} = \LL \Phi g, h \RR_{L^2(E_*)} = 0, \] since $\Phi g \in H^2(E_*) \perp h$. This means $\mathcal{H}$ decomposes as \[ \mathcal{H} = \W_* \oplus \Kphi \oplus \W, \] where $\Kphi := \mathcal{H} \ominus (\W \oplus \W_*)$ is called the \emph{scattering subspace}. \nomenclature{$\Kphi$}{the scattering subspace of $\Phi$} A simple computation shows that \[ \begin{bmatrix} f \\ g \end{bmatrix} \perp \W_* \text{ iff } f \in H^2(E_*) \text{ and } \begin{bmatrix} f \\ g \end{bmatrix} \perp \W \text{ iff } g \in L^2\ominus H^2(E). \] This means that the scattering subspace \[ \begin{aligned} \Kphi &: = \mathcal{H} \ominus ( \W \oplus \W_*) \\ & = \left \{ \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{H} : f \in H^2(E_*), \ g \in L^2\ominus H^2(E) \right \}. \end{aligned} \] Using the alternate characterizations of $\mathcal{H}$ from Remark \ref{rem:hspace}, it follows that \[ \begin{aligned} \Kphi & = \left \{ \begin{bmatrix} f \\ g \end{bmatrix} : f \in H^2(E_*), \ g \in L^2\ominus H^2(E) , \ g-\Phi^*f \in \Delta L^2(E) \right \} \\ & = \left \{ \begin{bmatrix} f \\ g \end{bmatrix} : f \in H^2(E_*), \ g \in L^2\ominus H^2(E), \ f- \Phi g \in \Delta_* L^2(E_*) \right \}. \end{aligned} \]
The following operator gives the orthogonal projection onto $\Kphi:$ \[ P_{\Phi} : = \left[ \begin{array}{cc} P_+ & -\Phi P_+ \\ -\Phi^*P_{-} & P_{-} \end{array} \right], \] \nomenclature{$P_{\Phi}$}{Projection onto $\Kphi$} where $P_+ = P_{H^2}$, and $P_{-} =P_{L^2 \ominus H^2}$, \nomenclature{$P_{+}, P_{-}$}{Projection onto $H^2, L^2\ominus H^2$} for either $L^2 \ominus H^2(E)$ or $L^2 \ominus H^2(E_*).$ It is easy to check that $P_{\Phi}^2 = P_{\Phi}$, $P_{\Phi}|_{\Kphi} \equiv I$ and $ P_{\Phi}|_{\W \oplus \W_*} \equiv 0.$ \end{rem} \begin{rem}[Inner functions] When $\Phi$ is an inner function, namely when $\Phi^*\Phi = I, \Phi\Phi^* = I$ a.e.~on $\mathbb{T}^2$, the above machinery simplifies significantly and scattering systems are not really necessary. In this case, $\Delta = 0, \Delta_{*} = 0$, so that \[ \Kphi = \left\{ \begin{bmatrix} f \\ \Phi^* f \end{bmatrix} : f \in H^2(E_*), \Phi^*f \in L^2\ominus H^2(E) \right\}. \] Evidently, the first component in this space is $f \in H^2(E_{*})$ such that $\Phi^*f \in L^2\ominus H^2(E)$. This is equivalent to saying $f \in H^2(E_{*}) \ominus \Phi H^2(E)$. This space is the usual model space associated to the inner function $\Phi$; it is studied in \cite{bsv05} and in great depth in \cite{bk12}. Although in this paper we recover many results from \cite{bk12}, there are many results related to rational inner functions in \cite{bk12} that we do not mention here. In general, the paper \cite{bk12} is a more accessible introduction to the present material. \end{rem} \subsection{Decompositions of $\Kphi$} In \cite[Theorem 5.5]{bsv05}, Ball-Sadosky-Vinnikov prove the following canonical decomposition of $\Kphi.$ For completeness, we include a simple proof here as well. \begin{thm} \label{thm:kdecomp} Define these subspaces of the scattering subspace $\Kphi$: \[ \begin{aligned} S^{max}_1 &= \left \{ \begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi: Z_1^k \begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi \ \forall \ k \in \NN \right \} \qquad S^{min}_1 = \text{closure } P_{\Phi} \begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E) \\ S^{max}_2 &= \left \{ \begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi: Z_2^k \begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi \ \forall \ k \in \NN \right \} \qquad S^{min}_2 = \text{closure } P_{\Phi} \begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{-+}(E), \end{aligned} \nomenclature{$S_j^{max}, S_j^{min}$}{Subspaces of the scattering subspace} \] where each closure is taken in $\Kphi.$ Then, each $S^{max}_j$ and $S^{min}_j$ is invariant under multiplication by $Z_j$ and \begin{align} \label{eqn:kdecomp} \Kphi = S^{max}_1 \oplus S^{min}_2 = S^{min}_1 \oplus S^{max}_2. \end{align} \end{thm} \begin{proof} Our first observation is that $S_1^{max}$ is equal to \begin{multline*} \left(\begin{bmatrix} I \\ \Phi^* \end{bmatrix} L^2\ominus H^2(E_*)\right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{\bullet+}(E)\right)^{\perp} \\ = \left\{ \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{H}: f \in H^2(E_*), g \in L^2_{\bullet-}(E)\right\} \end{multline*} since $Z_1^k \begin{bmatrix} f \\ g \end{bmatrix} \perp \begin{bmatrix} \Phi \\ I \end{bmatrix} H^2(E)$ for all $k\geq 0$ if and only if $\begin{bmatrix} f \\ g \end{bmatrix} \perp \begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{\bullet+}(E)$, which is equivalent to saying $g \in L^2_{\bullet-}(E)$.
Therefore, $S_1^{max}$ is equal to \begin{multline*} \left(\begin{bmatrix} I \\ \Phi^* \end{bmatrix} L^2\ominus H^2(E_*)\right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} H^2(E)\right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{-+}(E)\right)^{\perp} \\ = \Kphi \ominus P_{\Phi} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{-+}(E)\right). \end{multline*} Hence, \[ \Kphi \ominus S_{1}^{max} = \text{closure } P_{\Phi} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{-+}(E)\right) =S_2^{min}, \] which shows $\Kphi = S_1^{max} \oplus S_2^{min}$ and similarly $\Kphi = S_1^{min} \oplus S_2^{max}$. It is also clear that $S_j^{max}$ is invariant under $Z_j$ for $j=1,2$. Showing the same is true for $S_j^{min}$ requires more work. Define the following subspace of $\mathcal{H}$ \[ \mathcal{Q} = \left(\begin{bmatrix} I \\ \Phi^* \end{bmatrix} L^2_{\bullet -}(E_*) \right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{\bullet+}(E)\right)^{\perp} \] and notice that $\mathcal{Q}$ is invariant under both $Z_1$ and $\bar{Z}_1$. Projection onto $\mathcal{Q}$ is given by \[ P_{\mathcal{Q}} = \begin{bmatrix} P_{\bullet+} & -\Phi P_{\bullet+} \\ -\Phi^* P_{\bullet -} & P_{\bullet-} \end{bmatrix} \] where $P_{\bullet\pm}$ is projection onto the appropriate $L^2_{\bullet\pm}$ space; the proof of this fact is similar to the proof of the formula for $P_{\Phi}$. Now it can be directly checked that \[ P_{\Phi} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)\right) = P_{\mathcal{Q}} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)\right). \] The key things to notice are that since $\Phi L^2_{+-}(E) \subset L^2_{+\bullet}(E_*)$, it follows that $P_{\bullet+} \Phi L^2_{+-}(E) = P_{+} \Phi L^2_{+-}(E)$, $P_{\bullet+}L^2_{+-}(E) =0 = P_{+}L^2_{+-}(E)$, $P_{\bullet-} \Phi L^2_{+-}(E) = P_{-} \Phi L^2_{+-}(E)$, and $P_{\bullet-} L^2_{+-}(E)= P_{-} L^2_{+-}(E)$. However, since $\mathcal{Q}$ is invariant under $Z_1$ and $\bar{Z}_1$, it follows that $P_{\mathcal{Q}}$ commutes with $Z_1$. Since $\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)$ is invariant under $Z_1$, we see that \[ P_{\mathcal{Q}} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)\right) \] is invariant under $Z_1$, and hence so is its closure. This shows $S_1^{min}$ is invariant under $Z_1$ and the proof that $S_2^{min}$ is invariant under $Z_2$ is similar. \end{proof} \begin{defn} \label{defn:residspace} \textbf{The Residual Subspace $\mathcal{R}$.} It is also useful to consider the residual subspace $\mathcal{R}$ of $\Kphi$ defined initially as $ \mathcal{R}:= S^{max}_1 \ominus S^{min}_1.$ Using the decomposition in (\ref{eqn:kdecomp}), it is immediate that \[ \mathcal{R} = S^{max}_2 \ominus S^{min}_2 = S^{max}_1 \cap S^{max}_2. \] \end{defn} \nomenclature{$\mathcal{R}$}{The residual subspace of the scattering subspace}
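For later reference, we note the concrete description of $\mathcal{R}$ that falls out of the proof of Theorem \ref{thm:kdecomp}: combining the description of $S_1^{max}$ obtained there with its analogue for $S_2^{max}$ (where $g \in L^2_{-\bullet}(E)$) gives \[ \mathcal{R} = S^{max}_1 \cap S^{max}_2 = \left\{ \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{H} : f \in H^2(E_*), \ g \in L^2_{--}(E) \right\}, \] since $L^2_{\bullet-}(E) \cap L^2_{-\bullet}(E) = L^2_{--}(E)$.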
\section{Constructing Agler Decompositions}\label{sect:construction} \subsection{Connections between $\Kphi$ and $\Hphi$} The decompositions of $\Kphi$ into $S^{max}_j$ and $S^{min}_j$ can be used to construct similar decompositions of $\Hphi.$ The following results link $\Kphi$ and $\Hphi$. \begin{lem} \label{lem:isom} There is an isometry $V: \Hphi \rightarrow \Kphi$ such that \[ \begin{aligned} Vf = \begin{bmatrix} f \\ g \end{bmatrix} \text{ for some } g \in L^2 \ominus H^2(E) \text{ and } V^* \begin{bmatrix} f \\ g \end{bmatrix} = f \ \ \forall g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi. \end{aligned} \] \end{lem} \nomenclature{$V$}{Canonical isometry from $\Hphi$ to $\Kphi$} \begin{proof} As was mentioned in Example \ref{ex:hphi}, the set $\Dphi^2 H^2(E_*)$ is dense in $\Hphi$. Define the operator $V$ on $\Dphi^2 H^2(E_*)$ by \[ V \Dphi^2 h = P_{\Phi} \begin{bmatrix} I \\ \Phi^* \end{bmatrix} h \qquad \forall \ h \in H^2(E_*). \] Notice that this equals \[ \begin{bmatrix} P_{+} & -\Phi P_{+} \\ -\Phi^* P_{-} & P_{-} \end{bmatrix} \begin{bmatrix} I \\ \Phi^* \end{bmatrix} h = \left[ \begin{array}{c} \Dphi^2 h \\ P_- \Phi^* h \end{array} \right] = \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} h \\ -P_{+} \Phi^* h \end{bmatrix}. \] The computation \[ \begin{aligned} \left \|\begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} h \\ -P_{+} \Phi^* h \end{bmatrix} \right\|^2_{\mathcal{H}} &= \ip{ \begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} h \\ -P_{+} \Phi^* h \end{bmatrix}}{ \begin{bmatrix} h \\ -P_{+} \Phi^* h \end{bmatrix}}_{L^2(E_*) \oplus L^2(E)} \\ &= \ip{\begin{bmatrix} \Dphi^2 h \\ P_{-} \Phi^* h\end{bmatrix}}{\begin{bmatrix} h \\ -P_{+} \Phi^* h \end{bmatrix}}_{L^2(E_*) \oplus L^2(E)} \\ &= \ip{\Dphi^2 h}{h}_{L^2(E_{*}) }\\ &= \|\Dphi^2 h\|^2_{\Hphi} \end{aligned} \] at once shows that $V$ is well-defined ($\Dphi^2 h=0$ implies $V\Dphi^2 h = 0$) and isometric, and therefore extends to an isometry from $\Hphi$ to $\Kphi$. To see that the first component of $Vf$ is always $f$, it suffices to notice that since the projection $\pi:\begin{bmatrix} f \\ g \end{bmatrix} \mapsto f$ is bounded from $\mathcal{H}$ to $L^2(E_*)$ and since we have $\pi V f = f$ for the dense set of $f \in \Dphi^2 H^2(E_*)$, the identity $\pi V f = f$ must hold for all $f\in \Hphi$ by boundedness of $\pi V$. Now, $V^*$ is a partial isometry from $\Kphi$ onto $\Hphi$, and \[ \text{ker } V^* =(\text{range } V)^{\perp} = \left\{ \begin{bmatrix} 0 \\ g \end{bmatrix} : g \in \big( L^2\ominus H^2(E) \big) \cap \Delta L^2(E)\right\}. \] The latter equality can be seen from the following computation. If $\begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi$ is orthogonal to the range of $V$, then for any $h \in H^2(E_*)$ \[ \begin{aligned} 0=\ip{\begin{bmatrix} I & \Phi \\ \Phi^* & I \end{bmatrix} \begin{bmatrix} h \\ -P_{+} \Phi^* h \end{bmatrix}} {\begin{bmatrix} f\\ g \end{bmatrix}}_{\Kphi} &= \ip{\begin{bmatrix} h \\ -P_{+} \Phi^* h \end{bmatrix}} {\begin{bmatrix} f\\ g \end{bmatrix}}_{L^2(E_*)\oplus L^2(E)}\\ & = \ip{h}{f}_{L^2(E_*)}, \end{aligned} \] since $f \in H^2(E_*)$ and $g \in L^2\ominus H^2(E).$ Upon setting $h=f$, this yields $f=0$. On the other hand, the above computation shows that if $\begin{bmatrix} 0 \\ g \end{bmatrix} \in \Kphi,$ then this element is orthogonal to the range of $V$. So, the action of $V^*$ on $\Kphi$ can be directly computed as follows. Any $\begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi$ can be written as $Vf + \begin{bmatrix} 0 \\ h \end{bmatrix}$ for some $h \in \big( L^2\ominus H^2(E) \big) \cap \Delta L^2(E)$. Then, $V^* \begin{bmatrix} f \\ g \end{bmatrix} = f$. \end{proof} An immediate corollary of the above lemma is: \begin{cor} As sets, $\Hphi = \left \{ f \in H^2(E_*): \text{ there is a } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \Kphi \right \}.$ \end{cor}
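When $\Phi$ is inner, this corollary combines with the remark on inner functions above to recover the set equality \[ \Hphi = H^2(E_*) \ominus \Phi H^2(E), \] since in that case the elements of $\Kphi$ are exactly the pairs $\begin{bmatrix} f \\ \Phi^* f \end{bmatrix}$ with $f \in H^2(E_*)$ and $\Phi^* f \in L^2 \ominus H^2(E)$.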
\subsection{Hilbert Spaces in $\Hphi$} Using the partial isometry $V^*$ and the decompositions of $\Kphi$ given in Theorem \ref{thm:kdecomp}, we can construct Hilbert spaces yielding Agler decompositions. First, we make some general observations. Let $K$ be a closed subspace of $\Kphi$, and denote the operator range of $V^*|_K$ by $H_K$. \nomenclature{$H_K$}{Operator range of $V^*\mid_{K}$} Then, $f \in H_K$ if and only if there exists $g$ such that $\begin{bmatrix} f \\ g \end{bmatrix} \in K$. Essentially by the definition of operator range, $V^*\mid_{K}$ is a unitary from $K\ominus (K\cap \ker V^*)$ onto $H_K$, and the inverse of this unitary will be of the form $f \mapsto \vecs{f}{A_K f}$ where $A_K:H_K \to L^2(E)$ is some linear operator. \nomenclature{$A_K$}{Component of isometry from $H_K$ into $K$} By \eqref{eqn:gnorm}, $A_K$ is contractive, i.e., \begin{equation} \label{eqn:genhk} \|A_K f\|_{L^2(E)} \leq \left\| \vecs{f}{A_K f}\right\|_{\mathcal{H}} = \|f\|_{H_{K}}, \end{equation} and it is worth pointing out the following representation of the norm: \[ \|f\|_{H_K} = \min \left\{ \left\| \begin{bmatrix} f \\ g \end{bmatrix} \right\|_{\mathcal{H}}: g \text{ satisfies } \begin{bmatrix} f \\ g \end{bmatrix} \in K\right\}. \] Let \[ k_w (z) = \frac{I}{(1-z_1\bar{w}_1)(1-z_2\bar{w}_2)} \] be the Szeg\H{o} kernel on $H^2(E_{*})$. \begin{lem} \label{lem:opkern} The reproducing kernel for $H_K$ is given by \[ V^* P_{K} V \Dphi^2 k_w(z). \] Moreover, if $K$ is an orthogonal direct sum, $K = \bigoplus_{j=1}^{\infty} K_j$, then the reproducing kernel for $H_K$ is the sum of the reproducing kernels for $H_{K_j}$. \end{lem} \begin{proof} Take any $f\in H_K$; this means $f = V^* \begin{bmatrix} f \\ g \end{bmatrix}$, for some $\begin{bmatrix} f \\ g \end{bmatrix} \in K\ominus \left [ K\cap \ker V^* \right]$. Then, for $w\in \D^2$ and $v \in E_*$ \[ \begin{aligned} \ip{f}{V^* P_{K} V \Dphi^2 k_w v}_{H_K} &= \ip{\begin{bmatrix} f \\ g \end{bmatrix}}{V \Dphi^2 k_w v}_{\Kphi} \\ &= \ip{V^* \begin{bmatrix} f \\ g \end{bmatrix}}{\Dphi^2 k_w v}_{\Hphi} \\ &= \ip{f}{k_w v}_{H^2(E_*)} = \ip{f(w)}{v}_{E_*}. \end{aligned} \] The assertion about direct sums follows from noticing $P_K = \sum_{j=1}^{\infty} P_{K_j}$ in the strong operator topology. \end{proof}
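As a quick sanity check on Lemma \ref{lem:opkern}, take $K = \Kphi$: then $P_K V = V$, so the kernel is $V^* V \Dphi^2 k_w = \Dphi^2 k_w$, i.e.\ $H_{\Kphi} = \Hphi$ with equality of reproducing kernels, refining the set equality in the corollary following Lemma \ref{lem:isom}.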
The Hilbert spaces of primary interest are defined as follows: \begin{defn} Define the Hilbert spaces $H^{max}_j$ and $H^{min}_j$ to be the operator ranges of $V^*|_{S^{max}_j}$ and $V^*|_{S^{min}_j}$. Then \[ f \in H^{max}_j \text{ if and only if } \exists \ g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in S^{max}_j, \] and the $H^{max}_j$ norm is given by \[ \| f \|_{H^{max}_j} := \left \| P_{S^{max}_j \ominus \left [S^{max}_j \cap \ker V^* \right ]} \begin{bmatrix} f \\ g \end{bmatrix} \right \|_{S^{max}_j} = \min \left \{ \left \| \begin{bmatrix} f \\ \tilde{g} \end{bmatrix} \right \|_{S^{max}_j} : \begin{bmatrix} f \\ \tilde{g} \end{bmatrix} \in S^{max}_j \right \}. \] \end{defn} \nomenclature{$H_j^{max}, H_j^{min}$}{Operator ranges of $V^*|_{S_j^{max}}, V^*|_{S_j^{min}}$} \begin{lem}[Wold decompositions] \[ S^{max}_j = \bigoplus_{n \in \NN} Z_j^n \big( S^{max}_j \ominus Z_j S^{max}_j \big ) \ \oplus M_j^{max} \] \[ S^{min}_j = \bigoplus_{n \in \NN} Z_j^n \big( S^{min}_j \ominus Z_j S^{min}_j \big ) \ \oplus M^{min}_j \] where $M_j^{max}, M_j^{min} \subset \ker V^*$. \end{lem} \begin{proof} Since multiplication by $Z_j$ is an isometry on $S^{max/min}_j$, the classical Wold decomposition says that $S_j^{max}, S_j^{min}$ can be decomposed as above where \[ M_j^{max} = \bigcap_{n\geq 0} Z_j^n S_j^{max} \text{ and } M_j^{min} = \bigcap_{n\geq 0} Z_j^n S_j^{min} \] so the only thing to show is $M_j^{max} \subset \ker V^*$, since $M_j^{min} \subset M_j^{max}$. So, if $\begin{bmatrix} f \\ g \end{bmatrix} \in \bigcap_{n\geq 0} Z_1^n S_1^{max}$, then $\bar{Z}_1^n f \in H^2(E_*)$ for all $n\geq 0$, which can only happen if $f=0$. This shows $\begin{bmatrix} f \\ g \end{bmatrix} \in \ker V^*$; the argument for $j=2$ is identical. \end{proof} \begin{lem} \label{lem:hkernels} Let $K_j^{max}, K_j^{min}$ be the reproducing kernels for the operator ranges of $V^*\mid_{S_j^{max} \ominus Z_j S_j^{max}}, V^*\mid_{S_j^{min} \ominus Z_j S_j^{min}}$. \nomenclature{$K_j^{max}, K_j^{min}$}{Reproducing kernels for $H_{S_j^{max}\ominus Z_j S_j^{max}}, H_{S_j^{min}\ominus Z_j S_j^{min}}$} Then, the reproducing kernels for $H_j^{max}$ and $H_j^{min}$ are given by \[ \frac{K_j^{max}(z,w)}{1-z_j \bar{w}_j} \text{ and } \frac{K_j^{min}(z,w)}{1-z_j \bar{w}_j}. \] In addition, if $G$ is the reproducing kernel for the operator range of $V^*|_{\mathcal{R}}$, \nomenclature{$G$}{Reproducing kernel for $H_{\mathcal{R}}$} then \begin{equation} \label{eqn:KKG} \frac{K_j^{max}(z,w)}{1-z_j \bar{w}_j} = \frac{K_j^{min}(z,w)}{1-z_j \bar{w}_j} + G(z,w). \end{equation} \end{lem} \begin{proof} We can focus on $H_1^{max}$ which has reproducing kernel $V^* P_{S_1^{max}} V \Dphi^2 k_w$ by previous remarks. Let $P_1$ denote orthogonal projection onto $S_1^{max} \ominus Z_1 S_1^{max}$. Then, orthogonal projection onto $Z_1^n(S_1^{max} \ominus Z_1 S_1^{max})$ is given by $Z_1^nP_1 \bar{Z}_1^n$. We now claim that the reproducing kernel for the operator range of $V^*$ restricted to $Z_1^n(S_1^{max} \ominus Z_1 S_1^{max})$ satisfies \[ V^*Z_1^n P_1 \bar{Z}_1^n V \Dphi^2 k_{w}v = \bar{w}_1^n Z_1^n V^* P_1 V \Dphi^2 k_{w}v. \] Now for $\begin{bmatrix} f \\ g \end{bmatrix} \in S_1^{max},$ we have $Z_1^n V^* \begin{bmatrix} f \\ g \end{bmatrix} = V^* Z_1^n \begin{bmatrix} f \\ g \end{bmatrix}$. This means $V^* Z_1^n P_1 = Z_1^n V^* P_1$ and so, for any $f \in \Hphi$, $v \in E_*$, \[ \begin{aligned} \ip{f}{V^* Z_1^n P_1 \bar{Z}_1^n V \Dphi^2 k_{w}v}_{\Hphi} &= \ip{V^* Z_1^n P_1 \bar{Z}_1^n V f}{\Dphi^2 k_{w}v}_{\Hphi} \\ &= \ip{Z_1^n V^* P_1 \bar{Z}_1^n V f}{\Dphi^2 k_{w}v}_{\Hphi} \\ &= w_1^n \ip{f}{V^* Z_1^n P_1 V \Dphi^2 k_{w}v}_{\Hphi} \\ &= \ip{f}{\bar{w}_1^n Z_1^n V^* P_1 V \Dphi^2 k_{w}v}_{\Hphi}, \end{aligned} \] so that $V^*Z_1^n P_1 \bar{Z}_1^n V \Dphi^2 k_{w}v = \bar{w}_1^n Z_1^n V^* P_1 V \Dphi^2 k_{w}v$. If we break up $S_1^{max}$ according to its Wold decomposition, then since $V^*$ annihilates $M_1^{max}$, Lemma \ref{lem:opkern} implies that the reproducing kernel of $H_1^{max}$ is given by \[ \sum_{n \geq 0} \bar{w}_1^n z_1^n V^* P_1 V \Dphi^2 k_{w}(z) = \frac{V^* P_1 V \Dphi^2 k_{w}(z)}{1-z_1 \bar{w}_1} = \frac{K_1^{max}(z,w)}{1-z_1\bar{w}_1}. \] The formulas for $H_2^{max}$ as well as the $H_j^{min}$ follow similarly. The formula \eqref{eqn:KKG} follows from the orthogonal decomposition $S_j^{max} = S_j^{min} \oplus \mathcal{R}$ and Lemma \ref{lem:opkern}. \end{proof} \subsection{Construction of Agler Kernels} As above, let $K_j^{max}, K_j^{min}$ be the reproducing kernels for the operator ranges of $V^*|_{S_j^{max} \ominus Z_j S_j^{max}}$ and $V^*|_{S_j^{min} \ominus Z_j S_j^{min}}$ respectively. \begin{thm} \label{thm:agdecomp} The pairs $(K^{max}_1, K^{min}_2)$ and $(K^{min}_1, K^{max}_2)$ are Agler kernels of $\Phi$, i.e.
for all $z,w \in \D^2,$ \begin{equation} \label{eqn:hdecomp} \frac{I_{E_*} - \Phi(z) \Phi(w)^*}{(1-z_1\bar{w}_1)(1-z_2\bar{w}_2)} \ = \ \frac{K^{max}_1(z,w)}{1-z_1\bar{w}_1} + \frac{K^{min}_2(z,w)}{1- z_2\bar{w}_2} \ = \ \frac{K^{min}_1(z,w)}{1-z_1\bar{w}_1} + \frac{K^{max}_2(z,w)} {1-z_2\bar{w}_2}. \end{equation} \end{thm} \begin{proof} The reproducing kernel of $\Hphi$, namely \[ \Dphi^2 k_w(z) = \frac{I_{E_*}-\Phi(z)\Phi(w)^*}{(1-z_1\bar{w}_1)(1-z_2\bar{w}_2)} \] is the sum of the kernels for $H_1^{max}$ and $H_2^{min}$ by Lemma \ref{lem:opkern}, and these kernels are given by \[ V^* P_{S_1^{max}} V \Dphi^2 k_w(z) \text{ and } V^* P_{S_2^{min}} V \Dphi^2 k_w(z). \] By Lemma \ref{lem:hkernels}, these kernels can be computed directly in terms of the reproducing kernels of $K_1^{max}$ and $K_2^{min}$ to give us the formula \eqref{eqn:hdecomp}. \end{proof} We remark that by \eqref{eqn:KKG} we get the formula \[ \frac{I_{E_*}-\Phi(z)\Phi(w)^*}{(1-z_1\bar{w}_1)(1-z_2\bar{w}_2)} = \frac{K_1^{min}(z,w)}{1-z_1\bar{w}_1} + \frac{K_2^{min}(z,w)}{1-z_2 \bar{w}_2} + G(z,w) \] where $G(z,w) = V^* P_{\mathcal{R}} V\Dphi^2 k_{w}(z)$ is the reproducing kernel of $H_{\mathcal{R}}$, the operator range of $V^*|_{\mathcal{R}}.$ \section{General Agler Kernels} \subsection{Characterizations of General Agler Kernels} \label{sect:characterization} Assume $(K_1, K_2)$ are Agler kernels of $\Phi \in \mathcal{S}_2(E, E_*)$ and define the Hilbert spaces \begin{equation} \label{eqn:hspaces} H_1 : = \mathcal{H} \left( \frac{K_1(z,w) }{1-z_1\bar{w}_1} \right) \text{ and } H_2 : = \mathcal{H} \left( \frac{K_2(z,w) }{1-z_2\bar{w}_2} \right). \end{equation} Our goal is to use these auxiliary Hilbert spaces $H_1$ and $H_2$ to characterize $(K_1,K_2)$ in terms of the extremal kernels $K^{max/min}_1$ and $K^{max/min}_2.$ The first main result is the following theorem: \begin{thm} \label{thm:mincontain} Let $\Phi \in \mathcal{S}_2(E,E_*)$ and let $(K_1,K_2)$ be Agler kernels of $\Phi$. Define $H_1, H_2$ as in \eqref{eqn:hspaces}. Then \[ H_1 \subseteq H_1^{max} \ \text{ and } \ H_2 \subseteq H_2^{max} \] and these containments are contractive, i.e. for $j=1,2$ \[ \|f \|_{H_j^{max}} \le \| f \|_{H_j} \ \qquad \forall \ f \in H_j. \] \end{thm} \begin{proof} Let $f \in H_1$ and assume $\|f\|_{H_1} = 1$. Then for all $n\geq 0$, $Z_1^n f \in H_1 \subset \Hphi$ and $\|Z_1^n f\|_{\Hphi} \leq \|Z_1^n f\|_{H_1} \leq 1$, since multiplication by $Z_1$ is a contraction in $H_1$. For each $n$ we can choose $g_n \in L^2\ominus H^2(E)$ such that $\vecs{Z_1^n f}{g_n} \in \Kphi \ominus \ker V^*$ and \[\left\|\vecs{f}{\bar{Z}_1^n g_n}\right\|_{\mathcal{H}} = \left\|\vecs{Z_1^n f}{g_n}\right\|_{\mathcal{H}} = \|Z_1^n f\|_{\Hphi} \leq 1. \] Notice $F_n:= \vecs{f}{\bar{Z}_1^n g_n} \in \Kphi \ominus \bar{Z}_1^n \ker V^*,$ since $\bar{Z}_1^n g_n \in \bar{Z}_1^n (L^2\ominus H^2(E))$. The sequence $\{F_n\}\subset \Kphi$ is bounded in norm and therefore has a subsequence $\{F_{n_j}\}$ that converges weakly to some $F:= \begin{bmatrix} f' \\ g' \end{bmatrix}$. We claim that $f=f'$ and $g' \in L^2_{\bullet-}(E)$. Since \[ \ip{F_{n_j}}{\begin{bmatrix} I \\ \Phi^* \end{bmatrix} h}_{\mathcal{H}} = \ip{f}{h}_{L^2(E_*)} \to \ip{F}{\begin{bmatrix} I \\ \Phi^* \end{bmatrix} h}_{\mathcal{H}} = \ip{f'}{h}_{L^2(E_*)} \text{ as } j \to \infty \] for all $h \in L^2(E_*)$, we see that $f=f'$.
Next, for any $v \in E$ and $n\in \ZZ, m\geq 0$ \[ \ip{F_{n_j}}{\begin{bmatrix} \Phi \\ I \end{bmatrix} Z_1^n Z_2^m v}_{\mathcal{H}} = \ip{\bar{Z}_1^{n_j} g_{n_j}}{Z_1^n Z_2^mv}_{L^2(E)} = 0 \] for $j$ large enough that $n_j + n \geq 0$ since $g_{n_j} \perp H^2(E)$. By weak convergence, the above expression converges to \[ \ip{F}{\begin{bmatrix} \Phi \\ I \end{bmatrix} Z_1^n Z_2^m v}_{\mathcal{H}} = \ip{g'}{Z_1^nZ_2^m v}_{L^2(E)} = \ip{\widehat{g'}(n,m)}{v}_{E} = 0 \] so we see that $g' \perp L^2_{\bullet+}(E)$ and therefore $g' \in L^2_{\bullet-}(E)$. Hence we conclude that \[ F = \vecs{f}{g'} \in S_1^{max} \] and so $f = V^{*}F$ must be in $H_1^{max}$. To show $\|f\|_{H_1^{max}} \leq 1$, observe that \[ |\ip{F_{n_j}}{F}_{\mathcal{H}}| \to \|F\|^2_{\mathcal{H}} \] and \[ |\ip{F_{n_j}}{F}_{\mathcal{H}}| \leq \|F_{n_j}\|_{\mathcal{H}} \|F \|_{\mathcal{H}} \leq \|F\|_{\mathcal{H}} \] so that $\| F\|_{\mathcal{H}} \leq 1$. Finally, $\|f\|_{H_1^{max}} \leq \| F\|_{\mathcal{H}} \leq 1$ as desired. Thus, $H_1$ is contractively contained in $H_1^{max}$.\end{proof} Using the previous result, it is possible to characterize all Agler kernels in terms of the canonical kernels $K^{min}_1$, $K^{min}_2$ and $G$ as follows: \begin{thm} \label{thm:maxmin} Let $\Phi \in \mathcal{S}_2(E, E_*)$ and let $K_1, K_2: \D^2 \times \D^2 \rightarrow \mathcal{L}(E_*)$. Then $(K_1, K_2)$ are Agler kernels of $\Phi$ if and only if there are positive kernels $G_1,G_2: \D^2 \times \D^2 \rightarrow \mathcal{L}(E_*)$ such that \[ \begin{aligned} K_1(z,w) =& K_1^{min}(z,w) + (1-z_1 \bar{w}_1) G_1(z,w) \\ K_2(z,w) =& K_2^{min}(z,w) + (1-z_2 \bar{w}_2) G_2(z,w) \end{aligned} \] and $G = G_1 + G_2.$ \end{thm} \begin{proof} ($\Rightarrow$) Assume $(K_1,K_2)$ are Agler kernels of $\Phi$. By Theorem \ref{thm:mincontain} and Theorem \ref{thm:kerdiff}, there are positive kernels $G_1, G_2: \D^2 \times \D^2 \rightarrow \mathcal{L}(E_*)$ such that \begin{align*} G_1(z,w) &= \frac{K_1^{max}(z,w)}{1-z_1\bar{w}_1} - \frac{K_1(z,w)}{1-z_1\bar{w}_1} =\frac{K_1(z,w)}{1-z_1\bar{w}_1} - \frac{K^{min}_1(z,w)}{1-z_1\bar{w}_1} \\ G_2(z,w) &= \frac{K_2^{max}(z,w)}{1-z_2\bar{w}_2} - \frac{K_2(z,w)}{1-z_2\bar{w}_2}= \frac{K_2(z,w)}{1-z_2\bar{w}_2} - \frac{K^{min}_2(z,w)}{1-z_2\bar{w}_2}.
\end{align*} To show $G_1 + G_2 = G$, recall that since $(K_1,K_2)$ are Agler kernels of $\Phi$, \[ \begin{aligned} \frac{K_1^{min}(z,w)}{1-z_1 \bar{w}_1 }+ G_1(z,w) + \frac{K_2^{min}(z,w)}{1-z_2 \bar{w}_2} &+ G_2(z,w) = \frac{K_1(z,w)}{1-z_1 \bar{w}_1} + \frac{K_2(z,w)} {1-z_2 \bar{w}_2} \\ &\\ & = \frac{ I_{E_*} - \Phi(z) \Phi(w)^*}{(1-z_1 \bar{w}_1) (1-z_2 \bar{w}_2)} \\ &\\ & = \frac{K_1^{min}(z,w)}{1-z_1 \bar{w}_1 } +\frac{K_2^{min} (z,w)}{1-z_2 \bar{w}_2}+ G(z,w), \end{aligned} \] which implies $G = G_1 +G_2.$ \\ ($\Leftarrow$) Now assume $(K_1,K_2)$ are positive kernels and that there are positive kernels $G_1,G_2: \D^2 \times \D^2 \rightarrow \mathcal{L}(E_*)$ satisfying \[ \begin{aligned} K_j(z,w) =& K_j^{min}(z,w) + (1-z_j \bar{w}_j) G_j(z,w) \end{aligned} \] for $j=1,2$ and $G= G_1 +G_2.$ Then \[ \begin{aligned} \frac{K_1(z,w)}{1-z_1 \bar{w}_1} + \frac{K_2(z,w)}{1-z_2 \bar{w}_2} & = \frac{K_1^{min}(z,w)}{1-z_1 \bar{w}_1 } + \frac{K_2^{min}(z,w)}{1-z_2 \bar{w}_2}+ G(z,w) \\ &\\ & =\frac{ I_{E_*} - \Phi(z) \Phi(w)^*}{(1-z_1 \bar{w}_1)(1-z_2 \bar{w}_2)}, \end{aligned} \] which implies $(K_1,K_2)$ are Agler kernels of $\Phi.$ \end{proof}
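For example, this parametrization recovers the canonical pairs: multiplying \eqref{eqn:KKG} through by $1-z_1\bar{w}_1$ gives $K_1^{max} = K_1^{min} + (1-z_1\bar{w}_1)G$, so the choice $G_1 = G$, $G_2 = 0$ yields \[ K_1 = K_1^{max} \quad \text{and} \quad K_2 = K_2^{min}, \] the pair $(K_1^{max}, K_2^{min})$ of Theorem \ref{thm:agdecomp}; symmetrically, $G_1 = 0$, $G_2 = G$ yields $(K_1^{min}, K_2^{max})$.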
\subsection{Containment Properties of $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$} \label{sect:functions} In this section, we consider the set of functions that can be contained in $\mathcal{H}(K_1)$ or $\mathcal{H}(K_2).$ This result generalizes a result about inner functions from \cite{bk12}. We require two additional subspaces $\mathcal{R}_1$ and $\mathcal{R}_2$ of $\mathcal{H}$, defined as follows: \[ \mathcal{R}_j = \left \{ \begin{bmatrix} f \\ g \end{bmatrix}: f \in H^2(E_*), \ g \in Z_j L^2_{--}(E), f-\Phi g \in \Delta_{*} L^2(E_*) \right \} \] \nomenclature{$\mathcal{R}_j$}{Slight enlargements of $\mathcal{R}$} for $j=1,2.$ These are slight enlargements of the residual subspace $\mathcal{R}.$ We can now state the result: \begin{thm} \label{thm:containment} Let $\Phi \in \mathcal{S}_2(E,E_*)$. Then for $j=1,2$ \begin{align*} \mathcal{H}(K^{max}_j) &= \left \{ f : \text{ there exists } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{R}_j \ominus Z_j \mathcal{R} \right \} \\ \mathcal{H}(K^{min}_j) &= \left \{ f : \text{ there exists } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{R}_j \ominus \mathcal{R} \right \}. \end{align*} If $(K_1,K_2)$ are general Agler kernels of $\Phi,$ then for $j=1,2$ \[ \begin{aligned} \mathcal{H}(K_j) &\subseteq \left \{ f : \text{ there exists } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{R}_j \right \} \\ &= \left \{ f \in H^2(E_*): f\in \big ( \Phi Z_j L^2_{--}(E) + \Delta_* L^2(E_*) \big ) \right \}. \end{aligned} \] \end{thm} The proof of this result requires several auxiliary results about the functions in $S^{max}_j \ominus Z_j S^{max}_j$ and $S^{min}_j \ominus Z_j S^{min}_j.$ \begin{prop} \label{prop:Smin} For $j=1,2$, the following equality holds: \[ S^{min}_j \ominus Z_j S^{min}_j = \mathcal{R}_j \ominus \mathcal{R}. \] \end{prop} \begin{proof} We prove the result for $S^{min}_1.$ We shall make use of the proof of Theorem \ref{thm:kdecomp}. Recall the space $\mathcal{Q}$ defined there: \[ \mathcal{Q} = \left(\begin{bmatrix} I \\ \Phi^* \end{bmatrix} L^2_{\bullet -}(E_*) \right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{\bullet+}(E)\right)^{\perp}. \] We define and manipulate a related space \[ \begin{aligned} \mathcal{M} &= \left(\begin{bmatrix} I \\ \Phi^* \end{bmatrix} L^2_{\bullet -}(E_*) \right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2\ominus L^2_{--}(E)\right)^{\perp} \\ &=\left(\begin{bmatrix} I \\ \Phi^* \end{bmatrix} L^2_{\bullet -}(E_*) \right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{\bullet+}(E)\right)^{\perp} \cap \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)\right)^{\perp}\\ &= \mathcal{Q} \ominus P_{\mathcal{Q}} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)\right). \end{aligned} \] Also, note $\mathcal{M} = \left \{ \begin{bmatrix} f \\ g \end{bmatrix}\in \mathcal{H} : f \in L^2_{\bullet +}(E_*), g \in L^2_{- -}(E)\right \}$. Then, \[ \begin{aligned} \mathcal{Q} \ominus \mathcal{M} &= \text{closure}_{\mathcal{H}} P_{\mathcal{Q}} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)\right) \\ &= \text{closure}_{\mathcal{H}} P_{\Phi} \left(\begin{bmatrix} \Phi \\ I \end{bmatrix} L^2_{+-}(E)\right) = S_1^{min}, \end{aligned} \] using the proof of Theorem \ref{thm:kdecomp}. Observe that $\mathcal{M} \subseteq Z_1 \mathcal{M} \subseteq \mathcal{Q}$ and $Z_1 \mathcal{Q} = \mathcal{Q}$. Since multiplication by $Z_1$ is an isometry on $\mathcal{H}$, we can calculate \[ \begin{aligned} S^{min}_1 \ominus Z_1 S^{min}_1 &= \left( \mathcal{Q} \ominus \mathcal{M} \right) \ominus Z_1 \left( \mathcal{Q} \ominus \mathcal{M} \right)\\ & = \left( \mathcal{Q} \ominus \mathcal{M} \right) \ominus \left( Z_1 \mathcal{Q} \ominus Z_1 \mathcal{M} \right)\\ & = \left( \mathcal{Q} \ominus \mathcal{M} \right) \ominus \left( \mathcal{Q} \ominus Z_1 \mathcal{M} \right)\\ & = Z_1 \mathcal{M} \ominus \mathcal{M}. \end{aligned} \] As $S^{min}_1 \ominus Z_1 S^{min}_1 \subseteq S^{max}_1$, we can conclude \[ \begin{aligned} S^{min}_1 \ominus Z_1 S^{min}_1 =& \big( Z_1 \mathcal{M} \cap S^{max}_1 \big) \ominus \big( \mathcal{M} \cap S^{max}_1 \big) \\ =& \left \{ \begin{bmatrix} f \\ g \end{bmatrix}\in \mathcal{H} : f \in H^2(E_*), \ g \in Z_1 L^2_{--}(E) \right \} \\ &\ominus \left \{ \begin{bmatrix} f \\ g \end{bmatrix}\in \mathcal{H} : f \in H^2(E_*), \ g \in L^2_{--}(E) \right \} \\ =& \mathcal{R}_1 \ominus \mathcal{R}, \end{aligned} \] as desired. The proof follows similarly for $S^{min}_2.$ \end{proof} We also obtain similar characterizations of $S^{max}_j \ominus Z_j S^{max}_j$. \begin{prop} \label{prop:smax} For $j=1,2$ the following equalities hold: \[ S^{max}_j \ominus Z_j S^{max}_j = \mathcal{R}_j \ominus Z_j \mathcal{R}. \] \end{prop} \begin{proof} Recall that $S^{max}_j = \mathcal{R} \oplus S_j^{min}$ and $\mathcal{R}, Z_j \mathcal{R} \subseteq \mathcal{R}_j.$ Now \[ \begin{aligned} S_j^{max} &= (S_j^{max} \ominus Z_j S_j^{max}) \oplus Z_j S_j^{max} \\ &= (S_j^{max} \ominus Z_j S_j^{max}) \oplus Z_j \mathcal{R} \oplus Z_j S_j^{min} \end{aligned} \] while $S_j^{max}$ can also be decomposed as \[ \begin{aligned} & \mathcal{R} \oplus (S_j^{min} \ominus Z_j S_j^{min}) \oplus Z_j S_j^{min} \\ =& \mathcal{R} \oplus (\mathcal{R}_j \ominus \mathcal{R}) \oplus Z_j S_j^{min} \\ =& \mathcal{R}_j \oplus Z_j S_j^{min}. \end{aligned} \] Together these show $S_j^{max} \ominus Z_j S_j^{max} = \mathcal{R}_j \ominus Z_j\mathcal{R}$. \end{proof} Now we can prove Theorem \ref{thm:containment}.
\begin{proof} The definitions of $ \mathcal{H}(K^{max}_j)$ and $\mathcal{H}(K^{min}_j)$ combined with Propositions \ref{prop:Smin} and \ref{prop:smax} imply that \begin{align*} \mathcal{H}(K^{max}_j) &= \left \{ f : \text{ there exists } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{R}_j \ominus Z_j \mathcal{R} \right \} \\ \mathcal{H}(K^{min}_j) &= \left \{ f : \text{ there exists } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{R}_j \ominus \mathcal{R} \right \}, \end{align*} and then the definition of $\mathcal{R}_j$ implies: \[ \begin{aligned} \mathcal{H}(K^{max/min}_j ) &\subseteq \left \{ f : \text{ there exists } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{R}_j \right \} \\ & = \left \{ f \in H^2(E_*): f\in \big ( \Phi Z_j L^2_{--}(E) + \Delta_* L^2(E_*) \big ) \right \}. \end{aligned} \] Now let $(K_1,K_2)$ be any pair of Agler kernels of $\Phi$. By Theorem \ref{thm:maxmin}, there are positive kernels $G_1, G_2$ such that each \[ K_j(z,w) = K^{min}_j(z,w) + (1-z_j\bar{w}_j)G_j(z,w) \] and $G= G_1 + G_2.$ This means \[ \Big( K^{min}_1 (z,w) + G(z,w) \Big) - K_1(z,w) = G_2(z,w) + z_1 \bar{w}_1 G_1(z,w) \] is a positive kernel. Similar results hold for $K_2$, so that Theorem \ref{thm:kerdiff} implies $ \mathcal{H}(K_j)$ is contained contractively in $\mathcal{H}(K^{min}_j + G).$ But then, Theorem \ref{thm:kersum} implies that each $f \in \mathcal{H}(K_j)$ can be written as $f = f_1 + f_2$, for $f_1 \in \mathcal{H}(K^{min}_j)$ and $f_2 \in \mathcal{H}(G).$ Our above arguments give the desired result for $f_1$ and the definition of $\mathcal{H}(G)$ gives the desired result for $f_2.$ This means \[ \begin{aligned} \mathcal{H}(K_j) &\subseteq \left \{ f : \text{ there exists } g \text{ with } \begin{bmatrix} f \\ g \end{bmatrix} \in \mathcal{R}_j \right \} \\ & = \left \{ f \in H^2(E_*): f\in \big ( \Phi Z_j L^2_{--}(E) + \Delta_* L^2(E_*) \big ) \right \}, \end{aligned} \] as desired. \end{proof} \section{Applications} \subsection{Analytic Extension Theorem} \label{sect:extensions} In this section, we restrict to the situation where $E$ and $E_*$ are finite dimensional with equal dimensions, so after fixing orthonormal bases of $E$ and $E_*$, we can assume $\Phi$ is a square matrix of scalar valued $H^\infty (\D^2)$ functions. The containment results in Theorem \ref{thm:containment} allow us to give conditions for when such $\Phi$ and the elements of any $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ associated to Agler kernels of $\Phi$ extend analytically past portions of $\partial \D^2$. We first make some preliminary comments about defining functions in the canonical spaces outside of the bidisk. Any Hilbert space contractively contained in $H^2(E_*)$ clearly has bounded point evaluations at points of $\D^2$. On the other hand, for the spaces $\mathcal{R},\mathcal{R}_1,\mathcal{R}_2$ we can construct points of bounded evaluation at certain points of $\mathbb{E}^2$, where $\mathbb{E} = \mathbb{C} \setminus \overline{\D}$. Using the notation of \eqref{eqn:genhk}, there is a unitary map from $H_{\mathcal{R}}$ onto $\mathcal{R} \ominus (\mathcal{R}\cap \ker V^*)$ of the form \[ f \mapsto \vecs{f}{A_{\mathcal{R}} f} \] where $A_{\mathcal{R}}$ is a contractive linear map from $H_{\mathcal{R}}$ to $L^2_{--}(E)$. If $f\in H_\mathcal{R}$, then $\vecs{f}{A_{\mathcal{R}} f} \in \mathcal{H}$ and so \[ f = \Phi A_{\mathcal{R}} f + (I-\Phi \Phi^*)^{1/2} h \qquad \text{by \eqref{eqn:hphichar},} \] for some $h \in L^2(E_*)$.
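The definition \eqref{extdef} below is motivated by the following formal computation (a heuristic only; it is not used in the proofs). Suppose $\Phi$ has a unitary boundary value at $z \in \mathbb{T}^2$, and write $A_{\mathcal{R}} f = \overline{Z_1 Z_2 g}$ with $g \in H^2(E)$, as we do below. Then $(I - \Phi\Phi^*)^{1/2}$ vanishes at $z$, so formally \[ f(z) = \Phi(z)\, (A_{\mathcal{R}} f)(z) = \Phi(z)\, \frac{1}{z_1 z_2} \overline{g(1/\bar{z})}, \] using $1/\bar{z}_i = z_i$ on $\mathbb{T}$; replacing $\Phi(z)$ by $(\Phi(1/\bar{z})^*)^{-1}$, which agrees with $\Phi(z)$ wherever $\Phi$ has unitary boundary values, yields exactly \eqref{extdef}.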
Let \[ S = \{z \in \mathbb{E}^2: \Phi(1/\bar{z}) \text{ is not invertible} \}. \] Since $A_{\mathcal{R}} f \in L^2_{--}(E)$, we can write $A_{\mathcal{R}} f = \overline{Z_1 Z_2 g}$ for $g \in H^2(E)$ and then evaluation at $z \in \mathbb{E}^2 \setminus S$ is defined by \begin{equation} \label{extdef} f(z) := (\Phi(1/\bar{z})^*)^{-1} \frac{1}{z_1 z_2}\overline{g(1/\bar{z})}. \end{equation} Since $\D^2$ and $\mathbb{E}^2$ are disjoint, for the moment this is just a formal definition. However, with additional assumptions on $\Phi$, it is this definition of $f$ in $\mathbb{E}^2$ that provides a holomorphic extension of $f$. This evaluation is bounded since $\|g(1/\bar{z})\|_{E} \leq C \|g\|_{H^2(E)} = C \|A_{\mathcal{R}} f \|_{L^2(E)}$ for some $C>0$ and then \[ \|f(z)\|_{E_*} \leq C \frac{1}{|z_1z_2|}\|(\Phi(1/\bar{z})^*)^{-1}\| \|A_{\mathcal{R}} f\|_{L^2(E)} \leq C \frac{1}{|z_1z_2|}\|(\Phi(1/\bar{z})^*)^{-1}\| \| f\|_{H_{\mathcal{R}}}. \] This shows evaluation at $z \in \mathbb{E}^2\setminus S$ is a bounded linear functional of $H_{\mathcal{R}} = \mathcal{H}(G)$. Analogous analysis can be applied to $\mathcal{R}_1,\mathcal{R}_2$ so that $H_{\mathcal{R}_1}, H_{\mathcal{R}_2}$ possess bounded point evaluations at points of $\mathbb{E}^2 \setminus S$. In the case of $f \in H_{\mathcal{R}_1}$, since $A_{\mathcal{R}_1}f \in Z_1 L^2_{--}(E)$, we can write $A_{\mathcal{R}_1} f = Z_1 \overline{Z_1Z_2 g} = \bar{Z}_2 \bar{g}$ for some $g \in H^2(E)$ and then we replace \eqref{extdef} with \[ f(z) := (\Phi(1/\bar{z})^*)^{-1} \frac{1}{z_2}\overline{g(1/\bar{z})} \] for $z \in \mathbb{E}^2 \setminus S$. For $H_{\mathcal{R}_2}$ we simply switch the roles of $z_1,z_2$. Since $\mathcal{H}(K_j^{max/min})$ is contractively contained in $H_{\mathcal{R}_j}$, we can define point evaluations at points of $\mathbb{E}^2\setminus S$ for the canonical Agler kernel spaces as well. We proceed to study analytic extensions of $\Phi$ past the boundary. Let $X \subseteq \mathbb{T}^2$ be an open set and define the related sets \[ \begin{aligned} X_1 & := \left \{ x_1 \in \mathbb{T} : \ \exists \ x_2 \text{ with } (x_1, x_2) \in X \right \} \\ X_2 & := \left \{ x_2 \in \mathbb{T} : \ \exists \ x_1 \text{ with } (x_1, x_2) \in X \right \} . \end{aligned} \] Then we have the following result: \begin{thm} \label{thm:extension} Let $\Phi \in \mathcal{S}_2(E, E_*)$ be square matrix valued. Then the following are equivalent: \begin{itemize} \item[$(i)$] $\Phi$ extends continuously to $X$ and $\Phi$ is unitary valued on $X$. \item[$(ii)$] There is some pair $(K_1,K_2)$ of Agler kernels of $\Phi$ such that the elements of $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ extend continuously to $X.$ \item[$(iii)$] There exists a domain $\Omega$ containing \beq \D^2 \cup X \cup (X_1 \times \D) \cup (\D \times X_2) \cup (\mathbb{E}^2 \setminus S ) \eeq such that $\Phi$ and the elements of $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ extend analytically to $\Omega$ for every pair $(K_1, K_2)$ of Agler kernels of $\Phi.$ Moreover the points in the set $\Omega$ are points of bounded evaluation of every $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2).$ \end{itemize} \end{thm} \begin{proof} We prove $(i) \Rightarrow (iii) \Rightarrow (ii) \Rightarrow (i).$ A similar result for inner functions appears as Theorem 1.5 in \cite{bk12}. Many of the arguments in this situation are similar. Thus, we outline the proof and provide more details on the points where the two proofs diverge.
Since most of the work occurs in $(i) \Rightarrow (iii),$ let us consider this implication first. The proof involves $3$ claims. \\ \textbf{Claim 1: $\Phi$ extends analytically to $\Omega.$} \\ Since $\Phi$ extends continuously to $X$ and is unitary valued there, there is a neighborhood $W^+ \subseteq \D^2$ such that $\Phi$ is invertible on $W^+$ and $X \subseteq \overline{ W^+}.$ Then \begin{equation} \label{eqn:phiextend} \Phi(z): = \left[ \Phi \left ( 1 / \bar{z} \right)^* \right]^{-1} \end{equation} defines an analytic function on $\mathbb{E}^2 \setminus S$ that is meromorphic on $\mathbb{E}^2$. Define $W^- = \left \{ 1 / \bar{z} : z \in W^+ \right \}.$ Then $\Phi$ is analytic on $W^+ \cup W^-$ and continuous on $W^+ \cup X \cup W^-.$ By Rudin's continuous edge-of-the-wedge theorem, which appears as Theorem A in \cite{rudeow}, there is a domain $\Omega_0$ containing $W^{+}\cup X \cup W^{-},$ where $\Phi$ extends analytically. This domain only depends on $X, W^{\pm}.$ Also $\Phi$ is already holomorphic on $\D^2$, meromorphic on $\mathbb{E}^2$, and holomorphic on $\mathbb{E}^2 \setminus S$ using definition (\ref{eqn:phiextend}). We can extend this domain further using Rudin's Theorem 4.9.1 in \cite{rud69}. It roughly says that if a holomorphic function $f$ on $\D^2$ extends analytically to a neighborhood $N_x$ of some $x=(x_1,x_2) \in \mathbb{T}^2,$ then $f$ extends analytically to an open set containing $\{x_1\} \times \D$ and $\D \times \{x_2\}.$ As the edge-of-the-wedge theorem guarantees $\Phi$ extends to a neighborhood $N_x$ of each $x \in X$, Rudin's Theorem 4.9.1 implies $\Phi$ extends analytically to an open set $\Omega_1$ containing $(X_1 \times \D ) \cup (\D \times X_2).$ The \emph{proof} of Theorem 4.9.1 implies that $\Omega_1$ only depends on the $\{N_x\}_{x \in X}$. Thus, $\Phi$ extends analytically to \[ \Omega := \D^2 \cup \Omega_1 \cup \Omega_0 \cup \left( \mathbb{E}^2 \setminus S \right). \] \textbf{Claim 2: Elements of $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$ extend analytically to $\Omega.$}\\ Let $(K_1,K_2)$ be Agler kernels of $\Phi$ and let $f \in \mathcal{H}(K_1).$ By the containment result in Theorem \ref{thm:containment}, \[ f =\Phi A_{\mathcal{R}_1}f + (I - \Phi \Phi^*)^{1/2} h, \] for some $h \in L^2(E_*)$ and $A_{\mathcal{R}_1}f \in Z_1 L^2_{--}(E).$ Then $g:= \overline{Z_2 A_{\mathcal{R}_1} f} \in H^2(E),$ and we can define $f$ analytically on $\mathbb{E}^2 \setminus S$ as before: \[ f(z) = \Phi(z) \frac{1}{z_2} \overline{g(1/\bar{z})}. \] Then $f$ is analytic on $W^+ \cup W^-$ and $f=\Phi A_{\mathcal{R}_1}f $ on $X$. As in the proof of Theorem 1.5 in \cite{bk12}, we can use the distributional edge-of-the-wedge theorem, which appears as Theorem B in \cite{rudeow}, to extend $f$ to $\Omega_0.$ As before, by an application of Rudin's Theorem 4.9.1 in \cite{rud69}, we can analytically extend $f$ to $\Omega_1$, the set containing $X_1 \times \D$ and $\D \times X_2$ mentioned earlier. As $f$ is already holomorphic in $\D^2 \cup (\mathbb{E}^2 \setminus S),$ we can conclude that every $f \in \mathcal{H}(K_1)$ is holomorphic in $\Omega$.\\ \textbf{Claim 3: Points in $\Omega$ are points of bounded evaluation in $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2).$}\\ The proof for inner functions given in \cite{bk12} essentially goes through to give bounded point evaluations in $\Omega$. Recall from the previous section that points of $\D^2$ and $\mathbb{E}^2\setminus S$ are points of bounded evaluation for $\mathcal{H}(K_1)$ and $\mathcal{H}(K_2)$. 
The next step is to show that the set of points of bounded evaluation is relatively closed in $\Omega$. This follows using the uniform boundedness principle as in \cite{bk12}. To show that evaluation at points of $\Omega_0$ is bounded, we merely note as we did in \cite{bk12} that the proof of the edge-of-the-wedge theorem in \cite{rudeow} produces the extended values via an integral over a compact subset $K$ of $W^{+}\cup X \cup W^{-}$. Since evaluation at any point of $K$ is bounded in $\mathcal{H}(K_j)$ and since elements of $\mathcal{H}(K_j)$ are analytic in a neighborhood of $K$, \[ \sup\{ \|f(z)\|_{E_*}: z \in K\} < \infty \] for each $f \in \mathcal{H}(K_j)$ and therefore by the uniform boundedness principle there exists $M$ such that \[ \|f(z)\|_{E_*} \leq M \|f\|_{\mathcal{H}(K_j)} \qquad \forall \ f \in \mathcal{H}(K_j) \] and $z \in K$. So, since values of $f$ in $\Omega_0 $ are given by an integral of $f$ over $K$, it follows that evaluation at each point of $\Omega_0$ is bounded in $\mathcal{H}(K_j)$. Now consider the points in $\Omega_1.$ As Rudin's Theorem 4.9.1 in \cite{rud69} also constructs the extension of $f$ using values of $f$ at points in compact sets $K \subset \Omega_0$, the uniform boundedness principle implies that the points in $\Omega_1$ are also points of bounded evaluation.\\ $(iii) \Rightarrow (ii)$ is immediate. \\ Now consider $(ii) \Rightarrow (i)$. \\ First, we will show that there is a point $w\in \D^2$ where $\Phi(w)$ is invertible. To do this, take any sequence $\{z^n\} \subset \D^2$ converging to a point $x \in X \subset \mathbb{T}^2$. Since elements of $\mathcal{H}(K_j)$ extend continuously to $X$, for each fixed $f \in \mathcal{H}(K_j)$ the set \[ \{\|f(z^n)\|_{E_*}: n =1,2,\dots\} \] is bounded. Therefore by the uniform boundedness principle for each $j=1,2$ the set \[ \{ \|f(z^n)\|_{E_*}: f \in \mathcal{H}(K_j), \|f\|_{\mathcal{H}(K_j)}\leq 1, n=1,2,\dots\} \] is bounded by, say, $M>0$, and this is enough to show evaluation at $x\in X$ is bounded in $\mathcal{H}(K_j)$ and \[ \| K_j(z^n,z^n) \|_{E_* \rightarrow E_*} \leq M^2 \text{ for each } n \text{ and } \| K_j(x,x)\|_{E_* \rightarrow E_*} \leq M^2 \] for $j=1,2.$ It follows immediately that \begin{equation} \label{limsup} \limsup_{n\to \infty} (1-|z_1^n|^2) K_2(z^n,z^n) = 0 \ \ \text{ and } \ \ \limsup_{n\to \infty} (1-|z_2^n|^2) K_1(z^n,z^n) = 0. \end{equation} This shows that \[ \lim_{n\to \infty} I-\Phi(z^n) \Phi(z^n)^* = \lim_{n\to \infty} (1-|z_1^n|^2) K_2(z^n,z^n)+ (1-|z_2^n|^2) K_1(z^n,z^n) = 0 \] and therefore for some $N \in \mathbb{N}$, $I-\Phi(z^N) \Phi(z^N)^* \leq \frac{1}{2} I$, which implies $\Phi(z^N)$ is invertible. Set $w=z^N$. Since $\Phi$ satisfies \[ I-\Phi(z) \Phi(w)^* = (1-z_1\bar{w}_1) K_{2,w}(z) + (1-z_2 \bar{w}_2) K_{1,w} (z) \] we can extend $\Phi$ continuously to $X$ via the formula \[ \Phi(z) = (I-(1-z_1\bar{w}_1) K_{2,w}(z) - (1-z_2 \bar{w}_2) K_{1,w} (z))(\Phi(w)^*)^{-1} \] since the right hand side is assumed to be continuous. Finally, $\Phi$ is unitary on $X$ since for any $x\in X$, if we take a sequence $\{z^n\}$ in $\D^2$ converging to $x$ as above, then we will again get the result in \eqref{limsup}. However, now that we know $\Phi$ is continuous at $x$, \[ 0=\lim_{n\to \infty} I-\Phi(z^n) \Phi(z^n)^* = I-\Phi(x)\Phi(x)^*, \] which completes the proof. \end{proof}
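As a simple illustration of Theorem \ref{thm:extension} (a toy example, with scalar $E = E_* = \mathbb{C}$): take $\Phi(z) = z_1 z_2$, which is inner and extends continuously to $X = \mathbb{T}^2$ with unimodular values. Here $\Phi(1/\bar{z}) = 1/(\bar{z}_1 \bar{z}_2) \neq 0$ on $\mathbb{E}^2$, so $S = \emptyset$, and \eqref{eqn:phiextend} returns \[ \left[ \Phi(1/\bar{z})^* \right]^{-1} = \left( \frac{1}{z_1 z_2} \right)^{-1} = z_1 z_2, \] so the continuation produced by the theorem agrees with the polynomial itself, as it must.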
\subsection{Canonical Realizations} \label{sect:tfr} Unlike the previous section, we no longer assume $E,E_{*}$ are finite dimensional. Let $\Phi \in \mathcal{S}_1(E,E_*)$ and define its de Branges-Rovnyak space $\mathcal{H}_{\Phi}$ to be the Hilbert space with reproducing kernel \[ K_{\Phi}(z,w) := \frac{ I -\Phi(z) \Phi(w)^*} {1-z\bar{w}}.\] Then, $\Phi$ has an (almost) unique coisometric transfer function realization with state space equal to $\mathcal{H}_{\Phi}$ and colligation defined by \[ U := \left[ \begin{array}{cc} A & B \\ C & D \end{array} \right] : \left[ \begin{array}{c} \mathcal{H}_{\Phi} \\ E \end{array} \right] \rightarrow \left[ \begin{array}{c} \mathcal{H}_{\Phi} \\ E_* \end{array} \right] \] with block operators given by \[ \begin{aligned} A&: f(z) \mapsto \frac{f(z) - f(0)}{z}\ \ && B: e \mapsto \frac{\Phi(z) - \Phi(0)}{z} e \\ C &: f(z) \mapsto f(0) \ \ &&D: e \mapsto \Phi(0)e. \end{aligned} \] Then, $ \Phi(z) = D + Cz \left( I - Az \right)^{-1}B$, and this representation is unique up to a minimality condition and unitary equivalence \cite{bb11}. In two variables, transfer function realizations are more complicated and rarely unique. Traditionally, T.F.R.'s associated to $\Phi \in \mathcal{S}_2(E,E_*)$ are constructed using Agler kernels $(K_1,K_2)$ of $\Phi$. In \cite{bb11}, Ball-Bolotnikov studied T.F.R.'s defined using pairs of Agler kernels and obtained partial characterizations of the associated block operators $A$, $B$, $C$, and $D.$ Refined results about unitary T.F.R.'s for a subclass of $\mathcal{S}_d(\mathbb{D}^d)$ appear in \cite{bkvsv}; these are constructed in the related, but different setting of minimal augmented Agler decompositions. Nevertheless, open questions about the structure of Agler kernels often go hand in hand with open questions about the structure of T.F.R.'s. In this section, we use our previous analysis to clear up one such question. Specifically, we use the concrete Agler kernels $(K^{max}_1, K^{min}_2)$ to construct a coisometric T.F.R. with an explicit state space $\mathcal{M}$ and colligation $U.$ The construction answers a question posed by Ball and Bolotnikov in \cite{bb11}. \begin{rem}\label{rem:contfr}\textbf{Constructing Transfer Function Realizations.} There is a canonical way to obtain transfer function realizations from Agler kernels. To illustrate this method, let $(K_1,K_2)$ be Agler kernels of $\Phi$. Then, they satisfy \begin{equation} \label{eqn:agform2} I_{E_*} - \Phi(z) \Phi(w)^* = (1 -z_1 \bar{w}_1) K_2(z,w) + (1-z_2\bar{w}_2) K_1(z,w). \end{equation} Define the kernel functions $K_{j,w}\nu (z):= K_j(z,w) \nu$ and define the operator $V$ by \[ V: \begin{bmatrix} \bar{w}_1 K_{2,w} \nu \\ \bar{w}_2 K_{1,w} \nu \\ \nu \end{bmatrix} \mapsto \begin{bmatrix} K_{2,w} \nu \\ K_{1,w} \nu \\ \Phi(w)^* \nu \end{bmatrix} \quad \forall \ w \in \mathbb{D}^2, \ \nu \in E_*. \] Then (\ref{eqn:agform2}) guarantees that $V$ can be extended to an isometry mapping the space \beq \mathcal{D}_V := \bigvee_{w \in \D^2, \nu \in E_*} \begin{bmatrix} \bar{w}_1 K_{2,w} \nu \\ \bar{w}_2 K_{1,w} \nu \\ \nu \end{bmatrix} \subseteq \mathcal{H}(K_2) \oplus \mathcal{H}(K_1) \oplus E_* \eeq onto the space \[ \mathcal{R}_V := \bigvee_{w \in \D^2, \nu \in E_*} \begin{bmatrix} K_{2,w} \nu \\ K_{1,w} \nu \\ \Phi(w)^* \nu \end{bmatrix} \subseteq \mathcal{H}(K_2) \oplus \mathcal{H}(K_1) \oplus E. \]
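To see concretely how \eqref{eqn:agform2} enters, note that by the reproducing property $\LL K_{j,w}\nu, K_{j,w'}\nu' \RR_{\mathcal{H}(K_j)} = \LL K_j(w',w)\nu, \nu' \RR_{E_*}$, the prescribed images have inner product \[ \LL \big( K_2(w',w) + K_1(w',w) + \Phi(w')\Phi(w)^* \big)\nu, \nu' \RR_{E_*}, \] while the corresponding preimages have inner product \[ \LL \big( w_1'\bar{w}_1 K_2(w',w) + w_2'\bar{w}_2 K_1(w',w) + I \big)\nu, \nu' \RR_{E_*}; \] these agree for all $w, w' \in \D^2$ and $\nu, \nu' \in E_*$ precisely because \eqref{eqn:agform2}, evaluated at $(z,w) = (w',w)$, rearranges to $K_2 + K_1 + \Phi(w')\Phi(w)^* = w_1'\bar{w}_1 K_2 + w_2'\bar{w}_2 K_1 + I$.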
Transfer function realizations with state space $\mathcal{H}(K_2) \oplus \mathcal{H}(K_1)$ are obtained by extending $V$ to a contraction from \[ \mathcal{H}(K_2) \oplus \mathcal{H}(K_1) \oplus E_* \rightarrow \mathcal{H}(K_2) \oplus \mathcal{H}(K_1) \oplus E\] and setting $U=V^*.$ In Ball-Bolotnikov \cite{bb11}, such a $U$ is called a \emph{canonical functional model (c.f.m.) colligation} of $\Phi$ associated to $(K_1, K_2).$ Similarly, coisometric transfer function realizations are obtained by extending $V$ to an isometry mapping \[ \mathcal{H}(K_2) \oplus \mathcal{H}(K_1) \oplus \mathcal{H} \oplus E_* \rightarrow \mathcal{H}(K_2) \oplus \mathcal{H}(K_1) \oplus \mathcal{H} \oplus E,\] where $\mathcal{H}$ is an arbitrary infinite dimensional Hilbert space (not the de Branges-Rovnyak model space of earlier sections), only added in when required, and $U$ is defined to be $V^*.$ \end{rem} \begin{question} Let $\Phi \in \mathcal{S}_2(E, E_*)$. It is currently an open question whether there always exists a coisometric transfer function realization of $\Phi$ with state space $\mathcal{H}(K_2) \oplus \mathcal{H}(K_1)$ for every pair of Agler kernels $(K_1,K_2)$. In Section 3.2 of \cite{bb11}, Ball-Bolotnikov posed the following related question, which was originally stated in the $d$-variable setting: {\begin{center} Let $\Phi \in \mathcal{S}_2(E, E_*)$. Is there \emph{any} pair of Agler kernels $(K_1,K_2)$ of $\Phi$ such that $\Phi$ has a \emph{coisometric} c.f.m. colligation associated to $(K_1, K_2)$? \end{center}} This is equivalent to asking if the construction in Remark \ref{rem:contfr} gives a coisometric transfer function realization of $\Phi$ with state space $\mathcal{H}(K_2) \oplus \mathcal{H}(K_1).$ \end{question} The following theorem answers that question in the affirmative. \begin{thm} \label{thm:canonicalcmf} Let $\Phi \in \mathcal{S}_2(E,E_*)$ and consider its Agler kernels $(K^{max}_1, K^{min}_2).$ The construction in Remark \ref{rem:contfr} gives a unique, coisometric transfer function realization of $\Phi$ with state space $\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1).$ \end{thm} \begin{proof} Consider the construction in Remark \ref{rem:contfr} using Agler kernels $(K^{max}_1, K^{min}_2)$. The operator $V$ is initially defined by \beq V: \begin{bmatrix} \bar{w}_1 K^{min}_{2,w} \nu \\ \bar{w}_2 K^{max}_{1,w} \nu \\ \nu \end{bmatrix} \mapsto \begin{bmatrix} K^{min}_{2,w} \nu \\ K^{max}_{1,w} \nu \\ \Phi(w)^* \nu \end{bmatrix} \quad \forall \ w \in \mathbb{D}^2, \ \nu \in E_* \eeq and extended to an isometry on the space \beq \mathcal{D}_V := \bigvee_{w \in \D^2, \nu \in E_*} \begin{bmatrix} \bar{w}_1 K^{min}_{2,w} \nu \\ \bar{w}_2 K^{max}_{1,w} \nu \\ \nu \end{bmatrix} \subseteq \mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1) \oplus E_*. \eeq Then, transfer function realizations with state space $\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1)$ are obtained by extending $V$ to a contraction on $\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1) \oplus E_*$. We will show $\mathcal{D}_V = \mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1) \oplus E_*.$ Then, the result will follow because $V$ will already be an isometry on $\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1) \oplus E_*$ and so we can immediately set $U=V^*$, which then satisfies $UU^* = V^*V = I$. Define \[ \mathcal{D} := \bigvee_{w \in \D^2, \nu \in E_*} \begin{bmatrix} \bar{w}_1 K^{min}_{2,w} \nu \\ \bar{w}_2 K^{max}_{1,w}\nu \end{bmatrix} \subseteq \mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1).
\] Examining the case $w=0$ shows that $\mathcal{D}_V$ coincides with $\mathcal{D} \oplus E_*$, so it suffices to show $\mathcal{D}= \mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1).$ Assume \beq \begin{bmatrix} f_2 \\ f_1 \end{bmatrix} \in \left[ \mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1) \right] \ominus \mathcal{D}. \eeq Then for each $w \in \mathbb{D}^2$ and $\nu \in E_*$, \begin{align*} 0 &= \LL \begin{bmatrix} f_2 \\ f_1 \end{bmatrix}, \begin{bmatrix} \bar{w}_1 K^{min}_{2,w} \nu \\ \bar{w}_2 K^{max}_{1,w}\nu \end{bmatrix} \RR_{\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1)} \\ && \\ &= w_1 \LL f_2, K^{min}_{2,w}\nu \RR_{\mathcal{H}(K^{min}_2)}+ w_2 \LL f_1, K^{max}_{1,w}\nu \RR_{\mathcal{H}(K^{max}_1)} \\ &\\ & = \LL w_1f_2(w) + w_2f_1(w), \nu \RR_{E_*}, \end{align*} which implies $Z_1f_2 + Z_2f_1 = 0.$ Thus, there is some $F \in H^2(E_*)$ such that $f_1 =Z_1 F.$ Now, since $f_1 \in \mathcal{H}(K^{max}_1)$, there is a $g_1 \in Z_1 L^2_{--}(E)$ such that \begin{equation} \label{eqn:maxcon} \begin{bmatrix} f_1 \\ g_1 \end{bmatrix} \in \mathcal{R}_1 \ominus Z_1 \mathcal{R}.\end{equation} This also gives $g_1 - \Phi^* f_1 \in \Delta L^2(E)$ and a $G \in L^2_{--}(E)$ with $g_1 = Z_1 G.$ Since $\Delta L^2(E)$ is invariant under $Z^*_1$, it is clear that $G - \Phi^* F\in \Delta L^2(E)$ as well. Then \beq \begin{bmatrix} f_1 \\ g_1 \end{bmatrix} = Z_1 \begin{bmatrix} F \\ G \end{bmatrix} \text{ and } \begin{bmatrix} F \\ G \end{bmatrix} \in \mathcal{R}. \eeq Given this, (\ref{eqn:maxcon}) forces $\begin{bmatrix} f_1 \\ g_1 \end{bmatrix}$ to lie in $Z_1 \mathcal{R}$ and simultaneously be orthogonal to $Z_1 \mathcal{R}$, so $f_1 \equiv 0$; the relation $Z_1 f_2 = -Z_2 f_1$ then gives $f_2 \equiv 0$ and $\mathcal{D} = \mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1).$ \end{proof} \begin{rem}\textbf{The Canonical Block Operators.} Let $U$ be the operator associated to the transfer function realization given in Theorem \ref{thm:canonicalcmf}. Much can be said about its block operators $A,B,C,D$. In the setting of general $(K_1,K_2)$, much of this analysis already appears in \cite{bb10} and \cite{bb11}. We will first give the formulas for $A,B,C,D$ and then discuss the derivations. Specifically, for every $ f := \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} \in \mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1)$ and $\eta \in E$, \[ C: \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} \mapsto f_1(0) + f_2(0) \ \text{ and } \ D: \eta \mapsto \Phi(0) \eta.\] For $A$ and $B$, let us first simplify notation by setting \[ \begin{bmatrix} (Af)_1 \\ (Af)_2 \end{bmatrix} :=A \begin{bmatrix} f_1 \\ f_2 \end{bmatrix} \ \text{ and } \begin{bmatrix} (B \eta)_1 \\ (B \eta)_2 \end{bmatrix} := B \eta.
\] Then $(Af)_2$ and $(B \eta)_2$ are the unique functions in $\mathcal{H}(K^{max}_1)$ satisfying \begin{align*} \left(Af \right)_2 (0, w_2) &= \frac{ f_1(0, w_2) - f_1(0) + f_2(0,w_2)-f_2(0)}{w_2} \\ \left( B \eta \right)_2(0,w_2) & = \frac{\Phi(0,w_2)- \Phi(0)}{w_2} \eta, \end{align*} for all $w_2 \in \mathbb{D} \setminus \{0\},$ and $(Af)_1$ and $(B \eta)_1$ are the unique functions in $\mathcal{H}(K^{min}_2)$ satisfying \begin{align*} \left( Af \right)_1(w) &= \frac{f_1(w) - f_1(0) + f_2(w)-f_2(0) -w_2 \left(Af \right)_2 (w)}{w_1} \\ \left( B \eta \right)_1(w) & = \frac{\left( \Phi(w)- \Phi(0) \right)\eta - w_2 \left( B \eta \right)_2(w)}{w_1}, \end{align*} for all $w \in \mathbb{D}^2$ with $w_1 \ne 0.$ The results for $C$ and $D$ follow because, by definition, \[ U^* = \begin{bmatrix} A^* & C^* \\ B^* & D^* \end{bmatrix} : \begin{bmatrix} \bar{w}_1 K_{2,w}^{min} \nu \\ \bar{w}_2 K_{1,w}^{max} \nu \\ \nu \end{bmatrix} \mapsto \begin{bmatrix} K_{2,w}^{min} \nu \\ K_{1,w}^{max} \nu \\ \Phi(w)^* \nu \end{bmatrix} \quad \forall \ w \in \mathbb{D}^2, \ \nu \in E_*. \] Setting $w=0$ immediately implies that \[ C^*: \nu \mapsto \begin{bmatrix} K^{min}_{2,0}\nu \\ K^{max}_{1,0} \nu \end{bmatrix} \text{ and } D^*: \nu \mapsto \Phi(0)^*\nu \] for all $\nu \in E_*$. Then the calculations \[ \LL C \begin{bmatrix} f_1 \\ f_2 \end{bmatrix}, \nu \RR_{E_*} =\LL \begin{bmatrix} f_1 \\ f_2 \end{bmatrix}, \begin{bmatrix} K^{min}_{2,0} \nu \\ K^{max}_{1,0}\nu \end{bmatrix} \RR_{\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1)} = \LL f_1(0) + f_2(0), \nu \RR_{E_*} \] and \[ \LL D \eta, \nu \RR_{E_*} = \LL \eta, D^* \nu \RR_{E} =\LL \eta, \Phi(0)^* \nu \RR_{E} =\LL \Phi(0) \eta, \nu \RR_{E_*} \] give the formulas for $C$ and $D$. Moreover, the results about $C^*$ and $D^*$ imply that \[ A^*: \begin{bmatrix} \bar{w}_1 K_{2,w}^{min} \nu \\ \bar{w}_2 K_{1,w}^{max} \nu \end{bmatrix} \mapsto \begin{bmatrix} \left( K_{2,w}^{min} - K^{min}_{2,0} \right) \nu \\ \left( K_{1,w}^{max} - K^{max}_{1,0} \right) \nu \end{bmatrix}\] and \[ B^*: \begin{bmatrix} \bar{w}_1 K_{2,w}^{min} \nu \\ \bar{w}_2 K_{1,w}^{max} \nu \end{bmatrix} \mapsto \left( \Phi(w)^* - \Phi(0)^* \right) \nu.\] Then \[ \begin{aligned} \LL w_1(Af)_1 (w) + w_2(Af)_2 (w), \nu \RR_{E_*} &= \LL Af, \begin{bmatrix} \bar{w}_1 K_{2,w}^{min} \nu \\ \bar{w}_2 K_{1,w}^{max} \nu \end{bmatrix}\RR_{\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1)}\\ &\\ & = \LL \begin{bmatrix} f_1 \\ f_2 \end{bmatrix}, \begin{bmatrix} \left( K_{2,w}^{min} - K^{min}_{2,0} \right) \nu \\ \left( K_{1,w}^{max} - K^{max}_{1,0} \right) \nu \end{bmatrix} \RR_{\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1)} \\ &\\ & = \LL f_1(w) - f_1(0) + f_2(w) - f_2(0), \nu \RR_{E_*}, \end{aligned} \] and similarly, \[ \LL w_1(B \eta)_1 (w) + w_2(B \eta)_2 (w), \nu \RR_{E_*} = \LL \left( \Phi(w) - \Phi(0) \right)\eta, \nu \RR_{E_*}. \] Therefore, we have \begin{align} \label{eqn:gleason1} w_1 \left( Af \right)_1(w) + w_2 \left(Af \right)_2 (w) &= f_1(w) - f_1(0) + f_2(w)-f_2(0) \\ \label{eqn:gleason2} w_1 \left( B \eta \right)_1(w) + w_2 \left( B \eta \right)_2(w) & = \left( \Phi(w)- \Phi(0) \right)\eta. \end{align} Operators that solve $(\ref{eqn:gleason1})$ or $(\ref{eqn:gleason2})$ are said to solve the structured Gleason problem for $\mathcal{H}(K^{min}_2) \oplus \mathcal{H}(K^{max}_1)$ or for $\Phi$, respectively. In general, such operators are not unique. However, in this situation, $A$ and $B$ are uniquely determined. The proof of this rests on two observations.
First, when $w_1=0$ and $w_2 \ne 0,$ $(\ref{eqn:gleason1})$ and $(\ref{eqn:gleason2})$ become \begin{align} \label{eqn:gleason3} \left(Af \right)_2 (0, w_2) &= \frac{ f_1(0, w_2) - f_1(0) + f_2(0,w_2)-f_2(0)}{w_2} \\ \label{eqn:gleason4} \left( B\eta \right)_2(0,w_2) & = \frac{ \Phi(0,w_2)- \Phi(0)}{w_2}\eta. \end{align} It is also true that the set $\{ (0,w_2) : w_2 \in \mathbb{D} \setminus \{0\}\}$ is a set of uniqueness for $\mathcal{H}(K^{max}_1).$ Indeed, suppose two functions $g_1, g_2 \in \mathcal{H}(K^{max}_1)$ satisfy $g_1(0,w_2) = g_2(0,w_2)$ for all $w_2 \ne 0$. This immediately implies $g_1(0,0)=g_2(0,0)$ and \[ g_1- g_2 =Z_1 h \] for some $h \in H^2(E_*).$ Arguments identical to those in the proof of Theorem \ref{thm:canonicalcmf} show that $h$ must be zero, so $g_1=g_2.$ As $\left(Af \right)_2$ and $\left( B \eta \right)_2$ are in $\mathcal{H}(K^{max}_1)$, they must be the unique such functions satisfying $(\ref{eqn:gleason3})$ and $(\ref{eqn:gleason4})$ respectively. Then, the other components $\left(Af \right)_1$ and $\left( B \eta \right)_1$ are uniquely determined by $(\ref{eqn:gleason1})$ and $(\ref{eqn:gleason2}).$ In one variable, $Af$ and $B \eta$ can be written explicitly in terms of $f$ and $\eta.$ Given that, our characterizations of $A$ and $B$ seem slightly unsatisfying. This motivates the question \begin{question} Assume $g \in \mathcal{H}(K^{max}_1)$. Is there an explicit way to construct $g$ using only the function $g(0,w_2)?$ \end{question} A clean answer would also provide nice formulas for the operators $A$ and $B$. It seems possible that the refined results in \cite{bkvsv} about unitary T.F.R.'s associated to minimal augmented Agler decompositions might suggest methods of answering this question. \end{rem} \section{Appendix: Vector valued RKHS's} \label{sect:opkernels} In this section, we record several facts about vector valued reproducing kernel Hilbert spaces that were used in earlier sections. The results are well-known in the scalar valued case. See, for example, \cite{aro50}, \cite{bv03b}, Chapter 2 in \cite{alp01}, and Chapter 2 in \cite{ampi}. We outline how the needed vector valued results follow from the known scalar valued results. Let $\Omega$ be a set and $E$ be a separable Hilbert space. We will frequently use the following observation: \begin{rem} For each function $f: \Omega \rightarrow E$ there is an associated scalar valued function $\tilde{f}: \Omega \times E \rightarrow \mathbb{C}$ defined as follows: \[ \tilde{f}(z, \eta) := \LL f(z), \eta \RR_{E}. \] If $f,g: \Omega \rightarrow E$ are functions with $\tilde{f} \equiv \tilde{g}$, then $f \equiv g.$ \end{rem} \begin{defn} \label{defn:scalarhs} Let $\mathcal{H}(K)$ be a reproducing kernel Hilbert space of $E$ valued functions on $\Omega$. For $w\in \Omega$ and $\nu \in E$, define the function $K_w \nu:= K( \cdot, w)\nu.$ An associated reproducing kernel Hilbert space of scalar valued functions on $\Omega \times E$ can be defined as follows: Define the set of functions \[ \mathcal{H} := \left \{ \tilde{f}: f \in \mathcal{H}(K) \right \} \] and equip $\mathcal{H}$ with the inner product \[ \LL \tilde{f}, \tilde{g} \RR_{\mathcal{H}} = \LL f ,g \RR_{\mathcal{H}(K)}.
\] It is routine to show that $\mathcal{H}$ is a Hilbert space with this inner product, and since \[ \tilde{f}(w,\nu) = \LL f(w), \nu \RR_{E} = \LL f, K_w \nu \RR_{\mathcal{H}(K)} = \LL \tilde{f}, \widetilde{ K_w \nu} \RR_{\mathcal{H}}, \] $\mathcal{H}$ is a reproducing kernel Hilbert space with reproducing kernel \[ L \big( (z,\eta), (w,\nu) \big) := \widetilde{K_w\nu} ( z, \eta) = \LL K(z,w) \nu, \eta \RR_{E} = \eta^* K(z,w) \nu. \] Then $f \in \mathcal{H}(K)$ if and only if $\tilde{f} \in \mathcal{H}(L)$. It is also clear that $\|f \|_{\mathcal{H}(K)} = \| \tilde{f}\|_{\mathcal{H}(L)}.$ \end{defn} The following results are well-known for scalar valued reproducing kernel Hilbert spaces and follow easily for vector valued reproducing kernel Hilbert spaces. \begin{thm} \label{thm:kerdiff} Let $\mathcal{H}(K)$ and $\mathcal{H}(K_1)$ be reproducing kernel Hilbert spaces of $E$ valued functions on $\Omega$. Then $\mathcal{H}(K_1) \subseteq \mathcal{H}(K)$ contractively if and only if \[ K(z,w) - K_1(z,w) \text{ is a positive kernel.} \] \end{thm} \begin{proof} As in Definition \ref{defn:scalarhs}, consider the Hilbert spaces $\mathcal{H}(L)$ and $\mathcal{H}(L_1)$ of scalar valued functions on $\Omega \times E$ with reproducing kernels given by \[ L \big( (z,\eta), (w,\nu) \big) :=\eta^* K(z,w) \nu \ \ \text{ and } L_1 \big( (z,\eta), (w,\nu) \big) :=\eta^* K_1(z,w) \nu. \] It is routine to show that $\mathcal{H}(K_1) \subseteq \mathcal{H}(K)$ contractively if and only if $\mathcal{H}(L_1) \subseteq \mathcal{H}(L)$ contractively. It follows from well-known scalar results, which appear on page 354 of \cite{aro50}, that $\mathcal{H}(L_1) \subseteq \mathcal{H}(L)$ contractively if and only if \[ L(z,w) - L_1(z,w) \text{ is a positive kernel.} \] The result follows from the fact that $L(z,w) - L_1(z,w)$ is a positive kernel if and only if $K(z,w)-K_1(z,w)$ is a positive kernel. \end{proof} Similarly, the following two results can be deduced from the scalar-valued case: \begin{thm} \label{thm:kermult} Let $\mathcal{H}(K)$ be a reproducing kernel Hilbert space of $E$ valued functions on $\Omega$ and let $\psi: \Omega \rightarrow \mathbb{C}$. Then $\psi$ is a multiplier of $\mathcal{H}(K)$ with multiplier norm bounded by one if and only if \[ \big( 1 - \psi(z) \overline{ \psi(w)} \big) K(z,w) \text{ is a positive kernel}. \] \end{thm} \begin{proof} When we say ``$\psi$ is a multiplier of $\mathcal{H}(K)$,'' we mean that $\psi \otimes I_{\mathcal{H}(K)}$ maps $\mathcal{H}(K)$ into $\mathcal{H}(K).$ Now, using the definition of $\mathcal{H}(L)$, it is easy to show that $\psi$ is a multiplier of $\mathcal{H}(K)$ with multiplier norm bounded by one if and only if $\psi$ is a multiplier of $\mathcal{H}(L)$ with multiplier norm bounded by one. By the analogous scalar valued result, which appears as Corollary 2.3.7 in \cite{ampi}, it follows that $\psi$ is a multiplier of $ \mathcal{H}(L)$ with multiplier norm bounded by one if and only if \[ \big( 1 - \psi(z) \overline{ \psi(w)} \big) L \big( (z, \eta), (w, \nu) \big) \text{ is a positive kernel}. \] The result then follows by using the definition of a positive kernel to show that $\big( 1 - \psi(z) \overline{ \psi(w)} \big) L \big( (z, \eta), (w, \nu) \big)$ is a positive kernel if and only if $\big( 1 - \psi(z) \overline{ \psi(w)} \big) K(z,w) $ is a positive kernel. \end{proof} \begin{thm} \label{thm:kersum} Let $\mathcal{H}(K_1), \mathcal{H}(K_2)$ be reproducing kernel Hilbert spaces of $E$ valued functions on $\Omega$.
Then $\mathcal{H}(K_1 + K_2)$ is precisely the Hilbert space composed of the set of functions \[ \mathcal{H}(K_1) + \mathcal{H}(K_2) := \left \{ f_1 +f_2 : f_j \in \mathcal{H}(K_j) \right \} \] equipped with the norm \[ \| f \|^2_{\mathcal{H}(K_1+K_2)} = \min_{\substack{ f = f_1 + f_2 \\ f_j \in \mathcal{H}(K_j)}} \|f_1\|^2_{\mathcal{H}(K_1)} + \|f_2 \|^2_{\mathcal{H}(K_2)}. \] \end{thm} \begin{proof} As before, consider the related scalar valued reproducing kernel Hilbert spaces $\mathcal{H}(L_1)$ and $\mathcal{H}(L_2)$, where \[ L_1 \big( (z,\eta), (w,\nu) \big) :=\eta^* K_1(z,w) \nu \ \ \text{ and } L_2 \big( (z,\eta), (w,\nu) \big) :=\eta^* K_2(z,w) \nu. \] The analogous scalar valued result, which appears on page 353 in \cite{aro50}, states that $\mathcal{H}(L_1 + L_2)$ is precisely the Hilbert space composed of the set of functions \[ \mathcal{H}(L_1) + \mathcal{H}(L_2) := \left \{ f_1 +f_2 : f_j \in \mathcal{H}(L_j) \right \} \] equipped with the norm \[ \| f \|^2_{\mathcal{H}(L_1+L_2)} = \min_{\substack{ f = f_1 + f_2 \\ f_j \in \mathcal{H}(L_j)}} \|f_1\|^2_{\mathcal{H}(L_1)} + \|f_2 \|^2_{\mathcal{H}(L_2)}. \] Using this and the connections between $\mathcal{H}(L_j)$ and $\mathcal{H}(K_j)$, it is easy to deduce the desired result. The details are left as an exercise.\end{proof}
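The notion of a positive kernel used above can also be checked numerically: on any finite point set, ``$K - K_1$ is a positive kernel'' means exactly that the difference of Gram matrices is positive semidefinite. The following short sketch (our own illustration in Python, using the standard Szeg\H{o} kernel $K(z,w)=(1-z\bar{w})^{-1}$ of the Hardy space $H^2$ and the constant kernel $K_1 \equiv 1$ as example kernels, neither of which is taken from the text) illustrates Theorem \ref{thm:kerdiff} in this finite-dimensional form: the space of constants sits contractively inside $H^2$, so $K - K_1$ should be positive.
\begin{verbatim}
# Finite-point check that K - K1 is a positive kernel (illustration only):
# K = Szego kernel of H^2 on the unit disk, K1 = 1 (RKHS of constants).
import numpy as np

rng = np.random.default_rng(0)
n = 20
# random sample points in the open unit disk
z = 0.9 * np.sqrt(rng.uniform(size=n)) \
        * np.exp(2j * np.pi * rng.uniform(size=n))

K  = 1.0 / (1.0 - np.outer(z, np.conj(z)))  # Gram matrix of the Szego kernel
K1 = np.ones((n, n))                        # Gram matrix of K1 = 1

# Hermitian difference; smallest eigenvalue >= 0 up to roundoff
print(np.linalg.eigvalsh(K - K1).min())
\end{verbatim}
Indeed, $K(z,w) - K_1(z,w) = z\bar{w}/(1-z\bar{w})$ is a Schur product of positive kernels, so the printed minimal eigenvalue is nonnegative up to roundoff; this is the finite-dimensional shadow of the contractive containment of the constants in $H^2$.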
\section{Introduction} \label{sec:intro} Surfactant molecules in water self-assemble into various structures such as micelles and bilayer membranes, which display a rich variety of rheological properties under flow. \cite{lars99b} Even if the basic structure remains a bilayer membrane, its mesoscale organization can assume several different states, such as the fluid $L_\alpha$ or ripple $P_\beta$ phase. Under shear flow, lamellae can be oriented parallel or perpendicular to the shear-gradient direction. Twenty years ago, Diat and Roux first discovered closely-packed multi-lamellar vesicle (MLV) structures, the so-called onion phase, in nonionic surfactant-water mixtures under shear flow. \cite{93Diat,93RouxEPL,95Diat} In the last two decades, this onion structure has been studied experimentally using light, \cite{93Diat,95Diat,02Panizza,09Richtering,10Suganuma,12Fujii} neutron, \cite{93Diat,95Diat,03Nettesheim} and X-ray scattering, \cite{10Suganuma,11Ito,12Fujii} and also by freeze-fracture electron microscopy, \cite{02Panizza} and the rheo-NMR method. \cite{08Medronho,09Medronho,10Medronho} Its rheology has been of great interest. \cite{93RouxEPL,01Richtering,03Nettesheim,07Miyazawa,09Richtering,12Fujii} Typically, a critical shear rate $\dot{\gamma}_c$ separates the lamellae and onion phases, where the latter phase consists of mono-disperse onions containing hundreds of lamellar layers. The onion radius changes reversibly with the shear rate $\dot{\gamma}$ and is described by a unique decreasing function $R(\dot{\gamma})$. \cite{05Richtering} Time-resolved small-angle neutron scattering experiments have revealed that a two-dimensional intermediate structure is formed during the lamellar-to-onion transition with increasing shear rate. \cite{03Nettesheim} A cylindrical or wavy lamellar structure was speculated to be the transient intermediate structure, but could not be distinguished from the scattering pattern alone. Recent small-angle X-ray scattering experiments with increasing temperature at constant shear rate also indicate a similar pattern around the lamellar-to-onion transition. \cite{11Ito} Thus, there is some experimental evidence of a transient state, but its structure is still under debate. An alternative experimental approach to gain insight into the structural changes is to characterize defects observed in the lamellar state for moderate shear rates, both in surfactant membranes \cite{10Medronho,11Medronho} and in thermotropic liquid crystals. \cite{10Fujii,11Fujii} It is also worth mentioning that stable cylindrical structures on a ten-micrometer length scale are observed when strong shear flow is applied in the lamellar-sponge coexistence state. \cite{07Miyazawa} Several theoretical attempts have been made to tackle this complex problem of structural evolution under shear flow, which consider either an instability of the lamellar phase due to undulations \cite{ZG,02Olmsted} or the break-up of droplets. \cite{96vdLinden,93vdLinden} Recently, a ``dynamical'' free energy of MLVs under shear flow has been proposed, \cite{LuPRL} which takes into account the slow modes induced by the solvent between the membranes together with their bending and stretching forces. The scaling relations for the MLV size and the terminal shear rate are predicted in agreement with the experiments of Refs.~\onlinecite{93Diat,93RouxEPL}.
In these theories, while the hydrodynamic effects of the solvent are taken into account, the analysis is performed for geometrically simple structures, such as spherical onions or planar lamellae. Thus, the kinetic process of the transformation from the lamellar to the onion phase has so far not been investigated theoretically. In this paper, we study the detailed structural evolution of surfactant membranes under simple shear flow using large-scale particle simulations. A few simulations have been performed for the formation of lamellar phases in shear flow, but onion formation has not been addressed so far. Oriented lamellae have been obtained in simulations of a coarse-grained molecular model for lipids, \cite{02Guo,04Soddemann,07Guo} while defect dynamics has been investigated in simulations of a phase-field model of a smectic-A system. \cite{12Coveney} Onion and intermediate states have large-scale structures of the order of micrometers, which are beyond typical length scales accessible to molecular dynamics (MD) simulations of coarse-grained surfactant molecules. In our study, we employ a meshless-membrane model, \cite{NogRev,Nog06PRE,Nog06JCP,11Shiba} where a membrane particle represents not a surfactant molecule but rather a patch of bilayer membrane -- in order to capture the membrane dynamics on a micrometer length scale. This model is well suited to study membrane dynamics accompanied by topological changes. Alternatively, membranes can be modeled as triangulated surfaces, \cite{NogRev,gg:gomp04c,fedo13} which, however, require discrete bond reconnections to describe topological changes. \cite{gg:gomp98c,gg:gomp04c,gg:gomp12g} Since the first meshless-membrane model was proposed in 1991, \cite{Drouffe} several further meshless-membrane models have been developed. \cite{Nog06PRE,Nog06JCP,11Shiba,popo08,09Kohyama,2010Yuan1,10PLGeissler} In contrast to other meshless-membrane approaches, our models \cite{Nog06PRE,Nog06JCP,11Shiba} are capable of separately controlling the membrane bending rigidity $\kappa$ and the line tension $\Gamma$ of membrane edges. Previously, \cite{NogRev,Nog06JCP} we have combined our meshless-membrane model with multi-particle collision (MPC) dynamics, \cite{kapr08,gg:gomp09a} a particle-based hydrodynamic simulation technique. With MPC, the hydrodynamic interactions are properly taken into account, but due to the frictional coupling of membrane and solvent, solvent particles can penetrate through the membrane. Here, we extend the meshless-membrane model into an explicit-solvent simulation model, in which the fluid particles interact with each other and with the membrane via short-ranged repulsive potentials, so that the solvent can hardly penetrate the membrane, and simulate it with dissipative particle dynamics (DPD), another hydrodynamic simulation technique. The simulation model and methods are introduced in Sec.~\ref{sec:model}. Then basic membrane properties, including bending rigidity and line tension, are described in Sec.~\ref{sec:prop}. In Sec.~\ref{sec:results}, structure formation in surfactant-water mixtures under shear flow is investigated for a variety of shear rates $\dot{\gamma}$ and membrane volume fractions $\varphi$. At high $\dot{\gamma}$ and high $\varphi$, a novel structure of rolled-up membranes oriented in the flow direction is found. A summary and some perspectives are given in Sec.~\ref{sec:sum}.
\section{Model and Methods} \label{sec:model} \subsection{Coarse-Grained Model and Interaction Potentials} To simulate the structure formation in surfactant-membrane systems, we employ a meshless-membrane model with explicit solvent. In this model, two types of particles $\mathcal{A}$ and $\mathcal{B}$ are employed, which denote membrane and solvent particles, respectively. The numbers of these particles are $N_\mathcal{A}$ and $N_\mathcal{B}$, which define the particle density $\phi=(N_\mathcal{A}+N_\mathcal{B})/V$ -- where $V$ is the volume of the simulation box -- and the membrane volume fraction $\varphi=N_\mathcal{A}/(N_\mathcal{A}+N_\mathcal{B})$. The particles interact via a potential $U$, which consists of repulsive, attractive, and curvature interactions, $U_{\rm rep},\ U_{\rm att},$ and $U_\alpha$, respectively, \begin{equation} \frac{U}{k_{\rm B}T} = \sum_{i<j} U_{\rm rep} + \sum_{i\in \mathcal{A}} \epsilon U_{\rm att} + k_\alpha U_\alpha. \end{equation} Here, the first sum, for the repulsive interactions, is taken over all pairs of particles, the second only over the membrane particles. All neighbor particle pairs interact via the short-ranged repulsive potential \begin{equation} U_{\rm rep} = \left\{ \begin{array}{ll} \epsilon_c \left( \frac{\sigma}{r_{ij}} \right)^{12} - B & (r_{ij}<r_c^{\rm rep}) \\ 0 & (r_{ij} \ge r_c^{\rm rep}) \end{array} \right. \end{equation} where $r_{ij}$ is the distance between particles $i$ and $j$. This potential is cut off at a distance $r_c^{\rm rep} = 3.2\sigma$. The length $\sigma$, representing the particle diameter, is employed as the length unit. The constant $B$ is chosen such as to ensure the continuity of the potential at $r=r_c^{\rm rep}$. The (dimensionless) potential strength is set to $\epsilon_c=4$. To favor the assembly of membrane particles into smoothly curved sheets in three-dimensional (3D) space, membrane particles interact via the additional potentials $U_{\rm att}$ and $U_\alpha$, which have been introduced in the implicit-solvent version of the model previously. \cite{Nog06PRE,Nog06JCP,11Shiba} With these potentials, the membrane particles self-assemble into a single-layer sheet, which is a model representation of a bilayer membrane. Here, the attractive interaction is given by \begin{equation} U_{\rm att} = 0.25 \ln \{ 1+ \exp [-4 (\rho_i -\rho^* ) ] \} - C, \end{equation} which is a function of the local density $\rho_i$ of the membrane particles defined by \begin{equation} \rho_i = \sum_{j\in \mathcal{A},\, j\ne i} f_{\rm cut} (r_{ij}/ \sigma ). \label{eq:rho} \end{equation} $C=0.25 \ln [1+\exp (4\rho^*)]$ is chosen such that $U_{\rm att}(0) = 0$. The cutoff function $f_{\rm cut}$ in Eq.~(\ref{eq:rho}) is a $C^\infty$ function represented as \begin{equation} f_{\rm cut}(s) = \left\{ \begin{array}{ll} \exp \left[ a (1+ \frac{1}{ (|s| /s_{\rm cut} )^n -1} ) \right] & (s<s_{\rm cut}) \\ 0 & (s\ge s_{\rm cut}) \end{array} \right. \end{equation} where $n=12$ and $a=\ln(2) \{(s_{\rm {cut}}/s_{\rm {half}})^n-1\}$ with $s_{\rm cut}=2.1$ and $s_{\rm {half}}=1.8$ are used. We set $\rho^* = 6$ to study a fluid membrane in an explicit solvent. The curvature potential \begin{equation} U_\alpha = \sum_{i\in\mathcal{A}} \alpha_{\rm pl} (\bm{r}_i) \end{equation} is introduced to incorporate the membrane bending rigidity. Here, the aplanarity $\alpha_{\rm pl}$ provides a measure for the degree of deviation of the membrane particle alignment from a planar reference state.
It is defined by \begin{equation} \alpha_{\rm pl} = \frac{9D_{\rm w}}{T_{\rm w}M_{\rm w}} = \frac{9\lambda_1 \lambda_2 \lambda_3}{(\lambda_1+\lambda_2+\lambda_3) (\lambda_1 \lambda_2 + \lambda_2\lambda_3 +\lambda_3 \lambda_1)}, \end{equation} where $\lambda_1 \le \lambda_2 \le \lambda_3$ are the three eigenvalues of the local gyration tensor $a_{\alpha\beta}$ of the membrane near particle $i$, which is defined as $a_{\alpha\beta} = \sum_{j\in\mathcal{A}} (\alpha_j -\alpha_G)(\beta_j -\beta_G) w_{\rm cv} (r_{ij})$, with $\alpha,\beta \in \{ x,y,z\}$. Here, $\bm{r}_G = \sum_{j\in\mathcal{A}} \bm{r}_j w_{\rm cv}(r_{ij}) / \sum_{j\in\mathcal{A}} w_{\rm cv}(r_{ij})$ is the locally weighted center of mass, and $w_{\rm cv} (r_{ij})$ is a truncated Gaussian function \begin{equation} w_{\rm cv}(r_{ij} ) = \left\{ \begin{array}{ll} \exp \left( \frac{ (r_{ij}/r_{\rm ga})^2 }{ (r_{ij}/r_{\rm cc})^n -1} \right) & (r_{ij}<r_{\rm cc} ) \\ 0 & (r_{ij} \ge r_{\rm cc} ) \end{array} \right. \label{eq:wcv} \end{equation} which is smoothly cut off at $r_{ij} = r_{\rm cc}$. The constants are set as $n=12,\ r_{\rm ga} = 1.5\sigma$, and $r_{\rm cc} = 3.2\sigma$. In all our simulations, the volume is chosen such that the number density $\phi$ of the particles is constant, \begin{equation} \phi = N/V = 0.64\sigma^{-3},\ N=N_{\mathcal{A}} + N_{\mathcal{B}}. \end{equation} For higher solvent densities, the system is closer to the melting point, and strong attractive interactions between membrane sheets have been found (see Appendix A for details). The density $\phi = 0.64\sigma^{-3}$ is chosen in order to avoid such an attraction. \subsection{Thermostats} We simulate the membranes in the NVT ensemble under shear flow. To keep the temperature constant, we employ the dissipative particle dynamics (DPD) thermostat, \cite{hoog92,95Espanol,groo97,07Nog,07Nog2} in which friction and noise forces are applied to the relative velocities of pairs of neighboring particles. Thus, linear and angular momentum are conserved, which implies that the system shows hydrodynamic behavior on sufficiently large length and time scales. The equation of motion for the $i$th particle is given by \begin{equation} m\frac{d\bm{v}_i}{dt} = -\frac{\partial U}{\partial\bm{r}_i} + \sum_{j\neq i} \{ -w_{ij} (\bm{v}_i-\bm{v}_j) \cdot \hat{\bm{r}}_{ij} +\sqrt{w_{ij}} \xi_{ij}(t) \} \hat{\bm{r}}_{ij} , \end{equation} where $\hat{\bm{r}}_{ij} = \bm{r}_{ij} /r_{ij}$. Here, the weight function $w_{ij}$ is $w_{ij}(r_{ij}) = \gamma \theta ( A \sigma - r_{ij})$ with $A=2.7$, where $\theta(r)$ is the Heaviside step function. This type of weight has also been used in the Lowe-Andersen thermostat. \cite{lowe99} The scheme is discretized with the Shardlow S1 splitting method, \cite{Shardlow} whose time step $\Delta t^b = 0.2 t_0$ differs from the time step $\Delta t=0.005t_0$ used for the molecular interactions ($t_0=m/\gamma$ is the simulation time unit). In the following, we use $\gamma = \sqrt{mk_{\rm B}T}/\sigma$, where $k_{\rm B}T$ is the thermal energy unit. Although the DPD thermostat is usually employed for soft interaction potentials, \cite{09MarrinkRev,SmitRev,04Laradji} it can also be employed for systems with steeper potentials, such as the Weeks-Chandler-Andersen potential. \cite{04Soddemann,02Guo} Shear flow with velocity $v_x=\dot{\gamma}z$ in the $x$-direction and gradient in the $z$-direction is imposed by the Lees-Edwards boundary condition.
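For concreteness, the following minimal sketch (our own illustration in Python, using brute-force distance evaluation rather than the cell lists of the production code; all helper names are ours) evaluates the repulsive potential $U_{\rm rep}$, the cutoff function $f_{\rm cut}$, and the local density $\rho_i$ defined above.
\begin{verbatim}
# Minimal sketch of U_rep, f_cut, and rho_i (illustrative only).
import numpy as np

SIGMA, EPS_C, RC_REP = 1.0, 4.0, 3.2           # sigma, epsilon_c, r_c^rep
S_CUT, S_HALF, N_EXP = 2.1, 1.8, 12
A_CUT = np.log(2.0) * ((S_CUT / S_HALF) ** N_EXP - 1.0)
B_SHIFT = EPS_C * (SIGMA / RC_REP) ** 12       # continuity shift at r_c^rep

def u_rep(r):
    """Truncated, shifted inverse-12 repulsion in units of k_B T."""
    r = np.asarray(r, dtype=float)
    return np.where(r < RC_REP, EPS_C * (SIGMA / r) ** 12 - B_SHIFT, 0.0)

def f_cut(s):
    """C-infinity cutoff: f(0)=1, f(s_half)=1/2, f(s)=0 for s >= s_cut."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    m = s < S_CUT
    out[m] = np.exp(A_CUT * (1.0 + 1.0 / ((s[m] / S_CUT) ** N_EXP - 1.0)))
    return out

def rho(pos_membrane, i):
    """Local density rho_i = sum_{j != i} f_cut(r_ij / sigma)."""
    d = np.linalg.norm(pos_membrane - pos_membrane[i], axis=1)
    return f_cut(np.delete(d, i) / SIGMA).sum()
\end{verbatim}
Note that for $\rho_i$ well above $\rho^*$ the attractive potential $U_{\rm att}(\rho_i)$ is essentially flat, so the attraction saturates in the interior of a membrane sheet.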
The code is optimized for use on a parallel computer architecture by domain decomposition (see Appendix B). \section{Model Properties} \label{sec:prop} The meshless-membrane model with explicit solvent is expected to exhibit very similar equilibrium properties as the original implicit-solvent version. \cite{Nog06PRE,Nog06JCP,11Shiba} We now confirm this expectation by measuring the dependence of the surface tension, line tension, and bending rigidity on the control parameters of the explicit-solvent model. \begin{figure} \includegraphics[width=0.85\linewidth, bb=0 0 360 252]{SLtension.pdf} \caption{\label{fig:SurfT} Area dependence of the surface tension $\gamma_{\rm s}$ of the explicit-solvent meshless-membrane model for $k_\alpha = 2.5, 5, 10,$ and 20 with $\epsilon=4$. } \end{figure} First, a planar fluctuating membrane is simulated with particle numbers $N_\mathcal{A}=1600$ (and $N=48\ 000$ or $64\ 000$). For various membrane projected areas $A_{xy}=L_xL_y$, the surface tension \begin{equation} \gamma_{\rm s} = \langle P_{zz} - (P_{xx}+P_{yy})/2 \rangle L_z \label{eq:sftens} \end{equation} is investigated. Here, $P_{\mu\nu}$ is the pressure tensor given by \begin{equation} P_{\mu\nu } = \sum_{i=1}^N \left( mv_i^\mu v_i^\nu - \mu_i \frac{\partial U}{\partial \nu_i}\right) \Big/ V, \end{equation} where $\{\mu, \nu\} \in \{x,y,z\}$, $\bm{v}_i = (v_i^x, v_i^y, v_i^z)$, and the sum is taken over all the particles including the solvent component. In calculating $P_{\mu\nu}$, the periodic image $\mu_i + nL_\mu$ nearest to the other interacting particle is employed when the interaction crosses one of the periodic boundaries. Figure \ref{fig:SurfT} shows the dependence of $\gamma_{\rm s}$ on the projected membrane area $A_{xy}$ for $k_\alpha = 2.5, 5, 10$, and 20, with $\epsilon = 4$. For $\gamma_{\rm s} \simeq 0$, the intrinsic area $A$ is larger than $A_{xy}$ due to the membrane undulations; buckling of the membrane occurs when $A_{xy}$ is reduced sufficiently (the flat region at $\gamma_{\rm s} < 0$ in Fig.~\ref{fig:SurfT}). \cite{nogu11a} For tension-less membranes with $N_\mathcal{A}=1600$, the projected area $A^0_{xy}$ is given by $A_{xy}^0 = a^0_{xy} N_\mathcal{A}$ with $a_{xy}^0 = 1.1,\ 1.27,\ 1.33,\ 1.35$ for $k_\alpha = 2.5,\ 5,\ 10,\ 20$, respectively. The projected membrane area increases with increasing $k_\alpha$, because both membrane bending fluctuations and protrusions are suppressed at larger $k_\alpha$. \begin{figure} \includegraphics[width=0.8\linewidth, bb=0 0 360 330]{kalpha.pdf} \caption{\label{fig:kalpha} Dependence of the bending rigidity $\kappa$ on $k_\alpha$ for $\epsilon = 4$, estimated from a tension-less planar membrane with $N_{\mathcal{A}} = 1600$ through Eq.~(\ref{eq:Helfrich}). As shown in the inset, the height spectrum of the membrane exhibits a $q^{-4}$ dependence. } \end{figure} \begin{figure} \includegraphics[width=0.75\linewidth, bb=0 0 360 252]{SLTplot.pdf} \caption{\label{fig:LineT} $\epsilon$ dependence of the line tension $\Gamma$ of the explicit-solvent meshless-membrane model, calculated for $k_\alpha = 5$ and $10$ with the use of Eq.~(\ref{eq:linet}). } \end{figure} Figure \ref{fig:kalpha} shows the bending rigidity $\kappa$ as a function of $k_\alpha$. Here, the bending rigidity is estimated from the height spectrum \cite{Helfrich} \begin{equation} \langle |h(q)|^2\rangle = \frac{k_{\rm B}T}{\gamma_{\rm s} q^2 + \kappa q^4} \label{eq:Helfrich} \end{equation} of the tension-less membrane ($\gamma_{\rm s}=0$).
In calculating $\langle |h(q)|^2\rangle$, the raw positional data of the membrane particles ($\bm{r}_i,\ i\in\mathcal{A}$) are employed. \cite{Nog06PRE,11Shiba} Because of the slow dynamics of long-wavelength height fluctuations, it is more time-consuming to obtain precise data than for the implicit-solvent model. \cite{Nog06PRE,11Shiba} Therefore, $\langle |h(q)|^2\rangle$ is measured using 16 independent runs for each $k_\alpha$, and the averaged spectrum is then fitted to Eq.~(\ref{eq:Helfrich}) in the range $q < 1.2\sigma^{-1}$. As in the implicit model, the bending rigidity is found to be proportional to $k_\alpha$, which demonstrates the controllability of $\kappa$ in our model. In Fig.~\ref{fig:LineT}, the $\epsilon$ dependence of the line tension $\Gamma$ is displayed for $k_\alpha = 5$ and 10, for a membrane strip with $N_{\mathcal A} = 1600$ and two edges of length $L_x$. Here, $\Gamma$ is determined via the relation \cite{11Shiba} \begin{equation} \Gamma = \langle (P_{yy} + P_{zz})/2 - P_{xx}\rangle L_yL_z/2. \label{eq:linet} \end{equation} At small $\epsilon$, $\Gamma$ is proportional to $\epsilon$ and almost independent of $k_\alpha$, similar to the implicit-solvent meshless-membrane model \cite{Nog06PRE}. While $\Gamma$ increases linearly up to $\epsilon = 8$ for $k_\alpha = 10$, it levels off and saturates at $\epsilon =4$ for $k_\alpha=5$. When $\epsilon$ exceeds the value where $\Gamma$ saturates (a value which becomes larger for larger $k_\alpha$), the membrane particles prefer to reside at the edges, because the curvature force is not strong enough to prevent aggregation driven by the stronger attractive forces. For our simulations of membrane ensembles in shear flow, we choose the model parameters $k_\alpha = 5$ and $\epsilon = 4$, where the membrane has bending rigidity $\kappa/k_{\rm B}T = 11\pm 1$. With the estimated value of the line tension, $\Gamma\sigma /k_{\rm B}T = 4.2 \pm 0.2$, the relaxation time scale of structural transitions can be characterized by \begin{equation} \tau = \frac{\eta R_c^3}{\kappa}, \end{equation} where $\eta$ is the solvent viscosity, and $R_c$ is the critical radius of a flat disk. Comparing a flat disk of radius $R$ and a vesicle with the same membrane area, the corresponding elastic free energies are $\mathcal{F}_d = 2\pi R\Gamma$ and $\mathcal{F}_s = 8\pi (\kappa + \bar{\kappa} /2) \simeq 4\pi \kappa$; thus, a disk transforms into a closed vesicle when its radius exceeds approximately \begin{equation} R_c = 2\kappa /\Gamma \sim (5.3 \pm 0.5 )\sigma . \label{eq:r_c} \end{equation} With the solvent viscosity $\eta = (2.1\pm 0.1)\times m(\sigma t_0)^{-1}$ obtained from a solvent-only simulation, an estimate of the relaxation time yields $\tau = 28.4t_0$. In the following, time will be measured in units of $\tau$. Since the membrane thickness, typically around 5 nm for non-ionic surfactants like polyethylene-glycol-ethers C$_n$E$_m$, corresponds to the size $\sigma$ of the membrane particles in our simulation, $\tau$ is equivalent to about $0.36\,\mu$s (with the viscosity of water at 300 K, $\eta_{\rm w} \simeq 0.8\,{\rm mPa}\cdot{\rm s}$).
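As an illustration of how $\kappa$ is extracted in practice, the following sketch (our own, in Python; it assumes that an averaged height spectrum has already been computed from the particle configurations) performs the fit to Eq.~(\ref{eq:Helfrich}) for a tension-less membrane, where $1/\langle |h(q)|^2\rangle = (\kappa/k_{\rm B}T)\, q^4$ reduces to a linear fit in $q^4$.
\begin{verbatim}
# Least-squares extraction of kappa from a measured height spectrum,
# for gamma_s = 0 (sketch; q and h2 are assumed given by the simulation).
import numpy as np

def fit_kappa(q, h2, kT=1.0, q_max=1.2):
    """Fit <|h(q)|^2> = kT / (kappa q^4) in the range 0 < q < q_max."""
    m = (q > 0) & (q < q_max)
    x = q[m] ** 4
    # slope of kT/<|h|^2> versus q^4 is kappa
    return np.linalg.lstsq(x[:, None], kT / h2[m], rcond=None)[0][0]

# self-test with synthetic data: recovers kappa = 11 k_B T
q = np.linspace(0.1, 1.1, 30)
h2 = 1.0 / (11.0 * q ** 4)
print(fit_kappa(q, h2))   # -> 11.0
\end{verbatim}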
\section{Structure Formation with and without Shear Flow} \label{sec:results} \begin{figure*} \includegraphics[width=0.80\linewidth, bb = 0 0 367 257]{M6M12M18.pdf} \caption{\label{fig:M6_ves} Snapshots of the configuration of membrane particles for $\varphi = 0.0625$ (a) without shear ($\dot{\gamma}=0$) and (b) under shear flow with shear rate $\dot{\gamma}\tau=0.0284$, and (c) for $\varphi = 0.125$ with $\dot{\gamma}\tau = 0.0568$. Snapshots for $\varphi=0.1875$ under shear flow are shown with shear rates (d) $\dot{\gamma}\tau = 0.0284$, (e) $\dot{\gamma}\tau = 0.142$, and (f) $\dot{\gamma}\tau = 0.568$. In (b)-(f), the (average) imposed flow velocity is ${\bf v} = \dot\gamma z {\bf e}_x$. Solvent particles are not displayed. } \end{figure*} We now employ the meshless-membrane model with explicit solvent to study structure formation in surfactant-water mixtures, both with and without shear flow. In all simulations, the total particle number is fixed at $N=N_\mathcal{A}+N_\mathcal{B}= 960\ 000$, and thus, the system is a cubic box with side length $L=114.47\sigma$. Simulations are performed for various membrane volume fractions $\varphi = N_\mathcal{A}/N$, with $\varphi = 0.0625$, $0.125$, $0.1875$, $0.25$, and $0.3125$. The dynamical evolution is integrated over a total time interval of $1.2\times 10^7$ MD steps, corresponding to $2.11\times 10^3\tau$. All the particles of both species are initially distributed randomly in the simulation box. Averages are calculated over the last $2\times 10^6$ steps, where the system is assumed to have reached a stationary state. After briefly explaining the structures obtained by equilibrium simulations without shear in Sec.~\ref{sec:noshear}, we present results for the structure formation in a system under linear shear flow in Secs.~\ref{sec:pd} and \ref{sec:roll}. \subsection{Mesophases in Thermal Equilibrium} \label{sec:noshear} \begin{figure} \includegraphics[width=0.7\linewidth, bb = 0 0 228 418]{c3125_sh0.pdf} \caption{\label{fig:noshears} (a) Snapshot of the configuration of membrane particles for $\varphi = 0.3125$ without shear flow ($\dot{\gamma}=0$). To facilitate visualization, only a thin planar slice is displayed. Solvent particles are not shown. (b) Structure factor $S_{\mathcal{A}}[\bm{q} = (0,q_y,q_z) ]$ for $\varphi =0.3125$ at $\dot{\gamma}=0$. } \end{figure} At a low membrane volume fraction $\varphi = 0.0625$, membrane particles self-assemble into vesicles, each of which is composed of around $100$ membrane particles (see Fig.~\ref{fig:M6_ves}(a)). This result is consistent with Eq.~(\ref{eq:r_c}), which predicts the critical particle number of a vesicle to be $N_c = \pi R_c^2 / a^0 \simeq 75$. At a higher membrane volume fraction $\varphi = 0.125\ (N_\mathcal{A}=120\ 000)$, the membrane surface is found to percolate through the whole system via the periodic boundaries. This behavior is not unexpected: each disk of radius $R_c$ covers a region of volume $v_c = (2R_c)^3$ by rotational diffusion, and therefore the disks should overlap and merge when more than $n_c\sim V/v_c = 1200$ vesicles are present; this threshold is indeed exceeded by the number $N_\mathcal{A}/N_c \simeq 1600$ of vesicles of critical size. For $\varphi = 0.25$ and 0.3125, periodic lamellar states are formed owing to the repulsive interactions between the membranes, as shown in Fig.~\ref{fig:noshears}(a). The membranes are curved to fill up space with random orientations.
In thermal equilibrium, these lamellar layers would probably have a unique orientation throughout the system; this well-ordered state is difficult to reach in simulations because the structural relaxation time well exceeds the accessible simulation time scale. The 3D structure factor of the membrane density is calculated as \begin{equation} S_{\mathcal{A}}(\bm{q}) = \int d\bm{r}\ e^{i\bm{q}\cdot\bm{r} } \langle \delta \hat{n}_\mathcal{A} (\bm{r}) \delta \hat{n}_\mathcal{A} (\bm{0} ) \rangle , \label{eq:sk_a} \end{equation} where $\delta \hat{n}_\mathcal{A} (\bm{r}) = \sum_{i\in \mathcal{A}} \sigma^3\delta (\bm{r} - \bm{r}_i) -\phi$ is the local deviation of the membrane particle density from its average. Figure \ref{fig:noshears}(b) shows $S_\mathcal{A} (\bm{q})$ for $\varphi = 0.3125$. The scattering intensity is spherically symmetric, as demonstrated by the two-dimensional (2D) color map for $q_x=0$. Peaks arising from the inter-lamellar distance are observed at $|\bm{q}| = q_1= 1.74\sigma^{-1}$ and $q_2=3.49\sigma^{-1} = 2q_1$ with heights 9.8 and 0.78, respectively. The former corresponds to a length of $2\pi / q_1 = 3.61\sigma$, which provides a precise estimate of the interlayer distance. \subsection{Dynamic Phase Diagram under Shear Flow} \label{sec:pd} \begin{figure} \includegraphics[width=0.9\linewidth, bb=0 0 219 195]{newphase10.pdf} \caption{\label{fig:phase} Dynamic phase diagram of the explicit-solvent meshless-membrane model as a function of the volume fraction $\varphi$ of the membrane component and the shear rate $\dot{\gamma}$. The parameters are $k_\alpha =5$, $\epsilon =4$, $\phi=0.64\sigma^{-3}$, and $N=960,000$. The dashed lines are guides to the eye. } \end{figure} In shear flow, the vesicle and lamellar states depend on the concentration $\varphi$ and the shear rate $\dot{\gamma}$, as displayed in Fig.~\ref{fig:phase}. At $\varphi =0.0625$, assemblies of uni-lamellar vesicles are observed for $\dot{\gamma}\tau< 0.02$. At a higher shear rate, the membrane particles tend to assemble into plate-like membrane disks, which then align parallel to the shear flow direction, as demonstrated by the comparison of the snapshots in Figs.~\ref{fig:M6_ves} (a) and (b). At $\varphi = 0.125$, lamellar layers with non-uniform lamellar distances are observed for $\dot{\gamma}\tau < 0.06$, as shown in Fig.~\ref{fig:M6_ves}(c). There is a larger variety of phases at $\varphi = 0.1875$: at low shear rates $\dot{\gamma}\tau < 0.1$, the system is a mixture of lamellae and cylinders, as shown in Fig.~\ref{fig:M6_ves}(d); at $\dot{\gamma}\tau = 0.142$, the lamellae are rolled up collectively, see Fig.~\ref{fig:M6_ves}(e); finally, at large shear rates $\dot{\gamma}\tau \gtrsim 0.3$, the system exhibits a reentrant lamellar state, as shown in Fig.~\ref{fig:M6_ves}(f). \begin{figure*} \includegraphics[width=0.75\linewidth, bb = 0 0 346 225]{M30_roll.pdf} \caption{ \label{fig:roll} Snapshots of membrane conformations for volume fraction $\varphi =0.3125$, with shear rates (a) $\dot{\gamma}\tau= 0.0284$, (b) $\dot{\gamma}\tau = 0.0568$, and (c) $\dot{\gamma}\tau = 0.142$. Views from the flow ($x$) direction are shown in the upper panels. Corresponding cross-sectional views (with $-3.0\sigma < x < 3.0\sigma$) are shown in the lower panels. } \end{figure*} At higher $\varphi$, the lamellar states align perpendicularly to the shear-gradient ($z$) direction in the regime of low shear rates ($\dot{\gamma}\tau < 0.1$).
At larger $\dot{\gamma}$, they exhibit an instability to a rolled-up shape whose axis is parallel to the flow direction, which will be investigated in more detail in Sec.~\ref{sec:roll} below. At $\varphi=0.25$, there is again a reentrant behavior of lamellar states, {\em i.e.} nearly planar aligned layers appear at large shear rates $\dot{\gamma}\tau \ge 0.284$. Note that experimental phase diagrams often exhibit reentrant behavior \cite{93RouxEPL,93Diat} with increasing shear rate at a given membrane volume fraction; the mixture changes from the lamellar to the onion state, and then, after passing through the coexistence region, enters an oriented lamellar state again. Although the onion state with densely packed MLVs has not been obtained in our simulations, the reentrant behavior is qualitatively consistent with the experimental observations. \subsection{Rolled-up Lamellar Structures} \label{sec:roll} \subsubsection{Structure Analysis} Snapshots of membrane conformations are shown in Fig.~\ref{fig:roll} for $\varphi = 0.3125$ at various shear rates $\dot{\gamma}\tau= 0.0284$, $0.0568$, and $0.142$. In all simulations for $\varphi \ge 0.125$, the membranes are completely aligned with the flow direction. Thus, as shown in the bottom panels of Fig.~\ref{fig:roll}, the membrane configurations can be visualized by cross-sectional slices perpendicular to the flow direction. Figure \ref{fig:roll} demonstrates the transition from the lamellar state to the rolled-up state, which is stable in the region $\varphi \gtrsim 0.175$ and $\dot\gamma\tau \gtrsim 0.1$ of the phase diagram of Fig.~\ref{fig:phase}. The structural changes accompanying this instability can be characterized by the average orientation and the mean-square local curvature of the membrane. The normal unit vector $\bm{n}$ is calculated with the first-order moving least-squares (MLS) method \cite{Nog06PRE,bely96,lanc81} applied to the configurations of membrane particles. Using a weighted gyration tensor $a_{\alpha\beta} = \sum_j (\alpha_j' -\alpha_G' ) (\beta_j' -\beta_G') w_{\rm cv} (r_{ij})$, where $\alpha,\ \beta\in \{x, y,z\}$, $\bm{n}$ is obtained as the eigenvector corresponding to the minimum eigenvalue of $a_{\alpha\beta}$, which together with the other two eigenvectors $\bm{e}_1$ and $\bm{e}_2$ constitutes an orthonormal basis. Here, the cut-off function $w_{\rm cv} (r_{ij})$ in Eq.~(\ref{eq:wcv}) is employed with the same cut-off length, $r_{\rm cc}'=3.2\sigma$. As a quantitative measure for the undulation instability, we employ the orientational order parameter \begin{equation} S_z = [2({\bm{n}}\cdot \hat{\bm{e}}_z )^2-1 ] = \cos(2\theta), \label{eq:angle} \end{equation} where $\theta$ is the angle between $\bm{n}$ and the shear-gradient direction; the normalization is chosen for two-dimensional order, appropriate for structures of cylindrical and planar symmetry. \begin{figure} \includegraphics[width=0.85\linewidth, bb = 0 0 341 385]{angle3.pdf} \caption{\label{fig:angle} (a) Membrane orientational order parameter $\langle S_z\rangle$ in Eq.~(\ref{eq:angle}) for $\varphi =0.125$, $0.1875$, $0.25$, and $0.3125$, and (b) the mean-square local curvature $\langle H^2 \rangle$ for $\varphi =0.1875$, $0.25$, and $0.3125$, both as functions of the shear rate $\dot{\gamma}\tau$. } \end{figure} \begin{figure*} \includegraphics[width=0.8\linewidth, bb = 0 0 527 205]{str30.pdf} \caption{\label{fig:str} Color maps of the structure factor $S_\mathcal{A}(0,q_y,q_z)$ for volume fraction $\varphi = 0.3125$ and shear rates (a) $\dot{\gamma}\tau =0.0284$, (b) $\dot{\gamma}\tau =0.0568$, and (c) $\dot{\gamma}\tau =0.142$.
Since the structure is uniform in the flow ($x$) direction, only data for $q_x =0$ are shown. } \end{figure*} The instability can also be characterized by calculating the mean-square curvature. The second-order MLS method \cite{Nog06PRE} provides an estimate of the membrane curvature from the particle configurations in the following way. For each particle $i$, we perform a rotational transformation into the principal coordinate system of the gyration tensor of the neighbor particles $j$ around $i$'s weighted center of mass $\bm{r}_G$ by \begin{equation} \left( \begin{array}{c} X_j \\ Y_j \\ Z_j \end{array}\right) = \left( \begin{array}{c} \bm{e}_1 \\ \bm{e}_2 \\ \bm{n} \\ \end{array}\right) ( \bm{r}_j - \bm{r}_G)^T , \end{equation} and then employ the parabolic fit function \begin{equation} \begin{array}{rl} \Lambda_2(\bm{r}_i) &= \frac{1}{w_0} \sum_j \left( z_0 + h_xX_j + h_y Y_j + \frac{1}{2}h_{xx} X_j^2 \right. \\ & + \left. \frac{1}{2} h_{yy} Y_j^2 + h_{xy} X_jY_j - Z_j \right)^2 w_{\rm cv} (r_{ij}), \end{array} \end{equation} where the coefficients of the Taylor expansion $z_0,\ h_x,\ h_y,\ h_{xx},\ h_{yy}$ and $h_{xy}$ are fitting parameters. \cite{Nog06PRE} By a least-squares fit, the estimated value of the mean curvature $H = (C_1+C_2)/2$ for particle $i$ is then obtained as \begin{equation} H = \frac{(1+h_x^2) h_{yy} + (1+h_y^2 )h_{xx} - 2h_xh_yh_{xy} }{2 (1+h_x^2+h_y^2)^{3/2}}. \end{equation} Figure \ref{fig:angle} displays the results for the spatial average $\langle S_z\rangle$ of the 2D orientational order parameter and for the mean-square local curvature $\langle H^2 \rangle$. When the membranes lie in the $x$-$y$ plane, with normals along the shear-gradient direction, $S_z$ equals unity. As the membranes roll up, $S_z$ decreases. Here, $\langle S_z\rangle =0$ for a perfectly cylindrical state (where $({\bm{n}}\cdot \hat{\bm{e}}_x)^2 =0$ and $({\bm{n}}\cdot \hat{\bm{e}}_y)^2 =({\bm{n}}\cdot \hat{\bm{e}}_z)^2 =1/2$ ), and $\langle S_z\rangle = -1$ for perfectly flat lamellar layers perpendicular to the vorticity ($y$) direction (where $({\bm{n}}\cdot \hat{\bm{e}}_z)^2 =0$). For all $\varphi\ge 0.1875$, the values of $\langle S_z\rangle$ and $\langle H^2\rangle$ provide evidence for rolled-up structures at $\dot{\gamma}\tau = 0.142$. For $\varphi = 0.1875$ and 0.25, $\langle S_z\rangle$ increases again at $\dot{\gamma}\tau = 0.284$, indicating reentrant alignment of the membranes in a lamellar stack. However, for $\varphi = 0.3125$, $\langle S_z\rangle$ remains small (and $\langle H^2 \rangle$ large), which implies that rolled-up structures exist also at $\dot{\gamma}\tau = 0.284$. Thus, the evolution of rolled-up conformations is most pronounced at the highest membrane volume fraction $\varphi =0.3125$. In Fig.~\ref{fig:str}, results for the structure factor $S_{\mathcal{A}}(\bm{q})$ at $q_x=0$ are shown for the same set of data as in Fig.~\ref{fig:roll}. Due to the nearly complete alignment of the membranes in the flow ($x$) direction, all the structural features are reflected in $S_{\mathcal{A}}(\bm{q})$ in the $q_y$-$q_z$ wave-vector plane. Since the lamellar layers are nearly planar at the low shear rate $\dot{\gamma}\tau = 0.0284$, a sharp peak is observed on the $q_z$ axis, around $(q_y,q_z) = (0, q_1)$. Because of the undulation instability, the pattern becomes more circular at higher $\dot{\gamma}$.
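For reference, the first-order MLS normal estimate and the order parameter of Eq.~(\ref{eq:angle}) can be condensed into the following sketch (our own illustration in Python, with a brute-force neighbor search; the second-order step for $H$ is an analogous linear least-squares fit for the six Taylor coefficients in $\Lambda_2$ and is omitted here).
\begin{verbatim}
# First-order MLS normal and S_z = 2 n_z^2 - 1 (illustrative sketch).
import numpy as np

def w_cv(r, r_ga=1.5, r_cc=3.2, n=12):
    """Truncated Gaussian weight of Eq. (wcv); w(0)=1, w(r>=r_cc)=0."""
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    m = r < r_cc
    out[m] = np.exp((r[m] / r_ga) ** 2 / ((r[m] / r_cc) ** n - 1.0))
    return out

def normal(pos, i):
    """Eigenvector of the weighted gyration tensor with the smallest
    eigenvalue = local membrane normal at particle i."""
    w = w_cv(np.linalg.norm(pos - pos[i], axis=1))
    rG = (w[:, None] * pos).sum(0) / w.sum()     # weighted center of mass
    d = pos - rG
    a = np.einsum('j,ja,jb->ab', w, d, d)        # gyration tensor a_{ab}
    return np.linalg.eigh(a)[1][:, 0]            # eigenvalues ascend

def mean_Sz(pos):
    """Spatial average of S_z over all membrane particles."""
    nz = np.array([normal(pos, i)[2] for i in range(len(pos))])
    return (2.0 * nz ** 2 - 1.0).mean()
\end{verbatim}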
In the small-angle neutron \cite{03Nettesheim} and X-ray \cite{11Ito} scattering experiments, the scattering beams can be injected from two directions, either radial or tangential to the shear cell, which correspond to the shear-gradient and flow directions, respectively. Some time after the shear flow is applied, a Bragg peak of the radial beam develops in the vorticity direction, while the scattering pattern of the tangential beam becomes isotropic. This suggests 2D-isotropic undulations of the lamellar structure perpendicular to the flow direction. At later times, the radial beam, too, is scattered isotropically, indicating the formation of onion structures. The structure factor $S(\bm{q})$ in our simulation (Fig.~\ref{fig:str}(c)) agrees well with $S(\bm{q})$ of this transient state in these scattering experiments. Measurements of solvent diffusion in a rheo-NMR experiment on a non-ionic surfactant system \cite{08Medronho} show a diffusion anisotropy in the direction of shear flow in the intermediate state. These experimental results suggest that the membranes are aligned in the direction of shear flow in the intermediate state of the lamellae-to-onion transition, although more detailed information on the structural membrane arrangement could not be obtained. The rolled-up lamellar structures observed in our simulations match this experimental evidence and are thus a good candidate for the intermediate states. \subsubsection{Temporal Evolution of Membrane Structures} \label{sec:ev} In simulations of a large system, even if extended over a long time interval, the resultant structure often depends on the initial conditions, owing to the slow processes involved in the dynamics. Therefore, we compare here the time evolution at $\varphi =0.3125$ starting from both a lamellar state and a random distribution of membrane particles (the latter corresponding to the simulations described in the previous subsections). For the initial lamellar state, the final configuration of a simulation run with $1.2\times 10^7$ MD steps at a small shear rate $\dot{\gamma}\tau = 0.0284$ (see Fig.~\ref{fig:roll}(a)) is employed. \begin{figure} \includegraphics[width=0.80\linewidth, bb = 0 0 322 479]{tdep312_5.pdf} \caption{\label{fig:tdep} Comparison of structure formation from a random initial configuration and from a lamellar state at $\varphi =0.3125$. (a) Time evolution of the orientational order parameter $\langle S_z\rangle$. The dotted and solid lines show the data starting from the random and lamellar initial states, respectively. Blue, red, and black lines represent the shear rates $\dot{\gamma}\tau = 0.568$, $0.284$, and $0.142$, respectively. Snapshots of the final configurations for $\dot{\gamma}\tau = 0.284$ at $t=2.11\times 10^3 \tau$, as they have developed from (b) the random and (c) the lamellar initial states, are also shown. } \end{figure} In Fig.~\ref{fig:tdep}(a), the orientational order parameter $\langle S_z\rangle$ is shown for both initial conditions and for $\dot{\gamma}\tau= 0.568$, $0.284$, and $0.142$. For the case of a random initial distribution, small disks merge into randomly oriented surfaces, which then align in the shear flow to become a nearly perfect lamellar stack with some defects. Afterwards, the lamellae roll up into slightly larger rolls. During rolling up, $\langle S_z\rangle$ exhibits an overshoot (see Fig.~\ref{fig:tdep}(a)), and finally approaches a constant as the structure relaxes into a (meta)stable state.
The overshoot amplitude depends on the initial state and on random noise. Figures~\ref{fig:tdep}(b) and (c) show the membrane conformations after an elapsed time of $t=2.11\times 10^3\tau$ (for $\dot{\gamma}\tau = 0.284$) for the two types of initial conditions, and explain the origin of the substantially different values of the order parameter $\langle S_z\rangle$ in Fig.~\ref{fig:tdep}(a). When the random state is taken as initial condition, rolled-up structures are considerably more pronounced; this may be traced back to the presence of defects in the lamellar structure which forms at short times $t/\tau \simeq 100$. For the case of a lamellar initial configuration, the undulation instability becomes more conspicuous when the applied shear flow is stronger. Moreover, while strong undulations are observed at low $\dot{\gamma}$ with the random initial configuration, fewer undulations take place with the lamellar initial configuration, as indicated by $\langle S_z\rangle \sim 1$. Thus, the initial conditions play an important role in the selection of the transition path and structure formation. The selected path may depend not only on the shear rate and relaxation time but also on the distribution of structural defects in the lamellar layers. More systematic studies are therefore required in the future to clarify the hysteresis of these systems. \section{Summary} \label{sec:sum} In this paper, we have constructed an explicit-solvent meshless-membrane model for surfactant-water mixtures. The model reproduces the properties of an earlier implicit-solvent meshless-membrane model, in which the membrane bending rigidity and line tension can be controlled independently to a large extent. The model enables large-scale simulations of structural changes, where dynamical effects of hydrodynamic interactions have to be taken into account. At present, such large simulations with as many as one million particles can be realized by parallelized molecular dynamics simulation methods. Our main results concern the effects of shear flow on the structure formation of membrane ensembles. Various structures including vesicles, lamellae, and multi-lamellar states with nearly cylindrical symmetry have been found, most of which are qualitatively consistent with experimental observations of non-ionic surfactant membranes under shear flow. In particular, a cylindrical instability of multi-lamellar membranes is predicted to occur perpendicular to the flow direction. The corresponding scattering patterns are in qualitative agreement with the results of small-angle neutron (and X-ray) scattering experiments under shear deformation. The rolled-up lamellae are a good candidate for the intermediate structures on the way to the onion state, which are observed in the experiments. Our simulations do not reproduce onion formation, which is ubiquitously observed in experiments on $\mu$m length scales. We speculate that this is due to the limited system size, a restriction which can be overcome by larger-scale calculations in the future. On the other hand, in the experiments, strains larger than $\dot{\gamma}t \gtrsim 10^4$ are necessary to reach the onion state, which indicates that very long simulation runs are required to obtain these states. The control of the physical parameters, including the line tension $\Gamma$, the bending rigidity $\kappa$, and the Gaussian modulus $\bar{\kappa}$, is another challenge. While $\Gamma$ and $\kappa$ are easily controlled in our model, $\bar{\kappa}$ is more difficult to adjust.
The Gaussian modulus $\bar{\kappa}$ might also play an important role in the structure formation, because it is directly related to the topological changes that occur during onion formation. \cite{2012Hu} \begin{appendix} \section{Solvent-Mediated Forces at Higher Solvent Density} In the present simulation model, solvent particles have a similar size as membrane particles. Here, we discuss the finite-size effects of the solvent particles that arise if much higher solvent densities are used than in the present study. When the solvent particles are densely packed, the system approaches a crystallization transition, and local crystalline order emerges, which brings about interactions between closely spaced lamellar layers. As an example, we study the explicit-solvent meshless-membrane model at a higher number density (denoted system $\mathcal{S}2$, with $\phi = (\sigma_{\mathcal{A}}^3N_{\mathcal{A}} + \sigma_{\mathcal{B}}^3N_\mathcal{B}) /V = 0.72$ and total particle number $N=N_{\mathcal{A}} + N_{\mathcal{B}}= 480\ 000$). The particle radii of the two components are chosen such that $\sigma_{\mathcal{B}} =1.2\sigma_{\mathcal{A}}$, where $\sigma_{\mathcal{A}}$ and $\sigma_{\mathcal{B}}$ denote the radii of the membrane and solvent components, respectively. The cut-off lengths of the interactions are set to $r_c^{\rm rep} =2.7\sigma_{\mathcal{A}} , r_{\rm ga}=1.5\sigma_{\mathcal{A}},\ r_{\rm cc}=3.0\sigma_{\mathcal{A}}$, respectively. The repulsive inverse twelfth-power potential exhibits a melting transition at a volume fraction of around 0.43 (corresponding to $\phi \simeq 0.8$ in our definition). \cite{70Hoover} Thus, our density $\phi =0.72$ is close to the crystallization line. \begin{figure} \includegraphics[width=0.65\linewidth, bb=0 0 474 460]{MLV.pdf} \caption{\label{fig:mlv_dep} Snapshot of the MLV state due to the solvent-mediated attractive interactions for the system $\mathcal{S}2$ at $\phi = 0.72$, $\varphi = 0.06$ and $N=480,000$ with size ratio $\sigma_{\mathcal{B}}/\sigma_{\mathcal{A}} = 1.2$, for shear rate $\dot{\gamma}\tau = 0.0142$. } \end{figure} Figure~\ref{fig:mlv_dep} shows a snapshot for membrane volume fraction $\varphi = N_\mathcal{A}\sigma_{\mathcal{A}}^3 / (N_{\mathcal{A}}\sigma_{\mathcal{A}}^3 + N_{\mathcal{B}}\sigma_{\mathcal{B}}^3) = 0.06$ at a shear rate $\dot{\gamma}\tau = 0.0142$. After an initial relaxation from a random configuration of the membrane particles, the membrane layers assemble into stacks, and then form MLVs after gathering a sufficient number of membrane patches. In the snapshot, the layers are stacked with distances $d\simeq 3.0\sigma_{\mathcal{B}}$, which seems to arise from the discreteness of the interstitial solvent particles; note that the interlayer distance cannot be smaller than $d \simeq 2.0\sigma_{\mathcal{B}}$, because smaller distances are inside the range of $U_{\rm rep}$ and $U_\alpha$, which both act repulsively. \begin{figure} \includegraphics[width=0.65\linewidth, bb=0 0 227 222]{SLdist.pdf} \caption{\label{fig:SLdist} Equilibrium lamellar distance $d_{\rm lam}$ versus initial lamellar distance $d_{\rm ini}$ between a tension-less planar membrane and two (smaller) membrane discs. For the systems $\mathcal{S}1$ and $\mathcal{S}2$, lengths are represented in units of $\sigma$ and $\sigma_{\mathcal{A}}$, respectively. The dotted line indicates the line $d_{\rm ini}=d_{\rm lam}$.
} \end{figure} To avoid this solvent-mediated force, a longer cut-off range of the repulsive forces and a lower density are employed in the simulation described in the main text (denoted system $\mathcal{S}1$). To demonstrate the difference between the systems $\mathcal{S}1$ and $\mathcal{S}2$, we simulate three lamellar layers in the following way. A periodic simulation box with lengths $L_x=L_y = \sqrt{A_{xy}^0}/\sigma_{\mathcal{A}}$, $L_z = V/ (L_xL_y)$ is set up, and a tension-less membrane (with particle number $N_{\mathcal{A}0} = 1600$) is placed along the $xy$-plane. Here, the projected areas are set to $A_{xy}^0 = 1.267N_{\mathcal{A}0}$ for $\mathcal{S}1$ and $A_{xy}^0 = 1.335N_{\mathcal{A}0}$ for $\mathcal{S}2$. For various initial interlayer distances $d_{\rm ini}$, circular membrane disks with particle numbers $N_{\mathcal{A}1}=N_{\mathcal{A}2}=400$ are placed on both sides of the membrane. Solvent particles are then inserted to fill up the system; they are relaxed at fixed membrane configuration for $10^5$ MD steps. Afterward, a simulation of the full system is performed for $1.5\times 10^6$ MD steps to obtain an equilibrium interlayer distance $d_{\rm lam}$. As shown in Fig.~\ref{fig:SLdist}, while the interlayer distance increases with time in the $\mathcal{S}1$ case, there are stable lamellar distances around $d_{\rm lam} = 3.5\sigma_{\mathcal{A}}$ and $4.7\sigma_{\mathcal{A}}$ in the $\mathcal{S}2$ case with higher solvent density, which correspond to $3\sigma_{\mathcal{B}}$ and $4\sigma_{\mathcal{B}}$, respectively. Thus, in $\mathcal{S}2$, solvent-mediated attractive interactions stabilize the multi-lamellar stacks. \section{Numerics and Parallelization} \begin{figure} \includegraphics[width=0.9\linewidth, bb = 0 0 563 496]{LE.pdf} \caption{\label{fig:LE} Schematic picture in two dimensions for the implementation of Lees-Edwards boundary conditions in a program parallelized with MPI communication. } \end{figure} Numerical simulations have been carried out on massively parallel supercomputers. On 256 CPUs of Intel Xeon X5570 (2.93 GHz) in an SGI Altix ICE 8400EX at ISSP, a run of $1.2\times 10^7$ simulation steps for $\varphi = 0.3125$ and $N=960,000$ takes 72 hours. Here, about 10\% of the total time is spent on the force calculation of $U_{\rm rep}$, 30\% on $U_\alpha$, around 20\% on the construction of the buffer, and 20\% on the communication between the MPI processes. The system is divided into cubic (or rectangular) boxes, each of which is handled by one MPI process. We also parallelize the calculations within each process with the use of OpenMP, by processing well-separated particle pairs concurrently on different threads. The code is optimized to achieve 15\% of the theoretical peak performance on X5570 processors. Each process has separate cell lists and neighbor lists for solvent and membrane particles. To apply shear, we employ a simultaneous affine deformation of the total system, each MPI process box, and each cell for neighbor search, so that a square is transformed into a parallelogram shape consistent with the shear deformation, as illustrated in Fig.~\ref{fig:LE}. When the strain reaches 0.5, these parallelograms are reflected to make the strain $-0.5$ (see Fig.~\ref{fig:LE}); this step basically requires an all-to-all communication of all the position and velocity data.
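As a minimal serial illustration of these boundary conditions (our own sketch in Python; the actual implementation operates on the deformed MPI domain boxes described above), the following code wraps particles through the sheared $z$-boundary of a cubic box and remaps the accumulated strain.
\begin{verbatim}
# Lees-Edwards wrapping for a cubic box of side L (sketch, serial version).
import numpy as np

def lees_edwards_wrap(pos, vel, strain, L, gamma_dot):
    """Wrap positions/velocities; strain = (x-offset of the image)/L."""
    cross = np.floor(pos[:, 2] / L)      # -1, 0, +1: boundary crossings
    pos[:, 2] -= cross * L               # wrap in the gradient (z) dir
    pos[:, 0] -= cross * strain * L      # image offset in the flow dir
    vel[:, 0] -= cross * gamma_dot * L   # velocity jump across boundary
    pos[:, 0] %= L                       # ordinary wrap in x
    pos[:, 1] %= L                       # ordinary wrap in y (vorticity)
    return pos, vel

def advance_strain(strain, gamma_dot, dt):
    """Accumulate strain (gamma_dot > 0) and remap it to -0.5 at +0.5."""
    strain += gamma_dot * dt
    if strain >= 0.5:
        strain -= 1.0
    return strain
\end{verbatim}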
\end{appendix} \begin{acknowledgments} This work was supported by Grant-in-Aid for Young Scientists 24740285 from JSPS in Japan, Computational Materials Science Initiative (CMSI) from MEXT in Japan, and the European Soft Matter Infrastructure project ESMI (Grant No. 262348) in the EU. We would like to thank S. Fujii, S. Komura, H. Watanabe, U. Schiller, M. Peltom\"aki, G.A. Vliegenthart, and D. Y. Lu for informative discussions. The numerical calculations were carried out on SGI Altix ICE 8400EX and NEC SX-9 at ISSP in University of Tokyo (Japan), Fujitsu FX10 at Information Technology Center in University of Tokyo (Japan), Hitachi SR16000 at YITP in Kyoto University (Japan), and JUROPA at J\"ulich Supercomputing Center at Forschungszentrum J\"ulich (Germany). \end{acknowledgments}
\section*{Acknowledgements} \noindent We express our gratitude to our colleagues in the CERN accelerator departments for the excellent performance of the LHC. We thank the technical and administrative staff at the LHCb institutes. We acknowledge support from CERN and from the national agencies: CAPES, CNPq, FAPERJ and FINEP (Brazil); MOST and NSFC (China); CNRS/IN2P3 (France); BMBF, DFG and MPG (Germany); INFN (Italy); NWO (Netherlands); MNiSW and NCN (Poland); MEN/IFA (Romania); MSHE (Russia); MICINN (Spain); SNSF and SER (Switzerland); NASU (Ukraine); STFC (United Kingdom); DOE NP and NSF (USA). We acknowledge the computing resources that are provided by CERN, IN2P3 (France), KIT and DESY (Germany), INFN (Italy), SURF (Netherlands), PIC (Spain), GridPP (United Kingdom), RRCKI and Yandex LLC (Russia), CSCS (Switzerland), IFIN-HH (Romania), CBPF (Brazil), PL-GRID (Poland) and OSC (USA). We are indebted to the communities behind the multiple open-source software packages on which we depend. Individual groups or members have received support from AvH Foundation (Germany); EPLANET, Marie Sk\l{}odowska-Curie Actions and ERC (European Union); A*MIDEX, ANR, Labex P2IO and OCEVU, and R\'{e}gion Auvergne-Rh\^{o}ne-Alpes (France); Key Research Program of Frontier Sciences of CAS, CAS PIFI, Thousand Talents Program, and Sci. \& Tech. Program of Guangzhou (China); RFBR, RSF and Yandex LLC (Russia); GVA, XuntaGal and GENCAT (Spain); the Royal Society and the Leverhulme Trust (United Kingdom). \section{Introduction} \label{sec:Introduction} There is a long history of studies of ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{(*)}}}\xspace {\ensuremath{\Dbar{}^{(*)}}}\xspace K$ decays, where {\ensuremath{\PB}}\xspace represents a {\ensuremath{\Bu}}\xspace or a {\ensuremath{\B^0}}\xspace meson, {\ensuremath{\D^{(*)}}}\xspace is a {\ensuremath{\D^0}}\xspace, {\ensuremath{\D^{*0}}}\xspace, {\ensuremath{\D^+}}\xspace, or {\ensuremath{\D^{*+}}}\xspace meson, {\ensuremath{\Dbar{}^{(*)}}}\xspace is a charge conjugate of one of the {\ensuremath{\D^{(*)}}}\xspace mesons, and {\ensuremath{\PK}}\xspace is either a {\ensuremath{\kaon^+}}\xspace or {\ensuremath{\kaon^0}}\xspace meson.\footnote{The inclusion of charge conjugated processes is implied throughout, unless otherwise stated.} The first observations of ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{(*)}}}\xspace {\ensuremath{\Dbar{}^{(*)}}}\xspace K$ decays were made public in 1997 and 1998 by the \mbox{CLEO}\xspace~\cite{cleoconf} and \mbox{ALEPH}\xspace~\cite{Barate:1998ch} collaborations. They fully reconstructed a number of these decay modes in order to probe the discrepancy between the measured values of branching fractions for hadronic and semileptonic decays of the {\ensuremath{\PB}}\xspace meson~\cite{Bigi:1993fm}, the then-unresolved `charm-counting problem'. In 2003, the \mbox{BaBar}\xspace collaboration published the first comprehensive investigation of ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{(*)}}}\xspace {\ensuremath{\Dbar{}^{(*)}}}\xspace K$ decays, reporting observations or limits for 22 channels~\cite{Aubert:2003jq}. Later, in 2011, the measurements were updated using a five times larger data sample~\cite{delAmoSanchez:2010pg}.
The \mbox{LHCb}\xspace data collected during Run 1 and Run 2 of the Large Hadron Collider (\mbox{LHC}\xspace) provide an opportunity to obtain an order of magnitude larger yields with smaller backgrounds than those measured previously. This paper reports measurements of relative branching fractions of \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace}, \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace}, and \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} decays with respect to the \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\Dbar{}^0}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} decay for the first two, and the \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} decay for the third mode. The decays used for normalisation are chosen due to their similarity to the signal decays in multiplicity and topology, providing the best cancellation of systematic uncertainties on the ratio. Additionally, the ratio of branching fractions of the \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} decays is reported. The analysis is based on a sample of {\ensuremath{\Pp}}\xspace\proton collisions corresponding to a total integrated luminosity of $9\ensuremath{\mbox{\,fb}^{-1}}\xspace$ collected at centre-of-mass energies of 7, 8\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace (Run~1), and 13\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace (Run~2) by the \mbox{LHCb}\xspace experiment. The modes containing the excited {\ensuremath{\D^*}}\xspace meson are hereafter collectively denoted as \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\D^*}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} and the modes containing only pseudoscalar {\ensuremath{\PD}}\xspace mesons as \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\PD}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace}. Decays of these types can proceed at the tree level via three different processes: pure external {\ensuremath{\PW}}\xspace emission, pure internal {\ensuremath{\PW}}\xspace emission (also called colour-suppressed), and a combination of both, whose amplitudes interfere. Figure~\ref{fig:feynman} shows tree-level diagrams for the processes relevant for this analysis. \begin{figure}[tbp] \centering \includegraphics[width=0.885\linewidth]{figs/Fig1.pdf} \caption{Top left: internal {\ensuremath{\PW}}\xspace -emission diagram for the decays \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace}.
Top right: external {\ensuremath{\PW}}\xspace -emission diagram for the decays \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace}. Bottom row: (left) external and (right) internal {\ensuremath{\PW}}\xspace -emission diagrams contributing to the \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\Dbar{}^0}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} decay. } \label{fig:feynman} \end{figure} The decays of type ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{(*)}}}\xspace {\ensuremath{\Dbar{}^{(*)}}}\xspace K$ also allow for spectroscopy studies through their intermediate resonant structures, especially for investigations of {\ensuremath{\Pc}}\xspace{\ensuremath{\overline \squark}}\xspace resonances via the {\ensuremath{\D^{(*)}}}\xspace{\ensuremath{\PK}}\xspace system and charmonium resonances via the {\ensuremath{\D^{(*)}}}\xspace{\ensuremath{\Dbar{}^{(*)}}}\xspace system. The specific topology of these decays allows for strong suppression of combinatorial background in fully reconstructed decays, and the small energy release leads to an excellent {\ensuremath{\PB}}\xspace-mass resolution. These features make them good candidates for future amplitude analyses. To date, only two amplitude analyses~\cite{Brodzicka:2007aa,Lees:2014abp} have been performed in this family of decays, neither of which involved an excited {\ensuremath{\D^*}}\xspace meson. Furthermore, both of them are sensitive only to resonant states with natural spin-parity assignments, \mbox{\itshape i.e.}\xspace $J^P = 0^+, 1^-, 2^+, 3^-$, \mbox{\itshape etc.}\xspace Relatively little is known about states with unnatural spin-parity, and ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^*}}\xspace {\ensuremath{\Dbar}}\xspace K$ decays provide an interesting probe for their study. \section{Detector and simulation} \label{sec:Detector} The \mbox{LHCb}\xspace detector~\cite{LHCb-DP-2008-001,LHCb-DP-2014-002} is a single-arm forward spectrometer covering the \mbox{pseudorapidity} range $2<\eta <5$, designed for the study of particles containing {\ensuremath{\Pb}}\xspace or {\ensuremath{\Pc}}\xspace quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the $pp$ interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about $4{\mathrm{\,Tm}}$, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet. The tracking system provides a measurement of the momentum, \mbox{$p$}\xspace, of charged particles with a relative uncertainty that varies from 0.5\% at low momentum to 1.0\% at 200\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. The minimum distance of a track to a primary {\ensuremath{\Pp}}\xspace\proton collision vertex (PV), the impact parameter (IP), is measured with a resolution of $(15+29/\mbox{$p_{\mathrm{ T}}$}\xspace)\ensuremath{{\,\upmu\mathrm{m}}}\xspace$, where \mbox{$p_{\mathrm{ T}}$}\xspace is the component of the momentum transverse to the beam, in\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors.
Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers. The datasets employed correspond to integrated luminosities of 3\,\ensuremath{\mbox{\,fb}^{-1}}\xspace and 6\,\ensuremath{\mbox{\,fb}^{-1}}\xspace, collected during LHC Run 1 (2011 and 2012) and Run 2 (2015--2018). The online event selection is performed by a trigger, which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage at which the full event is reconstructed. Events passing the hardware trigger are considered in two categories: one in which the trigger criteria are satisfied by energy deposits in the calorimeter associated with the signal candidate decay, and a second in which any of the various muon or calorimeter trigger criteria are met by activity independent of that decay. The software trigger stage requires a two-, three- or four-track secondary vertex with a significant displacement from any primary $pp$ interaction vertex. At least one charged particle must have a transverse momentum $\mbox{$p_{\mathrm{ T}}$}\xspace > 1.6\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and be inconsistent with originating from a PV. A multivariate algorithm~\cite{BBDT,LHCb-PROC-2015-018} is used for the identification of secondary vertices consistent with the decay of a {\ensuremath{\Pb}}\xspace hadron. Simulated samples are produced to model the effect of the detector acceptance and selection requirements, and to guide subsequent fits to the data. To produce these samples, $pp$ collisions are generated using \mbox{\textsc{Pythia}}\xspace~\cite{Sjostrand:2007gs,*Sjostrand:2006za} with a specific \mbox{LHCb}\xspace configuration~\cite{LHCb-PROC-2010-056}. Decays of unstable particles are described by \mbox{\textsc{EvtGen}}\xspace~\cite{Lange:2001uf}, in which final-state radiation is generated using \mbox{\textsc{Photos}}\xspace~\cite{Golonka:2005pn}. The interaction of the generated particles with the detector, and its response, are implemented using the \mbox{\textsc{Geant4}}\xspace toolkit~\cite{Allison:2006ve, *Agostinelli:2002hh} as described in Ref.~\cite{LHCb-PROC-2011-006}. \section{Selection} \label{sec:Selection} For this analysis, {\ensuremath{\D^+}}\xspace mesons are reconstructed via their decay to the ${\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^+}}\xspace$ final state, and {\ensuremath{\D^0}}\xspace mesons are reconstructed through their decays to both the ${\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace$, denoted as {\ensuremath{{\ensuremath{\PD}}\xspace^0_{K\pi}}}\xspace, and ${\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace$, denoted as {\ensuremath{{\ensuremath{\PD}}\xspace^0_{K3\pi}}}\xspace, final states. However, for decays involving two {\ensuremath{\D^0}}\xspace mesons at least one must be reconstructed via the two-body decay. 
The {\ensuremath{\D^{*+}}}\xspace meson is reconstructed through its decay to {\ensuremath{\D^0}}\xspace{\ensuremath{\pion^+}}\xspace, and is labelled as {\ensuremath{{\ensuremath{\PD}}\xspace^{*+}_{K\pi}}}\xspace ({\ensuremath{{\ensuremath{\PD}}\xspace^{*+}_{K3\pi}}}\xspace) if decaying into {\ensuremath{{\ensuremath{\PD}}\xspace^0_{K\pi}}}\xspace{\ensuremath{\pion^+}}\xspace ({\ensuremath{{\ensuremath{\PD}}\xspace^0_{K3\pi}}}\xspace{\ensuremath{\pion^+}}\xspace). The decays analysed are summarised in Table~\ref{tab:modes}. \begin{table}[tpb] \centering \caption{Decays under study. In the first column no assumption about the {\ensuremath{\PD}}\xspace final state is made. In the second column, however, the particular {\ensuremath{\PD}}\xspace decays are specified.} \label{tab:modes} \begin{tabular}{l | l } \toprule Decay channel & Studied mode \\ \midrule \multirow{2}{*} {\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace}} & \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} \\ & \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKtpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} \\ \midrule \multirow{2}{*} {\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace}} & \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} \\ & \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKtpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} \\ \midrule \multirow{3}{*} {\decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace}} & \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKpi{\ensuremath{\kaon^+}}\xspace} \\ & \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} \\ & \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} \\ \midrule \multirow{2}{*} {\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\Dbar{}^0}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace}} & \decay{{\ensuremath{\B^+}}\xspace}{\DzbKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} \\ & \decay{{\ensuremath{\B^+}}\xspace}{\DzbKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} \\ \midrule \multirow{2}{*} {\decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace}} & \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKpi{\ensuremath{\kaon^+}}\xspace} \\ & \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKtpi{\ensuremath{\kaon^+}}\xspace} \\ \bottomrule \end{tabular} \end{table} Well-reconstructed final-state tracks are required. A standard threshold for the \ensuremath{\chi^2_{\text{IP}}}\xspace of each track is applied ($>4$), where \ensuremath{\chi^2_{\text{IP}}}\xspace is defined as the difference in the vertex-fit \ensuremath{\chi^2}\xspace for the PV associated with the {\ensuremath{\PB}}\xspace-meson candidate when it is reconstructed with or without the track under consideration. The PV that fits best to the flight direction of the {\ensuremath{\PB}}\xspace candidate is taken as the associated PV. All charged final-state particles must have momentum greater than $1\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and transverse momentum above $0.1 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$. 
At least one of them must have $p > 10 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and $\mbox{$p_{\mathrm{ T}}$}\xspace > 1.7 \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$, whilst also having an impact parameter with respect to the PV associated with the {\ensuremath{\PB}}\xspace candidate of at least $0.1\ensuremath{\mathrm{ \,mm}}\xspace$. The invariant masses of {\ensuremath{\PD}}\xspace candidates are required to lie within $20$\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace of their known values~\cite{PDG2019} and their decay vertices must be well reconstructed, having a fit $\chi^2$ less than 10. The {\ensuremath{\PB}}\xspace ({\ensuremath{\PD}}\xspace) candidates must satisfy the requirement that the cosine of the angle between their reconstructed momentum and the line connecting their production and decay vertices is greater than 0.999 (0). The flight time (distance $\chi^2$) from the associated PV for the {\ensuremath{\PB}}\xspace- ({\ensuremath{\PD}}\xspace)-meson candidates is required to exceed 0.2\,\ensuremath{{\mathrm{ \,ps}}}\xspace (36). Finally, particle identification (PID) information is employed to help distinguish final-state {\ensuremath{\PK}}\xspace and {\ensuremath{\Ppi}}\xspace mesons. The simulated PID response is corrected in order to match the data. This is achieved using calibration ${\ensuremath{\D^{*+}}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^0}}\xspace {\ensuremath{\pion^+}}\xspace$ samples as a function of track kinematics and multiplicity. An unbinned method is employed, where the probability density functions are modelled using kernel density estimation~\cite{Poluektov:2014rxa}. A Boosted Decision Tree (BDT)~\cite{Breiman,AdaBoost} classifier is used to further reduce combinatorial background, which consists of random combinations of tracks that mimic the signal. The BDT is trained using a simulated sample to represent signal and data from the upper sideband of the reconstructed {\ensuremath{\PB}}\xspace-candidate invariant-mass distribution to represent combinatorial background. The variables entering the BDT are: the quality of the reconstructed {\ensuremath{\PB}}\xspace- and {\ensuremath{\PD}}\xspace-meson decay vertices; the \ensuremath{\chi^2_{\text{IP}}}\xspace of the {\ensuremath{\PB}}\xspace- and {\ensuremath{\PD}}\xspace-meson candidates, as well as the \ensuremath{\chi^2_{\text{IP}}}\xspace of the {\ensuremath{\PD}}\xspace-meson decay products; and the particle identification variables of the final-state {\ensuremath{\PK}}\xspace and {\ensuremath{\Ppi}}\xspace mesons. The threshold on the BDT response is set by optimising the significance of the {\ensuremath{\PB}}\xspace-meson signal yield in a fit to data. The signals are sufficiently large that this approach is found to introduce no significant bias to the results. Consistency, within statistical uncertainties, is seen between simulated samples and signal-weighted data for the variables used by the BDT, and for the BDT response itself. A significant peaking background arises from {\ensuremath{\PB}}\xspace-meson decays where the final state is the same but which proceed without one or both of the intermediate charm mesons.
The level of this background is estimated by performing a fit to the invariant mass for {\ensuremath{\PB}}\xspace candidates where the reconstructed mass of one or both {\ensuremath{\PD}}\xspace-meson candidates lies far from the known mass, and extrapolating the obtained {\ensuremath{\PB}}\xspace signal yield into the {\ensuremath{\PD}}\xspace-meson signal regions. To suppress contributions from these decays, the reconstructed {\ensuremath{\PD}}\xspace-meson decay vertex is required to be downstream of the reconstructed {\ensuremath{\PB}}\xspace-meson decay vertex and a lower bound is placed on the flight-distance significance along the beam axis for {\ensuremath{\PD}}\xspace mesons. This requirement suppresses the peaking background to the level of a few percent of the signal yield, and this remaining contamination is later subtracted. \section{Mass fit} \label{sec:mass_fit} After selecting the signal candidates, an unbinned extended maximum-likelihood fit is performed to the distribution of the reconstructed {\ensuremath{\PB}}\xspace-candidate mass, $m({\ensuremath{\D^{(*)}}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace)$, where the reconstruction is performed with the {\ensuremath{\PD}}\xspace-candidate masses constrained to their known values~\cite{PDG2019} and the {\ensuremath{\PB}}\xspace-candidate direction of flight constrained to originate from the PV. The fit to the mass distribution is performed in the range from $5210$ to $5390 \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$, separately for Run 1 and Run 2 data. The shape used to fit the distribution consists of two components: one to describe the decays of a signal {\ensuremath{\PB}}\xspace meson, and a second to model the combinatorial background. The signal shape is modelled using a Double-Sided Crystal Ball (DSCB)~\cite{Skwarnicki:1986xj} function. The asymmetric shape and non-Gaussian tails account for mass-resolution effects on both sides of the peak and for energy loss due to final-state radiation. The values of the tail parameters of the DSCB shapes are fixed to those found in simulated decays, while the Gaussian core parameters are extracted from the fit together with the signal yield. To model the combinatorial background an exponential function is used. The lower bound on the range of invariant mass considered excludes any significant background from partially reconstructed decays. The combined Run 1 and Run 2 invariant-mass distributions and fit results are shown in Fig.~\ref{fig:massfit}. The fit is used to extract a signal weight for each candidate using the \mbox{\em sPlot}\xspace technique~\cite{Pivk:2004ty}. \begin{figure}[h!tbp] \centering \includegraphics[width=0.44\linewidth]{figs/Fig2a.pdf} \includegraphics[width=0.44\linewidth]{figs/Fig2b.pdf} \caption{Fits to the invariant-mass distributions $m({\ensuremath{\D^{(*)}}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace)$ of (left) \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\D^*}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} and (right) \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\PD}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} for the combined Run 1 and Run 2 samples.
The stacked components are (red) combinatorial background and (blue) signal shape.} \label{fig:massfit} \end{figure} \section{Efficiencies} \label{sec:Efficiencies} The efficiencies $\varepsilon$ of the selection of signal candidates are calculated separately for Run 1 and Run 2 in two stages: \begin{equation} \varepsilon = \varepsilon^{\mathrm{acc}} \cdot \varepsilon^{\mathrm{sel}}, \end{equation} where the geometric \mbox{LHCb}\xspace acceptance efficiencies $\varepsilon^{\mathrm{acc}}$ are calculated using simulated samples, and correspond to the fraction of generated events where all final-state particles lie within the \mbox{LHCb}\xspace acceptance. The trigger, reconstruction, and selection efficiencies $\varepsilon^{\mathrm{sel}}$ are also determined using simulated samples, as the fraction of reconstructed candidates passing the trigger, reconstruction, and selection criteria, given that they pass the geometrical acceptance requirement. The efficiencies are evaluated as a function of the position in the phase space of the decay. Due to the presence of a pseudoscalar particle in the initial state and one vector ({\ensuremath{\D^*}}\xspace) plus two pseudoscalar particles in the final state, decays of the type \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\D^*}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} have four independent degrees of freedom. These are chosen to be the two-body squared invariant masses $m^2({\ensuremath{\D^*}}\xspace{\ensuremath{\PK}}\xspace)$ and $m^2({\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace)$, and two helicity angles: the angle $\chi$ between the decay planes of the {\ensuremath{\D^*}}\xspace meson and the ${\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace {\ensuremath{\PK}}\xspace$ system in the {\ensuremath{\PB}}\xspace-meson rest frame, and the {\ensuremath{\D^*}}\xspace-meson helicity angle $\theta$, defined as the angle between the direction of the {\ensuremath{\Ppi}}\xspace meson coming from the {\ensuremath{\D^*}}\xspace meson, evaluated in the {\ensuremath{\D^*}}\xspace-meson rest frame, and the direction of the {\ensuremath{\D^*}}\xspace meson in the {\ensuremath{\PB}}\xspace-meson rest frame. In the case of \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\PD}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} decays only two degrees of freedom are required, and these are chosen to be the two-body squared invariant masses $m^2({\ensuremath{\PD}}\xspace{\ensuremath{\PK}}\xspace)$ and $m^2({\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace)$. Whilst the efficiency varies considerably across the two-body invariant-mass planes and the {\ensuremath{\D^*}}\xspace-meson helicity angle $\theta$, it does not depend significantly on the angle $\chi$. Two-dimensional efficiency distributions, as functions of $m^2({\ensuremath{\D^*}}\xspace{\ensuremath{\PK}}\xspace)$ and $m^2({\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace)$, are obtained in four equal bins of $\cos(\theta)$. The efficiency distributions are further smoothed using a kernel density estimation (KDE) technique~\cite{Poluektov:2014rxa}.
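As a rough illustration of this smoothing step (a simplified stand-in for the technique of Ref.~\cite{Poluektov:2014rxa}; the array names and bandwidths below are our assumptions), the efficiency map can be estimated as a ratio of kernel density estimates:
\begin{verbatim}
import numpy as np

def kde_sum(points, centers, bandwidth):
    # Sum of 2D Gaussian kernels centred on 'centers' (N, 2),
    # evaluated at 'points' (M, 2).
    d = (points[:, None, :] - centers[None, :, :]) / bandwidth
    return np.exp(-0.5 * np.sum(d * d, axis=-1)).sum(axis=1)

def smoothed_efficiency(grid, reco, gen, bandwidth):
    # Efficiency on a grid of (m2_DstK, m2_DbarK) points, as the
    # ratio of the smoothed density of reconstructed simulated
    # candidates ('reco') to that of all generated ones ('gen').
    bw = np.asarray(bandwidth)
    num = kde_sum(grid, reco, bw)
    den = kde_sum(grid, gen, bw)
    return np.where(den > 0, num / den, 0.0)
\end{verbatim}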
The efficiencies as functions of position in the two-body invariant-mass plane, integrated over the two helicity angles, are shown in Figs.~\ref{fig:efficiency_run1} and~\ref{fig:efficiency_run2} for the \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\D^*}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} samples from Run 1 and Run 2, respectively. The relative statistical uncertainties on the total efficiencies are in the range $10$--$20\%$. \begin{figure}[h!tbp] \centering \includegraphics[width=0.91\linewidth]{figs/Fig3.pdf} \llap{\shortstack{% \includegraphics[width=0.5\linewidth]{figs/eff_scale_run1.png}\\ \rule{0ex}{0.25in}% } \rule{0.25in}{0ex}} \caption{Selection and reconstruction efficiency, $\varepsilon^{\mathrm{sel}}$, as a function of position in the two-body squared invariant-mass plane for the seven \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\D^*}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} modes, obtained using Run~1 simulated samples. A KDE smoothing has been applied. The blue lines indicate the kinematic boundaries and the numbers indicate the value of the efficiency at several points in the phase space.} \label{fig:efficiency_run1} \end{figure} \begin{figure}[h!tbp] \centering \includegraphics[width=0.91\linewidth]{figs/Fig4.pdf} \llap{\shortstack{% \includegraphics[width=0.5\linewidth]{figs/eff_scale_run2.png}\\ \rule{0ex}{0.25in}% } \rule{0.25in}{0ex}} \caption{Selection and reconstruction efficiency, $\varepsilon^{\mathrm{sel}}$, as a function of position in the two-body squared invariant-mass plane for the seven \decay{{\ensuremath{\PB}}\xspace}{{\ensuremath{\D^*}}\xspace{\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace{\ensuremath{\PK}}\xspace} modes, obtained using Run~2 simulated samples. A KDE smoothing has been applied. The blue lines indicate the kinematic boundaries and the numbers indicate the values of the efficiency at several points in the phase space.} \label{fig:efficiency_run2} \end{figure} \section{Corrected yields} \label{sec:yields} The ratios of branching fractions are calculated using signal yields corrected by applying a candidate-by-candidate background subtraction and efficiency correction, and by accounting for the decays of the {\ensuremath{\PD}}\xspace mesons into the final states. The branching fraction of a ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{(*)}}}\xspace {\ensuremath{\Dbar}}\xspace K$ decay is proportional to the corrected yield, {\ensuremath{N^{\mathrm{corr}}}}\xspace, calculated as \begin{equation} \label{eq:corr_yield} {\ensuremath{N^{\mathrm{corr}}}}\xspace = \frac{\displaystyle \sum_{i} \frac{W_i}{\varepsilon^{\mathrm{sel}}_i (x_i) \cdot \varepsilon^{\mathrm{acc}}} - n^{\mathrm{corr}}_{\mathrm{peaking}}}{\displaystyle {\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\D^{(*)}}}\xspace)\cdot{\ensuremath{\mathcal{B}}}\xspace({\kern 0.2em\overline{\kern -0.2em \PD}{}}\xspace)}. \end{equation} Here the index $i$ runs over all candidates in the fitted sample, $W_i$ is the signal weight for candidate $i$ (see Section~\ref{sec:mass_fit}), $\varepsilon_i^{\mathrm{sel}}$ is the selection efficiency for candidate $i$ as a function of its position $x_i$ in the relevant phase space, and $\varepsilon^{\mathrm{acc}}$ is the efficiency of the acceptance cut for the given mode (see Section~\ref{sec:Efficiencies}).
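For concreteness, the evaluation of Eq.~(\ref{eq:corr_yield}) can be sketched as follows (a minimal illustration with our own variable names, not the actual analysis code):
\begin{verbatim}
import numpy as np

def corrected_yield(weights, eff_sel, eff_acc,
                    n_peaking_corr, bf_d, bf_dbar):
    # Efficiency-weighted sum of the sPlot signal weights, with the
    # efficiency-corrected peaking background subtracted and the
    # product of the D-meson branching fractions divided out.
    s = np.sum(np.asarray(weights) / (np.asarray(eff_sel) * eff_acc))
    return (s - n_peaking_corr) / (bf_d * bf_dbar)
\end{verbatim}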
Since the efficiency-weighted sum over candidates includes a small (peaking) background contribution, the efficiency-corrected residual peaking background $n^{\mathrm{corr}}_{\mathrm{peaking}}$ is subtracted from the sum. The value of $n^{\mathrm{corr}}_{\mathrm{peaking}}$ is obtained by taking the estimated yield of the peaking background and dividing it by an average efficiency of the sample, since the distribution of the peaking background in the phase space of the decay is not known. Finally, the denominator corrects for the {\ensuremath{\PD}}\xspace-meson decay branching fractions, which are: \begin{equation*} \begin{alignedat}{3} {\ensuremath{\mathcal{B}}}\xspace &({\ensuremath{\D^0}}\xspace \rightarrow {\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace) &&= (3.999\pm0.045)\%\quad &&\text{~\cite{Amhis:2019ckw}}, \\ {\ensuremath{\mathcal{B}}}\xspace &({\ensuremath{\D^0}}\xspace \rightarrow {\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace) &&= (8.23\pm0.14)\%\quad &&\text{~\cite{PDG2019}}, \\ {\ensuremath{\mathcal{B}}}\xspace &({\ensuremath{\D^+}}\xspace \rightarrow {\ensuremath{\kaon^-}}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^+}}\xspace) &&= (9.38\pm0.16)\%\quad &&\text{~\cite{PDG2019}}, \\ {\ensuremath{\mathcal{B}}}\xspace &({\ensuremath{\D^{*+}}}\xspace \rightarrow {\ensuremath{\D^0}}\xspace {\ensuremath{\pion^+}}\xspace) &&= (67.7\pm0.5)\%\quad &&\text{~\cite{PDG2019}}. \end{alignedat} \end{equation*} Table~\ref{tab:yields_all} summarises the values of the signal yields $N$ obtained from the mass fits, as well as the corrected yields {\ensuremath{N^{\mathrm{corr}}}}\xspace for all studied modes. \renewcommand{\arraystretch}{1.15} \begin{table}[tbp] \centering \caption{Signal yields $N$, and the efficiency- and {\ensuremath{\PD}}\xspace-meson branching-fraction-corrected yields {\ensuremath{N^{\mathrm{corr}}}}\xspace with the residual peaking background subtracted. The corrected yields are given in units of $10^6$.
The uncertainties are statistical only.} \label{tab:yields_all} \begin{tabular}{@{}l|r@{$\,\pm\,$}lr@{$\,\pm\,$}l|r@{$\,\pm\,$}lr@{$\,\pm\,$}l@{}} \toprule \multirow{2}{*}{Mode} & \multicolumn{4}{c|}{Run 1} & \multicolumn{4}{c}{Run 2} \\ & \multicolumn{2}{c}{$N$} & \multicolumn{2}{c|}{${\ensuremath{N^{\mathrm{corr}}}}\xspace\,(10^6)$} & \multicolumn{2}{c}{$N$} & \multicolumn{2}{c}{${\ensuremath{N^{\mathrm{corr}}}}\xspace\,(10^6)$} \\ \midrule \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} & $212$ & $16$ & $289$ & $21$ & $869$ & \phantom{0}$32$ & $854$ & \phantom{0}$32$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKtpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} & $116$ & $11$ & $286$ & $28$ & $606$ & \phantom{0}$26$ & $997$ & \phantom{0}$44$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} & $210$ & $15$ & $313$ & $23$ & $912$ & \phantom{0}$32$ & $1009$ & \phantom{0}$36$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKtpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} & $153$ & $13$ & $371$ & $32$ & $566$ & \phantom{0}$25$ & $969$ & \phantom{0}$45$ \\ \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKpi{\ensuremath{\kaon^+}}\xspace} & $605$ & $26$ & $1196$ & $52$ & $2409$ & \phantom{0}$52$ & $3495$ & \phantom{0}$76$ \\ \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} & $321$ & $20$ & $949$ & $57$ & $1706$ & \phantom{0}$44$ & $3541$ & \phantom{0}$92$ \\ \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} & $331$ & $20$ & $1105$ & $64$ & $1544$ & \phantom{0}$41$ & $3812$ & $104$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DzbKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} & $477$ & $24$ & $517$ & $26$ & $2564$ & \phantom{0}$56$ & $1823$ & \phantom{0}$39$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DzbKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} & $622$ & $28$ & $527$ & $23$ & $2853$ & \phantom{0}$60$ & $1720$ & \phantom{0}$35$ \\ \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKpi{\ensuremath{\kaon^+}}\xspace} & $2443$ & $54$ & $651$ & $14$ & $9071$ & $104$ & $2039$ & \phantom{0}$23$ \\ \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKtpi{\ensuremath{\kaon^+}}\xspace} & $864$ & $32$ & $648$ & $23$ & $3867$ & \phantom{0}$69$ & $2040$ & \phantom{0}$36$ \\ \bottomrule \end{tabular} \end{table} \section{Systematic uncertainties} \label{sec:syst} Many systematic effects cancel exactly in the ratios of branching fractions, such as the uncertainties in the {\ensuremath{\bquark\bquarkbar}}\xspace-production cross-section and fragmentation fractions as well as the uncertainties in the luminosity. The kinematics differ most between numerator and denominator for the slow pion in modes involving a {\ensuremath{\D^*}}\xspace decay, but the tracking efficiency of the slow pion produced in the {\ensuremath{\D^*}}\xspace decay is found to be well modelled using calibration samples and the associated systematic uncertainty is found to be negligible. Uncertainties are considered where they arise from the shapes used to model the invariant-mass distribution, the efficiency determination, the resampling of the PID response, and the contribution of residual peaking backgrounds. The systematic uncertainty related to the signal model is evaluated by randomly sampling each tail parameter of the DSCB from a normal distribution centred at the value used in the fit and with a width corresponding to its uncertainty. 
The fit is then repeated with these new values and the yields are recalculated; the correlations of the tail parameters are accounted for. Repeating this procedure many times yields a distribution of recalculated yields, and the RMS of this distribution is used as the systematic uncertainty. Changing the shape of the background model is found to have a negligible impact on the resulting yields. The associated systematic uncertainty is thus neglected. To estimate the systematic uncertainty associated with the choice of the kernel width in the PID response correction, the procedure is repeated with a larger kernel width. The absolute difference between the new efficiency-corrected yield and the baseline value is taken as the uncertainty. Even after applying the flight-distance significance requirements on the {\ensuremath{\PD}}\xspace mesons, there is still some residual peaking background $n^{\mathrm{corr}}_{\mathrm{peaking}}$. This is subtracted from the signal yield. The uncertainty on the yield of the residual peaking background, determined using the $c$-hadron sidebands, is used as the systematic uncertainty. The limited size of the simulated samples leads to uncertainties in the efficiency estimation. Bootstrapped samples are produced by randomly sampling candidates from the original simulated sample with replacement, until a new sample with the same number of candidates is obtained. Each bootstrapped sample results in a different efficiency distribution, and the RMS values of the resulting yield distributions are taken as a measure of the associated systematic uncertainties. This is typically the dominant systematic uncertainty. The tracking efficiencies are assumed to cancel in all ratios where the same number of tracks is reconstructed in the numerator and denominator. Differences in kinematics, most obviously for the slow pion in the {\ensuremath{\D^*}}\xspace decay, could lead to imperfect cancellation. This has been explored and the effect is found to be negligible. In ratios where the numbers of tracks in the numerator and denominator differ, an additional systematic uncertainty of 1\% per additional track is applied. The magnitudes of the individual contributions are summarised in Table~\ref{tab:syst}, together with the total systematic uncertainty obtained by combining the individual components in quadrature. \renewcommand{\arraystretch}{1.18} \begin{table}[tbp] \centering \caption{Systematic uncertainties on {\ensuremath{N^{\mathrm{corr}}}}\xspace from the signal PDF parameters ($\sigma_{\mathrm{PDF}}$), the finite simulation samples ($\sigma_{\mathrm{MC}}$), the PID resampling ($\sigma_{\mathrm{PID}}$), the residual peaking background ($\sigma_{\mathrm{bkg}}$), and the total systematic uncertainty ($\sigma_{\mathrm{tot.}}$).
All values are given as a percentage of the central value of {\ensuremath{N^{\mathrm{corr}}}}\xspace.} \label{tab:syst} \begin{tabular}{@{}l | cccc|c | cccc | c@{}} \toprule \multirow{2}{*}{Decay channel} & \multicolumn{5}{c|}{Run 1 (\%)} & \multicolumn{5}{c}{Run 2 (\%)} \\ & $\sigma_{\mathrm{PDF}}$ & $\sigma_{\mathrm{MC}}$ & $\sigma_{\mathrm{PID}}$ & $\sigma_{\mathrm{bkg}}$ & $\sigma_{\mathrm{tot.}}$ & $\sigma_{\mathrm{PDF}}$ & $\sigma_{\mathrm{MC}}$ & $\sigma_{\mathrm{PID}}$ & $\sigma_{\mathrm{bkg}}$ & $\sigma_{\mathrm{tot.}}$ \\ \midrule \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} & $0.6$ & $0.8$ & $1.5$ & $0.8$ & $2.0$ & $0.5$ & $1.4$ & $0.2$ & $0.5$ & $1.6$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKtpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} & $1.2$ & $1.2$ & $0.9$ & $1.4$ & $2.4$ & $1.0$ & $2.1$ & $0.7$ & $0.6$ & $2.5$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} & $0.5$ & $1.0$ & $0.4$ & $0.7$ & $1.4$ & $0.8$ & $1.8$ & $0.7$ & $0.4$ & $2.1$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKtpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} & $1.4$ & $1.6$ & $1.1$ & $1.2$ & $2.7$ & $0.7$ & $2.5$ & $1.2$ & $0.6$ & $2.9$ \\ \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKpi{\ensuremath{\kaon^+}}\xspace} & $0.6$ & $0.7$ & $0.9$ & $0.3$ & $1.3$ & $0.5$ & $1.1$ & $0.2$ & $0.2$ & $1.2$ \\ \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} & $0.8$ & $1.2$ & $0.3$ & $0.7$ & $1.6$ & $0.8$ & $1.7$ & $0.6$ & $0.3$ & $2.0$ \\ \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} & $0.9$ & $1.2$ & $0.3$ & $0.6$ & $1.6$ & $0.6$ & $2.0$ & $0.3$ & $0.3$ & $2.1$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DzbKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} & $0.6$ & $1.1$ & $1.0$ & $0.9$ & $1.8$ & $1.1$ & $1.8$ & $0.5$ & $0.4$ & $2.2$ \\ \decay{{\ensuremath{\B^+}}\xspace}{\DzbKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} & $0.7$ & $1.1$ & $0.5$ & $0.7$ & $1.6$ & $0.7$ & $1.6$ & $0.4$ & $0.3$ & $1.8$ \\ \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKpi{\ensuremath{\kaon^+}}\xspace} & $0.4$ & $0.7$ & $0.5$ & $0.4$ & $1.0$ & $0.3$ & $0.7$ & $0.7$ & $0.2$ & $1.1$ \\ \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKtpi{\ensuremath{\kaon^+}}\xspace} & $0.2$ & $1.4$ & $0.3$ & $0.5$ & $1.5$ & $0.8$ & $1.3$ & $0.4$ & $0.3$ & $1.6$ \\ \bottomrule \end{tabular} \end{table} \section{Results} \label{sec:Results} The ratios of branching fractions are obtained by appropriately combining the {\ensuremath{N^{\mathrm{corr}}}}\xspace yields of decay modes in Table~\ref{tab:modes} into ratios, such that the systematic uncertainty coming from the different number of tracks in the numerator and denominator is minimised. 
In order to calculate the first two branching-fraction ratios, of the \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} (\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace}) decay with respect to the \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\Dbar{}^0}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} decay, a weighted average of {\ensuremath{N^{\mathrm{corr}}}}\xspace of \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} (\decay{{\ensuremath{\B^+}}\xspace}{\DstarpKpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace}) and \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKtpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} (\decay{{\ensuremath{\B^+}}\xspace}{\DstarpKtpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace}) is performed and divided by the weighted average of {\ensuremath{N^{\mathrm{corr}}}}\xspace for the \decay{{\ensuremath{\B^+}}\xspace}{\DzbKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^+}}\xspace}{\DzbKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} modes. The associated weight in the weighted average is the inverse of the variance of the value. The variance on {\ensuremath{N^{\mathrm{corr}}}}\xspace is obtained by adding the statistical and the systematic uncertainty, including the uncertainties due to the {\ensuremath{\PD}}\xspace-meson branching fractions, in quadrature. The first measurement of the third ratio, of \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} to \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} decays, is calculated by performing a weighted average of {\ensuremath{N^{\mathrm{corr}}}}\xspace for \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKtpi\DzKpi{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKtpi{\ensuremath{\kaon^+}}\xspace} decays, and dividing it by the value of {\ensuremath{N^{\mathrm{corr}}}}\xspace for the \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKtpi{\ensuremath{\kaon^+}}\xspace} decay. A second measurement is obtained from the ratio of {\ensuremath{N^{\mathrm{corr}}}}\xspace for \decay{{\ensuremath{\B^0}}\xspace}{\DstarmKpi\DzKpi{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace\DzKpi{\ensuremath{\kaon^+}}\xspace}, which is combined with the first one into the final branching-fraction ratio. The fourth branching-fraction ratio, of \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} decays, is calculated as the weighted average of two ratios. The first is the ratio of \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} decays, and the second is that for \decay{{\ensuremath{\B^+}}\xspace}{\DstarpKtpi{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^+}}\xspace}{\DstarmKtpi{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} decays.
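This combination is a standard inverse-variance weighted average; schematically (our notation, not the analysis code):
\begin{verbatim}
import numpy as np

def weighted_average(values, stat_unc, syst_unc):
    # Inverse-variance weights; each variance combines the
    # statistical and systematic uncertainties in quadrature.
    v = np.asarray(values, dtype=float)
    var = np.asarray(stat_unc) ** 2 + np.asarray(syst_unc) ** 2
    w = 1.0 / var
    return np.sum(w * v) / np.sum(w), np.sqrt(1.0 / np.sum(w))
\end{verbatim}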
The ratios of branching fractions are computed separately for Run~1 and Run~2 and then combined in a weighted average. These ratios are measured to be \begin{equation*} \begin{split} \frac{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace})}{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\Dbar{}^0}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace})} &= 0.517 \pm 0.015 \pm 0.013 \pm 0.011 , \\[10pt] \frac{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace})}{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\Dbar{}^0}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace})} &= 0.577 \pm 0.016 \pm 0.013 \pm 0.013 , \\[10pt] \frac{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace})}{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^-}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace})} &= 1.754 \pm 0.028 \pm 0.016 \pm 0.035 , \\[10pt] \frac{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace})}{{\ensuremath{\mathcal{B}}}\xspace (\decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace})} &= 0.907 \pm 0.033 \pm 0.014 , \end{split} \end{equation*} where the first uncertainty is statistical, the second systematic, and the third is due to the uncertainties on the {\ensuremath{\PD}}\xspace-meson branching fractions~\cite{PDG2019}. The BaBar collaboration studied these decays previously~\cite{delAmoSanchez:2010pg}, with a different set of $D^{*0}$ and ${\ensuremath{\D^0}}\xspace$ channels, obtaining signal yields of $91\pm13$ \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} candidates, $75\pm13$ \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} candidates, and $1300\pm54$ \decay{{\ensuremath{\B^0}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\kaon^+}}\xspace} candidates. The signal yields obtained using the LHCb data are around twenty times larger for the first two decays, and over five times larger for the third. Significant increases are also seen for the yields obtained in the normalisation modes, with respect to earlier studies using data from the Belle and BaBar experiments. Good agreement is found with the corresponding branching-fraction ratios calculated from Particle Data Group (PDG) averages~\cite{PDG2019}, namely $0.43\pm0.12$, $0.41\pm0.13$, $2.3\pm0.3$, and $1.1\pm0.3$, respectively. The measurements described in this article are between 5 and 7 times more precise.
The ratio between the \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace{\ensuremath{\kaon^+}}\xspace} and \decay{{\ensuremath{\B^+}}\xspace}{{\ensuremath{\D^{*-}}}\xspace{\ensuremath{\D^+}}\xspace{\ensuremath{\kaon^+}}\xspace} branching fractions deviates from unity with a significance just below $3\sigma$, suggesting activity in a channel other than the ${\ensuremath{\D^{*+}}}\xspace{\ensuremath{\D^-}}\xspace$ channel that the two have in common. These measurements, and the high purity of the samples obtained for the decays under study, make these decays prime targets for future analyses of resonant structure. \section{Summary} \label{sec:summary} A data sample corresponding to an integrated luminosity of $9\ensuremath{\mbox{\,fb}^{-1}}\xspace$ recorded with the \mbox{LHCb}\xspace detector is used to measure four ratios of branching fractions in ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{(*)}}}\xspace {\ensuremath{\Dbar}}\xspace K$ decays. The ratios are consistent with previous measurements and are measured with the highest precision to date. Furthermore, this work represents the first published analysis at the \mbox{LHC}\xspace of $b$-hadron decays to two open-charm hadrons and a third, light, hadron. Large samples of ${\ensuremath{\PB}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\D^{(*)}}}\xspace {\ensuremath{\Dbar}}\xspace K$ decays are available, and can be isolated in the \mbox{LHCb}\xspace dataset with low background contamination. These are promising characteristics for future studies of the intermediate resonant structure of these channels.
\section{Introduction} \begin{figure*}[!t] \begin{center} \includegraphics[width=0.77\textwidth]{system_overview.png} \caption{System overview of MI Net. VQ-VAE (bottom) is used to reconstruct audio sequences of various instruments and infers a latent representation of music (latent code). VQ-VAE is conditioned by a prior network (middle) that encodes body movements with or without MIDI content. Given input video frames of a musician playing an instrument, the Multi-instrumentalist Net (MI Net) generates the music for that instrument.} \label{fig:overview} \end{center} \end{figure*} \begin{flushright} \textit{``This is what it sounds like"\\ When Doves Cry, Purple Rain, Prince }\end{flushright} A multi-instrumentalist is a musician who plays two or more musical instruments and easily transitions from one instrument to another. For example, the famous multi-instrumentalist, Prince, played all of the 27 instruments featured in his first album `For You'. Such a talent rests on the ability to disentangle what is unique to each instrument while leveraging the similarities across instruments. \\ Music theory defines an assortment of components, such as pitch, rhythm, dynamics, and timbre, that characterize music. The combination and variation of these components creates numerous and substantially different types of music. One of the fundamental factors in the distinction between musical instruments is timbre. While timbre is the dominant component in the association of music with an instrument, it is not the sole one. Such associations appear to be entangled, as evidenced by perceptual experiments suggesting that non-professionals would not be able to tell which instrument is playing from just listening to a piece of single-instrument music. Such a complexity exists in a computational setting as well, where disentanglement of timbre from audio is not a straightforward task. \\ In a situation where the audio signal could be ambiguous, the visual information, i.e., a video of the musician playing, greatly simplifies such associations. In perception, visual information helps the brain to disentangle the source of the sounds. For example, associating the music with the type of instrument being played becomes a simple task. This is supported by recent computational research which shows that visual information has the potential to significantly enhance audio tasks, such as sound separation. \\ Indeed, visual cues and sound information complement each other. However, is it possible to go a step further and construct a computational system that generates instrumental music from visual cues alone? This turns out to be a challenging problem, but one with the potential to identify the components involved in generating music. In recent years, methods employing deep-learning techniques have shown the plausibility of accomplishing such a transformation between visual cues and music. Current methods have achieved convincing music-generation results from visual information, such as body keypoints or full video. However, they are still limited in generating music for several instruments: they either rely on strong supervision, requiring numerous examples accompanied by instrument labels, or train a distinct model for each instrument. \\ In this work, we develop a single-model system that generates different types of instrumental music from unlabeled videos.
In particular, we introduce the `Multi-instrumentalist Net' (MI Net), which succeeds in generating, in an unsupervised manner, various instrumental music signals from videos of musicians playing music. An overview of the system is shown in Fig. \ref{fig:overview}. At the heart of our system is a Vector-Quantized Variational Autoencoder (VQ-VAE) network~\cite{van2017neural} that learns in an unsupervised way a discrete latent space for audio features by reconstructing the audio input. We then train a prior network over the latent space simultaneously with a visual-information encoder network. Specifically, we use body keypoints as our visual features. This step turns out to be critical in finding the correlations between the visual information and the audio features captured in the latent space. Indeed, our analysis indicates that the encoder of body movements and the prior network can cooperate to clearly disentangle the representation of the music at the instrument level. This combination allows us to generate new music by sampling discrete latent vectors from the trained prior distribution and passing them through the decoder of the VQ-VAE. In addition to music generation, we also study the production of the exact content of the music played in a video by introducing a content encoder used during the training of the prior network. The content is an additional characteristic of the music, such as Midi. \\ In summary, our main contributions are as follows: (i) We introduce the Multi-Instrumentalist Net, the first unsupervised system that, given a video of a musician playing one of a variety of 13 instruments, generates the associated music for the instrument in the video. We evaluate MI Net on videos and music in the URMP dataset~\cite{li2018creating}, recorded in a studio setting, and demonstrate that MI Net can generate music of reasonable quality that is specific to the instrument being played. (ii) We demonstrate that conditioning the prior network on additional content information enables generation of the particular music piece being played in the video. (iii) We further evaluate MI Net on `in the wild' instrumental performance videos and discuss potential future directions. \section{Related Work} In our work, we propose to use the visual information of a musician playing an instrument to generate music that captures the specificity of the video and audio contents. To set up the framework, we describe here related work in music generation and audio-visual tasks. \textbf{\textit{Music Generation.}} Several deep-learning methods have been introduced to generate novel music. In particular, autoregressive models that generate audio waveforms directly, such as Wavenet~\cite{oord2016wavenet}, SampleRNN~\cite{mehri2016samplernn}, and their variants~\cite{oord2018parallel,ping2018clarinet,dieleman2018challenge}, have been shown to be successful in generating speech or music signals. Since capturing high-level structure in audio waveforms is challenging, methods such as GANSynth~\cite{engel2019gansynth} and MelNet~\cite{vasquez2019melnet} proposed to use time-frequency representations, e.g., a spectrogram, to learn the generative model. Recently, non-autoregressive models such as MelGAN~\cite{kumar2019melgan} demonstrated convincing results on audio generation. In addition to spectrograms, symbolic musical representations (Midi) have been found instrumental in modeling and generating music~\cite{huang2018music,hawthorne2018enabling}.
While the aforementioned methods generate music unconditionally, there has been progress in generating music with constraints. For example, it was proposed to constrain generative models to sample with respect to some predefined attributes~\cite{engel2017latent}. The Universal Music Translation network aims to translate music across various styles via raw audio waveforms. Works such as Jukebox~\cite{dhariwal2020jukebox} and MuseNet~\cite{payne2019musenet} showed the possibility of generating music based on user preferences, which translates to a network model specifically trained with labeled tokens as a conditioning input. Furthermore, a Transformer autoencoder has recently been proposed to aggregate encodings of Midi data across time to obtain a global representation of style from a given performance. Such a global representation can be used to control the style of the music~\cite{choi2019encoding}. \\ \textbf{\textit{Audio-visual learning.}} While numerous methods have been introduced to work with the sound signal and its various representations, additional information can help enhance audio signal interpretation and generation. Indeed, the field of \textit{audio-visual learning} deals with exploring and leveraging the correlation of audio and video for tasks that simultaneously involve these two signals. In recent years, methods for audio-visual learning have developed significantly and unlocked novel applications. For example, conditioning the visual and the sound streams on each other as a training supervision was shown to be an effective training method for networks with unlabeled data in the audio-visual correspondence task~\cite{arandjelovic2017look,aytar2016soundnet,harwath2016unsupervised,owens2016ambient}. Moreover, it was shown that it is possible to separate object sounds by inspecting the video cues of unlabeled videos~\cite{gao2018learning,zhao2018sound,zhao2019sound,gan2020music}, or to perform an audio-visual event localization task on unconstrained videos~\cite{tian2018audio}. \\ \textbf{\textit{Audio to Video Systems.}} Transformations between audio and video have been studied as well. In the audio-to-video direction, deep learning RNN-based strategies were proposed to generate body dynamics correlated with sounds from an audio stream~\cite{shlizerman2018audio,ginosar2019learning}. Moreover, systems that generate parts of the face or synchronize lip movements from speech audio were shown to be possible~\cite{suwajanakorn2017synthesizing,jamaludin2019you,oh2019speech2face}. \\ \textbf{\textit{Sound Generation from Videos.}} The direction of generating sound from video is a challenging problem. Initial deep learning work~\cite{owens2016visually} implemented a recurrent neural network to predict impact sound features from videos and was then able to produce a waveform from these features. Later, a conditional generative adversarial network~\cite{chen2017deep} was proposed to achieve cross-modal audio-visual generation of musical performances. A single image is used as an input and the network performs supervision on instrument classes to generate a low-resolution spectrogram. In addition, a SampleRNN-based model~\cite{zhou2018visual} has been introduced to generate natural sounds, e.g., a baby crying or water flowing, given a visual scene.
Later, an audio forwarding regularizer that takes the real sound as an input and outputs bottlenecked sound features was shown to provide stronger supervision for natural sound prediction and to produce associated sounds from visual features alone~\cite{chen2020generating}.\\ Compared to natural sounds, which have relatively simple structure, music across different instruments contains more complex elements. Previous work on music generation from videos was mostly focused on piano performance. A ResNet-based method was proposed to predict the pitch and onset events given a stream of piano video frames~\cite{koepke2020sight}. Audeo~\cite{su2020audeo} succeeded in transcribing a silent piano performance video to high-precision audio outputs. While the results of such methods are promising, the generation is limited to piano only. For woodwind and brass instruments, it is unlikely that the music can be transcribed from the visual stream alone, since changes in the air blown into the instrument can produce different pitches with very minor visual changes. Recently, Foley Music~\cite{gan2020foley} proposed a Graph-Transformer network to generate Midi events from body keypoints and achieved convincing and robust outcomes. However, it is limited by using a different model per instrument, and it requires instrument labels to synthesize Midi events. In comparison, our method can generate music for different instruments with a completely unsupervised training process. \begin{figure*}[t] \centering \includegraphics[width=\linewidth,height=3.5cm]{vq-vae.png} \caption{Detailed schematics of the components in VQ-VAE. The encoder and the decoder contain multiple multi-band residual (MBR) blocks and common residual blocks to reconstruct the input audio.} \label{fig:vq-vae} \end{figure*} \section{Methods} \textbf{\textit{Visual Representations.}} We use human pose keypoints to capture cues of body motion that express playing an instrument. We use the OpenPose framework~\cite{cao2018openpose} to detect body and hand keypoints from each video frame and then stack the $2$D coordinates over time into structured visual representations (matrices). In practice, we find that the upper body keypoints are sufficient, which results in $5$ keypoints for the body parts and $21$ keypoints for each hand in total. To remove noise, we perform a linear interpolation of missing frames. Specifically, the joints that were not predicted well are interpolated linearly according to the distance to the previous and subsequent detected frames. This interpolation is based on the relative position of each joint to the preceding joint and ensures stability in the absolute position.\\ \textbf{\textit{Audio Representations.}} The choice of the correct audio representation is key to learning a generative music model. The most straightforward representation is the audio waveform; however, training on waveform signals is challenging, as described in previous work~\cite{vasquez2019melnet}, and would typically take a long time to converge to the desired performance. Another common representation is symbolic Midi. While informative, Midi is not applicable in our case since we aim to design an unsupervised model suitable for different instruments, and Midi explicitly uses the instrument name to synthesize the audio via a Midi synthesizer. Alternatively, a frequency representation can be used.
We use the magnitude of the log-spectrogram as the audio representation, obtained by applying the Short-Time Fourier Transform to the waveform, resulting in a $2$D time-frequency representation $S \in \mathbb{R}^{F\times T}$, where $F$ is the number of frequency bins and $T$ is the number of time steps. We learn the latent representation of the spectrogram via a reconstruction task described in the following sections. \subsection{Encoding of Audio Features} \textbf{\textit{VQ-VAE.}} To encode the magnitude of the log-spectrogram into the latent space, we introduce a multi-band residual 1D convolutional Vector Quantized Variational Autoencoder (VQ-VAE), as shown in Fig.~\ref{fig:vq-vae}. VQ-VAE~\cite{van2017neural} is a type of VAE~\cite{kingma2013auto} in which the encoder outputs a discrete latent representation. The decoder decodes this representation and reconstructs the input. The prior in this network is learned rather than static. VQ-VAEs have been shown to successfully learn latent representations utilized for the generation of high-quality images, videos, and audio. In our case, the audio encoder network encodes the log-spectrogram into a discrete latent representation, and the audio decoder decodes it to reconstruct the log-spectrogram. In general, we define a latent embedding space $e\in \mathbb{R}^{K\times D}$, where $K$ is the size of the discrete latent space (a $K$-way categorical) and $D$ is the dimension of each latent embedding vector $e_i$. During forward propagation, the continuous representation encoded by the audio encoder is replaced with its closest discrete vector. This is defined as follows: let $Z = E(S)\in \mathbb{R}^{D\times T}$ be the output of the audio encoder before quantization. For each time step $t$, VQ-VAE finds the nearest vector in the codebook and uses it as the latent representation, i.e. $\text{Quantize}(Z_t)=e_k$ where $k = \arg \min_i \Vert Z_t-e_i\Vert^2_2$. The vector $e_k$ is then passed to the decoder to decode and reconstruct the log-spectrogram $S$. The VQ-VAE model incorporates two additional terms in its objective to align the vector space of the codebook with the output of the encoder. (i) The embedding loss is applied to the codebook variables and brings the selected codebook vector $e_k$ closer to the output of the encoder $E(S)$. (ii) The commitment loss is applied to the encoder weights and aims to keep the output of the encoder as close as possible to the chosen codebook vector, to prevent it from fluctuating from one code vector to another. As proposed in~\cite{van2017neural}, we use exponential moving average updates for the codebook as a replacement for the embedding loss. We define the resulting loss as $ \mathcal{L} = \Vert S - D(e)\Vert^2_2 + \beta \Vert E(S)-sg[e]\Vert^2_2, $ where the first term is the log-spectrogram reconstruction loss and the second term is the commitment loss, with $\beta$ a hyper-parameter which depends on the scale of the reconstruction loss, and $sg$ stands for the stop-gradient operator, defined as the identity at forward computation time and having zero partial derivatives. Since we assume a uniform prior for the latent space, the KL term that appears in the ELBO is constant with respect to the encoder parameters and is not included.
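For concreteness, the following PyTorch sketch illustrates the quantization step and the commitment term described above; the tensor shapes, variable names, and the placement of the straight-through estimator are our own illustrative assumptions rather than the exact implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def vq_quantize(z, codebook, beta=0.25):
    # z: (B, D, T) continuous encoder output E(S)
    # codebook: (K, D) discrete latent embedding vectors e_i
    B, D, T = z.shape
    flat = z.permute(0, 2, 1).reshape(-1, D)             # (B*T, D)
    # squared L2 distance of every Z_t to every codebook vector
    dist = (flat.pow(2).sum(1, keepdim=True)
            - 2 * flat @ codebook.t()
            + codebook.pow(2).sum(1))                    # (B*T, K)
    k = dist.argmin(dim=1)                               # nearest code index
    e = codebook[k].reshape(B, T, D).permute(0, 2, 1)    # quantized latents
    commit = beta * F.mse_loss(z, e.detach())            # commitment loss
    e = z + (e - z).detach()    # straight-through gradient to the encoder
    return e, k, commit

# total loss (the codebook itself is updated by exponential moving
# averages, replacing the embedding loss):
#   L = F.mse_loss(decoder(e), S) + commit
\end{verbatim}
\textbf{\textit{Multi-band Residual Blocks.}} The log-spectrogram is of high dimension due to the desired high resolution on the frequency bins.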
Inspired by PerformanceNet~\cite{wang2019performancenet}, we thereby use a multi-band residual learning method in the audio encoder and the decoder to better capture the spectral features of musical overtones. The multi-band residual (MBR) block splits the input into a specific number of frequency bands and then feeds each band individually to identical sub-blocks consisting of the following layers: 1D-convolution, ReLU, 1D-convolution. The outputs of all sub-blocks are then concatenated along the frequency dimension and a residual connection sums the output with the input of the block. In the audio encoder, we progressively divide the spectrogram into more bands in the earlier layers, and into fewer bands in the later layers. The decoder then decodes the latent representation from fewer bands to more bands in a symmetric way. In our proposed VQ-VAE architecture, the audio encoder receives the log-spectrogram as an input and passes it through a 1D convolutional layer. The features then go through the four MBR blocks described above. Subsequently, they pass through two 1D convolutional residual blocks and a single 1D convolution to generate the continuous latent representation, which is mapped to the discrete latent space. The resulting discrete latent features are then fed into the audio decoder, which is a mirrored structure of the encoder. Since both the encoder and the decoder are fully 1D convolutional, the model generally supports log-spectrogram inputs of any length. \subsection{Encoding Visual Features and Learning a Prior over the Audio Latent Code} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{uncond_prior.png} \caption{Detailed schematics of the components in the Body Movement Encoder and the Prior Network.} \label{fig:uncond_prior} \end{figure} \textbf{\textit{Body keypoints encoder.}} Given a sequence of 2D human pose keypoints $P = \{p_1, p_2,...,p_T\}\in \mathbb{R}^{J\times T}$, where $J$ is the number of joints and $T$ is the total number of time steps, we first encode them into a latent representation. This latent space should differentiate the movements of playing different instruments and self-organize into separate clusters, as shown in recent unsupervised skeleton-based action recognition approaches~\cite{zheng2018unsupervised, su2020predict}. To achieve that, we use bidirectional Gated Recurrent Units (GRU) as the body movement encoder $E_b$. Given the input sequence $P$, the last hidden state of the encoder $h = E_b(P)$ can be seen as the global representation of the musician's movement. In order to associate the body movements with audio features, we \textit{jointly train the body keypoints encoder} with \textit{the prior of the latent space of the audio features}. Superimposed with the latent audio features, the body movement features differentiate the type of instrument performance and other characteristics of the performed music. This allows us to use body movements to generate new music for the corresponding instrument.\\ \textbf{Learning a Prior over the Latent Space.} The prior distribution over the discrete latent audio features $p(Z)$ is a joint distribution of categoricals across time and can be learned autoregressively. When training the VQ-VAE, the prior is kept constant and uniform. After training, we fit an autoregressive distribution over $Z$ so that we can generate new samples via ancestral sampling, as shown in Fig.~\ref{fig:uncond_prior}.
We use a transformer encoder structure similar to GPT-1~\cite{radford2018improving} over the discrete latent space. We concatenate the last hidden state of the body keypoints encoder, $h$, to every time step of the discrete latent representation. This forces the prior over audio features to align and correlate with the body motion features when autoregressively learning to predict the next latent code. Subsequently, the concatenated features are passed through a multi-head self-attention layer using scaled dot-product attention, defined as: Attention$(Q,K,V) =$ softmax$(\frac{QK^T}{\sqrt{D_k}})V$, where $Q,K$, and $V$ are the query, key and value, respectively. The layer calculates weights via dot products of the key $K$ and query $Q$, and then outputs a weighted sum of the values $V$. Using multi-head self-attention allows the model to integrate information from different independent representations. Next, the point-wise feed-forward layer takes the input from the self-attention layer, and further transforms it through two fully connected layers with ReLU activation as: Feed Forward$ = \max(0,xW_1 + b_1)W_2 + b_2.$ The outputs of the self-attention and feed-forward layers are passed to a softmax layer to predict the probability of the next latent code over the codebook. \subsection{Content Conditioning} \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{cond_prior.png} \caption{Detailed schematics of the components in the prior network with content conditioning.} \label{fig:cond_prior} \end{figure} In addition to the unconditional generation of music, it is possible to generate the actual music content of the video. Therefore, we also explore whether we can add content conditioning to the prior network to generate the exact music content for a specific instrument. We use Midi as the content signal. The Midi is considered as a matrix $M \in \mathbb{R}^{N\times T}$ where $N$ is the number of notes and $T$ is the number of time steps. Since all considered instruments in our experiments are monophonic, there is at most one active note at each time step. We first convert the 2D matrix into a binary matrix by ignoring the expressive dynamics (i.e., the loudness of the music). We then transform the binary matrix into a 1D sequence that contains the index of the activated note at each time step. We use a transformer-based encoder-decoder architecture~\cite{vaswani2017attention} to achieve the content conditioning. The content encoder is a transformer encoder that takes the Midi information as input, as shown in Fig.~\ref{fig:cond_prior}. In this case, we concatenate the encoded body movement representation to the embedded Midi at each time step and pass them to two self-attention and feed-forward layers. The content encoder output then goes through a fully connected layer such that it becomes the conditioning signal $C$. The transformer decoder is similar to the unconditional prior network except that we add a cross-attention layer after the self-attention layer to compute the attention between the conditioning signal and the latent code representation. Considering the output of the self-attention module $A\in \mathbb{R}^{T_a \times C}$ and the Midi conditional signal $M\in \mathbb{R}^{T_m \times C}$, where $C$ is the feature dimension, the cross-attention is defined as: Cross Attention$(A,M) =$ softmax$(\frac{AM^T}{\sqrt{D_k}})M.$ Here, the feature dimensions of $A$ and $M$ are designed to be the same.
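As an illustration, a minimal PyTorch sketch of the scaled dot-product attention used in both the self-attention and cross-attention layers above; the tensor sizes here are illustrative assumptions, not the trained model's dimensions.
\begin{verbatim}
import torch

def attention(q, k, v):
    # softmax(Q K^T / sqrt(D_k)) V
    d_k = q.size(-1)
    w = torch.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)
    return w @ v

# self-attention over the latent-code features A, and cross-attention
# of A against the encoded Midi condition M (illustrative sizes):
A = torch.randn(32, 128)        # T_a = 32 latent steps, C = 128 features
M = torch.randn(64, 128)        # T_m = 64 Midi steps, same feature size
self_att = attention(A, A, A)   # (T_a, C)
cross = attention(A, M, M)      # softmax(A M^T / sqrt(D_k)) M -> (T_a, C)
\end{verbatim}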
During sampling, we provide the body movements and the Midi content of a single-instrument performance to the body movement encoder and to the content encoder, respectively. The prior network then autoregressively generates discrete latent representations via ancestral sampling. The discrete latent representations are then fed to the decoder of the VQ-VAE to generate music associated with the exact instrument and content in the video. \section{Experiments \& Results} \textbf{Datasets.} We evaluate MI Net on the \textbf{URMP} dataset~\cite{li2018creating}, a high-quality multi-instrument video dataset recorded in a studio. It includes $13$ instruments and provides the musical score in the Midi format, which we use for evaluation. There are $44$ videos and $148$ tracks in total. We use $135$ tracks for training and $13$ tracks for testing, such that each instrument has at least one track in the test set. We further evaluate MI Net on the \textbf{Solos} dataset~\cite{montesinos2020solos}, a very recently published dataset of YouTube videos containing excerpts of musicians playing different instruments for auditions. It contains the same $13$ instruments as the URMP dataset. There are $755$ videos in total, and each instrument has approximately $58$ videos on average. We use a 9:1 ratio to split the training and the testing sets. Solos also contains pre-processed skeleton keypoints extracted via OpenPose; however, it does not include Midi files due to the `in the wild' nature of the videos.\\ \textbf{Implementation details.} We use PyTorch to implement our MI Net. The sampling rate of all audio is set to 16\,kHz, and we use 1024 frequency bins with a hop size of 256 to generate the log-spectrograms. In training, we randomly select 4-second segments from videos. The VQ-VAE includes 1D convolutions in all MBR and residual blocks, whose outputs have 512 channels. We reduce the audio encoder outputs to 64 channels and map them to an embedding space of size 1024. The body movement encoder is a 3-layer bi-GRU with a hidden size of 32. For all self-attention modules, we use 2 layers of 128 dimensions and 8 heads. More details of the architectural design can be found in the Supplementary material.\\ \textbf{Comparison with Other Models.} To the best of our knowledge, ours is the first work in the direction of unsupervised generation of instrument-specific music. Therefore, we additionally implement two baselines for comparison. (i) \textbf{RNN-based Seq2Seq Network:} We implement an encoder-decoder recurrent neural network. The encoder takes body keypoints as input, and the decoder generates the expected spectrogram. (ii) \textbf{Graph-Transformer Network:} We implement a Graph-Transformer network similar to the architecture of Foley Music~\cite{gan2020foley}. It is based on the Spatio-Temporal Graph Convolution Network (ST-GCN)~\cite{yan2018spatial}, which encodes the body keypoints into pose features; these are then fed into the Transformer decoder, where each block contains self-attention, cross-attention, and feed-forward modules. Instead of predicting Midi events, we directly generate the log-spectrograms.\\ \textbf{Evaluation.} Evaluation of generative models is not a well-defined procedure, particularly for MI Net, which aims to generate perceptually-realistic audio.
We thereby evaluate our method against a diverse set of metrics, each of which captures a particular aspect of the model performance.\\ \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{spec.png} \caption{Examples of generated spectrograms for four instruments in the URMP dataset. Left: Generated samples without content condition. Middle: Generated samples with content condition. Right: Ground Truth.} \label{fig:spec} \end{figure} \textbf{1) Number of Statistically-Different Bins (NDB).} We adopt the metric proposed in~\cite{richardson2018gans} and used in~\cite{engel2019gansynth, gan2020foley} to measure the diversity of the generated examples: first, the training examples are clustered into $k=50$ Voronoi cells by k-means in log-spectrogram space. The generated examples are also mapped into the same space and are assigned to the nearest cell. NDB is reported as the \textit{number of cells where the number of training examples is statistically significantly different from the number of generated examples by a two-sample Binomial test.} For each model, we generate about $1600$ samples from the testing set and perform the comparison. For reference, we also evaluate the NDB on the testing data itself. The NDB results are shown in Table \ref{tab:ndb}. NDB indicates how well the model learns from the training set. Here, the larger the number is (up to 50), the less similar the generated samples are to the training set, which means worse learning performance. For the URMP dataset, MI Net outperforms the other methods by a large margin. Neither the RNN-based Seq2Seq nor the Graph-Transformer generates music of sufficient quality under the unsupervised setup. In addition, the samples generated by MI Net without conditioning on content have a distribution closer to the training data, therefore even resulting in a lower NDB score than the reference. Once the content condition is added, the distribution of the generated samples becomes closer to the testing set. For the Solos dataset, while the NDB result of our method is better than the others, the generated samples are not satisfactory. One issue is that the body motions of `in the wild' videos have large variations and become challenging to differentiate in an unsupervised manner, as we analyze with the metric below.\\ \begin{table}[] \small \centering \caption{NDB results. \textbf{Lower is better}.} \label{tab:ndb} \begin{tabular}{|c|c|c|} \hline Model & URMP & Solos \\ \hline RNN-based Seq2Seq & 48 & 44 \\ \hline Graph-Transformer & 45 & 41 \\ \hline \textbf{MI Net (Our)} & \textbf{33} & \textbf{36} \\ \hline \textbf{Content Cond. MI Net (Our)} & \textbf{35} & - \\ \hline Testing Set Data & 36 & 31 \\ \hline \end{tabular} \end{table} \begin{figure*}[h] \centering \includegraphics[width=0.7\linewidth]{urmp_tsne.png} \caption{t-SNE plots of the encoded body movement representations in the URMP test set. The number next to each cluster indicates the instrument: 0. Violin, 1. Viola, 2. Cello, 3. Double bass, 4. Flute, 5. Oboe, 6. Clarinet, 7. Bassoon, 8. Saxophone, 9. Trumpet, 10. Horn, 11. Trombone, 12. Tuba. Dotted circle: Mixing of samples belonging to the Oboe and Clarinet instruments.
Solid circle: Mixing of samples belonging to the Violin and Viola instruments.} \label{fig:tsne} \end{figure*} \textbf{2) Classification with Body Motion Features.} To evaluate whether the encoded pose features are sufficiently separated to generate music exclusive to specific instruments, we extract the body movement encoder's final hidden state and fit a K-Nearest-Neighbors classifier ($K=1$) using the cosine similarity metric. For comparison, we use the encoder's final state for the RNN-based Seq2Seq, and take the mean over both the temporal and joint dimensions of the ST-GCN outputs for the Graph-Transformer to perform the classification. The results are shown in Table~\ref{tab:knn}. Notably, for the URMP dataset, our method associates the body movements with the instruments with high accuracy, as evidenced by the score in Table~\ref{tab:knn} and the t-SNE plots of the latent representations in Fig.~\ref{fig:tsne}. While the RNN-based Seq2Seq and Graph-Transformer achieve reasonable scores, the plots show that these models do not precisely distinguish the instruments. Furthermore, they cannot generate the expected instrumental music since these models do not find the correlations between audio features and body movements but rather separate the instruments according to differences in pose only. This becomes challenging for instruments with similar poses (e.g., Viola vs. Violin, Oboe vs. Clarinet). In comparison, our method learns a mapping between the latent spaces, which contain more representative information, allowing the model to build the connection between visual and audio events. As expected, on the more challenging `in the wild' Solos dataset, our method outperforms the other methods significantly. However, we realize that the $61.5\%$ accuracy is not enough for MI Net to fully distinguish the body movements and generate satisfactory music. We describe the limitations and possible improvements in Section 5.\\ \begin{table}[] \small \centering \caption{KNN Classification Accuracy in $\%$.} \label{tab:knn} \begin{tabular}{|c|c|c|} \hline Model & URMP & Solos \\ \hline RNN-based Seq2Seq & 82.6 & 37.8\\ \hline Graph-Transformer & 89.2 & 38.3\\ \hline \textbf{MI Net (Our)} & \textbf{93.7} & \textbf{61.5} \\ \hline \end{tabular} \end{table} \textbf{3) Qualitative Human Evaluation.} \begin{table}[] \small \centering \caption{Human evaluation of real vs. fake audio samples. Success means the percentage of the sounds generated by MI Net that were considered real (out of the $50\%$ selected by the Oracle).} \label{tab:real-fake} \begin{tabular}{|c|c|c|} \hline Method & \textbf{MI Net (Our)} & Oracle \\ \hline Success Rate & 24 & 50 \\ \hline \end{tabular} \end{table} We evaluate the quality of the generated samples by performing a human evaluation. We provide real (originally belonging to the video) and fake (MI Net) audio to Amazon Mechanical Turk (AMT) workers. The workers are asked to choose the audio that they believe is real. We surveyed 50 participants individually, where each participant evaluated 39 pairs of 4-second audio clips (3 samples per instrument). Note that an oracle score of 50\% indicates perfect confusion between real and fake. Since the RNN-based Seq2Seq and Graph-Transformer cannot generate music comparable with the ground truth, we evaluate only MI Net, on the URMP dataset. The result in Table~\ref{tab:real-fake} shows that our generated samples could fool the participants at a reasonable success rate of 24\%.\\ \textbf{Ablation Study.} We perform ablation studies to evaluate the impact of each component of our method.
We use the URMP dataset to perform the comparison experiments.\\ \textbf{1) MBR.} In the VQ-VAE, we utilize the multi-band residual (MBR) blocks to learn the representation from a high-resolution log-spectrogram. To show their effectiveness, we compare our method with a common 1D convolutional architecture without MBR. We compare the test L2 reconstruction loss of the spectrogram, and the results in Table~\ref{tab:ablation-vqvae} show the benefit of the MBR blocks. \\ \begin{table}[] \small \centering \caption{Ablation study of the architectural design of VQ-VAE in terms of reconstruction (L2) loss.} \label{tab:ablation-vqvae} \begin{tabular}{|c|c|} \hline VQ-VAE & L2 Loss \\ \hline 1D Conv w.o. MBR & 0.89\\ \hline \textbf{1D Conv w. MBR (Our)} & \textbf{0.78}\\ \hline \end{tabular} \end{table} \begin{table}[] \small \centering \caption{Ablation study of the architectural design of the Content Condition in terms of NLL loss computed for the latent code prediction.} \label{tab:ablation-midi} \begin{tabular}{|c|c|} \hline Midi Cond. Structure & NLL Loss \\ \hline GRU-Seq2Seq & 2.93\\ \hline \textbf{Content Encoder + Prior Net (Our)} & \textbf{2.55}\\ \hline \end{tabular} \end{table} \textbf{2) Content Encoder.} We use a transformer-based architecture to learn the prior with the content condition. To verify its effectiveness, we replace the transformer-based architecture with a GRU-Seq2Seq model. As shown in Table \ref{tab:ablation-midi}, our method achieves a lower negative log-likelihood loss than the baseline. This demonstrates the benefit of our design choice in capturing the dependencies between the content and the latent audio representations. \section{Limitations} Our experimental results show that MI Net can generate instrumental music from unlabeled videos recorded in a studio. However, for videos `in the wild', the generation of quality music remains challenging. We identify two main challenges that limit the performance. (i) For a dataset with a large variance in music, the current VQ-VAE model cannot reconstruct the spectrogram with sufficient resolution due to a limited latent space. (ii) Variations in views and subjects make it harder to differentiate instruments in an unsupervised way; a more specifically designed body movement encoder would be required. These two challenges are coupled since the latent spaces are interconnected: if one of the latent spaces is not learned well, it is hard to generate satisfactory music. \section{Conclusion} We propose an unsupervised system named Multi-Instrumentalist Net (MI Net) that generates the associated sound for a video from the body movements of a human playing an instrument. We demonstrate that MI Net can generate reasonable-quality music on the URMP dataset, recorded in a studio setting. In addition, we evaluate MI Net on `in the wild' videos and discuss the limitations and potential future research directions. {\small \bibliographystyle{ieee_fullname}
\section{Preliminaries} \subsection{Stochastic off-policy theorem} Consider a Markov decision process (MDP), where an agent receives a reward, $r_{t}$, for an action, $a \in A$, taken in state, $s \in S$, according to some stochastic behavioral policy, $b: S \times A \rightarrow (0, 1)$. We can acquire a target policy, $\pi_{\theta}$, that maximizes the cumulative rewards expected under this MDP by expressing its value as \begin{align} J^{\pi}(\theta) = \mathbb{E}_{s \sim d^{b}}[V^{\pi}(s)] = \mathbb{E}_{s \sim d^{b}, a \sim b}[\varphi^{\pi, b}Q^{\pi}(s, a)] \end{align} where $V^{\pi}(s)$ is the expected cumulative reward starting from a given state, $s_{t}$, and $Q^{\pi}(s, a)$ the expected cumulative reward starting from said state with an action, $a_{t}$, and then following the policy, $\pi_{\theta}$, until termination \citep{sutton:16}. The importance sampling ratio, $\varphi^{\pi, b}$, scales $Q^{\pi}(s, a)$ according to the likelihood of sampling the undertaken action from $\pi_{\theta}$, rather than $b$. In order to find the parameters of $\pi_{\theta}$ such that the rewards are maximized, we can follow the direction of increasing performance \begin{align} \nabla_{\theta} J^{\pi}(\theta) &= \mathbb{E}_{s \sim d^{b}, a \sim b} [\nabla_{\theta}\varphi^{\pi, b}Q^{\pi}(s, a) + \varphi^{\pi, b} \nabla_{\theta}Q^{\pi}(s, a)]\\ &= \mathbb{E}_{s \sim d^{b}, a \sim b}\bigg[\varphi^{\pi, b} \frac{\nabla_{\theta}\pi_{\theta}(a|s)}{\pi_{\theta}(a|s)}Q^{\pi}(s, a) \bigg] + \mathbb{E}_{s \sim d^{b}, a \sim b}[\varphi^{\pi, b} \nabla_{\theta}Q^{\pi}(s, a)]\\ &\approx \mathbb{E}_{s \sim d^{b}, a \sim b}\bigg[ \varphi^{\pi, b}\frac{\nabla_{\theta}\pi_{\theta}(a|s)} {\pi_{\theta}(a|s)}Q^{\pi}(s, a)\bigg] = \widetilde{\nabla}_{\theta} J^{\pi}(\theta) \end{align} The first term in the above equation is the off-policy gradient and the second term is the off-policy action-value gradient \citep{degris:12}. We want to approximate the second term, so as to move in the policy gradient direction. \subsection{Deterministic action-value gradient} Action-value methods such as Q-learning acquire an implicit deterministic policy, $\mu_{\theta}$, that can be expressed as $a = \argmax_a Q^{\mu}(s, a)$. However, this is not feasible in continuous action spaces. In such cases, the policy needs to be represented explicitly \citep{silver:14, sutton:16}. In order to learn the parameters for a continuous deterministic policy, $\mu_{\theta}$, \citet{silver:14} proposed following the gradient of the action-value, $\nabla_{\theta}Q^{\mu}(s, a)|_{a=\mu_{\theta}(s)}$, such that the temporal difference (TD) error is minimized \citep{lillicrap:16}. Using the chain rule, if $Q^{\mu}(s, a)$ is a compatible function, the gradient can be decomposed into the update equation \begin{align} \theta_{t + 1} &= \theta_{t} + \alpha\mathbb{E}_{s \sim p^{\mu}} [\nabla_{\theta}\mu_{\theta}(s)\nabla_{a}Q^{\mu}(s, a) |_{a=\mu_{\theta}(s)} ]\\ &= \theta_{t} + \alpha\mathbb{E}_{s \sim p^{\mu}} [\nabla_{\theta}\mu_{\theta}(s)\nabla_{\theta}\mu_{\theta}(s)^{\top} \omega] \end{align} where $\nabla_{\theta}\mu_{\theta}(s)$ is the deterministic policy gradient and $\omega$ are the parameters of the action-value function that minimize the TD error. The above equation moves in the same direction as the policy gradient. We now discuss a similar case for stochastic policies.
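For concreteness, a minimal numpy sketch of a Monte-Carlo estimator of the approximate gradient $\widetilde{\nabla}_{\theta} J^{\pi}(\theta)$ above is shown below, for a linear-softmax policy over discrete actions; the policy parametrization and the batch format are our own illustrative assumptions.
\begin{verbatim}
import numpy as np

def pi_probs(theta, phi_s):
    # linear-softmax policy pi_theta(.|s); theta has shape (d, n_actions)
    prefs = phi_s @ theta
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

def approx_off_policy_gradient(theta, batch):
    # estimate of E_{s~d^b, a~b}[ varphi * grad log pi(a|s) * Q(s,a) ]
    g = np.zeros_like(theta)
    for phi_s, a, b_a, q in batch:     # features, action, b(a|s), Q(s,a)
        pi = pi_probs(theta, phi_s)
        varphi = pi[a] / b_a           # importance sampling ratio
        # grad_theta log pi(a|s) for a linear-softmax policy
        glog = np.outer(phi_s, np.eye(len(pi))[a] - pi)
        g += varphi * glog * q
    return g / len(batch)
\end{verbatim}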
\section{Stochastic Off-policy action-value gradient} \label{actgrad} \subsection{Compatible action-value functions} In order to estimate how the parameters of an explicit policy change with respect to $Q^{\pi}_{\omega}(s, a)$, the action-value needs to be compatible with whatever type of policy is being represented. To do this, we re-parametrize it as \begin{align} Q^{\pi}_{\omega}(s, a) = A^{\pi}_{\omega}(s, a) + V^{\pi}_{\nu}(s) \end{align} where $A^{\pi}_{\omega}(s, a)$ is the advantage function of an action in a state and $V^{\pi}_{\nu}(s)$ is the value of that state \citep{baird}. Due to the zero-mean property of compatible features \citep{sutton:00,peters:05}, $Q^{\pi}_{\omega}(s, a)$ by itself cannot serve as a compatible function and, at the same time, a reliable estimator of the expected cumulative rewards. In practice, $A^{\pi}_{\omega}(s, a)$ is made compatible with respect to the stochastic policy whilst $V^{\pi}_{\nu}(s)$ is used as a baseline. Following \citet{sutton:00}, we represent the advantage function of a stochastic policy as $A^{\pi}_{\omega}(s, a) = \frac{\nabla_{\theta}\pi_{\theta}(a|s)}{\pi_{\theta}(a|s)}^{\top}\omega$. \subsection{Stochastic off-policy gradient} Given an action-value function, $Q^{\pi}_{\omega}(s, a)$, that is compatible with the stochastic policy, $\pi_{\theta}$, in the manner shown above, we can decompose the action-value gradient term in the complete off-policy gradient theorem into \begin{align} \mathbb{E}[\varphi^{\pi, b} \nabla_{\theta}Q_{\omega}^{\pi}(s, a)|d^{b}, b] &= \mathbb{E}_{s \sim d^{b}, a \sim b}[\varphi^{\pi, b} \nabla_{\theta}A_{\omega}^{\pi}(s, a)]\\ &= \mathbb{E}_{s \sim d^{b}, a \sim b}[\varphi^{\pi, b} \nabla_{\theta}\pi_{\theta}(a|s)\nabla_{\pi_{\theta}} A_{\omega}^{\pi}(s, a)]\\ &= -\mathbb{E}_{s \sim d^{b}, a \sim b}\bigg[\varphi^{\pi, b} \frac{\nabla_{\theta}\pi_{\theta}(a|s)}{\pi_{\theta}(a|s)} \frac{\nabla_{\theta}\pi_{\theta}(a|s)}{\pi_{\theta}(a|s)}^{\top}\omega \bigg] \end{align} where the squared log-likelihood gradients represent the Fisher information matrix, $G^{\pi}(\theta)$. Consider the following from \citet{bhatnagar:09}. \textbf{Lemma 1.} The optimal parameters, $\omega^{\ast}$, for the compatible function of a stochastic policy, $\pi_{\theta}$, can be expressed as \begin{align} \omega^{\ast} = G^{\pi}(\theta)^{-1}\widetilde{\nabla}_{\theta}J^{\pi}(\theta) \end{align} which represents the natural policy gradient \citep{amari:98, kakade:02}. Hence, the above moves in the off-policy gradient direction, as the two squared terms in the action-value gradient cancel out; a minimal sketch of the resulting update is given at the end of the paper. \section{Experimental results} \label{experiments} We evaluate the performance of an actor-critic algorithm that follows the policy gradient based on the above equation. We compare this algorithm, Actgrad, with two other algorithms: the off-policy actor-critic (Offpac) and Q-learning (Qlambda). Experiments are performed on the Cart Pole and Lunar Lander environments provided in the OpenAI Gym \citep{brockman:16}. \subsection{Details} The first task we consider, Cart Pole, is the task of balancing a pole attached atop a cart by an un-actuated joint. The goal is to apply a force, F, to either the right or left of the pole in order to keep it upright. For each time step the pole is upright, a reward of 1 is given. Next, we consider Lunar Lander, the task of piloting a lander module through the lunar atmosphere and onto a landing pad at the center of the screen.
The goal is to bring the spacecraft to rest by either doing nothing or firing the main, left or right engines. Bad landings incur negative rewards, as does firing the engines. However, larger rewards are given for smooth landings. For Cart Pole, the state features are encoded using the Boxes method \citep{barto:83}, while for Lunar Lander they are encoded using heuristics provided by OpenAI. Agents were trained on the environments for 1500 and 700 episodes respectively, with the same training parameters and learning rates shared across them. On Cart Pole, training episodes ended after 250 time steps while on Lunar Lander, they ended after 500 time steps. Training was repeated for 10 trials on each environment and testing was performed for 100 episodes after each trial. \subsection{Results} We now present the training and test results for the evaluated algorithms on each environment. \begin{table}[H] \caption{Average Test Results.} \label{sample-table} \begin{center} \begin{tabular}{|l|l|l|l|l|} \hline \multirow{2}{*}{Agent} & \multicolumn{2}{c|}{Cart Pole} & \multicolumn{2}{c|}{Lunar Lander}\\ & Rewards & Episodes Solved & Rewards & Episodes Solved\\ \hline Offpac & 214.56 $\pm$ 3.78 & 99.9\% & 152.89 $\pm$ 8.52 & 91.7\%\\ \hline Qlearning & 167.12 $\pm$ 2.97 & 99.0\% & 109.30 $\pm$ 8.87 & 79.2\%\\ \hline Actgrad & 209.18 $\pm$ 3.68 & 99.7\% & 109.46 $\pm$ 11.27 & 85.7\%\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[h] \begin{subfigure}{.50\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{graphs/CartPole-v0.png} \end{center} \caption{Cart Pole.} \end{subfigure} \begin{subfigure}{.49\textwidth} \begin{center} \includegraphics[width=1.0\textwidth]{graphs/LunarLander-v2.png} \end{center} \caption{Lunar Lander.} \end{subfigure} \caption{Average Training Rewards.} \end{figure} From the above, the investigated algorithm reaches a training performance close to that of the off-policy actor-critic on Cart Pole. However, it suffers from higher variance on the Lunar Lander environment. This may be due to the fact that learning relies on estimates of the advantage as determined by the current advantage parameters, rather than the actual advantage obtained from the critic. This is likely more pronounced due to the difficulty of Lunar Lander in comparison to Cart Pole. \section{Discussion} \label{discussion} In this paper, we have discussed a method for following the stochastic off-policy gradient in a manner similar to that of the deterministic policy gradient. We then compared the performance of this method with other policy gradient algorithms. Although the approach suffers from high variance on certain tasks, it nevertheless outperforms deterministic algorithms and can easily be made to follow the steepest ascent direction by dropping the natural gradient term.
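As a closing illustration, a minimal numpy sketch of the Actgrad-style update discussed in Section~\ref{actgrad}, combining the compatible advantage $A_{\omega} = \psi^{\top}\omega$ (with $\psi = \nabla_{\theta}\log\pi_{\theta}(a|s)$) with an incremental least-mean-squares fit of $\omega$ to the critic's TD error; the learning rates, the linear-softmax policy (reusing \texttt{pi\_probs} from the earlier sketch), and the incremental update scheme are our own simplifying assumptions rather than the exact algorithm.
\begin{verbatim}
import numpy as np

def actgrad_step(theta, w, phi_s, a, b_a, delta, a_pi=1e-2, a_w=1e-1):
    # phi_s: state features; a: behaviour action; b_a: b(a|s);
    # delta: TD error from the critic V_nu (advantage target)
    pi = pi_probs(theta, phi_s)                # from the earlier sketch
    varphi = pi[a] / b_a                       # importance sampling ratio
    psi = np.outer(phi_s, np.eye(len(pi))[a] - pi)   # grad log pi(a|s)
    psi_f = psi.ravel()
    # fit the compatible advantage A_w = psi . w towards the TD error
    w += a_w * varphi * (delta - psi_f @ w) * psi_f
    # policy step along varphi * psi * A_w(s, a)
    theta += a_pi * varphi * (psi_f @ w) * psi
    return theta, w
\end{verbatim}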
\section{Introduction} \label{sec:introduction} The Advanced Wakefield (AWAKE) experiment at CERN is a proof-of-principle experiment demonstrating plasma wakefield acceleration using a proton drive beam for the first time~\cite{Assmann:2014hva,Caldwell:2015rkk,Gschwendtner:2015rni,Muggli:2017rkx}. Proton bunches from the CERN SPS accelerator are injected into a rubidium (Rb) vapour and co-propagate with an intense laser pulse which creates the plasma and seeds the modulation of the proton bunch into microbunches~\cite{PhysRevLett.122.054802,PhysRevLett.122.054801}. These microbunches induce strong resonant wakefields which are sampled by an externally-injected electron bunch, which is accelerated to high energy.\par A magnetic spectrometer has been installed downstream of the plasma cell in order to measure the energy distribution of the accelerated electron bunch. The spectrometer has been designed to fulfil the following requirements: \begin{itemize} \item Separate the accelerated electrons from the drive bunch protons. \item Introduce a spatial distribution into the accelerated bunch that is a function of energy. \item Measure the spatial intensity distribution of the accelerated electrons to allow the mean energy, energy spread and bunch charge to be calculated. \item Provide sufficient acceptance to prevent significant loss of accelerated electrons before the energy measurement. \item Provide sufficient dynamic range to allow measurement of a range of electron energies from 0--5 GeV. \item Measure the energy profile of the electron bunch with sufficient resolution to demonstrate proton-driven plasma wakefield acceleration of witness bunch electrons. \end{itemize} The AWAKE electron spectrometer has been used recently to measure acceleration of electrons to GeV energies in the first demonstration of proton-driven plasma wakefield acceleration~\cite{Adli:2018}. The evolution of the spectrometer's design has been discussed previously~\cite{Deacon:2141860,Keeble:IPAC2018-THPML118}. Here, we present the final design and full calibration of the system. \subsection{Overview} \label{subsec:overview} The layout of the spectrometer within the AWAKE tunnel is shown in Figure~\ref{fig:cad}. The magnetic part of the spectrometer system begins approximately 4.5\,m downstream of the plasma cell exit and consists of two quadrupoles followed by a C-shaped dipole magnet. Inside the dipole magnet the AWAKE beamline expands into a large triangular vacuum chamber, terminated on one side by a thin window which allows high energy electrons to pass through. Attached to the exterior surface of the window is a scintillating phosphor screen which emits photons when particles deposit energy in it. The scintillator photons are transported, via a series of large mirrors, to a focusing lens and CCD camera in an adjacent tunnel. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_1.pdf} \caption{The electron spectrometer at AWAKE. The path of the scintillator photons which reach the camera is shown with coloured blocks: the red block shows the path from the scintillator to the first mirror (M1), the green block shows the path from M1 to the second mirror (M2), the blue block shows the path from M2 through the fire safety window to the third mirror (M3) which is within the spectrometer dark room and the yellow block shows the path inside the dark room from M3 to the lens and camera. 
Close to the dark room are the rack PCs used for data acquisition and control of the camera.} \label{fig:cad} \end{figure} \section{Components} \subsection{Magnets} The spectrometer dipole is an electromagnet which can be stably operated between input currents of 18\,A and 650\,A, corresponding to approximate integrated magnetic fields of 0.065\,T\,m and 1.545\,T\,m respectively. The length of the magnet's iron is 1\,m in the direction parallel to the beamline and 0.32\,m in the transverse direction. To reduce the impact of fringe fields on the electrons while maintaining a large integrated field, the magnet is offset in the transverse direction such that electrons have 0.285\,m of iron in the direction in which they are bent. At the lowest magnet setting, electrons at the injection energy of approximately 18\,MeV can be measured by the spectrometer and at the highest setting, electrons with energies up to 8.5\,GeV can be measured. The strength of the magnetic field is varied by changing the current through the magnet's coils. The field has been mapped for a number of these currents and finite element analysis (FEA) simulations have been performed to infer field maps for other current settings. With these field maps the position--energy conversion function for the spectrometer can be specified using only three additional parameters. These parameters are displayed in Figure~\ref{fig:diag} and are summarised in Table~\ref{tab:uncerts}. The measurements come from a combination of a dedicated survey and measurements of the proton bunch's position. \par With the above measurements, the position--energy conversion function can be simulated using BDSIM~\cite{Nevay:2018zhp}. This simulation can be compared to an analytic solution under the assumption of a uniform magnetic field and the results are found to match to within 2\% at any given point on the scintillator. The uncertainty in the conversion function arising from uncertainties in the measured values was also estimated in these simulations. However, a 1\% overall uncertainty in the magnetic field map, determined by comparing the available measured values to those simulated by FEA, dominates over the uncertainties shown in Table~\ref{tab:uncerts}. Examples of the position--energy function using two of the field maps for input currents of 40\,A and 650\,A are shown in Figure~\ref{fig:conversion_funcs}. At 40\,A the energy range available is approximately 30--800\,MeV and at 650\,A it is 300--8500\,MeV. The relationship between the position and energy is non-linear, changing slowly at the lower energy end of the scintillator and rapidly at the high energy end. This has important implications for the energy resolution, as discussed in Section~\ref{sec:op_cal}. \par The spectrometer's quadrupoles have an iron length of 0.285\,m and a peak magnetic field gradient of 18.1\,T\,m$^{-1}$ at a current of 362\,A. At this setting the quadrupoles are maximally focusing for a beam of approximately 1.3\,GeV. Because the quadrupoles are separated by about 0.2\,m, they must be offset in strength by approximately 6\% in order for both to focus onto the plane of the scintillator. However, the electrons' path length from the quadrupoles to the scintillator varies depending on which part of the scintillator they are incident upon and, hence, their energy.
This variation in the path length of 0.35\,m from the high energy end to the low energy end means that the quadrupoles cannot be offset with respect to each other to focus perfectly at the screen for all energies and the offset of 6\% is a compromise to provide reasonable focusing across the whole screen. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_2.pdf} \caption{A minimal schematic defining each of the quantities listed in Table~\ref{tab:uncerts}. An example of an electron trajectory, propagating from left to right, is shown in blue.} \label{fig:diag} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_3.pdf} \caption{The position--energy ($\xi$--$E$) relationship at dipole magnet settings of 40\,A and 650\,A, simulated using BDSIM.} \label{fig:conversion_funcs} \end{figure} \begin{table} \centering \caption{Measured values for each of the parameters defined in Figure~\ref{fig:diag}.} \label{tab:uncerts} \begin{tabular}{ |c|c| } \hline \textbf{Parameter} & \textbf{Value} \\ \hline $S_{z}$ & $1.676\pm0.001$\,m \\ \hline $S_{x}$ & $0.0620\pm0.0005$\,m \\ \hline $\vartheta$ & $44.80\pm0.01^{\circ}$ \\ \hline \end{tabular} \end{table} \subsection{Camera and optical line} The camera used to image the spectrometer's scintillator is an Andor iStar 340T, an intensified camera with a 2048\,$\times$\,512 pixel CCD, often used in low-light and spectroscopy applications. The camera is triggered approximately 100\,ms before proton extraction to AWAKE occurs and is delayed internally using the camera's digital delay generator which controls when the intensifier is gated to amplify the light. The camera is controlled remotely using a bespoke FESA~\cite{arruat2007front} class which is interfaced to the camera using Andor's SDK. This class is also responsible for data readout and interfaces to AWAKE's data acquisition and logging systems. To reduce readout noise during operation the camera is cooled to $-30^{\circ}\mathrm{C}$ using an in-built Peltier device with a heat sink cooled by a closed-circuit liquid cooling system circulating a 2:1 mixture of distilled water and ethylene glycol at $12^{\circ}\mathrm{C}$. \par A unique challenge for the spectrometer is the high level of radiation in the AWAKE tunnel, generated by the proton drive bunch. This radiation necessitates placing the spectrometer's camera far away from the beamline to reduce the background noise and protect it from radiation damage. This requires a specially designed optical line consisting of a long focal length lens and three mirrors. \par The optical distance between the camera and the scintillator is 17\,m. To ensure sufficient light capture and resolution a long focal length, low f-number lens is used: a Nikon AF-S NIKKOR 400 mm $f$/2.8E FL ED VR. The front of the lens is fitted with a 550$\pm$50\,nm filter of the same diameter. This filter reduces the ambient background due to lights in the experimental area. The parameters for this lens and the dimensions of the scintillator were used as inputs to a Zemax OpticStudio simulation to define the required dimensions for the optical line mirrors. These dimensions are summarised in Table~\ref{tab:mirrors}, which shows both the required size of the mirror (clear aperture) as defined by Zemax OpticStudio and the physical size used in the experiment. The image intensifier in the CCD camera limits the active pixels in the horizontal axis to 1850. For the 0.997\,m wide scintillator this gives 0.54\,mm\,px$^{-1}$. 
Imaging a resolution target directly from 17\,m with the camera shows that the resolution limit is approximately 1.5\,mm with the limiting factor likely being the intensifier in the camera. To maintain this resolution, the Zemax OpticStudio simulation of the line shows that the mirrors must be optical grade; they must have $\lambda/2$ flatness over any 100\,mm area. Additionally, the scratch-dig of the mirrors must not exceed 80/50. The mirrors are made from BK7 glass which is polished to achieve the desired flatness and scratch-dig. This polishing process generates a considerable amount of heat and, given the thermal expansion of the BK7 and the required surface properties, this necessitates a relatively thick piece of glass. As such, each mirror has a thickness of 40\,mm. This thicker glass is also minimally affected by gravitational bending. This is particularly important for M1 which hangs facing downwards, with the mirror held in place by the three adjustment screws.\par \begin{table} \centering \caption{Mirror dimensions and clear apertures. M1 is the mirror closest to the scintillator.} \begin{tabular}{|l|c|c|} \hline \textbf{Mirror} & \textbf{Width / mm} & \textbf{Height / mm} \\ \hline M1 full & 926.0 & 150.0 \\ \hline M1 clear aperture & 898.2 & 121.5 \\ \hline M2 full & 926.0 & 150.0 \\ \hline M2 clear aperture & 819.5 & 126.4 \\ \hline M3 full & 524.0 & 160.0 \\ \hline M3 clear aperture & 504.6 & 140.5 \\ \hline \end{tabular} \label{tab:mirrors} \end{table} The polished glass has a protected aluminium coating, with three layers of material designed to ensure high reflectance, uniformity and ease of use. The first is a 10\,nm chromium layer to ensure adhesion of the coating to the glass. The second layer is 100\,nm of aluminium which was selected because of its good reflectance around the 545\,nm peak of the scintillator emission. The final layer is 185\,nm of quartz, to further enhance reflectance and to prevent the oxidation of the aluminium layer. The thickness of the quartz layer has been adjusted using a combination of simulation and testing such that the mirrors provide their most uniform reflectance around the wavelength of emission of the scintillator. The mirrors were coated by evaporation: via an electron gun for the quartz and chromium layers and thermally for the aluminium layer. Each mirror has a reflectance of approximately 92\% around the emission peak of the scintillator.\par Due to the mirrors' large size, bespoke mounts have been designed. The tip and tilt of the mirrors may be adjusted in these mounts using three screws and the mounts themselves can be further adjusted by additional screws. The mounts have been designed to hold the mirrors securely to minimise the need to realign the system. Another key feature of the mounts is that they are sturdy, such that vibrations from the floor are significantly damped. These vibrations can lead to movement of the mirrors which blurs the images recorded by the camera. The mirror most affected by these movements is M1 due to its less rigid base and the fact that the mirror hangs from the mount rather than sitting on it. Harmonic FEA modelling has been performed using ANSYS to determine the transfer functions of the mount for M1. These transfer functions in combination with the measured power spectral density from the mount locations can be used to determine the displacement of the centre of gravity of the mirror.
Inputting these displacements into a model of the optical line in Zemax OpticStudio shows that there is a negligible effect on the image. Furthermore, the analysis shows that the vibrations do not cause movements of the mirrors at frequencies higher than 1\,Hz. This is important since the exposure time of the camera is typically $\mathcal{O}(100\,\mu$s) and, as such, movements below approximately 1\,kHz would cause no appreciable blurring of the images.\par The only other optical component in the line is a $550\times200\times3$\,mm$^{3}$ BK7 window between mirrors M2 and M3. This window is located in a door and its purpose is to maintain the fire rating of the door while minimally affecting the light passing through it. The window is coated with a broadband anti-reflective coating giving an overall transmittance of greater than $99.0\%$ in the wavelength range $550\pm50$\,nm. \subsection{Vacuum chamber and window} The large spectrometer magnet necessitates the use of a bespoke vacuum chamber, a schematic of which is shown in Figure~\ref{fig:vc}. A large portion of the vacuum chamber is positioned within the aperture of the magnet and, as such, its full height is restricted to 80\,mm. To minimise the loss of accelerated electrons this aperture is kept as clear as possible. This means that the portion of the chamber inside the magnet has no stiffeners. The chamber has a 6\;mm thick stainless steel wall, to prevent buckling. The centre of the chamber is attached to the magnet for the same reason. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_4.pdf} \caption{Schematic of the spectrometer vacuum chamber. Electrons enter from the left hand side and are spread out by the magnetic field of the magnet. The vacuum window comprises the upper right side of the triangular chamber and the scintillator is fixed to its exterior surface. The high energy protons propagate through the chamber and exit through the beam pipe on the right hand side, continuing to a beam dump.} \label{fig:vc} \end{figure} \par The most delicate part of the vacuum chamber is the window through which the electrons pass. To minimise the scattering and loss of these electrons the window is composed of a 2\;mm thick aluminium alloy. To avoid welding, which could create weak points, the whole window assembly is produced from one solid sheet of aluminium 6082-T6. The aluminium grain size is approximately 10--20\;$\mu$m meaning that there is a minimum of 100 grains across the thickness of the window. This relatively large number of grains is sufficient to ensure that the window is leak-tight. The window is 62\,mm high and 997\,mm wide with the aluminium rounded into a semicircle at each end to avoid corners which could create weak points. \par Electrons passing through the vacuum window scatter, losing energy and producing secondary particles in the process. The window is manufactured with a tolerance of 0.05\;mm and simulations show that an increase or decrease of this amount induces a change of only 5\;$\mu$m in the spatial spread of a 20\;MeV electron bunch after the window. \subsection{Scintillator} The scintillator chosen for the spectrometer is a DRZ-High screen, a terbium doped gadolinium oxysulfide (Gd$_{2}$O$_{2}$S:Tb) scintillator manufactured by Mitsubishi. The scintillator has a thickness of 507\,$\mu$m and has been cut to fit the vacuum window in the shape described above. The scintillator is attached to the exterior surface of the vacuum window using a 200\,$\mu$m thick double sided adhesive. 
This minimises the gap between the two components and, hence, the spread of the electron beam. The majority of the scintillator's emission is sharply peaked around 545\,nm and the response to incident radiation is linear for the charge densities present at AWAKE. \section{Calibration and simulation} \subsection{Optical line} \label{sec:op_cal} Not all the light produced by the scintillator is captured, due to the angular emission profile and the finite size of the spectrometer's optics. This induces a position dependence in the light captured by the camera, which must be corrected for. This correction factor is measured by imaging a constant light source as it is scanned across the surface of the scintillator. This light source is a diffuse emitter of light peaked at 545\,nm, mimicking the scintillator. This scan produces a curve which allows the scintillator emission to be normalised relative to a given point. In Figure~\ref{fig:vignette}, the curve has been normalised such that the emission measured at $\xi=0.84$\,m is 1. This is the point at which electrons are normally incident on the scintillator, as shown in Figure~\ref{fig:angle}, which gives the incident electron angle $\theta$ for each $\xi$. \par The optical resolution of the system is also determined using the light source. A resolution target consisting of a number of black and clear bars of varying widths is fixed to the front of the light source and imaged. From this, the modulation transfer function (MTF) of the system and, hence, the resolution, may be determined. The MTF for the optical system is shown in Figure~\ref{fig:mtf}. Without the fire safety window present, the system performs as designed; the MTF is above 0.5 for a spatial frequency of 0.33\,mm$^{-1}$, corresponding to a resolution of 1.5\,mm. However, the inclusion of the 3\,mm thick fire safety window significantly affects the MTF at higher spatial frequencies, limiting the resolution to approximately 2\,mm. \par \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_5.pdf} \caption{Camera response to a constant light source scanned across the surface of the scintillator. The curve has been normalised to 1 at the point $\xi=0.84$\,m.} \label{fig:vignette} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_6.pdf} \caption{Electron incident angle on the scintillating screen as a function of screen position as simulated using BDSIM. This curve is approximately independent of the magnet setting though some small variations do occur.} \label{fig:angle} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_7.pdf} \caption{Modulation transfer function (MTF) of the full optical system measured using a resolution target. The four sets of bars in the target are spaced with frequency $f$. Results are shown for the system with and without the fire safety window present.} \label{fig:mtf} \end{figure} This 2\,mm optical resolution limit restricts the energy resolution. This effect is particularly significant at high energies, as displayed in Figure~\ref{fig:deriv_40}, which shows the derivative of the inferred energy with respect to $\xi$ across the scintillator for a 40\,A dipole current. At the high energy ($\sim$800\,MeV) end, a 2\,mm uncertainty in $\xi$ corresponds to an energy uncertainty of approximately 19\,MeV, or 2.4\%, which is larger than the 1\% uncertainty arising from the magnetic field and dominates the overall uncertainty.
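The scale of this effect can be reproduced with a short numeric sketch; the derivative value used below is simply the ratio implied by the figures just quoted (19\,MeV per 2\,mm) and is indicative only, the full curve being that of Figure~\ref{fig:deriv_40}.
\begin{verbatim}
# Indicative propagation of the optical resolution into an energy
# uncertainty via Delta_E = |dE/dxi| * Delta_xi.
d_xi = 2.0      # optical resolution limit in mm (with the window)
dE_dxi = 9.5    # |dE/dxi| in MeV/mm near the ~800 MeV end (indicative)
energy = 800.0  # electron energy in MeV at that end of the screen

d_E = dE_dxi * d_xi                # -> 19 MeV
print(d_E, 100.0 * d_E / energy)   # -> 19 MeV, ~2.4 % relative
\end{verbatim}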
\par Changing the camera's gain and gate width is necessary to prevent saturation of the image under different conditions. For example, standard running conditions at AWAKE require a 500\,$\mu$s gate with a gain of 3000, but this does not work for calibration because the lamp is brighter than a typical signal. As such, the correction between different settings has been measured using the lamp. Increasing the gate width does not result in a linear increase in signal, as shown in Figure~\ref{fig:width}, a plot of the camera response to a constant light source for different gate widths $w$. The points represent the mean of several measurements which have had a $w=0$ exposure subtracted and have been normalised such that a 1\,$\mu$s gate gives a response of 1. The orange line represents a linear 1:1 response where doubling the gate width results in a doubling of the signal. The correction for the camera's gain is shown in Figure~\ref{fig:gain}, where each point again represents the mean of several background-subtracted exposures to a constant light source. The response is approximately exponential at low to intermediate gain values but deviates at higher gains. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_8.pdf} \caption{Absolute energy derivative with respect to position across the screen for a 40\,A dipole setting. The curve peaks at the high energy end showing that measurements in this region are more sensitive to an uncertainty in the electron's $\xi$ position.} \label{fig:deriv_40} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_9.pdf} \caption{Camera response to a constant light source for different gate widths $w$ (blue), normalised such that a 1\,$\mu$s gate gives a response of 1. The points may be compared to a linear response in the gate width (orange).} \label{fig:width} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_10.pdf} \caption{Camera response $R$ to a constant light source for different gain values $g$ (blue), normalised such that a gain of 0 gives a response of 1.} \label{fig:gain} \end{figure} \subsection{Scintillator} When imaging the scintillator response with different gate widths, an additional correction must be applied. This arises because the response of the scintillator to radiation varies over time, rising quickly to a maximum value and then decaying approximately exponentially with a given decay constant. This response has been measured using radiation generated by the proton beam at AWAKE. The radiation was generated by the proton beam interacting with a number of removable foils along the AWAKE beamline, and the radiative flux is linearly proportional to the proton bunch charge, which varies but is measured at each extraction. A scan was performed by increasing the delay before the camera is gated and taking a number of images at each setting. These images were background subtracted and then fitted against the proton charge with a linear function with an intercept of 0. The fit coefficients for a series of delay settings are shown in Figure~\ref{fig:decay}, with an exponential function fitted to the points with delays greater than 163.7\,$\mu$s. The axis is set such that the proton bunch passes at approximately 0\,$\mu$s and smaller gate widths have been used closer to this point to show the structure of the response. The data have been normalised relative to the first data point, which is centred on 1.2\,$\mu$s and has a gate width of 1\,$\mu$s.
As can be seen, the signal rises very quickly and then becomes well described by an exponential approximately 200\,$\mu$s after the radiation passes. The error bars on each point come from a combination in quadrature of the statistical uncertainty and a systematic uncertainty arising from different experimental setups. The exponential fit returns a half-life of $379\pm1$\,$\mu$s.\par The long distance between the electron source and the spectrometer at AWAKE makes it difficult to propagate a bunch of well-known charge to the scintillator. Consequently, the charge response of the scintillator has been measured at the CLEAR facility at CERN~\cite{GAMBA2018480}. A setup intended to mimic that present at AWAKE was used, with the scintillator attached to the vacuum window placed in the path of an electron bunch with an energy of approximately 150\,MeV. The bunch was normally incident on the rear surface of the vacuum window with a spot size of $\mathcal{O}(1$\,mm). The charge of this bunch was scanned from the minimum available charge of approximately 2\,pC up to 35\,pC, a range intended to be representative of the expected accelerated bunch charge at AWAKE~\cite{Caldwell:2015rkk}. For each event, the charge was measured immediately before the bunch was incident on the vacuum window and the scintillator response was captured using the same Andor camera used at AWAKE. Due to the variable bunch charge for any given setting, a large number of images were taken at each point. The optical setup for the calibration was very different to that used at AWAKE. A smaller 105\,mm focal length lens was used and the camera was positioned at a distance of 3\,m from the scintillator, facing orthogonally to the beamline, with the light reflected via a 5\,cm diameter silver mirror. The correction for the different optical setups is made using the same light source as used in the optical calibration, which mimics the emission of the scintillator. The wide charge range necessitated changing the camera gain and gate width settings for different points, and these have been corrected for as described in the previous subsection. When the gate width is corrected, the scintillator emission is also corrected using the half-life measured in Figure~\ref{fig:decay}. \par Three different backgrounds were subtracted from the captured images: the intrinsic camera background, the ambient background in the room and a particle background generated by radiation directly incident on the camera during events. The data are binned by charge with a bin width of 0.5\,pC, the approximate resolution of the charge measurement device. The fit to the data is shown in Figure~\ref{fig:clear}, with a response of $(1.09\pm0.02)\times10^{6}$ CCD~counts~per~incident~pC. The values given here correspond to a gate width of 500\,$\mu$s and a gain of 250, measured from a delay of 200\,$\mu$s after the initial glow of the scintillator begins. Measurements of the scintillator at different points and for beam energies of 95 and 120\,MeV agree with this fit to within $1\sigma$. \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_11.pdf} \caption{Normalised response of the scintillator to radiation at different times after it passes. The radiation is incident at approximately 101.0363\,ms and the response reaches a maximum within 1\,$\mu$s of that. The response then decays, slowly at first and approximately exponentially after 200--250\,$\mu$s. The vertical bars are a combination of statistical and systematic uncertainties.
The horizontal bars indicate the exposure time, not the timing uncertainty. The exponential fitted here has a half-life of $379\pm1$\,$\mu$s.} \label{fig:decay} \end{figure} \begin{figure}[t] \includegraphics[width=\columnwidth]{fig_12.pdf} \caption{Scintillator response to incident electron beam charge (blue) fitted with a linear function (orange). The data are binned by charge in 0.5\,pC bins.} \label{fig:clear} \end{figure} \section{Conclusion} A magnetic spectrometer to measure accelerated electron bunches at AWAKE has been designed. The spectrometer tackles the unique challenge of the high proton backgrounds present by removing the imaging device from the beamline area and transferring scintillation signals to it using an optical path composed of metre-scale mirrors. The scintillator and the optical system have been characterised sufficiently to allow the spectrometer to achieve its goal of measuring the charge and energy of the accelerated electrons. \section{Acknowledgements} This work was supported by a Leverhulme Trust Research Project Grant RPG-2017-143 and by STFC, United Kingdom. We gratefully acknowledge F. Galleazzi for the model of the AWAKE tunnels used in Figure~\ref{fig:cad} and the operators of AWAKE, the SPS and the CLEAR facility for the provision of the proton and electron data used to produce Figure~\ref{fig:decay} and Figure~\ref{fig:clear}. M. Wing acknowledges the support of the Alexander von Humboldt Stiftung and DESY, Hamburg. \bibliographystyle{elsarticle-num}
\section{Introduction} \label{sec:intro} End-to-end (E2E) models have achieved state-of-the-art performance for automatic speech recognition (ASR) \cite{chiu2018state, sainath2020streaming, li2020developing, li2021recent}. With the goal of directly mapping speech features into word sequences using a single neural network (NN), the most popular E2E models include connectionist temporal classification \cite{graves2006connectionist, hannun2014deep, Li18CTCnoOOV}, recurrent neural network transducer (RNN-T) \cite{graves2012sequence, sainath2020streaming, li2020developing}, and attention-based encoder-decoder (AED) models \cite{chorowski2015attention, karita2019comparative, Li2020Comparison}. However, the performance of E2E models degrades when a mismatch exists between the source-domain training data and the target-domain test data. Many ideas have been proposed to adapt ASR models, such as regularization methods \cite{kld_yu, meng2019asa,l2_liao, meng2020lvector, lhuc}, teacher-student learning \cite{li2014learning, meng2018adversarial, manohar2018teacher, meng2019conditional}, transformation methods \cite{lhn, tan2015cluster}, and adversarial learning \cite{meng2018speaker, grl_shinohara, grl_serdyuk, dsn_meng}. However, all these methods require audio data for adaptation when applied to E2E models \cite{ochiai2018speaker, meng2019speaker, meng2019domain}. To overcome this, the most promising approach is to adapt the E2E model using \emph{text-only} data, because it is easy to collect orders of magnitude more text-only data than the audio-transcript pairs in the target domain. The most common solution is to train an LM using text-only adaptation data and integrate it into the E2E model during inference. The simplest LM fusion method is Shallow Fusion \cite{hannun2014deep, gulcehre2015on}, which combines the E2E model score and the external LM score in the log-linear domain at each step of the beam search. To improve Shallow Fusion, a Density Ratio method \cite{mcdermott2019density, kanda2016maximum} and an internal LM estimation-based fusion \cite{variani2020hybrid, meng2021ilme} were proposed, in which a source-domain LM score and an internal LM score are subtracted from the Shallow Fusion score, respectively, during inference. Variants of internal LM estimation were proposed in \cite{zeyer2021librispeech, zeineldeen2021investigating}. Minimum word error rate training with LM fusion \cite{meng2021minimum, kanda2017minimum, weng2019minimum, peyser2020improving} was conducted to obviate the need for LM weight tuning. Furthermore, structural LM fusion methods such as Deep Fusion \cite{gulcehre2015on}, Cold Fusion \cite{sriram2017cold} and Simple Fusion \cite{stahlberg2018simple} jointly train an E2E model with an external LM to learn a sophisticated combination of the two models. However, all these fusion methods involve an additional external LM during inference, which significantly increases the run-time computational cost. Other solutions include synthesizing speech from the text-only adaptation data using a text-to-speech (TTS) model, and then fine-tuning the E2E model with the synthesized audio-transcript pairs \cite{huang2020rapid, peyser2020improving, baskar2019self}. During inference, only the adapted E2E model is needed. However, these approaches require additional transcribed speech to train a TTS model. Training a reliable multi-speaker TTS model and generating synthesized speech from it are both computationally expensive.
Moreover, the adaptation cost with audio-transcript pairs is typically much higher than that with text-only data. Therefore, it is hard to apply these methods to fast on-device adaptation where the target-domain text is frequently updated. Recently, \cite{pylkkonen2021fast} showed that an RNN-T can be adapted with text-only data by training an auxiliary LM output layer and using it as a regularizer to fine-tune the prediction network. However, learning the additional LM output layer complicates the adaptation process, leading to increased adaptation time and computational cost. \cite{chen2022factorized} factorizes the blank and vocabulary predictions in a transducer and effectively adapts the vocabulary predictor with text as if adapting an LM. However, the additional blank predictor increases the model size and computational cost. To address these limitations, we propose an internal LM adaptation (ILMA) of an E2E model using \emph{text-only} data to improve the ASR performance on the target-domain test data. Trained with audio-transcript pairs, an E2E model implicitly learns an internal LM that characterizes the token sequence distribution. The internal LM probability is estimated by the E2E model output after eliminating the encoder contribution from the network \cite{variani2020hybrid, meng2021ilme}. Therefore, we define the E2E model components excluding the encoder as an \emph{internal LM}. Specifically, the internal LM refers to the prediction network followed by the joint network for a transducer model, or to the decoder for an AED model. During ILMA, we fine-tune the internal LM parameters to maximize the log probability of the adaptation sentences. Before ILMA, it is essential to perform internal LM training (ILMT) \cite{meng2021ilmt} to minimize a cross-entropy internal LM loss in addition to the standard E2E loss. ILMT learns a \emph{dual-mode} internal LM that acts as a standalone LM without losing its capability of generating the correct E2E scores together with the other components of the E2E model. Kullback-Leibler divergence (KLD) \cite{kullback1951information} regularization is performed during ILMA to prevent the adapted E2E model from overfitting the text-only data. We show that, for a transducer model, ILMA is most effective when updating only the last linear layer of the joint network. Compared to LM fusion and TTS-based adaptation, ILMA enables fast text-only adaptation without increasing the run-time computational cost during inference. Evaluated with 30 thousand (K)-hour trained transformer transducers (T-Ts) \cite{zhang2020transformer}, ILMA achieves up to 34.9\% relative word error rate (WER) reduction from the unadapted T-T on Microsoft production test sets. \section{Related Work} \subsection{E2E Models for ASR} Given a sequence of speech features $\mathbf{X}=\{\mathbf{x}_1, \ldots, \mathbf{x}_T\}$, where $\mathbf{x}_t \in \mathbbm{R}^d$, an E2E model is trained to minimize the summation of the negative log posteriors of the reference token sequences $\mathbf{Y}=\{y_1, \ldots, y_U\}$ over the training corpus $\mathcal{D}_\text{T}$, where $y_u \in \mathcal{V}$ and $\mathcal{V}$ is the set of non-blank output tokens, e.g., word pieces, \begin{align} \mathcal{L}_\text{E2E}(\theta_\text{E2E}) = - \sum_{(\mathbf{X}, \mathbf{Y}) \in \mathcal{D}_\text{T}} \log P(\mathbf{Y} | \mathbf{X};\theta_\text{E2E}), \label{eqn:e2e_loss} \end{align} where $\theta_\text{E2E}$ denotes the parameters of the E2E model.
A transducer model \cite{graves2012sequence} comprises an encoder, a prediction network and a joint network. The encoder maps the input speech features $\mathbf{X}$ to a sequence of hidden states $\mathbf{H}^{\text{enc}} = \{\mathbf{h}^\text{enc}_1, \ldots, \mathbf{h}^\text{enc}_T\}$. The predictor takes in the embedding vectors of the previous \emph{non-blank} tokens $\mathbf{Y}_{0:u-1}$ to generate the hidden state $\mathbf{h}^\text{pred}_u$. The joint network is a feed-forward network that combines the outputs of the encoder and predictor to predict the conditional distribution over the next possible token $k\in \mathcal{V} \cup \texttt{<b>}$, i.e., \begin{align} & \mathbf{z}_{t, u} = \mathbf{W}_j \phi(\mathbf{W}_e\mathbf{h}^{\text{enc}}_{t} + \mathbf{W}_p\mathbf{h}^\text{pred}_{u} + \mathbf{b}_e + \mathbf{b}_p) + \mathbf{b}_j, \label{eqn:rnnt_logit} \\ \hspace{-4pt} & \left[P(k|t, u; \theta_\text{E2E})\right]_{k\in \mathcal{V} \cup \texttt{<b>}}= \text{softmax}(\mathbf{z}_{t, u}), \label{eqn:rnnt_softmax} \end{align} where $\texttt{<b>}$ denotes a blank symbol, and $\phi$ is a non-linear function, e.g., tanh or ReLU. $\mathbf{W}_j$, $\mathbf{W}_e$, $\mathbf{W}_p$ are weight matrices, and $\mathbf{b}_e$, $\mathbf{b}_p$, $\mathbf{b}_j$ are biases. $\mathbf{z}_{t, u}$ is a $|\mathcal{V}|+1$ dimensional logit vector. \subsection{Internal LM of E2E Model} \label{sec:ilme} The E2E model implicitly learns an internal LM via the E2E training with paired audio and transcripts. The internal LM characterizes the distribution of token sequences given $\theta_\text{E2E}$. The internal LM probability of a token sequence $\mathbf{Y}$ is \begin{align} & P(\mathbf{Y};\theta_\text{E2E}) = \prod^{U}_{u=1}P(y_u|\mathbf{Y}_{0:u-1};\theta_\text{E2E}), \label{eqn:e2e_ilm} \end{align} where $P(y_u|\mathbf{Y}_{0:u-1};\theta_\text{E2E})$ is estimated as the E2E model output at the step $u$ after zeroing out the encoder hidden states $\mathbf{H}^\text{enc}$ \cite{variani2020hybrid, meng2021ilme}. For a transducer, the internal LM probability is the model output after feeding the previous tokens $\mathbf{Y}_{0:u-1}$ through the internal LM, i.e., the prediction network followed by the joint network \begin{align} & \mathbf{z}^\text{ILM, NB}_u = \mathbf{W}^\text{NB}_j\phi(\mathbf{W}_p\mathbf{h}^\text{pred}_u + \mathbf{b}_p) + \mathbf{b}^\text{NB}_j \label{eqn:logit_ilm}, \\ & P(y_u|\mathbf{Y}_{0:u-1}; \theta_\text{pred}, \theta_\text{joint}) = \text{softmax}(\mathbf{z}^\text{ILM, NB}_{u}), \label{eqn:rnnt_cond_ilm} \end{align} where NB denotes non-blank, $\mathbf{W}_j^\text{NB}$ and $\mathbf{b}^\text{NB}_j$ are created from $\mathbf{W}_j$ and $\mathbf{b}_j$, respectively, by removing the rows corresponding to the blank, and $\mathbf{z}^\text{ILM, NB}_u$ is a logit vector of dimension $|\mathcal{V}|$ without the blank logit. $\theta_\text{pred}$ and $\theta_\text{joint}$ are the parameters of the prediction network and joint network, respectively. \subsection{Internal LM Training (ILMT)} \label{sec:ilmt} The standard E2E training learns a weak internal LM with a high perplexity because it maximizes the posterior $P(\mathbf{Y}|\mathbf{X};\theta_\text{E2E})$ instead of the internal LM probability $P(\mathbf{Y};\theta_\text{E2E})$. To address this limitation, ILMT was proposed in \cite{meng2021ilmt}, where the E2E model is trained to minimize an internal LM loss in addition to the standard E2E loss.
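The internal LM computation of Eqs. \eqref{eqn:logit_ilm} and \eqref{eqn:rnnt_cond_ilm} can be illustrated with a short PyTorch-style sketch; the module names, the 512-dim projections, the tanh non-linearity and the convention that the blank logit occupies the last position are our illustrative assumptions rather than details of a particular implementation.
\begin{verbatim}
import torch
import torch.nn as nn

V = 3999                           # non-blank vocabulary size
enc_proj  = nn.Linear(512, 512)    # W_e, b_e in Eq. (2)
pred_proj = nn.Linear(512, 512)    # W_p, b_p in Eq. (2)
out_proj  = nn.Linear(512, V + 1)  # W_j, b_j; last row = blank (assumed)

def joint_log_prob(h_enc_t, h_pred_u):
    """Eqs. (2)-(3): log-distribution over V non-blank tokens + blank."""
    z = out_proj(torch.tanh(enc_proj(h_enc_t) + pred_proj(h_pred_u)))
    return torch.log_softmax(z, dim=-1)

def internal_lm_log_prob(h_pred_u):
    """Eqs. (5)-(6): drop the encoder branch entirely and remove the
    blank row of the output layer before the softmax."""
    z = out_proj(torch.tanh(pred_proj(h_pred_u)))
    return torch.log_softmax(z[..., :V], dim=-1)
\end{verbatim}
Slicing the logits before the softmax is equivalent to using $\mathbf{W}_j^\text{NB}$ and $\mathbf{b}_j^\text{NB}$ directly, since each logit is computed row-wise.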
The internal LM loss is the summation of negative log internal LM probabilities over the training corpus $\mathcal{D}_\text{T}$ as follows \begin{align} \hspace{-2pt}\mathcal{L}_{\text{ILM}}(\theta_\text{ILM}; \mathcal{D}_\text{T}) = -\sum_{\mathbf{Y} \in \mathcal{D}_\text{T}} \sum^{U}_{u=1}\log P(y_u|\mathbf{Y}_{0:u-1};\theta_\text{ILM}), \label{eqn:ilm_loss_train} \end{align} where $\theta_\text{ILM}$ denotes the internal LM parameters. $\theta_\text{ILM}$ equals $\{\theta_\text{pred}, \theta_\text{joint}\}$ for a transducer model, and denotes the decoder parameters for an AED model. The ILMT loss is a weighted sum of the E2E loss in Eq. \eqref{eqn:e2e_loss} and the internal LM loss in Eq. \eqref{eqn:ilm_loss_train} \begin{align} & \hspace{-3pt} \mathcal{L}_{\text{ILMT}}(\theta_\text{E2E}; \mathcal{D}_\text{T}) = \mathcal{L}_{\text{E2E}}(\theta_\text{E2E}; \mathcal{D}_\text{T}) + \alpha \mathcal{L}_{\text{ILM}}(\theta_\text{ILM}; \mathcal{D}_\text{T}), \label{eqn:ilmt} \end{align} where $\alpha$ is the weight of the internal LM loss. By minimizing the ILMT loss, we maximize the internal LM probability of the E2E training transcripts by updating only the internal LM, while maximizing the conditional probability of the training transcripts given the input speech by updating the entire E2E model. ILMT learns a strong internal LM with significantly reduced perplexity without losing ASR performance. It encourages the internal LM also to behave like a standalone NN-LM \cite{mikolov2010recurrent}, increasing the modularity of the E2E model. \section{Internal LM Adaptation (ILMA)} Our goal is to adapt the E2E model using \emph{text-only} data so that it achieves improved ASR performance in the target domain without any increase of the run-time computational cost. Since the internal LM of an E2E model acts exactly like a standard NN-LM, one possible way is to adapt the internal LM using the text-only data. Theoretically, any NN-LM adaptation method is applicable to ILMA. The simplest solution is to re-train the internal LM to minimize the cross-entropy internal LM loss as follows \begin{align} & \hspace{-2pt}\mathcal{L}_{\text{ILM}}(\theta_\text{ILM};\mathcal{D}_\text{A}) = - \hspace{-2pt} \sum_{\mathbf{Y} \in \mathcal{D}_\text{A}} \sum^{U}_{u=1}\log P(y_u|\mathbf{Y}_{0:u-1};\theta_\text{ILM}). \label{eqn:ilm_loss_adapt} \end{align} Different from ILMT, where the internal LM loss in Eq. \eqref{eqn:ilm_loss_train} is computed with the transcripts of the training data $\mathcal{D}_\text{T}$, the internal LM loss of ILMA is computed with the sentences in the text-only adaptation set $\mathcal{D}_\text{A}$. In standard E2E model training, the internal LM is updated together with the encoder. In Eq. \eqref{eqn:ilm_loss_adapt}, however, the internal LM is re-trained separately to become a standalone NN-LM by maximizing the log probability of the adaptation sentences. After ILMA of a standard E2E model, it is probable that the adapted internal LM deviates so much from the unadapted one that it forgets how to work together with the encoder to generate reasonable E2E model probabilities during inference. To circumvent this, we perform ILMT of the E2E model to minimize the ILMT loss in Eq. \eqref{eqn:ilmt} before adapting the internal LM towards the text-only data.
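A minimal sketch of the ILMT objective of Eq. \eqref{eqn:ilmt} is given below, reusing the \texttt{internal\_lm\_log\_prob} helper from the previous sketch; \texttt{rnnt\_loss} and \texttt{prediction\_network} stand in for an actual transducer implementation, and token-shift details such as the start-of-sequence symbol are elided.
\begin{verbatim}
import torch.nn.functional as F

def ilmt_loss(feats, tokens, alpha):
    """ILMT loss: the standard transducer loss plus alpha times the
    cross-entropy internal LM loss over the same transcripts."""
    e2e = rnnt_loss(feats, tokens)                # E2E loss, Eq. (1)
    h_pred = prediction_network(tokens[:, :-1])   # states for Y_{0:u-1}
    log_p = internal_lm_log_prob(h_pred)          # Eq. (6)
    ilm = F.nll_loss(log_p.reshape(-1, log_p.size(-1)),
                     tokens[:, 1:].reshape(-1))   # ILM loss, Eq. (7)
    return e2e + alpha * ilm                      # Eq. (8)
\end{verbatim}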
Through ILMT, a \emph{dual-mode} internal LM is learned to behave like a standalone LM while maintaining its capability to collaborate with the encoder to compute the correct E2E model probabilities. During ILMA, only the ``LM mode'' of the internal LM is adapted towards the text-only data while its ``E2E component mode'' remains unchanged, leading to improved performance on the target-domain test data. Further, to prevent the adapted E2E model from overfitting the text-only adaptation data, we minimize, in addition to the cross-entropy internal LM loss, the KLD below between the output distribution of the adapted internal LM and that of the unadapted one \begin{align} & \mathcal{KL}\left[P(y_u|\mathbf{Y}_{0:u-1};\theta^*_\text{ILM}) || P(y_u|\mathbf{Y}_{0:u-1};\theta_\text{ILM})\right] = \nonumber \\ & \sum_{v \in\mathcal{V}} P( v|\mathbf{Y}_{0:u-1};\theta^*_\text{ILM}) \log \left[\frac{P( v|\mathbf{Y}_{0:u-1};\theta^*_\text{ILM})} {P( v|\mathbf{Y}_{0:u-1};\theta_\text{ILM})} \right], \label{eqn:kl} \end{align} where $\theta^*_\text{ILM}$ denotes the internal LM parameters before ILMA. With KLD regularization, the ILMA loss becomes \begin{align} & \mathcal{L}_{\text{ILMA}}(\theta_\text{ILM}; \mathcal{D}_\text{A}) = (1 - \rho) \mathcal{L}_{\text{ILM}}(\theta_\text{ILM}; \mathcal{D}_\text{A}) \nonumber \\ & - \rho \hspace{-2pt} \sum_{\mathbf{Y} \in \mathcal{D}_\text{A}} \sum_{u = 1}^U \sum_{v \in\mathcal{V}} P( v|\mathbf{Y}_{0:u-1};\theta^*_\text{ILM}) \log P( v|\mathbf{Y}_{0:u-1};\theta_\text{ILM}) \nonumber \\ & = - \hspace{-5pt}\sum_{\mathbf{Y} \in \mathcal{D}_\text{A}} \sum_{u = 1}^U \sum_{v \in\mathcal{V}} \left[(1- \rho) \delta(v = y_u) + \rho P( v|\mathbf{Y}_{0:u-1};\theta^*_\text{ILM}) \right] \nonumber \\ & \qquad\qquad\qquad\qquad\quad \log P(v|\mathbf{Y}_{0:u-1};\theta_\text{ILM}), \label{eqn:ilma} \end{align} where $\rho \in [0, 1]$ is the regularization weight and $\delta(v = y_u)$ is the Kronecker delta, which equals 1 when $v = y_u$ and 0 otherwise. Note that ILMA with KLD regularization in Eq. \eqref{eqn:ilma} reduces to the cross-entropy ILMA in Eq. \eqref{eqn:ilm_loss_adapt} when $\rho = 0$. During ILMA, different parts of the internal LM parameters can be updated to minimize the ILMA loss. We find that, for a transducer model, ILMA is the most effective when updating only $\{\mathbf{W}^\text{NB}_j, \mathbf{b}_j^\text{NB}\}$ of the joint network. One possible reason is that the adaptation text does not include the blank token \texttt{<b>}, so the best we can do is to \emph{only} adapt the transducer's capability of predicting non-blank tokens \emph{without} sacrificing its capability of predicting \texttt{<b>}. In a transducer, the only parameters that do not contribute to $P(\texttt{<b>}|\mathbf{Y}_{0:u-1};\theta_\text{E2E})$ are $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$. Therefore, adapting $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ performs the best while adapting any other parameters may hurt the \texttt{<b>} predictions. In summary, ILMA of an E2E model with text-only data includes the following steps: \begin{enumerate} \item Perform ILMT of the E2E model with audio-transcript pairs in the training set $\mathcal{D}_T$ by minimizing the ILMT loss in Eq. \eqref{eqn:ilmt} to generate the model $M_1$. \item Perform ILMA of the E2E model $M_1$ with \emph{text-only} data $\mathcal{D}_A$ by minimizing the ILMA loss defined in Eq. \eqref{eqn:ilma} to generate the model $M_2$.
\item Perform inference with the adapted E2E model $M_2$ on the target-domain test data. \end{enumerate} ILMA enables fast text-only adaptation of the internal LM without losing its functionality to work seamlessly with the other components of an E2E model. Contrary to LM fusion methods, ILMA does \emph{not} increase any computational cost during inference because only the adapted E2E model is needed to run the beam search decoding. Compared to Fast Text-only Adaptation (FTA) \cite{pylkkonen2021fast}, ILMA saves one step of training the additional output LM component, effectively reducing the training time and the computational cost. Most importantly, ILMA has the flexibility to adapt only the joint network (the most effective scheme), while FTA does not. \section{Experiments} We perform text-only ILMA of T-T models trained with a transducer loss or an ILMT loss and evaluate the adapted models on intra-domain and cross-domain test sets. During inference, we perform beam search with a beam size of 5 \emph{without} using any external LM. 3999 word-piece units generated by byte-pair encoding \cite{sennrich2015neural} are used as the output token set $\mathcal{V}$ for all T-T models. \subsection{Data Preparation} We train T-T models with 30K hours of anonymized and transcribed data as in \cite{meng2021ilme, meng2021ilmt}, collected from Microsoft services, including voice search, short message dictation, conversations, etc. In addition, we collect two \emph{text-only} adaptation sets as follows: \begin{itemize} \item \textbf{Command \& Control}: 50 million (M) words of simulated text generated by a command \& control grammar and 400K words of real anonymized text from Microsoft command \& control services. \item \textbf{Multi-Domain}: 2 billion (B) words of anonymized text comprising conversational data, voice search and short message dictation from Microsoft services. \end{itemize} Correspondingly, we gather two target-domain test sets whose transcripts do not overlap with the adaptation text. \begin{itemize} \item \textbf{Command \& Control}: 1K utterances collected from Microsoft command \& control services. \item \textbf{Voice Search}: 18K far-field voice search utterances collected from Microsoft services. \end{itemize} Note that Command \& Control is a \emph{cross-domain} evaluation set because it is not covered by the 30K-hour training data, while Voice Search is an \emph{intra-domain} one because the 30K-hour training data includes voice search utterances. We extract 80-dimensional (dim) log Mel filter bank features from the speech signal for training and test sets every 10 ms over a 25 ms window. \subsection{Baseline Systems} \label{sec:tt} The T-T has a streaming encoder \cite{chen2020developing} with four 2D convolution layers sub-sampling the 80-dim log Mel filter bank features in time by a factor of 4. On top of that is a transformer with 18 layers; each layer has an 8-head attention sub-layer with relative positional encoding \cite{vaswani2017attention} and a 2048-dim fully-connected sub-layer. The encoder has a look-ahead of 360 ms on average. The T-T prediction network is a 2-layer transformer with each layer containing a 4-head attention sub-layer followed by a 1024-dim fully-connected sub-layer. The inputs to the prediction network are 512-dim word-piece embedding vectors with positional encoding. The attention dimension is fixed at 512. The outputs of the encoder and prediction network are projected to 512-dim vectors. Dropout with a probability of 0.1 is deployed.
The T-T has 67M parameters. We first train a T-T model to minimize the transducer loss using the 30K-hour data as the baseline model. In Table \ref{table:compare_adaptation}, the baseline T-T achieves 12.81\% and 4.23\% WERs on the Command \& Control and Voice Search test sets, respectively. To achieve an effective ILMA, we also perform ILMT of the T-T to minimize the ILMT loss in Eq. \eqref{eqn:ilmt} with the 30K-hour data before ILMA. The ILMT-ed T-T achieves 12.55\% and 4.22\% WERs on the Command \& Control and Voice Search test sets, performing slightly better than the baseline T-T. With ILMT, the token-level perplexity of the internal LM reduces from 131.2 (baseline) to 60.7 on the validation set of the 30K-hour training data. This shows that a dual-mode internal LM with significantly lower perplexity and slightly better ASR performance is learned via ILMT. To compare with ILMA, we perform FTA \cite{pylkkonen2021fast} on the baseline T-T. The output LM component is a feed-forward layer followed by a softmax. In Table \ref{table:compare_adaptation}, FTA achieves 10.04\% and 4.15\% WERs on the Command \& Control and Voice Search test sets, respectively. Moreover, we train two long short-term memory (LSTM) LMs \cite{sak2014long,meng2017deep} with Command \& Control and Multi-Domain adaptation text, respectively. Both LSTM-LMs have two hidden layers and 2048 hidden units at each layer. Each LSTM-LM has 58M parameters. We perform Shallow Fusion of the ILMT-ed T-T and the LSTM-LM during inference. In Table \ref{table:compare_adaptation}, Shallow Fusion achieves 8.71\% and 4.04\% WERs for cross-domain and intra-domain evaluations, respectively. \begin{table} \centering \setlength{\tabcolsep}{4.0pt} \begin{tabular}[c]{c|c|c|c|c} \hline \hline \multirow{2}{*}{\begin{tabular}{@{}c@{}} Method \end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} External \\ LM \end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} Model \\ \hspace{0.6pt} Params \hspace{0.6pt}\end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} Comd. \\ \hspace{0.1pt} Control \hspace{0.1pt} \end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} Voice \\ \hspace{1.1pt}Search\hspace{1.1pt} \end{tabular}} \\ & & & & \\ \hline T-T & \multirow{3}{*}{\begin{tabular}{@{}c@{}} No \end{tabular}} & \multirow{3}{*}{\begin{tabular}{@{}c@{}} 67M \end{tabular}} & 12.81 & 4.23 \\ \hhline{-~~--} ILMT & & & 12.55 & 4.22 \\ \hhline{-~~--} FTA & & & 10.04 & 4.15 \\ \hline ILMT + SF & Yes & 125M & 8.71 & 4.04 \\ \hline ILMT + ILMA & No & 67M & \textbf{8.34} & \textbf{3.95} \\ \hline \hline \end{tabular} \caption{Comparison of the WERs (\%) and model sizes of three adaptation methods: Fast Text-only Adaptation (FTA) \cite{pylkkonen2021fast}, Shallow Fusion (SF) of the ILMT-ed T-T and an NN-LM, and the proposed ILMA after ILMT.} \label{table:compare_adaptation} \vspace{-10 pt} \end{table} \subsection{Internal LM Adaptation} \label{sec:exp_ilma} \subsubsection{Cross-Domain Evaluation} \label{sec:exp_ilma_tt} We adapt the baseline T-T with text-only data in the Command \& Control adaptation set by minimizing the ILMA loss in Eq. \eqref{eqn:ilma}. Different components of the internal LM are updated during ILMA. We also adjust the KLD regularization weight $\rho$ to prevent the T-T from overfitting the adaptation text. As in Table \ref{table:command_control}, when updating the entire internal LM, the prediction network alone or $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ of the joint network (Eq.
\eqref{eqn:logit_ilm}), we obtain the largest relative WER reductions of 6.8\%, 6.2\% and 19.1\%, respectively, from the baseline T-T at a common weight $\rho=0.2$. Adapting $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ alone performs significantly better than adapting the entire internal LM or the predictor alone. \begin{table}[t] \setlength{\tabcolsep}{7.0 pt} \centering \begin{tabular}[c]{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{\begin{tabular}{@{}c@{}} Method \end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} Adapted \\ Params \end{tabular}} & \multicolumn{4}{c}{KLD Regularization Weight $\rho$} \\ \hhline{~~----} & & 0.0 & 0.2 & 0.5 & 0.8 \\ \hline T-T & - & \multicolumn{4}{c}{12.81} \\ \hline \multirow{3}{*}{\begin{tabular}{@{}c@{}} + ILMA \end{tabular}} & ILM & 12.30 & 11.93 & 12.44 & 12.29 \\ \hhline{~-----} & Predictor & 12.32 & 12.01 & 12.34 & 12.83 \\ \hhline{~-----} & Joiner & 10.85 & 10.36 & 10.53 & 11.09 \\ \hline \hline ILMT & - & \multicolumn{4}{c}{12.55} \\ \hline \multirow{3}{*}{\begin{tabular}{@{}c@{}} + ILMA \end{tabular}} & ILM & 10.34 & 9.67 & 10.17 & 11.59 \\ \hhline{~-----} & Predictor & 10.47 & 10.02 & 10.42 & 11.50 \\ \hhline{~-----} & Joiner & 8.96 & \textbf{8.34} & 8.51 & 9.65 \\ \hline \hline \end{tabular} \caption{ The WERs (\%) of T-T models on the \textbf{cross-domain Command \& Control} test set. T-Ts are first trained with a standard transducer loss (T-T) or an ILMT loss, and then adapted with the ILMA loss using Command \& Control \emph{text-only} adaptation data. Joiner represents $\mathbf{W}_j^\text{NB}, \mathbf{b}^\text{NB}_j$, i.e., the joint network output layer after removing the rows corresponding to the blank. } \label{table:command_control} \vspace{-15 pt} \end{table} We then perform ILMA of the ILMT-ed T-T using the text-only data from the Command \& Control adaptation set. ILMA achieves 24.5\%, 21.8\% and 34.9\% relative WER reductions from the baseline T-T by updating the entire internal LM, the predictor alone or $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ of the joint network, respectively, when $\rho=0.2$. Among the three update schemes, adapting $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ alone performs the best, achieving 13.8\% and 16.8\% relative WER reductions from adapting the whole internal LM and adapting only the prediction network, respectively. It also outperforms the ILMT-ed T-T and FTA with 33.5\% and 16.9\% relative WER reductions, respectively. Moreover, in Table \ref{table:compare_adaptation}, ILMA achieves a 4.2\% relative WER reduction from Shallow Fusion with 46.4\% fewer model parameters and 20.7\% shorter inference time. \subsubsection{Intra-Domain Evaluation} \label{sec:exp_ilma_ilmt} We perform ILMA of the baseline T-T using the Multi-Domain adaptation text. However, as in Table \ref{table:voice_search}, adapting the entire internal LM or the predictor alone degrades the WER of the baseline T-T for all regularization weights. Even adapting $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ alone achieves little relative WER reduction from the baseline T-T.
\begin{table}[h] \setlength{\tabcolsep}{8.0 pt} \centering \begin{tabular}[c]{c|c|c|c|c|c} \hline \hline \multirow{2}{*}{\begin{tabular}{@{}c@{}} Method \end{tabular}} & \multirow{2}{*}{\begin{tabular}{@{}c@{}} Adapted \\ Params \end{tabular}} & \multicolumn{4}{c}{KLD Regularization Weight $\rho$} \\ \hhline{~~----} & & 0.0 & 0.2 & 0.5 & 0.8 \\ \hline T-T & - & \multicolumn{4}{c}{4.23} \\ \hline \multirow{3}{*}{\begin{tabular}{@{}c@{}} + ILMA \end{tabular}} & ILM & 4.31 & 4.31 & 4.32 & 4.29 \\ \hhline{~-----} & Predictor & 4.28 & 4.30 & 4.30 & 4.31 \\ \hhline{~-----} & Joiner & 4.19 & 4.19 & 4.19 & 4.18 \\ \hline \hline ILMT & - & \multicolumn{4}{c}{4.22} \\ \hline \multirow{3}{*}{\begin{tabular}{@{}c@{}} + ILMA \end{tabular}} & ILM & 4.13 & 4.12 & 4.12 & 4.13 \\ \hhline{~-----} & Predictor & 4.11 & 4.10 & 4.09 & 4.10 \\ \hhline{~-----} & Joiner & 4.07 & 4.01 & \textbf{3.95} & 4.00 \\ \hline \hline \end{tabular} \caption{ The WERs (\%) of T-T models on the \textbf{intra-domain Voice Search} test set. T-Ts are first trained with a standard transducer loss (T-T) or an ILMT loss, and then adapted with the ILMA loss using Multi-Domain \emph{text-only} adaptation data. Joiner represents $\mathbf{W}_j^\text{NB}, \mathbf{b}^\text{NB}_j$, i.e., the joint network output layer after removing the rows corresponding to the blank. } \label{table:voice_search} \vspace{-20 pt} \end{table} We then adapt the internal LM of the ILMT-ed T-T using the Multi-Domain text. ILMA achieves 2.6\%, 3.3\% and 6.6\% relative WER reductions from the baseline T-T when updating the entire internal LM, the predictor alone and $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ of the joint network, respectively, when $\rho=0.5$. Adapting $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ performs the best among all three update schemes, with 6.4\% and 4.8\% relative WER reductions from the ILMT-ed T-T and FTA, respectively. Furthermore, in Table \ref{table:compare_adaptation}, ILMA achieves a 2.2\% relative WER reduction from Shallow Fusion with 46.4\% fewer model parameters and 25.6\% shorter inference time. \subsection{Result Analysis} The T-T + ILMA results in Table \ref{table:voice_search} indicate that the ILMA of a T-T trained with a standard transducer loss may degrade the ASR performance, because minimizing the ILMA loss pushes the internal LM towards a pure NN-LM that is incompatible with the other components of the E2E model. By comparing the T-T + ILMA and ILMT + ILMA results in both tables, we see that the ILMA of a T-T trained with an ILMT loss achieves much larger relative WER reductions than that of a T-T trained with a standard transducer loss for both cross-domain and intra-domain evaluations. This shows that the dual-mode internal LM learned via ILMT can be adapted effectively as a standalone LM using \emph{text-only} data while its full functionality as an essential T-T component is well maintained. The ILMT + ILMA results in Tables \ref{table:command_control} and \ref{table:voice_search} suggest that adding a proper KLD regularization ($\rho>0$) always reduces the WER by preventing the E2E model from overfitting the adaptation text. Furthermore, updating only $\{\mathbf{W}_j^\text{NB}, \mathbf{b}_j^\text{NB}\}$ of the joint network during ILMA consistently and significantly outperforms updating the entire internal LM or the prediction network alone.
Compared to FTA \cite{pylkkonen2021fast}, the proposed ILMA saves one step of output LM component training, achieving \emph{simpler} and \emph{faster} text-only adaptation. Most importantly, as in Table \ref{table:compare_adaptation}, ILMA achieves 4.8\%--16.9\% relatively lower WERs than FTA. Further, ILMA consistently outperforms Shallow Fusion with significantly reduced model parameters, computational cost and inference time. \section{Conclusion} In this work, we propose an internal LM adaptation of the E2E model using text-only data. During ILMA, the internal LM of the E2E model is fine-tuned with adaptation text to minimize a cross-entropy internal LM loss. Internal LM training of the E2E model is performed before ILMA to ensure that the internal LM behaves like a standalone NN-LM while maintaining its functionality as an inseparable component of the E2E model. KLD regularization is added to the ILMA loss to avoid overfitting. In experiments with a T-T model trained on 30K hours of data, ILMA achieves up to 34.9\% and 6.6\% relative WER reductions from the baseline T-T on cross-domain and intra-domain test sets, respectively. We also show that ILMT is necessary for an effective and robust ILMA. During ILMA, updating only the last linear layer of the joint network consistently achieves the best performance. \bibliographystyle{IEEEtran}
\section*{Acknowledgments} \label{sec:acks} We are grateful to Suhas Daftuar and Matt Corallo for initial discussions on time-dilation attacks. We would like to thank Clara Shikhelman, Sergei Tikhomirov, Rene Pickhardt, Sergi Delgado Segura, Michael Folkson, James Chiang, devrandom, Bastien Teinturier and Thibaut Le Guilly for the useful feedback on the paper. This work was supported by Chaincode Labs. \section{Attack Preparation: Eclipse and Node Mapping} \label{sec:attacks_prep} All Bitcoin nodes constitute a peer-to-peer network. Full Bitcoin nodes can be roughly split into two categories: \begin{itemize} \item nodes reachable from most of the Internet and accepting inbound connections from other nodes \item non-reachable nodes behind NATs and firewalls \end{itemize} Reachable nodes act as a backbone, allowing other nodes to join and relay transactions, blocks, and other necessary information. As of March 2020, every Bitcoin Core node by default maintains up to 8 outbound connections to relay transactions, blocks, and network addresses of other nodes, and 2 extra connections to relay exclusively blocks. All connections in the Bitcoin network are bidirectional. Connections relaying only blocks leak less information and are supposed to secure block relay. Although outbound peer rotation has been discussed multiple times \cite{Naumenko2018Rotation, Pustogarov2014Rotation}, Bitcoin Core never deviated from the status quo approach. Thus, the topology is currently fairly static, and new outbound connections for an existing node are only made due to issues with existing connections. Since the network is permissionless, it is naturally susceptible to certain attacks, which enable time-dilation. We consider two practical scenarios for time-dilation: \textbf{C1.} A victim's Bitcoin node is first eclipsed (isolated) as part of a broader attack on the Bitcoin network, and an attacker attempts to find a corresponding Lightning node. \textbf{C2.} A specific victim's Lightning node (identified by its IP and channels) is targeted, and then an attacker attempts to locate and eclipse a corresponding Bitcoin node. In both cases, an attacker would have to eclipse a victim's Bitcoin node and map a Lightning node to a Bitcoin node (often involving transaction origin inference), in different orders. These attacks are relevant against LN users running their own full nodes or light clients for Bitcoin blockchain processing, instead of relying on third parties. We will now describe the relevant attacks targeting full nodes, and then discuss applying them to light clients in more detail. \subsection{Eclipse Attacks on Full Nodes} By definition, an Eclipse attack implies preventing a victim's node from communicating with other honest participants of the network. It is usually done by occupying all of the victim's node connections with malicious nodes or pseudo-nodes. An attacker gains complete control over \textit{what} and \textit{when} a victim sends and receives from the network. This is a crucial requirement for performing time-dilation attacks. Fig. \ref{fig:eclipse} demonstrates an Eclipse attack from the topology perspective. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{node-under-eclipse.pdf} \caption{Eclipse attack on the Bitcoin network. Only node $A$ is eclipsed, because all its connections lead to the attacker.} \label{fig:eclipse} \end{figure} The first Eclipse attack on the Bitcoin network was demonstrated by Heilman et al. \cite{Heilman2015Eclipse}.
It is purely based on the high-level protocols of the Bitcoin network, namely, address management and relay logic. Further research demonstrated that a combination of BGP protocol exploitation and Bitcoin's address management exploitation can significantly reduce the eclipsing cost. Apostolaki et al. \cite{Apostolaki2017Hijack} demonstrated that any network-level attacker (e.g., an Autonomous System) can isolate a significant portion of nodes by hijacking less than 100 prefixes, although the attack can be detected. Tran et al. \cite{Tran2019Erebus} further showed that any Tier-2 Autonomous System can Eclipse most of the reachable Bitcoin nodes in an undetectable way. The studied consequences of eclipsing a Bitcoin node include monetary (double-spending attacks, attacks on mining) and non-monetary (peer-to-peer layer deanonymization) ones. Although Heilman et al. briefly mentioned monetary consequences on second layer protocols \cite{Marcus2018Eclipse}, our work is the first to discuss these attacks in detail. \subsection{Eclipse Attacks on Neutrino} \label{sec:eclipse_neutrino} It is crucial for BIP 157 light clients to be connected to Bitcoin full nodes providing the required server-side support. The number of available honest serving nodes defines the security of light clients against Sybil attacks. The more honest nodes serve light clients, the more Sybil nodes an attacker needs to deploy to Eclipse a victim. The security of light clients can in theory be increased by connecting to regular full nodes and checking the tip with them, but this behavior is currently not implemented. Thus, for a given number of attacker Sybil serving nodes ($N_a$), a number of honest serving nodes ($N_h$), and a number of outgoing connections maintained by the victim's light client ($C$), the probability of a successful Eclipse attack can then be expressed as: \begin{equation} P_{E} = \left(\frac{N_a}{N_h+N_a}\right)^{C} \end{equation} To estimate the cost of eclipsing honest nodes via a trivial Sybil attack today, we collected a list of available serving nodes from the Bitcoin DNS seed servers. This is the same procedure any new node in the network follows to learn about nodes in the network during the initial bootstrap. After getting a subset of this list, light clients usually choose 8 peers at random. Currently, only one of the Bitcoin implementations (\textit{btcd}) has released a build with the server-side support for these light clients. There are only around 30 reachable nodes in the network that run this implementation. We also found 20 nodes running a custom version of Bitcoin Core, based on the work-in-progress implementation of server-side BIP 157 support. According to the formula above, spawning just 500 Sybil nodes with server-side BIP 157 support would trivially eclipse a random 47\% of newly deployed or restarted light clients. To increase the probability of success for the attack, an attacker would have to either create more Sybil server nodes or reduce the number of available honest server nodes. The former requirement makes it expensive to Sybil attack the whole Bitcoin network, but quite practical to attack the subset of nodes using BIP 157/158, because this feature is supported by so few honest nodes. The latter can be achieved via DoS. Additionally, an attacker might exploit that neither Neutrino nor btcd implements the countermeasures discussed in \cite{Heilman2015Eclipse, Tran2019Erebus}, and those used in Bitcoin Core.
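The Sybil estimate above can be reproduced with a few lines of Python; the node counts are the ones measured in this section.
\begin{verbatim}
# P_E: probability that all C outgoing connections of a freshly
# deployed or restarted light client land on attacker Sybils.
def eclipse_probability(n_attacker, n_honest, connections=8):
    return (n_attacker / (n_honest + n_attacker)) ** connections

# ~50 honest BIP 157 serving nodes (30 btcd + 20 patched Bitcoin
# Core) against 500 Sybils:
print(eclipse_probability(500, 50))  # ~0.47, i.e. 47% of clients
\end{verbatim}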
For example, neither Neutrino nor btcd employs the following methods: \begin{itemize} \item peer selection diversification based on the Autonomous System a peer belongs to, making it more difficult for an attacker to get victims to connect exclusively to the attacker's Sybil nodes \item eviction on inbound connections when all the slots are occupied, to allow new nodes to connect to honest reachable nodes even when their inbound capacity is exhausted. \end{itemize} The lack of these and other countermeasures in light clients allows more sophisticated Eclipse attacks to succeed even at a lower cost. \subsection{Verifying Eclipse} Once an attacker suspects that a victim's Bitcoin node is eclipsed, they should verify that the node does not have another form of access to the Bitcoin network. The easiest way to verify a node is eclipsed is \textbf{transaction probing} based on transaction relay protocols. An attacker chooses a random transaction they received from the network, and does not relay it to the eclipsed node through any of the connections under their control. Then, if the victim's node announces that transaction to the attacker, it means there is still a link between the victim and an honest part of the network. Verification would be more difficult if a node is connected to an external source of \textit{blocks}, which does not relay transactions. The methodology above would identify the eclipsing of only the transaction relay aspect of the peer-to-peer communication, while these attacks require eclipsing all links relaying blocks. In this case, an attacker would have to apply \textbf{block probing}: delaying a block delivery through all links to the victim, and observing whether the victim relays that block to the attacker's nodes. The only problem with this approach is that blocks arrive much less often than transactions. Thus, if the probing demonstrated that the victim is still not eclipsed, the next attempt will be possible no earlier than in 10 minutes on average (as opposed to every second with transactions). Sometimes a victim may have an irregular source of blocks or transactions (e.g., the Blockstream Satellite service). In this case, \textbf{a combination of block and transaction probing} would help an attacker to identify in a timely manner that this is the case, deduce what kind of external service the victim is using, and determine whether they are capable of disrupting this service. This would ultimately help the attacker to choose a better strategy for proceeding with the attack. \subsection{Mapping Nodes} \label{sec:mapping_nodes} To launch a time-dilation attack, a malicious actor also has to map a victim's Bitcoin node to a Lightning node. The easiest mapping technique is correlating Bitcoin and Lightning nodes that \textbf{operate under the same IP}. To measure how many users run their nodes under the same IP, we scraped IP addresses of the Bitcoin nodes over a week. Then we correlated them with the list of Lightning nodes with advertised channels. We were able to gather a list of 4,500 Lightning nodes and 52,000 Bitcoin nodes and found 982 matches by IP. Almost half of the Lightning nodes were represented by an onion address, making them even less likely to be traceable by this methodology. Only two pairs of nodes shared an onion address. These numbers do not include Lightning nodes with \textit{private} (not advertised) channels and non-listening Bitcoin nodes. If Bitcoin and Lightning nodes \textbf{operate under different IPs}, an attacker would have to apply the following heuristics.
In case \textbf{C1}, an attacker would have to find which Lightning channel funding transactions originated from the victim's eclipsed Bitcoin node, and map those transactions to channel announcements in the Lightning Network. The most straightforward approach is to apply transaction origin inference against Lightning-related transactions coming from the victim's Bitcoin node. It can provide precise results because an attacker can analyze all the relevant messages coming from/to the victim node, acting as a Man-in-the-Middle between the victim and the honest part of the network. Alternatively, an attacker can withhold a block from the victim's eclipsed Bitcoin node, and look for the nodes in the Lightning Network which do not accept and relay some of the channel announcements that become valid \textit{within the withheld block}. For case \textbf{C2}, an attacker would have to: \begin{enumerate} \item Deploy Sybil nodes in the Bitcoin network, both connecting to honest nodes and accepting connections from them \item Apply transaction origin inference to the relay of the Bitcoin transactions corresponding to the victim's channels \end{enumerate} In both cases, the approach involving transaction origin inference might take days or even weeks. For \textbf{C1}, the upper bound on time is set by the channel's lifetime, which can be unlimited in the LN. If a victim never commits any channels on-chain, it is impossible to map their nodes. In practice, channels do get closed, although the lifetime may vary from hours to weeks, and the average channel age is currently 319 days\footnote{As shown by https://1ml.com/statistics on 2020-05-15}. Alternatively, an attacker can proactively open low-value channels with a victim and infer from them, if the victim's LN implementation is configured accordingly (often enabled by default). This technique would require an attacker to spend the minimum cost of opening a channel per probe. For \textbf{C2}, the upper bound is set by the time it takes to passively accept enough incoming connections from honest nodes. It may take days because honest nodes rarely add new peers (only when an existing peer is disconnected), and forcing reconnections requires attack capabilities beyond our threat model. Proactively connecting to victims is not enough, because in Bitcoin Core transactions are relayed to inbound connections more slowly to make Sybil-based spying less effective. We will now discuss the known techniques and the feasibility of the transaction origin inference required for mapping nodes. \subsection{Transaction Origin Inference on Full Nodes} Transaction origin inference means finding the Bitcoin node from which a particular transaction was initially relayed. This would allow linking a transaction to a particular IP address, assuming a transaction sender uses their own node to submit transactions. We anticipate that this is a fair assumption for LN users who prefer a trust-minimized model over relying on a third party. It is possible that a transaction was relayed via a proxy node or Tor, in which case it would trigger a false positive observation, but this is currently not the default behavior and not the general case\footnote{Only 5 of 17 popular wallets listed at bitcoin.org have the Tor feature as of 2020-01-10}. Transaction origin inference was previously explored in \cite{Biryukov2014Deanonymisation, Neudecker2016Timing, Grundmann2019Exploiting, Miller2015DiscoveringB, Delgado2019TxProbe}.
Related work demonstrated that it is currently possible to infer transaction origin at high accuracy \cite{Naumenko2019Erlay, Fanti2017Dandelion} with a moderate network of Sybils. \subsection{Transaction Origin Inference on Neutrino} As we explained before, network-level transaction deanonymization in the suggested threat model usually relies on first-spy estimation: establishing multiple connections to honest nodes in the network and analyzing the messages coming from those nodes. Bitcoin Core employs several techniques to obfuscate the transaction flow across the network. These include: \begin{enumerate} \item random “diffusion” delays before announcing a transaction \item an increased diffusion delay for inbound connections \item a shared diffusion delay timer for all inbound connections \item diverse node connectivity based on IP ranges or Autonomous Systems \end{enumerate} None of these protections apply to Neutrino, because those light clients broadcast only their own transactions. Thus, it is enough for an attacker to make sure they have \textbf{at least one direct connection} from the Neutrino node of a victim to infer the origin. In this case, an attacker has to be sure that the victim runs a light client (and not a full node), which is currently trivial to infer from the peer-to-peer behavior of the victim. Neutrino clients currently connect to a very limited number of public nodes (see Section \ref{sec:eclipse_neutrino}). Every Neutrino client chooses 8 nodes at random from the available pool of roughly 50 honest nodes serving filters per BIP 158. An attacker with only 100 Sybil nodes can be sure that a victim is directly connected to a Sybil node at least once with a 97\% chance. This would allow the attacker to identify the source of a given transaction with very high accuracy and at low cost.
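As a back-of-the-envelope check, under the simplifying assumption that the 8 peers are drawn uniformly without replacement from the combined pool of honest and Sybil nodes, the probability of hitting at least one Sybil is the hypergeometric complement sketched below; this naive model in fact yields an even higher figure than the quoted 97\%, which presumably reflects additional peer-selection details:

\begin{verbatim}
from math import comb

def p_at_least_one_sybil(honest, sybil, picks):
    """Probability that at least one of `picks` peers, drawn
    uniformly without replacement from honest + sybil candidates,
    is a Sybil (complement of the all-honest case)."""
    return 1 - comb(honest, picks) / comb(honest + sybil, picks)

# Numbers from the text: ~50 honest filter-serving nodes,
# 100 Sybils, 8 outbound connections per Neutrino client.
print(p_at_least_one_sybil(50, 100, 8))
\end{verbatim}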
\subsection{Attacks on Electrum Light Clients} The robustness of Electrum light clients to Eclipse attacks and transaction origin inference depends on the chosen mode of operation. If an Electrum user runs their own Electrum Personal Server or ElectrumX Server connected to their own Bitcoin full node, the user inherits the security of Bitcoin Core, partially described previously in this section. If an Electrum user connects to ElectrumX Servers run by someone else, they face the same issues as Neutrino. A very low number of deployed servers\footnote{61 servers, as listed at https://1209k.com/bitcoin-eye/ele.php as of 2020-04-13} and the lack of strong anti-Sybil measures (compared to Bitcoin Core) make it easy to eclipse honest users. \section{Background} \label{sec:background} \subsection{Bitcoin Base Layer} The primary goals of the Bitcoin system are relaying and validating financial transactions. Bitcoin solves the double-spending problem by organizing transactions into a sequence of blocks. A transaction in Bitcoin is unconfirmed until it is included in a valid block. The number of blocks created on top of that block then represents the number of confirmations the transaction has. This confirmation indication works largely due to the Bitcoin built-in incentive system: mining a block is a difficult and expensive task, which may result in a reward. The more confirmations a transaction has, the more confident the receiver is that the transaction is unlikely to be reverted. We write \textit{unlikely} because absolute transaction finality in Bitcoin does not exist by design. However, the incentives are aligned in a way that reverting a larger number of blocks becomes more and more unprofitable under a fundamental Bitcoin assumption. This assumption is that the fraction of malicious mining power does not exceed 50\% in the long run (usually roughly defined as several hours to days). Mining a Bitcoin block mainly consists of two phases: assembling a valid sequence of transactions and finding a nonce which satisfies the Proof-of-Work \cite{back2002hashcash, Dwork2003PoW} algorithm requirements. The Proof-of-Work difficulty adjustment rule ensures that blocks are produced every 10 minutes on average. These 10-minute average intervals, as well as an upper bound on the block size, exist for a number of reasons related to security and scalability. These rules have two negative consequences. First, they make Bitcoin transactions potentially expensive: competition for the block space creates a transaction fee market, which may drive fees up. Second, they make transactions slow: as explained above, in most cases confirming a transaction requires waiting for at least one block. Both of these problems become more apparent as more transactions happen on the Bitcoin blockchain. \subsection{Off-Chain Scaling} To address these issues, off-chain scaling constructions were proposed \cite{Gudgeon2019L2}. These constructions are usually based on the techniques enabled by the Bitcoin scripting language, \textit{Script}. They are often referred to as \textit{Layer 2} because they operate on top of Bitcoin on-chain transactions (referred to as \textit{Layer 1}). The security of off-chain protocols differs from the security of the Bitcoin protocol because at least one of the following holds: \begin{itemize} \item these protocols introduce an extra assumption on trusting third parties (e.g., a federation of operators \cite{back2014sidechains}) \item users are assumed to react in a timely manner to base layer updates \cite{poon2016lightning} \end{itemize} In Section \ref{sec:lightning_network}, we further discuss the second assumption (relevant for the LN), and later use it as a basis for the attacks we demonstrate. Since the Lightning Network heavily relies on advanced transaction types, we will now explain the internals of Bitcoin transactions. \subsection{Transactions in Bitcoin} Transactions in Bitcoin consist of inputs, unlocking scripts, and outputs. Inputs indicate which funds are being spent by a transaction. Unlocking scripts provide the data required to verify that a spender is indeed authorized to access the inputs. Outputs define how the funds can be spent further, effectively defining who owns the funds and under which conditions. In a simple scenario, a payee sends their public key to the payer, and the payer uses that as a transaction output. When the transaction is included in the blockchain, the payee can be sure they have access to those funds. Every transaction may have multiple inputs and multiple outputs, and they do not have to map directly to each other. Outputs can be spent via a simple digital signature or under more complex conditions, for example, by revealing a preimage for a pre-defined hash. These are called \textbf{Hash Time Locked Contracts (HTLCs)}.
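The next paragraph makes the contract semantics precise; as a rough illustration, the spending condition can be sketched as follows (names are hypothetical, and real HTLCs are expressed in Bitcoin Script rather than Python):

\begin{verbatim}
import hashlib

def htlc_hashlock_spend_ok(preimage, h_lock, height, timelock_t):
    """Success path of an HTLC: spending is allowed if the
    revealed preimage hashes to the agreed lock before the
    timelock expires. (The refund path, omitted here, opens
    up after `timelock_t`.)"""
    hashlock_ok = hashlib.sha256(preimage).digest() == h_lock
    timelock_ok = height < timelock_t
    return hashlock_ok and timelock_ok
\end{verbatim}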
As the name suggests, an HTLC is built from two primitives: a timelock and a hashlock. The contract semantics of an HTLC can be understood as “if a preimage $P$ such that $\mathit{hash}(P) = H_{\mathit{lock}}$ is provided before timelock expiration $T$, allow spending”. These primitives are provided by the Bitcoin scripting language. We will now discuss payment channel constructions and the LN, which are based on these advanced spending conditions. \subsection{Lightning Network} \label{sec:lightning_network} The high-level idea of payment channels was first suggested \cite{Nakamoto2013Channel} by the creator(s) of Bitcoin in 2011: cache transactions between the peers (payer and payee) instead of committing every transaction to the Bitcoin blockchain. Even though the described design was not secure, the high-level idea has since evolved, and payment channels are now used for off-chain scaling. The most widely used system based on Bitcoin payment channels is the Lightning Network \cite{poon2016lightning}: independent payment channels form a network, where users transact bidirectionally with other members of the network (via multi-hop payments). Payment channels for the LN can be created after an out-of-band negotiation where two users decide that it makes sense for them to use channels instead of submitting every transaction on-chain. However, the software often allows opening channels without any negotiation. Since the LN enables multi-hop payments, another common way to join the network is to use an on-demand service for the channel creation: create a channel to a hub, which would allow transacting with other users reachable (potentially indirectly) via that hub. The LN uses a modified Poon-Dryja revocation mechanism \cite{poon2016lightning} to enable \textbf{bidirectional channels with unlimited lifetime}. At a high level, proceeding to a new state reveals a revocation secret, which makes the previous state invalid. Poon-Dryja payment channels are opened when a funding transaction, a 2-of-2 multi-signature contract between Alice and Bob, is submitted on-chain. This design enables a channel close with any outcome if both of the parties are online and cooperating. To ensure the security of the funds even if the counterparty is unresponsive, transactions spending the multisig funding output (the “commitment”) must always be valid and ready to broadcast. Therefore, at every state update, encoded by a new commitment transaction, signatures must be exchanged. If Alice initiates the update, she sends signatures for Bob's new transactions. Bob then revokes his previous set of transactions and sends his signatures to Alice for her new transactions. This transaction asymmetry and the structure of non-commitment transactions (which we show in detail below) allow every party to unilaterally close a channel without further interactivity. At the same time, they enable punishment by the counterparty if the channel closing is dishonest. Committing an outdated state on-chain by a malicious actor is disincentivized by a \textit{punishment} time-window. During this time, an honest user can confiscate all funds of a malicious counterparty through a justice transaction. The time-window is enforced directly via relative timelocks \cite{Friedenbach2015Csv}. Every state in the LN is enforced by 6 types of transactions \cite{Bolt2019Format}, as seen from Alice's viewpoint. Alice has 3 transactions, fully countersigned by Bob and ready to broadcast: \begin{itemize} \item Commitment transaction, used by Alice to finalize the state.
It has 4 types of outputs: Alice's balance, Bob's balance, offered HTLCs, and received HTLCs. An offered HTLC locks a conditional payment flowing from Alice to Bob. A received HTLC locks a conditional payment flowing in the reverse direction, from Bob to Alice. The distinction enables bidirectional payments. \item HTLC-Timeout, used by Alice to spend an offered output on her commitment transaction. It allows her to refund herself after timelock expiration. \item HTLC-Success, used by Alice to spend a received output on her commitment transaction. It allows her to get paid by presenting a preimage before timelock expiration. \end{itemize} For every offered or received HTLC output, Alice must have a corresponding HTLC-Success or HTLC-Timeout transaction. Bob may generate 3 single-signed (requiring no new signatures from Alice) transactions in reaction to Alice's behavior: \begin{itemize} \item Preimage transaction, used by Bob to spend an offered output on Alice's commitment transaction. It allows him to get paid by presenting a preimage before timelock expiration. \item Timeout transaction, used by Bob to spend a received output on Alice's commitment transaction to refund himself after timelock expiration. \item Justice transaction, used by Bob to spend any output belonging to a revoked transaction of Alice. It allows him to confiscate Alice's funds by using the previously revealed per-update revocation secret. \end{itemize} Since the channel is symmetrical, Bob holds his own commitment and HTLC-Timeout/HTLC-Success transactions, on which Alice can generate her reaction transactions. \textbf{Multi-hop payments} are enabled in the LN by routing HTLCs across a path of nodes. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{routing.pdf} \caption{Routing payments in the Lightning Network.} \label{fig:routing} \end{figure} While a payment is routed through the LN, the whole payment path shares a hashlock, and a timelock is decreased at every hop from the payer to the payee. Every multi-hop payment consists of three phases (see Fig. \ref{fig:routing}): \begin{enumerate} \item A payee sends to the payer an invoice containing the hash of a preimage chosen by the payee. \item Route setup, where every party agrees with the next hop to add an HTLC on their local channel, sequentially in every channel ordered from the payer to the payee. \item Settlement phase, where every party agrees with the previous hop to remove the HTLC once the preimage is known to the channel participant from the payee side. The preimage is propagated all the way to the first channel in the chain. \end{enumerate} This sequence of decreasing timelocks enables secure in-order HTLC settlement. An intermediate hop is always able to claim an incoming HTLC, whose timelock is greater than or equal to that of the outgoing one, or to cancel it after a state update. In other words, an intermediate router is protected from losing an HTLC. The per-hop delay (timelock difference) used for this protection is called $cltv\_delta$. It is enforced by each intermediate hop during the route setup phase, at the reception of an incoming HTLC.
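A minimal sketch of this per-hop check (the function name is hypothetical; implementations enforce the rule inside their HTLC acceptance logic):

\begin{verbatim}
def accept_forwarded_htlc(incoming_expiry, outgoing_expiry,
                          cltv_delta):
    """Per-hop check at route setup: the incoming HTLC must
    outlive the outgoing one by at least cltv_delta blocks, so
    the hop can always claim the incoming HTLC after the
    outgoing one settles (or cancel both via a state update)."""
    return incoming_expiry >= outgoing_expiry + cltv_delta
\end{verbatim}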
\subsection{Extra Assumptions} The cost of scaling solutions based on payment channels is a set of new assumptions, of which the following is relevant for our work: \textit{a user should always have access to the recent blockchain history and should be able to broadcast transactions, in the case of counterparty misbehavior}. Fundamentally speaking, the LN introduces new security parameters. Instead of measuring the finality of transactions with confirmations only (the number of blocks after inclusion in the blockchain), the security of payments in the LN should also be measured by the chosen timelocks. The longer funds are locked in a channel, the better chance an honest user has to act on the misbehavior of a counterparty and get their funds back from the channel. At the same time, this makes the protocol less flexible for an honest unilateral close triggered by an unresponsive counterparty. The required blockchain monitoring can be done by running a full Bitcoin node, by relying on a trusted third party, or by using a \textit{light client}. We do not cover issues with trusting third parties in our work, because we focus on non-custodial use. In theory, partial third-party trust can significantly increase the security, and, to the best of our knowledge, this method is used by several popular Lightning wallets. But again, it changes the threat model by introducing trust. For example, if a Lightning wallet is based on secure open-source software but doesn't have strong Eclipse protection, the trusted node can still choose to steal funds via time-dilation. While it might not make sense against a single client, an exit scam from a wallet developer (usually operating the trusted node) stealing funds from all the channels is plausible, even if the software is otherwise secure. Thus, by focusing on a trust-minimized scenario we cover the security of these clients as well. \subsection{Light Client Protocols} \label{sec:light_clients} Several protocols have been proposed to reduce the requirement of running a full node and still use Bitcoin with a fairly trust-minimized model. All of them use a client-server architecture with multiple servers, assuming that at least one of the servers a client connects to is honest. Light clients are often used as a Bitcoin blockchain processing backend on resource-constrained devices (like mobile phones). Understanding the security of these clients is important to evaluate the security of LN client implementations. The first popular non-standardized implementation of a light client is \textbf{Electrum}. Per this protocol, Electrum light clients connect to Electrum servers. An Electrum server must have access to a chain processing backend, usually co-located on the same machine with the server. Electrum itself provides configurations with different trade-offs. For example, an Electrum user can connect their light clients to Electrum Personal Server software run by themselves, or connect to multiple reachable ElectrumX Servers run by someone else. Electrum is currently used as a Bitcoin chain processing backend by one of the most popular Lightning wallets, \textit{Eclair}. \textbf{BIP 157} is the most popular standardized light client protocol. Clients based on this protocol connect to full nodes in the Bitcoin network, receive a compact representation of Bitcoin blocks (filters, as defined in another related standard, BIP 158), and, if a filter detects relevant transactions on the client side, request a full block of transactions. Neutrino is currently one of the most popular light client implementations, and it is based on BIP 157. Neutrino is used by at least \textit{Breez} and \textit{Wallet by Lightning Labs}. We will now provide the background on the relevant attacks on the Bitcoin network and light clients required to understand time-dilation attacks on the Lightning Network.
We will cover the robustness of full Bitcoin nodes, Neutrino clients, and Electrum clients, because these are the most widely used Bitcoin backends in the Lightning Network. We will not cover the security of other light client protocols (e.g., BIP 37) because their implementations are much less used and maintained. \section{Conclusions} \label{sec:conclusion} Even though the Lightning Network has the potential to address the scalability limitations of Bitcoin, it introduces new security assumptions. In this work we explored how these assumptions hold in practice. More specifically, we explored what can be done when an attacker isolates (eclipses) a user of the Lightning Network and feeds blocks to the victim at a slower rate. We showed that time-dilation cannot be addressed by simply detecting slow block arrival, and that implementing sophisticated detection measures is not trivial. We argued that time-dilation attacks are currently the most practical way of stealing funds via Eclipse attacks, since time-dilation attacks do not require access to hashrate and an attacker doesn't have to purchase anything from a victim. The Eclipse attack cost can be justified against both light clients (the cost is low) and full nodes (an attacker may steal all liquidity of wealthy nodes at once). Finally, we suggested that strong anti-Eclipse/anti-Sybil measures (e.g., alternative sources of blocks) are the key to significantly reducing the risks of time-dilation attacks. \section{Countermeasures} \label{sec:countermeasures} We split countermeasures into preventing time-dilation itself, detecting its exploitation, and the issues related to reacting once the exploitation is detected. The ideas we suggest are heuristics: they can't guarantee full mitigation of time-dilation attacks, but rather increase the attack cost. At the end of this section we discuss WatchTowers separately, because they can serve as all three types of countermeasures. \subsection{Preventing Time-Dilation} \label{sec:preventing_td} \textit{Preventing Eclipse attacks} on Bitcoin nodes would make time-dilation impossible. The cost of Eclipse attacks can be increased via the following measures. \textbf{Higher connectivity and a larger number of honest reachable nodes.} As shown in Section \ref{sec:eclipse_neutrino}, the probability of a successful Eclipse attack goes down when either of the two parameters is increased. There are three ways to achieve this: \begin{itemize} \item encourage users to provide more resources (bandwidth, computational) to the network \item make the use of those resources more efficient \item increase adoption and deployment of BIP 157 (especially server-side) across the ecosystem \end{itemize} \textbf{Peer diversity.} Since both Eclipse attacks and transaction origin inference involve an attacker connecting to the victim, increasing the cost of Sybil attacks is an effective countermeasure. If honest nodes make peering decisions based on some scarce property, it becomes more difficult for an attacker to gain enough connections. Complementing anti-Sybil mechanisms (e.g., peer diversification based on the peer’s Autonomous System) with proactive topology improvements through peer rotation would help to break free from ongoing Eclipse attacks \cite{Tran2019Erebus}. \textbf{Link layer diversity.} A natural countermeasure to peer-to-peer layer attacks is communication redundancy: using several interconnected multi-homed nodes, a VPN, a mesh network, or the Lightning Network itself for block and transaction relay.
If any of these methods are employed to receive blocks and transmit transactions, an attacker would be required to disrupt those links as well. In the case of bandwidth-constrained communication channels, transmitting block headers alone would be enough to detect an anomaly. In addition, LN clients may serve each other to increase their security in a web-of-trust-style deployment. A “friendly” client may be asked to watch for the spending of a list of outputs belonging to another client and to notify it in the case of a match against their filters. Therefore, an attacker would have to control all chain providers of every relevant swarm. Given the resources and incentives required for the watching client and the privacy leak for the beneficiary client, this scheme relies on a social trust assumption (e.g., a set of mobile wallets belonging to a family). \textbf{Peer-to-peer protocol anonymity} is a standalone research topic. Integrating ideas from prior work \cite{Fanti2018Dandelion++, Naumenko2019Erlay} into Bitcoin Core, as well as improving the existing features, may make time-dilation attacks impractical. Although \textbf{straightforward use of Bitcoin over Tor} was demonstrated to be vulnerable to certain attacks \cite{Biryukov2015Tor}, other designs of transaction relay mechanisms involving various mixnets should be explored. \subsection{Detecting Time-dilation} Although in Section \ref{sec:evaluation} we demonstrate that the current stale tip detection technique is limited, a specialized time-dilation detection could be useful. The local system clock can be used to detect the absence of new blocks over an unlikely-long interval, similarly to the stale tip check, although the local clock can be subject to manipulation or system errors. Alternatively, the time present in the header of a mined block may be compared to the local time, although this field in block headers is only moderately enforced by consensus. These methods can be expanded to consider a series of blocks. \textbf{Lightning implementation-level warnings} in the case of observed anomalies may help a node operator identify that a node is currently under attack. For example, if a Lightning node receives channel announcements related to blocks “from too far in the future”, it may issue a warning. Similarly, an \textbf{abnormal routing failure rate} may be used. If a Lightning node is behind in its Bitcoin blockchain view, but Lightning payments between honest nodes are still flowing through it, this node will have a high routing failure rate. This would happen because honest nodes on the routing path would reject the forwarded HTLCs for being too close to expiration. This observation can be used to detect time-dilation. The implementation of these measures is not trivial. All these solutions have a fundamental trade-off: security against the false positive detection rate. And even if an attack was properly detected, it is unclear what the victim's reaction should be. In Section \ref{sec:td}, we explored why this is problematic by looking at the currently implemented stale tip detection. \subsection{Reaction} Even if an LN node detects it is under a time-dilation attack and it's not too late, it still cannot easily prevent the loss of funds. The issues with stopping the attack include: \begin{enumerate} \item If there are multiple channels opened, it is unclear which of them should be closed to prevent the loss \item For a given channel, it may be unclear which state was committed by the attacker.
This must be taken into account by the victim when constructing a justice transaction. \item A justice transaction should have a proper fee and be transmitted to the miners via the honest peer-to-peer network, or it won't be confirmed. \end{enumerate} All of these challenges are critical in the context of a victim’s Bitcoin node being eclipsed. Thus, even if the attack was detected, the only solution is to apply the same anti-eclipsing mechanisms we suggested above. \subsection{WatchTowers} This approach implies that chain monitoring is replicated across different computers, each of them maintaining the channel state \cite{Dryja2016Watchtowers, khabbazian2019outpost, avarikioti2020cerberus, Mccorry2018Pisa}. WatchTowers provide an alternative stack operating over a separate infrastructure. This raises the bar for an attacker by making Eclipse attacks and transaction origin inference more difficult, and thus improves on all three suggested countermeasure directions. The current discussion around WatchTowers usually assumes they are operated by special providers, and not by users themselves. This increases the robustness even further but introduces an extra assumption about the WatchTower provider. \section{Discussion} \label{sec:discussion} \textbf{Other time-sensitive protocols.} While the attacks we demonstrated specifically target the Lightning Network, we believe that a wide variety of Bitcoin second-layer protocols \cite{dryja2017discreet, LeGuilly2020OffDLC, moser2016bitcoin, Gibson2017Coinswaps} may be susceptible to time-dilation attacks. This applies to any of them where timelocks are used to arbitrate between parties willing to commit concurrent on-chain transactions. We believe that designers of those protocols should take time-dilation threats into account when arguing about their security. \textbf{Combined attack with mempool spam.} The attacks we discussed may be prevented by the victim detecting them and submitting a justice transaction in a timely manner. During an attack, a victim has to act in a very limited time-frame, less than the one anticipated in the original timelock. An attacker may make it even harder for a victim by running a DoS attack against Bitcoin, to ensure the victim’s transaction is not confirmed. If LN implementations employ dynamic fee-bumping, this may help a victim to prioritize their transactions and overcome the DoS. \textbf{Attacker controls broader infrastructure.} The attacks we demonstrated work under limited capabilities of the adversary. If an adversary controls the victim's ISP, has ways to influence DNS responses, or has other ways to exploit the key infrastructure, the attacks may be executed at a much lower cost. It also makes the countermeasures we suggested much less effective. \textbf{Better mapping techniques.} There are more advanced techniques for correlating Bitcoin and Lightning nodes. For example, an attacker can use timing analysis of bootstrap/restart or force a Lightning node to close a channel to speed up transaction origin inference. We leave this research for future work. \textbf{Initial Block Download after 24h.} Although we mentioned that there is currently no dedicated mechanism for time-dilation detection, one relevant feature of the Bitcoin Core software is switching to Initial Block Download mode. This switching can then be detected by an LN operator. It happens if the time defined in the latest known block header is 24h behind the current system time.
This feature is not efficient against time-dilation attacks because, as we demonstrate in this paper, the attacks need to dilate by less than 24 hours. We do not recommend modifying it to be useful in this context, because it was not originally designed to prevent attacks. \textbf{Explore the trade-off between higher security and fund liquidity.} Increasing the timelocks would require an attacker to keep a victim eclipsed for longer, and would give a victim more time to prevent the attack or react to it. This makes more secure channels less dynamic, because in a non-cooperative case it takes longer to settle them. The attack cost may be amortized by exploiting all links of a single LN node, so the sum of all channel values should be used to assess operational risks. Reasoning on time-value only is an incomplete method to argue about sophisticated attacks. Every LN operator should separately consider their own risks related to time-dilation, in addition to the time-value trade-off of payment channels. Finding the proper balance between the systemic risk caused by the \textit{liquidity market for routing payments} \cite{Bitmex2019RoutingFees} and security is an open area of research. \section{Evaluation} \label{sec:evaluation} The practicality of time-dilation attacks once the node is already eclipsed can be measured by the \textit{time it takes to perform them} and the \textit{failure rate}. The timing aspect is relevant because if the attack takes several days, it can be more easily disrupted by a random event (e.g., a scheduled restart making a victim node connect to new peers). Both of these metrics depend on the countermeasures the software uses to disrupt time-dilation attacks. We previously mentioned that there is currently no \textit{dedicated} countermeasure implemented for this purpose. In this section we explore how one mechanism, originally designed for another purpose, bounds the practicality of time-dilation attacks. The only heuristic employed by the Bitcoin Core implementation which may help to break free from eclipsing is \textit{stale tip detection}. If a block hasn’t arrived within the last 30 minutes, a node attempts to establish one extra outbound connection and sync tips with a new peer, and repeats this after 10 minutes if a block was still not found during that time. This feature was originally introduced to handle \textit{non-malicious} failures of honest nodes to provide the latest blocks. This countermeasure is not guaranteed to mitigate an Eclipse attack, because the victim’s chosen extra outbound connection may be another of the attacker’s Sybil nodes. We will further refer to the probability of the victim breaking free this way as the \textit{failure rate}, upon which an attacker can improve. For example, an attacker can degrade the effectiveness of this extra connection by poisoning the victim's address manager. Although address manager poisoning was demonstrated to be impractical on its own \cite{Heilman2015Eclipse, Tran2019Erebus}, it may become feasible once a node is already eclipsed. \subsection{Optimal Attack Strategy} To reduce the possibility of the victim’s Bitcoin node de-eclipsing due to the stale tip detection, the attacks we demonstrate never intentionally trigger it: an attacker should never exceed a 30-minute delay between delivering blocks. It is possible, however, that stale tip detection may be triggered naturally by the randomness of the block mining process. According to our estimates, this happens with a probability of 5\% ($e^{-30/10}$) per block, so on average 7 times a day.
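These estimates follow from the exponential distribution of inter-block times; a short sketch reproducing them:

\begin{verbatim}
from math import exp

MEAN_BLOCK_INTERVAL = 10.0   # minutes
STALE_TIP_THRESHOLD = 30.0   # minutes, Bitcoin Core's check
BLOCKS_PER_DAY = 24 * 60 / MEAN_BLOCK_INTERVAL

# P(inter-block time > threshold) for exponentially distributed
# block intervals with a 10-minute mean.
p_slow = exp(-STALE_TIP_THRESHOLD / MEAN_BLOCK_INTERVAL)
print(p_slow)                   # ~0.0498, i.e. about 5%
print(p_slow * BLOCKS_PER_DAY)  # ~7.2 natural triggers per day
\end{verbatim}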
The optimal strategy for an attacker would be to delay every block by 29.5 minutes. This value is chosen to never exceed the 30-minute stale tip detection threshold, with room for network and processing latency. This approach works best because it allows the fastest time-dilation without triggering the stale tip check. At the same time, it reduces the probability of “natural” de-eclipsing as much as possible: an attacker accumulates time which can be used to amortize naturally slow (taking more than 30 minutes) blocks in the most efficient way. If an attacker combines this with address manager poisoning to make sure de-eclipsing doesn't help the victim, the delay can be increased. We created a simulation-based model accounting for the exponential nature of block generation and stale tip detection. In our model, we simulate the generation of 1,000 blocks and model an attack in which the attacker delays \textit{every} block by a constant chosen interval of 29.5 minutes, so that the stale tip check is never triggered. We repeat this experiment 100,000 times for every configuration. \subsection{Length and Success Rate of the Attacks} According to our model, the time it takes to get ahead of a victim by 144 blocks is 36 hours. We summarize the estimated time a node must be kept eclipsed to perform time-dilation attacks, based on the configurations of the Lightning Network implementations, in Tables \ref{tab:timings_a1}, \ref{tab:timings_a2}, and \ref{tab:timings_a3}. These results are based on the configurations presented in Table \ref{tab:config_impl}. \begin{table}[] \begin{tabular}{l|l|l} Implementation & Eclipse time (N) & Eclipse time (BC) \\ \hline C-lightning & 24h & 36h \\ LND & 24-336h & 36-508h \\ Eclair & 120h & 182h \\ Rust-lightning & 24h & 36h \end{tabular} \caption{\label{tab:timings_a1} The time a node has to remain eclipsed to allow attack A1 with Bitcoin Core and Neutrino backends.} \end{table} \begin{table}[] \begin{tabular}{l|l|l} Implementation & Eclipse time (N) & Eclipse time (BC) \\ \hline C-lightning & 2.3h & 4h \\ LND & 6.6h & 10h \\ Eclair & 24h & 36h \\ Rust-lightning & 12h & 18h \end{tabular} \caption{\label{tab:timings_a2} The time a node has to remain eclipsed to allow attack A2 with Bitcoin Core and Neutrino backends.} \end{table} \begin{table}[] \begin{tabular}{l|l|l} Implementation & Eclipse time (N) & Eclipse time (BC) \\ \hline C-lightning & 1.2h & 1.9h \\ LND & 1.7h & 2.7h \\ Eclair & 1.9h & 2.9h \\ Rust-lightning & 1h & 1.6h \end{tabular} \caption{\label{tab:timings_a3} The time a node has to remain eclipsed to allow attack A3 with Bitcoin Core and Neutrino backends.} \end{table} To confirm the results produced by the model, we derived an intuitive formula. The formula assumes that every block arrives in exactly 10 minutes, instead of following the exponential distribution observed in practice. This is why the formula does not account for any failures (under this assumption, a block interval can't exceed 30 minutes). The formula computes how long an attacker has to keep a victim eclipsed ($ET$), if the attacker needs to be ahead of the victim by a given number of blocks ($TL$) to break the timelock, and is limited to delaying every block only by a given time ($SR$).
Given a targeted timelock ($TL$, in blocks) and a per-block malicious slowdown rate ($SR$, in minutes), an attacker can estimate the required eclipsing time ($ET$, in minutes): \begin{equation} ET = \left( TL + \frac{10}{SR} \cdot TL \right) \cdot 10 \end{equation} The formula intuitively reads as follows: $TL$ is the number of blocks of advantage required to be mined in the network, and $\frac{10}{SR}$ represents how much the victim's blockchain tip moves per \textit{block of advantage} while an attacker dilates them by $TL$ blocks. The blocks in the second term are “undilatable” and have to be mined at the normal rate, because all the dilation went into producing the blocks in the first term. Since both of these terms are in blocks, we multiply by 10 to get a result in minutes. For Neutrino, where $SR$ is unbounded, an attacker simply has to wait until the honest network mines $TL$ blocks: the second term vanishes, and the victim's tip just never moves. For $SR = 0$, the result is infinity, meaning it's impossible to perform the attack without any slowdown rate. Let's say an attacker wants to be ahead of a victim by $TL = 40$ blocks, and they can dilate at $SR = 0.33$h per block (20 minutes). In this case, they would have to eclipse a victim for $ET = 10$h. This example corresponds to the case of exploiting the CLTV timelock of LND, which the model claims to be possible within 10 hours with a slowdown rate of 30 minutes, as set by the Bitcoin Core constraints we previously discussed. The stale tip check can be triggered naturally even under the most optimal attack strategy. This would happen when it takes \textit{very long} to mine a block, so that the intended \textit{delayed delivery time} of a particular block comes before the block is actually mined, leaving the attacker with nothing to deliver in time. According to our model, with the chosen strategy of delaying every block by 29.5 minutes, the probability of this natural de-eclipsing (the attack failure rate) is around 7\%. Intuitively, the probability of successful de-eclipsing via the stale tip check rapidly goes down with every maliciously delayed block. For the first block to trigger a stale tip detection (while a node is under attack), the natural mining time of that block should exceed 30 minutes, while for the fourth block it should exceed 80 minutes. The probabilities of these events are 5\% and 0.03\% respectively. Since Neutrino does not implement stale tip detection, there is no such upper bound, and the time it takes to dilate a node by a chosen number of blocks is constrained only by the natural time to produce those blocks. At the same time, without this check, the attacks on Neutrino \textit{always} succeed. If an attacker had (or chose) to use 19.5-minute delays instead of 29.5-minute ones, it would increase the attack failure rate from 7\% to 22\%, while increasing the time it takes to perform time-dilation from 25h to 32h for reaching a difference of 100 blocks. Reducing the stale tip threshold wouldn't help against time-dilation, because it would significantly increase false positives.
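A small helper evaluating this formula on the worked example above (a sketch, with $SR$ interpreted as the per-block slowdown in minutes, as in the formula):

\begin{verbatim}
def eclipse_time_minutes(tl_blocks, sr_minutes):
    """ET = (TL + (10 / SR) * TL) * 10, the formula above.
    tl_blocks: advantage (in blocks) needed to break the timelock.
    sr_minutes: per-block slowdown the attacker can apply."""
    return (tl_blocks + (10.0 / sr_minutes) * tl_blocks) * 10.0

# Worked example from the text: TL = 40 blocks, SR = 20 minutes.
print(eclipse_time_minutes(40, 20) / 60.0)  # 10.0 hours
\end{verbatim}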
\subsection{Gain from the Attacks} Even though Eclipse attacks against full nodes are difficult and expensive \cite{Tran2019Erebus}, an attacker may steal all liquidity available at a victim's LN node at once. In addition, since the latest Eclipse attacks are infrastructure-level \cite{Tran2019Erebus, Apostolaki2017Hijack}, they may target several nodes at once. Since full nodes are often used by large LN hubs or service providers and have high available liquidity, an attacker may justify the high Eclipse attack cost by stealing aggregate liquidity from several nodes at once. As for LN users running light clients, stealing from them is already easy enough, so even stealing a rather low amount may be justified. The amount per channel that can be stolen from a victim technically differs across the attacks, but usually equals the channel capacity. The Lightning Network recently lifted \cite{Bolt2020Wumbo} the channel capacity bounds, allowing users to open channels as large as they want. In practice, the median LN node capacity is 0.003 BTC, and the total amount locked in LN channels is 940.5 BTC, as of May 2020\footnote{1ml.com}. \textbf{Attack A1} assumes that an attacker commits one of the outdated states on-chain. To maximize the gain, an attacker would claim the state where they had the largest amount. Picking a particular state does not make the attack more difficult. An attacker can then steal the \textit{full channel capacity}, minus a small value (the “channel reserve”) enforced by the protocol to disincentivize channel revocations. It is possible that there was no state in which all funds were located on the attacker's side of the channel. In that case, an attacker can route payments to themselves via the victim, and thus move funds to the attacker's side of the channel. \textbf{Attacks A2 and A3} rely on stealing in-flight HTLCs, so they can steal at most the maximum in-flight value, as negotiated during channel opening. In most of the LN implementations, this value is by default the same as the full channel capacity. \section{Time-dilation and the attacks} \label{sec:exploiting_td} In this section we present the conservative threat model we chose, discuss the nature of time-dilation, and suggest three practical ways to steal money from LN channels. \subsection{Threat Model} \label{sec:threat_model} First of all, we assume that an attacker can open a payment channel to a victim. Although it can be done both before and after eclipsing the victim, in our work we assume the former for simplicity. The process of opening a channel is discussed in Section \ref{sec:lightning_network}. We also make the following assumptions: \begin{itemize} \item Users run unmodified Bitcoin and Lightning node software. \item The blockchain provides transaction safety based on confirmations: mining hashrate is stable and blocks are mined reliably. \item The network of honest users forms a connected graph, except for a victim eclipsed by an attacker. \end{itemize} For simplicity we also assume that blocks are reliably relayed across nodes within seconds. We refer to the latest known block as the “blockchain tip”. When it comes to the capabilities of an attacker, we consider: \begin{itemize} \item An attacker does not control any hashrate. \item An attacker can deploy hundreds of Sybil nodes with modified Bitcoin node software. \item An attacker does not exceed the network-level capabilities discussed in the prior art on Eclipse attacks \cite{Heilman2015Eclipse, Tran2019Erebus, Apostolaki2017Hijack}. \end{itemize} This threat model allows an attacker to execute the underlying attacks (eclipsing, node mapping). An attacker then becomes capable of time-dilating a victim and stealing funds from their payment channels.
\subsection{Time-dilation} \label{sec:td} After a node is eclipsed (and there is a payment channel to the victim), an attacker has to perform time-dilation: slowing down block delivery to the victim’s Bitcoin node. Time-dilation is possible (and can't be trivially detected) because, as we discussed in Section \ref{sec:background}, block mining is a Poisson process. For example, even though blocks are expected to arrive every 10 minutes on average, about seven blocks a day take longer than 30 minutes to be produced. To time-dilate a victim, an attacker simply has to introduce a delay between receiving a block and feeding it to the victim. Since the victim is eclipsed and doesn't have an honest source of blocks, the attacker can decide when the victim receives a new block. As of today, no dedicated countermeasure letting a victim distinguish deferred block propagation from a random event is implemented in any Bitcoin client software. Furthermore, \textbf{countermeasures based on the delivery time alone can't be effective against time-dilation}: they have either a high false positive rate or a high false negative rate. In other words, if these detections are triggered too often, and a node is configured to force-close channels on this trigger, channels become less attractive economically, because they become very short-lived. There are also privacy issues: if emergency block fetching is triggered too easily (even by naturally slow blocks), it leaks the fetcher's privacy, which may enable more severe attacks. If the detections are not triggered often enough, they allow an attacker to adapt to them (for example, by time-dilating at a pace which doesn't trigger them), so that the attack can still be launched undetectably, although it may take a little longer. Thus, a good detection-based countermeasure implemented in the Bitcoin client should have a low false negative rate, but at the same time trigger a less radical action (e.g., a warning to a node operator). But even then, a victim would attempt to connect to the honest network again, which boils down to the anti-Eclipse and anti-Sybil measures which could have been taken in the first place even without this trigger. In Section \ref{sec:evaluation}, we demonstrate how stale tip detection in Bitcoin Core bounds an attacker in terms of the maximum time-dilation per block, and suggest an optimal attack strategy which makes stale tip detection ineffective against time-dilation. Once the victim’s Bitcoin node is confirmed to be eclipsed and an attacker is able to slow down block delivery to that node, time-dilation attacks can be launched. In the following descriptions, the pseudonyms “Alice” and “Bob” represent users of the Lightning Network, while “Mallory” and “Mallet” represent an attacker's entities. \begin{table}[] \begin{tabular}{l|l|l|l} Implementation & CSV delta & CLTV delta & Timeout policy \\ \hline C-lightning & 144 & 14 & 7 \\ LND & 144-2016 & 40 & 10 \\ Eclair & 720 & 144 & 11 \\ Rust-lightning & 144 & 72 & 6 \end{tabular} \caption{\label{tab:config_impl} Default timelocks (in blocks).} \end{table} We start by examining the scenario targeting the channel state finalization delay, the hardest to exploit in practice but the most studied so far. Then we explore more creative attacks targeting the per-hop packet delay and packet finalization delays, the latter being much more practical than the other two. \subsection{A1. Targeting Channel State Finalization} This attack is structured similarly to a regular on-chain Bitcoin double-spend.
Let's say Alice and Mallory have a payment channel. The channel is configured with a CheckSequenceVerify\footnote{A delay (in blocks) timelocking the spending transaction based on the confirmation height of the spent transaction} \cite{Friedenbach2015Csv} timelock of $C$ blocks for contestation (see the channel design in Section \ref{sec:lightning_network}). The default choice of $C$ in major LN implementations is summarized in Table \ref{tab:config_impl}. To start the attack, Mallory makes Alice fall $C$ blocks behind the actual tip of the blockchain by performing time-dilation. As a result, Alice's block height is pinned at $H-C$, where $H$ is the height of the actual latest block in the network. Once the difference in heights is achieved, Mallory can double-spend Alice. To do so, Mallory negotiates a new state with Alice. Per this new state, Mallory pays Alice and receives something (in an irreversible way, like a physical or digital good) from Alice. Then Mallory commits a preferred outdated state on-chain. The malicious revoked commitment transaction is settled on-chain at $H+1$. Since the latest block Alice sees corresponds to height $H-C$, she won’t detect the channel revocation until reaching $H+1$. At that time, the honest network and Mallory are already at height $H+C+1$. The contestation period has expired for the rest of the honest network, and the malicious spend is fully valid. \subsection{A2: Targeting Per-Hop Packet Delay} This attack is based on exploiting the HTLC-based routing. The attack starts with two Lightning channels being opened: Mallory-Bob and Bob-Mallet. Bob enforces a $cltv\_delta$ (see Section \ref{sec:lightning_network}) of $M$ blocks on incoming HTLCs. We summarize how different LN implementations choose the $cltv\_delta$ in Table \ref{tab:config_impl}. Mallory and Mallet eclipse Bob’s Bitcoin node and perform time-dilation until they gain a lead of $M+1$ blocks on Bob. Once Mallory and Mallet have managed to be $M+1$ blocks ahead of Bob, they route a payment through him with a final timelock delta of $N$. On the Bob-Mallet channel, the HTLC timelock expires at $H+N$. On the Mallory-Bob channel, the HTLC timelock expires at $H+M+N$, therefore satisfying Bob's $cltv\_delta$ of $M$. As before, $H$ is the height of the actual latest block in the network. Once the actual Bitcoin blockchain tip is at height $H+M+N$, Mallet provides the required preimage to Bob and gets from Bob a signature for a new state. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{a2.pdf} \caption{Attack A2. Mallory and Mallet are attackers, Bob is a time-dilated victim. Steps 4-5 happen simultaneously.} \label{fig:a2} \end{figure} At the same time, Mallory finalizes the state of her channel on the Bitcoin blockchain and broadcasts her HTLC-timeout transaction to get back the offered payment. This prevents Bob from re-negotiating the state of that channel using the preimage for which he just paid Mallet in full, effectively robbing him. We summarize the attack in Fig. \ref{fig:a2}. \subsection{A3: Targeting Packet Finalization} This attack is based on exploiting the incoming HTLC safety delay on a channel. When a party knows the preimage for an incoming HTLC but the remote peer doesn't respond in a timely manner to update the channel state, the party will go on-chain to claim the incoming HTLC a few blocks before expiration. Mallory, the attacker, starts by time-dilating Bob, the victim, by $I+1$ blocks (as specified in Table \ref{tab:timings_a3}).
The attacker has to wait an extra block to avoid a broadcast race condition between an honest preimage transaction and a malicious HTLC-timeout. At $H$, an HTLC is routed from Alice via Mallory to Bob and will expire at $H+N$, with $H$ being the actual latest block height in the network and $N$ the final timelock delta. There is no collusion between Alice, an honest payer, and Mallory. Bob reveals the preimage to Mallory, and Mallory deliberately doesn't reply to update the channel state. When the blockchain tip reaches $H+N$ on the non-eclipsed network, Mallory broadcasts her commitment and HTLC-timeout transactions, therefore making the revealed preimage useless for claiming the offered HTLC on the Mallory-Bob channel. Finally, when Bob reaches $H+N-I$ in his blockchain view, he attempts to claim the incoming HTLC by broadcasting his Preimage transaction. This transaction is going to be rejected by other network peers, since the HTLC output has already been finalized by Mallory's HTLC-timeout transaction. Then, Mallory claims the offered HTLC on the Mallory-Alice channel by presenting Bob's preimage, therefore earning a routed payment for which she hasn't forwarded funds. \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{a3.pdf} \caption{Attack A3. Mallory is an attacker, Bob is a time-dilated victim, Alice simply routes a payment via Bob.} \label{fig:a3} \end{figure} This attack differs from the previous one because an attacker only needs one channel with a victim, but also needs to be selected for a payment path. It also differs in the way an attacker finalizes the channel on-chain while stealing funds (by timing out a stolen HTLC). We summarize the attack in Fig. \ref{fig:a3}. \section{Introduction} \label{sec:intro} Bitcoin is a peer-to-peer electronic cash system which solves the double-spend problem with a trust-minimized architecture by letting everyone verify all transactions \cite{Nakamoto2008Bitcoin}. As of Nov. 2019, the system operates over at least 60,000 nodes\footnote{From https://luke.dashjr.org/programs/bitcoin/files/charts/software.html} simultaneously running Bitcoin protocol software, not including the many more users of custodial services and trusted solutions. Public auditability of the Bitcoin transaction history is the foundation of removing trusted third parties. The drawback of public auditability is a constraint on the transaction throughput. \textit{Second-layer} protocols on top of Bitcoin were designed to overcome this limitation \cite{poon2016lightning, back2014sidechains, lerner2015rsk}. For example, the Lightning Network (LN) \cite{poon2016lightning} solves the double-spend problem by shifting it to be a private matter between the transacting parties (as opposed to being solved \textit{on-chain}). Second-layer protocols introduce new assumptions when compared to the original Bitcoin threat model. In this work, we explore how LN users may be subjected to the risk of having their funds stolen once their Bitcoin nodes are eclipsed from honest nodes. At a high level, we exploit the requirements to monitor the Bitcoin blockchain and to detect relevant transactions in a timely manner. In \textit{time-dilation attacks}, a malicious actor slows down block delivery to the victim and then finalizes an expired state of the Lightning channel on-chain before the victim can notice. For a non-infrastructure attacker, eclipsing full nodes is difficult but definitely not impossible, as demonstrated by prior work \cite{Heilman2015Eclipse, Apostolaki2017Hijack, Tran2019Erebus}.
Since full nodes in the LN are often used by hubs (or big service providers), we will show that an attacker may justify the high attack cost by stealing their aggregate liquidity during one short (several hours) Eclipse attack. At the same time, we will demonstrate that Eclipse attacks are easier to carry out against the many LN users whose wallets rely on \textit{light client protocols} to obtain information from the Bitcoin network: light client implementations are currently more vulnerable to Eclipse attacks than full nodes. If an attacker has a payment channel to a victim, and the victim is eclipsed, the remaining attack is only a matter of time (hours to days), and the attack success rate is approximately 93\%. This makes our attacks \textit{as difficult as Eclipse attacks} in practice. When combined with the fact that time-dilation attacks require neither hashrate access nor purchasing from a victim, time-dilation becomes the most practical way of stealing funds via Eclipse attacks, compared to the well-known double-spending against eclipsed nodes. The problem can't be addressed by simply detecting slow block arrival, due to the uneven intervals between mined blocks. More advanced detection-based measures should be deployed carefully, considering a number of trade-offs: the effect of false positives and of a chosen recovery strategy on payment channels and Bitcoin in general. Mitigations to time-dilation attacks should be built around strong anti-Eclipse measures. The paper is structured as follows: \begin{itemize} \item We provide the background required to understand Bitcoin and the Lightning Network: their advantages, limitations, and assumptions in Section \ref{sec:background}. \item We discuss the preparation required for time-dilation attacks, including eclipsing a victim's Bitcoin node and mapping a Bitcoin node to a Lightning node, in Section \ref{sec:attacks_prep}. \item We define our threat model, how to launch time-dilation, why it is so difficult to mitigate it by simply observing slow block arrival, and three ways of exploiting time-dilation in Section \ref{sec:exploiting_td}. \item We discuss the optimal strategy for an attacker to exploit time-dilation considering the already implemented stale tip detection in Bitcoin Core, and measure the attack cost and the gain from the attacks in Section \ref{sec:evaluation}. \item We suggest various measures to raise the bar for setting up time-dilation attacks on the LN and discuss the trade-offs of sophisticated detection-based measures in Section \ref{sec:countermeasures}. \item We suggest a list of open questions for further discussion and research in Section \ref{sec:discussion}. \item We discuss how our work complements the prior research on the security of Bitcoin and Lightning in Section \ref{sec:related_work}. \end{itemize} \section{Related work} \label{sec:related_work} Attacks on the Bitcoin peer-to-peer network usually result in eclipsing honest nodes and transaction deanonymization. The first Eclipse attack on Bitcoin was based purely on the high-level Bitcoin network protocols \cite{Heilman2015Eclipse}, while later ones exploited BGP and Internet infrastructure \cite{Apostolaki2017Hijack, Tran2019Erebus}. The studied consequences of Eclipse attacks include monetary ones (double-spending attacks, attacks on mining) and non-monetary ones (peer-to-peer layer deanonymization).
Prior art demonstrated that it is possible to steal funds via an Eclipse attack through an on-chain double-spend, although that would require access to hashrate and purchasing something from a victim, and is thus more difficult than time-dilation attacks. The consequences of Eclipse attacks for the second layer were only briefly mentioned \cite{Heilman2015Eclipse}. Bitcoin was also considered as a target for attacks on NTP \cite{Malhotra2015NTP, Herrera2019HidingBalance}, although the consequences for the second-layer protocols were not explored there either. Attacks on the privacy of the Bitcoin peer-to-peer protocols demonstrated that transaction deanonymization is fairly feasible, both with simple techniques like the first-spy estimator and with more advanced strategies \cite{Fanti2017Dandelion, Fanti2018Dandelion++, Biryukov2014Deanonymisation, Naumenko2019Erlay, biryukov2019deanonymization}. Some of the privacy improvements were deployed, but they can't address the issues in full. Using Tor at the peer-to-peer layer was demonstrated to be an inadequate solution to these problems \cite{Biryukov2015Tor}. Attacks on the Lightning Network may be split into two groups: privacy-related and DoS-related. Prior work on privacy mainly explored revealing real-time balances of channels in the network via probing \cite{Tikhomirov2020ProbingBalance}. DoS attacks achieve cheap network congestion, preventing the flow of honest payments \cite{perez2019lockdown, rohrer2019discharged, mizrahi2020congestion}. In light of our work, an attacker may use route hijacking techniques \cite{tochner2019hijacking} to prepare targeted channels for time-dilation. It was also explored how an attacker can steal routing fees \cite{malavolta2019anonymous} and exploit transaction propagation policies to get an advantage in LN settlement \cite{Corallo2020Pinning}. To the best of our knowledge, none of the major LN update proposals \cite{Decker2015Duplex, Decker2018Eltoo, Poelstra2019PTLC} can solve the time-dilation issues.
\section{Introduction} \label{sec:intro} At the core of many computational tasks arising in science and engineering is the problem of repeatedly evaluating the output of an expensive forward model for many statistically similar inputs. Such settings include the numerical solution of parametric partial differential equations (PDEs), time-stepping for evolutionary PDEs and, more generally, the evaluation of input-output maps defined by black-box computer models. The key idea in this paper is the development of a new data-driven emulator which is defined to act between the infinite-dimensional input and output spaces of maps such as those defined by PDEs. By defining approximation architectures on infinite-dimensional spaces, we provide the basis for a methodology which is robust to the resolution of the finite-dimensionalizations used to create implementable algorithms. This work is motivated by the recent empirical success of neural networks in machine learning applications such as image classification, aiming to explore whether this success has any implications for algorithm development in different applications arising in science and engineering. We further wish to compare the resulting new methods with traditional algorithms from the field of numerical analysis for the approximation of infinite-dimensional maps, such as the maps defined by parametric PDEs or the solution operator for time-dependent PDEs. We propose a method for approximation of such solution maps purely in a data-driven fashion by lifting the concept of neural networks to produce maps acting between infinite-dimensional spaces. Our method exploits approximate finite-dimensional structure in maps between Banach spaces of functions through three separate steps: (i) reducing the dimension of the input; (ii) reducing the dimension of the output, and (iii) finding a map between the two resulting finite-dimensional latent spaces. Our approach takes advantage of the approximation power of neural networks while allowing for the use of well-understood, classical dimension reduction (and reconstruction) techniques. Our goal is to reduce the complexity of the input-to-output map by replacing it with a data-driven emulator. In achieving this goal we design an emulator which enjoys mesh-independent approximation properties, a fact which we establish through a combination of theory and numerical experiments; to the best of our knowledge, these are the first such results in the area of neural networks for PDE problems. To be concrete, and to guide the literature review which follows, consider the following prototypical parametric PDE \[(\mathcal{P}_x y)(s) = 0, \qquad \forall s \in D,\] where \(D \subset \mathbb{R}^d\) is a bounded open set, \(\mathcal{P}_x\) is a differential operator depending on a parameter \(x \in \mathcal{X}\) and \(y \in \mathcal{Y}\) is the solution to the PDE (given appropriate boundary conditions). The Banach spaces \(\mathcal{X}\) and \(\mathcal{Y}\) are assumed to be spaces of real-valued functions on $D.$ Here, and in the rest of this paper, we consistently use $s$ to denote the independent variable in spatially dependent PDEs, and reserve $x$ and $y$ for the input and output of the PDE model of interest. We adopt this idiosyncratic notation (from the PDE perspective) to keep our exposition in line with standard machine learning notation for input and output variables. 
\begin{example} Consider second order elliptic PDEs of the form \begin{align} \begin{split} \label{eq:darcy} - \nabla \cdot (a(s) \nabla u(s)) &= f(s), \quad s \in D \\ u(s) &= 0, \qquad \:\: s \in \partial D \end{split} \end{align} which are prototypical of many scientific applications. As a concrete example of a mapping defined by this equation, we restrict ourselves to the setting where the forcing term \(f\) is fixed, and consider the diffusion coefficient \(a\) as the input parameter \(x\) and the PDE solution \(u\) as output \(y\). In this setting, we have \(\mathcal{X} = L^\infty(D;\mathbb{R}_+)\), \(\mathcal{Y} = H_0^1(D;\mathbb{R})\), and \(\mathcal{P}_x = - \nabla_s\cdot ( a \nabla_s \cdot ) - f\), equipped with homogeneous Dirichlet boundary conditions. This is the Darcy flow problem which we consider numerically in Section \ref{sec:numdarcy}. \end{example} \subsection{Literature Review} \label{subsec:litreview} The recent success of neural networks on a variety of high-dimensional machine learning problems \cite{lecunnature} has led to a rapidly growing body of research pertaining to applications in scientific problems \cite{Adler2017,bhatnagar2019prediction, Cheng2019,weinandeepritz,gilmer2017neural,holland2019field,raissi2019physics,surrogatemodeling,smith2020eikonet}. In particular, there is a substantial number of articles which investigate the use of neural networks as surrogate models, and more specifically for obtaining the solution of (possibly parametric) PDEs. We summarize the two most prevalent existing neural network-based strategies for the approximation of PDEs in general, and parametric PDEs specifically. The first approach can be thought of as image-to-image regression. The goal is to approximate the parametric solution operator mapping elements of $\mathcal{X}$ to $\mathcal{Y}$. This is achieved by discretizing both spaces to obtain finite-dimensional input and output spaces of dimension $K$. We assume access to data in the form of observations of the input \(x\) and output \(y\) discretized at \(K\) points within the domain \(D\). The methodology then proceeds by defining a neural network \(F: \mathbb{R}^K \to \mathbb{R}^K\) and regressing the input-to-output map by minimizing a misfit functional defined using the point values of $x$ and $y$ on the discretization grid. The articles \cite{Adler2017,bhatnagar2019prediction,holland2019field,surrogatemodeling,geist2020numerical} apply this methodology for various forward and inverse problems in physics and engineering, utilizing a variety of neural network architectures in the regression step; the related paper \cite{khoo2017solving} applies a similar approach, but the output space is $\mathbb{R}.$ This innovative set of papers demonstrates some success. However, from the perspective of the goals of our work, their approaches are not robust to mesh refinement: the neural network is defined as a mapping between two Euclidean spaces of values on mesh points. The rates of approximation depend on the underlying discretization, and an overhaul of the architecture would be required to produce results consistent across different discretizations. The papers \cite{lu2019deeponet,lu2020deeponet} make a conceptual step in the direction of interest to us in this paper, as they introduce an architecture based on a neural network approximation theorem for operators from \cite{chen1995universal}; but as implemented the method still results in parameters which depend on the mesh used.
Applications of this methodology may be found in \cite{cai2020deepm,mao2020deepm,lin2020operator}. The second approach does not directly seek to find the parametric map from $\mathcal{X}$ to $\mathcal{Y}$ but rather, for fixed $x \in \mathcal{X}$, parametrizes the solution $y \in \mathcal{Y}$ by means of a deep neural network \cite{Dockhorn19,weinandeepritz,hsieh2018learning,lagaris1998artificial,raissi2019physics,shin2020convergence}. This methodology parallels collocation methods for the numerical solution of PDEs by searching over approximation spaces defined by neural networks. The solution of the PDE is written as a neural network approximation in which the spatial (or, in the time-dependent case, spatio-temporal) variables in $D$ are inputs and the solution is the output. This parametric function is then substituted into the PDE and the residual is made small by optimization. The resulting neural network may be thought of as a novel structure which composes the action of the operator $\mathcal{P}_x$, for fixed $x$, with a neural network taking inputs in $D$ \cite{raissi2019physics}. While this method leads to an approximate solution map defined on the input domain $D$ (and not on a $K$-point discretization of the domain), the parametric dependence of the approximate solution map is fixed. Indeed, for a new input parameter \(x\), one needs to re-train the neural network by solving the associated optimization problem in order to produce a new map \(y : D \to \mathbb{R}\); this may be prohibitively expensive when parametric dependence of the solution is the target of analysis. Furthermore, the approach cannot be made fully data-driven, as it needs knowledge of the underlying PDE; moreover, the operations required to apply the differential operator may interact poorly with the neural network approximator during the back-propagation (adjoint calculation) phase of the optimization. The work \cite{ruthotto2019DeepNN} examines the forward propagation of neural networks as the flow of a time-dependent PDE, combining the continuous time formulation of ResNet \cite{haber2017stable,weinan2017proposal} with the idea of neural networks acting on spaces of functions: by considering the initial condition as a function, this flow map may be thought of as a neural network acting between infinite-dimensional spaces. The idea of learning PDEs from data using neural networks, again generating a flow map between infinite-dimensional spaces, was studied in the 1990s in the papers \cite{krischer1993model,Kev98}, with the former using a PCA methodology and the latter using the method of lines. More recently the works \cite{HESTHAVEN201855,WANG2019289} also employ a PCA methodology for the output space but only consider very low-dimensional input spaces. Furthermore, the works \cite{lee2020model,fresca2020comprehensive,gonzalez2018deep} proposed a model reduction approach for dynamical systems by use of dimension reducing neural networks (autoencoders). However, only a fixed discretization of space is considered, yielding a method which does not produce a map between two infinite-dimensional spaces. The development of numerical methods for parametric problems is not, of course, restricted to the use of neural networks.
Earlier work in the engineering literature, starting in the 1970s, focused on computational methods which represent PDE solutions in terms of known basis functions that contain information about the solution structure \cite{almroth1978automatic,nagy1979modal}. This work led to the development of the reduced basis method (RBM) which is widely adopted in engineering; see \cite{barrault2004empirical,hesthaven2016certified,quarteroni2015reduced} and the references therein. The methodology was also used for stochastic problems, in which the input space $\mathcal{X}$ is endowed with a probabilistic structure, in \cite{boyaval2010reduced}. The study of RBMs led to broader interest in the approximation theory community focusing on rates of convergence for the RBM approximation of maps between Banach spaces, and in particular maps defined through parametric dependence of PDEs; see \cite{DeVoreReducedBasis} for an overview of this work. Ideas from model reduction have been combined with data-driven learning in the sequence of papers \cite{PEHERSTORFER2016196,mcquarrie2020datadriven,Benner_2020,peherstorfer2019sampling,qian2020lift}. The setting is the learning of data-driven approximations to time-dependent PDEs. Model reduction is used to find a low-dimensional approximation space and then a system of ordinary differential equations (ODEs) is learned in this low-dimensional latent space. These ODEs are assumed to have vector fields from a known class with unknown linear coefficients; learning is thus reduced to a least squares problem. The known vector fields mimic properties of the original PDE (for example, they are restricted to linear and quadratic terms for the equations of geophysical fluid dynamics); additionally, transformations may be used to render the original PDE in a desirable form (such as having only quadratic nonlinearities). The development of theoretical analyses to understand the use of neural networks to approximate PDEs is currently in its infancy, but interesting results are starting to emerge \cite{herrmann2020deep,kutyniok2019theoretical,schwab2019deep,laakmann2020efficient}. A recurrent theme in the analysis of neural networks, and in these papers in particular, is that the work typically asserts the \emph{existence} of a choice of neural network parameters which achieve a certain approximation property; because of the non-convex optimization techniques used to determine the network parameters, the issue of \emph{finding} these parameters in practice is rarely addressed. Recent works take a different perspective on data-driven approximation of PDEs, motivated by small-data scenarios; see the paper \cite{cohen2020state} which relates, in part, to earlier work focused on the small-data setting \cite{binev2017data,maday2015parameterized}. These approaches are more akin to data assimilation \cite{reich2015probabilistic,law2015data} where the data is incorporated into a model.
\subsection{Our Contribution} \label{subsec:contribution} The primary contributions of this paper are as follows: \begin{enumerate} \item we propose a novel data-driven methodology capable of learning mappings between Hilbert spaces; \item the proposed method combines model reduction with neural networks to obtain algorithms with controllable approximation errors as maps between Hilbert spaces; \item as a result of this approximation property of maps between Hilbert spaces, the learned maps exhibit desirable mesh-independence properties; \item we prove that our architecture is sufficiently rich to contain approximations of arbitrary accuracy, as a mapping between function spaces; \item we present numerical experiments that demonstrate the efficacy of the proposed methodology, exhibit its desirable mesh-independence properties, elucidate its properties beyond the confines of the theory, and compare with other methods for parametric PDEs. \end{enumerate} Section \ref{sec:method} outlines the approximation methodology, which is based on use of {\it principal component analysis (PCA)} in a Hilbert space to finite-dimensionalize the input and output spaces, and a neural network between the resulting finite-dimensional spaces. Section \ref{sec:analysis} contains the statement and proof of our main approximation result, which invokes a global Lipschitz assumption on the map to be approximated. In Section \ref{sec:numerics} we present our numerical experiments, some of which relax the global Lipschitz assumption, and others which involve comparisons with other approaches from the literature. Section \ref{sec:conclusion} contains concluding remarks, including directions for further study. We also include auxiliary results in the appendix that complement and extend the main theoretical developments of the article. Appendix~\ref{app:approxanalysis-local-Lipschitz} extends the analysis of Section~\ref{sec:analysis} from globally Lipschitz maps to locally Lipschitz maps with controlled growth rates. Appendix~\ref{app:supportlemmas} contains supporting lemmas that are used throughout the paper, while Appendix~\ref{app:b} proves an analyticity result pertaining to the solution map of the Poisson equation that is used in one of the numerical experiments in Section~\ref{sec:numerics}. \section{Proposed Method} \label{sec:method} Our method combines PCA-based dimension reduction on the input and output spaces $\mathcal{X}, \mathcal{Y}$ with a neural network mapping between the dimension-reduced spaces. After a preamble in Subsection \ref{sec:framework}, giving an overview of our approach, we continue in Subsection \ref{sec:functionalpca} with a description of PCA in the Hilbert space setting, including intuition about its approximation quality. Subsection \ref{sec:neuralnets} gives the background on neural networks needed for this paper, and Subsection \ref{sec:representation} compares our methodology to existing methods. \subsection{Overview} \label{sec:framework} Let \(\mathcal{X}\), \(\mathcal{Y}\) be separable Hilbert spaces and \(\Psi: \mathcal{X} \to \mathcal{Y}\) be some, possibly nonlinear, map. Our goal is to approximate \(\Psi\) from a finite collection of evaluations \(\{x_j,y_j\}_{j=1}^N\) where \(y_j = \Psi(x_j)\). We assume that the $x_j$ are i.i.d. with respect to (w.r.t.) a probability measure \(\mu\) supported on \(\mathcal{X}\). Note that with this notation the output samples $y_j$ are i.i.d. w.r.t. the push-forward measure \(\Psi_\sharp\mu\).
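To make this data setup concrete, we include a minimal sketch; the solver, grid size, forcing and the family of random coefficients below are illustrative choices only, not those used in our experiments. It generates training pairs \(\{x_j, y_j = \Psi(x_j)\}\) for a one-dimensional analogue of the Darcy flow example \eqref{eq:darcy}, with fixed forcing \(f \equiv 1\):
\begin{verbatim}
import numpy as np

def darcy_solve_1d(a_half, f):
    # Solve -(a(s) u'(s))' = f(s) on (0,1), u(0) = u(1) = 0, by finite
    # differences; a_half holds a at the K+1 cell interfaces and f holds
    # the forcing at the K interior nodes s_i = i*h, h = 1/(K+1).
    K = len(f)
    h = 1.0 / (K + 1)
    main = (a_half[:-1] + a_half[1:]) / h**2      # tridiagonal stiffness matrix
    off = -a_half[1:-1] / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, f)

rng = np.random.default_rng(0)
K, N = 127, 500
h = 1.0 / (K + 1)
s_half = np.linspace(h / 2, 1.0 - h / 2, K + 1)   # cell-interface grid
f = np.ones(K)                                    # fixed forcing f = 1
c = rng.uniform(-1.0, 1.0, size=(N, 1))
X = np.exp(c * np.sin(2 * np.pi * s_half))        # inputs x_j = a_j > 0
Y = np.stack([darcy_solve_1d(x, f) for x in X])   # outputs y_j = u_j
\end{verbatim}
Each row of \texttt{X} and \texttt{Y} is a function sampled on a grid; the approximation developed next acts on such samples, but is designed to be stable under refinement of the grid.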
The approximation of $\Psi$ from the data $\{x_j, y_j\}_{j=1}^N$ that we now develop should be understood as being designed to be accurate with respect to norms defined by integration with respect to the measures $\mu$ and \(\Psi_\sharp\mu\) on the spaces \(\mathcal{X}\) and \(\mathcal{Y}\) respectively. Instead of attempting to directly approximate \(\Psi\), we first try to exploit possible finite-dimensional structure within the measures $\mu$ and $\Psi_\sharp \mu$. We accomplish this by approximating the identity mappings \(I_\mathcal{X}: \mathcal{X} \to \mathcal{X}\) and \(I_\mathcal{Y}: \mathcal{Y} \to \mathcal{Y}\) by a composition of two maps, known as the \textit{encoder} and the \textit{decoder} in the machine learning literature \cite{dimreduction,deeplearningbook}, which have finite-dimensional range and domain, respectively. We will then interpolate between the finite-dimensional outputs of the encoders, usually referred to as the \textit{latent codes}. Our approach is summarized in Figure \ref{fig:approach}. \begin{figure} \centering \begin{tikzpicture} \begin{tikzcd} \mathcal{X} \arrow[rr,"F_\X"] \arrow[d,swap,"\Psi"] && \mathbb{R}^{d_\X} \arrow[rr,"G_\X"] \arrow[d,swap,"\varphi"] && \mathcal{X} \arrow[d,swap,"\Psi"] \\ \mathcal{Y} \arrow[rr,"F_\Y"] && \mathbb{R}^{d_\Y} \arrow[rr,"G_\Y"] && \mathcal{Y} \end{tikzcd} \end{tikzpicture} \caption{A diagram summarizing various maps of interest in our proposed approach for the approximation of input-output maps between infinite-dimensional spaces.} \label{fig:approach} \end{figure} Here, \(F_\X\) and \(F_\Y\) are the encoders for the spaces $\mathcal{X}, \mathcal{Y}$ respectively, whilst \(G_\X\) and \(G_\Y\) are the decoders, and \(\varphi\) is the map interpolating the latent codes. The intuition behind Figure \ref{fig:approach}, and, to some extent, the main focus of our analysis, concerns the quality of the approximations \begin{subequations} \begin{align} G_\X \circ F_\X &\approx I_\mathcal{X}, \label{eq:autoencodex} \\ G_\Y \circ F_\Y &\approx I_\mathcal{Y}, \label{eq:autoencodey} \\ G_\Y \circ \varphi \circ F_\X &\approx \Psi. \label{eq:approxpsi} \end{align} \end{subequations} In order to achieve \eqref{eq:approxpsi} it is natural to choose $\varphi$ as \begin{equation} \varphi:=F_\Y \circ \Psi \circ G_\X; \label{eq:approxphi} \end{equation} then the approximation \eqref{eq:approxpsi} is limited only by the approximations \eqref{eq:autoencodex}, \eqref{eq:autoencodey} of the identity maps $I_\mathcal{X}$ and $I_\mathcal{Y}$. We further label the approximation in \eqref{eq:approxpsi} by \begin{equation} \Psi_{\scriptscriptstyle{PCA}}:=G_\Y \circ \varphi \circ F_\X, \label{eq:apsipca} \end{equation} since we later choose PCA as our dimension reduction method. We note that \(\Psi_{\scriptscriptstyle{PCA}}\) is not used in practical computations since $\varphi$ is generally unknown. To make it practical we replace $\varphi$ with a data-driven approximation $\chi \approx \varphi$, obtaining \begin{equation} \Psi_{\scriptscriptstyle{NN}}:=G_\Y \circ \chi \circ F_\X. \label{eq:apsipcn} \end{equation} Later we choose $\chi$ to be a neural network, hence the choice of the subscript $\scriptstyle{NN}$. The combination of PCA for the encoding/decoding along with the neural network approximation $\chi$ for $\varphi$ forms the basis of our computational methodology. The compositions \(G_\X \circ F_\X\) and \(G_\Y \circ F_\Y\) are commonly referred to as \textit{autoencoders}.
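As one concrete (and purely illustrative) realization of this pipeline for data given as arrays of grid values, an SVD may be used to compute the PCA bases, and an off-the-shelf ReLU network, here scikit-learn's \texttt{MLPRegressor} with hyperparameters chosen arbitrarily, may stand in for \(\chi\); nothing in the methodology depends on this particular implementation:
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPRegressor

def pca_basis(U, d):
    # Rows of U are samples on a K-point grid.  The right singular
    # vectors of U are the eigenvectors of the empirical (non-centered)
    # covariance C_N = (1/N) U^T U; on a uniform grid the quadrature
    # weight is a constant factor and does not change the eigenvectors.
    _, _, Vt = np.linalg.svd(U, full_matrices=False)
    return Vt[:d]                                  # shape (d, K)

def fit_psi_nn(X, Y, d_x, d_y):
    PX, PY = pca_basis(X, d_x), pca_basis(Y, d_y)
    ZX, ZY = X @ PX.T, Y @ PY.T                    # latent codes F_X(x_j), F_Y(y_j)
    chi = MLPRegressor(hidden_layer_sizes=(128, 128), activation="relu",
                       max_iter=5000).fit(ZX, ZY)  # data-driven chi approximating phi
    # Psi_NN = G_Y o chi o F_X, applied to the rows of x_new
    return lambda x_new: chi.predict(x_new @ PX.T) @ PY

psi_nn = fit_psi_nn(X, Y, d_x=10, d_y=10)          # X, Y as in the sketch above
u_pred = psi_nn(X[:5])                             # evaluate Psi_NN on five inputs
\end{verbatim}
Since the latent codes are (discretized) inner products with fixed basis functions, refining the grid changes only how accurately these inner products are evaluated, not the network \(\chi\) itself; this is the source of the mesh-independence highlighted among our contributions.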
There is a large literature on dimension-reduction methods \cite{originalpca,kernelpca,diffusionmaps,graphlaplaciandimreduce,dimreduction}, both classical and rooted in neural networks. In this work, we focus on PCA, which is perhaps the simplest such method known \cite{originalpca}. We make this choice due to its simplicity of implementation, its excellent numerical performance on the problems we study in Section~\ref{sec:numerics}, and its amenability to analysis. The dimension reduction in the input and output spaces is essential, as it allows for function space algorithms that make use of powerful finite-dimensional approximation methods, such as the neural networks we use here. Many classical dimension reduction methods may be seen as encoders, but not all are as easily inverted as PCA: often there is no unambiguous, or no efficient, way to obtain the decoder. Whilst neural network-based methods such as deep autoencoders \cite{dimreduction} have shown empirical success in finite-dimensional applications, they currently lack theory and practical implementation in the setting of function spaces, and are therefore not currently suitable in the context of the goals of this paper. Nonetheless, methods other than PCA are likely to be useful within the general goals of high- or infinite-dimensional function approximation. Indeed, with PCA, we approximate the solution manifold (image space) of the operator \(\Psi\) by the \textit{linear} space defined in equation \eqref{eq:pcasubspace}. We emphasize however that, usually, \(\Psi\) is a nonlinear operator and our approximation succeeds by capturing the induced nonlinear input-output relationship within the latent codes by using a neural network. We will show in Section \ref{sec:approxanalysis} that the approximation error of the linear space to the solution manifold goes to zero as the dimension increases; however, this decay may be very slow \cite{cohendevore2015,devore1998}. Therefore, it may be beneficial to construct nonlinear dimension reducing maps such as deep autoencoders on function spaces. We leave this as an interesting direction for future work. Regarding the approximation of $\varphi$ by neural networks, we acknowledge that there is considerable scope for the construction of the neural network, within different families and types of networks, and potentially by using other approximators. For our theory and numerics, however, we will focus on relatively constrained families of such networks, described in the following Subsection~\ref{sec:neuralnets}. \subsection{PCA On Function Space} \label{sec:functionalpca} Since we will perform PCA on both $\mathcal{X}$ and $\mathcal{Y}$, and since PCA requires a Hilbert space setting, the development here is in a generic real, separable Hilbert space \(\mathcal{H}\) with inner-product and norm denoted by \(\langle \cdot, \cdot \rangle\) and \(\|\cdot\|\) respectively. We let \(\nu\) denote a probability measure supported on \(\mathcal{H}\), and make the assumption of a finite fourth moment: \(\mathbb{E}_{u \sim \nu} \|u\|^4 < \infty\). We denote by \(\{u_j\}_{j=1}^{N}\) a finite collection of $N$ i.i.d. draws from \(\nu\) that will be used as the training data on which PCA is based.
Later we apply the PCA methodology in two distinct settings where the space $\mathcal H$ is taken to be the input space $\mathcal{X}$ and the data \(\{u_j\}\) are the input samples \(\{x_j\}\) drawn from the input measure $\mu$, or $\mathcal{H}$ is taken to be the output space $\mathcal{Y}$ and the data $\{u_j\}$ are the corresponding outputs \(\{y_j = \Psi(x_j)\}\) drawn from the push-forward measure $\Psi_\sharp \mu$. The following exposition, and the subsequent analysis in Section \ref{sec:pcaanalysis}, largely follows the works \cite{Blanchard2007,ShaweTaylor2,ShaweTaylor1}. We will consider the standard version of non-centered PCA, although more sophisticated versions such as kernel PCA have been widely used and analyzed \cite{kernelpca} and could be of potential interest within the overall goals of this work. We choose to work in the non-kernelized setting as there is an unequivocal way of producing the decoder. For any subspace \(V \subseteq \mathcal{H}\), denote by \(\Pi_V : \mathcal{H} \to V\) the orthogonal projection operator and define the \textit{empirical projection error}, \begin{equation} \label{eq:empiricalerror} R_N(V) \coloneqq \frac{1}{N}\sum_{j=1}^N \|u_j - \Pi_V u_j\|^2. \end{equation} PCA consists of projecting the data onto a finite-dimensional subspace of \(\mathcal{H}\) for which this error is minimal. To that end, consider the \textit{empirical, non-centered covariance} operator \begin{equation} \label{eq:empiricalcovariance} C_N \coloneqq \frac{1}{N} \sum_{j=1}^N u_j \otimes u_j \end{equation} where \(\otimes\) denotes the outer product. It may be shown that \(C_N\) is a non-negative, self-adjoint, trace-class operator on $\mathcal{H}$, of rank at most \(N\) \cite{zeidler2012appliedfunc}. Let \(\phi_{1,N}, \dots, \phi_{N,N}\) denote the eigenvectors of \(C_N\) and \(\lambda_{1,N} \geq \lambda_{2,N} \geq \dots \geq \lambda_{N,N} \geq 0\) the corresponding eigenvalues in decreasing order. Then for any $d \ge 1$ we define the \textit{PCA subspaces} \begin{equation} \label{eq:pcasubspace} V_{d,N} = \text{span} \{\phi_{1,N},\phi_{2,N},\dots,\phi_{d,N}\} \subset \mathcal{H}. \end{equation} It is well known \cite[Thm. 12.2.1]{murphybook} that \(V_{d,N}\) solves the minimization problem \[\min_{V \in \mathcal{V}_d} R_N(V),\] where \(\mathcal{V}_d\) denotes the set of all \(d\)-dimensional subspaces of \(\mathcal{H}\). Furthermore \begin{equation} \label{eq:nt} R_N(V_{d,N}) = \sum_{j=d+1}^N \lambda_{j,N}, \end{equation} hence the projection error is controlled by the rate of decay of the spectrum of \(C_N\). With this in mind, we define the \textit{PCA encoder} \(F_\HH: \mathcal{H} \to \mathbb{R}^d\) as the mapping from $\mathcal{H}$ to the coefficients of the orthogonal projection onto \(V_{d,N}\), namely \begin{equation} \label{eq:encoder} F_\HH(u) = (\langle u, \phi_{1,N} \rangle, \dots, \langle u, \phi_{d,N} \rangle)^T \in \mathbb R^d. \end{equation} Correspondingly, the PCA decoder \(G_\HH: \mathbb{R}^d \to \mathcal{H}\) constructs an element of \(\mathcal{H}\) by taking as its input the coefficients produced by \(F_\HH\) and forming the corresponding expansion in the empirical basis (equivalently, zero-padding the coefficient vector beyond index \(d\)), that is \begin{equation} \label{eq:decoder} G_\HH(s) = \sum_{j=1}^d s_j \phi_{j,N} \qquad \forall s \in \mathbb{R}^d. \end{equation} In particular, \begin{equation*} (G_\HH \circ F_\HH)(u) = \sum_{j=1}^d \langle u, \phi_{j,N} \rangle \phi_{j,N},\quad \text{equivalently}\quad G_\HH \circ F_\HH = \sum_{j=1}^d \phi_{j,N} \otimes \phi_{j,N}.
\end{equation*} Hence \(G_\HH \circ F_\HH = \Pi_{V_{d,N}}\), a \(d\)-dimensional approximation to the identity \(I_{\mathcal{H}}\). We now give a qualitative explanation of this approximation, which is made quantitative in Subsection \ref{sec:pcaanalysis}. It is natural to consider minimizing the infinite data analog of \eqref{eq:empiricalerror}, namely the \textit{projection error} \begin{equation} \label{eq:projectionerror} R(V) \coloneqq \mathbb{E}_{u \sim \nu}\|u - \Pi_V u\|^2, \end{equation} over $\mathcal{V}_d$ for $d \ge 1$. Assuming \(\nu\) has a finite second moment, there exists a unique, self-adjoint, non-negative, trace-class operator \(C : \mathcal{H} \to \mathcal{H}\) termed the \textit{non-centered covariance} such that $\langle v, Cz \rangle = \mathbb{E}_{u \sim \nu}[\langle v, u \rangle \langle z, u \rangle]$, $\forall v,z \in \mathcal{H}$ (see \cite{Baxendale}). From this, one readily finds the form of \(C\) by noting that \begin{equation} \label{eq:covaraince} \langle v, \mathbb{E}_{u \sim \nu}[u \otimes u] z \rangle = \mathbb{E}_{u \sim \nu} [\langle v, (u \otimes u)z \rangle] = \mathbb{E}_{u \sim \nu}[\langle v, u \rangle \langle z, u \rangle], \end{equation} implying that $C = \mathbb{E}_{u \sim \nu}[u \otimes u].$ Moreover, it follows that \[\tr C = \mathbb{E}_{u \sim \nu} [\tr u \otimes u] = \mathbb{E}_{u \sim \nu} \|u\|^2 < \infty.\] Let \(\phi_1,\phi_2,\dots\) denote the eigenvectors of \(C\) and \(\lambda_1 \geq \lambda_2 \geq \dots\) the corresponding eigenvalues. In the infinite data setting $(N = \infty)$ it is natural to think of $C$ and its first $d$ eigenpairs as known. We then define the \textit{optimal projection space} \begin{equation} \label{eq:optprojectionspace} V_d = \text{span} \{\phi_1, \phi_2, \dots, \phi_d\}. \end{equation} It may be verified that \(V_d\) solves the minimization problem $\min_{V \in \mathcal{V}_d} R(V)$ and that $R(V_d) = \sum_{j=d+1}^\infty \lambda_j.$ With this infinite-data perspective in mind, observe that PCA makes the approximation \(V_{d,N} \approx V_d\) from a finite dataset. The approximation quality of \(V_{d,N}\) w.r.t. \(V_d\) is related to the approximation quality of \(\phi_j\) by \(\phi_{j,N}\) for \(j=1,\dots,N\) and therefore to the approximation quality of \(C\) by \(C_N\). Another perspective is via the Karhunen-Lo\`eve Theorem (KL) \cite{Lord}. For simplicity, assume that \(\nu\) is mean zero; then \(u \sim \nu\) admits an expansion of the form $u = \sum_{j=1}^\infty \sqrt{\lambda_j} \xi_j \phi_j$ where \(\{\xi_j\}_{j=1}^\infty\) is a sequence of scalar-valued, mean zero, pairwise uncorrelated random variables. We can then truncate this expansion and make the approximations \[ u \approx \sum_{j=1}^d \sqrt{\lambda_j} \xi_j \phi_j \approx \sum_{j=1}^d \sqrt{\lambda_{j,N}} \xi_j \phi_{j,N}, \] where the first approximation corresponds to using the optimal projection subspace \(V_d\) while the second approximation replaces \(V_d\) with \(V_{d,N}\). Since it holds that \(\mathbb{E} C_N = C\), we expect \(\lambda_j \approx \lambda_{j,N}\) and \(\phi_j \approx \phi_{j,N}\). These discussions suggest that the quality of the PCA approximation is controlled, on average, by the rate of decay of the eigenvalues of \(C\), and the approximation of the eigenstructure of $C$ by that of $C_N.$ \subsection{Neural Networks} \label{sec:neuralnets} A neural network is a nonlinear function \(\chi : \mathbb{R}^n \to \mathbb{R}\) defined by a sequence of compositions of affine maps with point-wise nonlinearities.
In particular, \begin{equation}\label{nn-general-form} \chi(s) = W_t \sigma ( \dots \sigma ( W_2 \sigma (W_1 s + b_1) + b_2)) + b_t, \qquad s \in \mathbb{R}^n, \end{equation} where \(W_1,\dots,W_t\) are {\it weight matrices} (that are not necessarily square) and \(b_1,\dots,b_t\) are vectors, referred to as {\it biases}. We refer to $t \ge 1$ as the {\it depth of the neural network}. The function \(\sigma : \mathbb{R}^d \to \mathbb{R}^d\) is a monotone, nonlinear {\it activation function}, defined by applying a monotone function \(\sigma : \mathbb{R} \to \mathbb{R}\) entrywise to any vector in $\mathbb{R}^d$ with $d \ge 1$. Note that in \eqref{nn-general-form} the input dimension of $\sigma$ may vary between layers but, regardless of the input dimension, the function $\sigma$ applies the same operations to all entries of the input vector. We primarily consider the Rectified Linear Unit (ReLU) activation function, i.e., \begin{equation}\label{ReLU-def} \sigma(s) := (\max\{ 0, s_1\}, \max\{0, s_2\}, \dots, \max\{0, s_d\})^T \in \mathbb R^d\qquad \forall s \in \mathbb R^d. \end{equation} The weights and biases constitute the parameters of the network. In this paper we learn these parameters in the following standard way \cite{lecunnature}: given a set of data $\{x_j,y_j\}_{j=1}^N$ we choose the parameters of $\chi$ to solve an appropriate regression problem by minimizing a data-dependent cost functional, using stochastic gradient methods. Neural networks have empirically been demonstrated to constitute an efficient class of regressors and interpolators for high-dimensional problems, but a complete theory of their efficacy remains elusive. For an overview of various neural network architectures and their applications, see \cite{deeplearningbook}. For theories concerning their approximation capabilities see \cite{pinkus,yarotsky,schwab2019deep,devoredeep,kutyniok2019theoretical}. For the approximation results given in Section \ref{sec:analysis}, we will work with a specific class of neural networks, following \cite{yarotsky}; we note that other approximation schemes could be used, however, and that we have chosen a proof setting that aligns with, but is not identical to, what we implement in the computations described in Section \ref{sec:numerics}. We will fix \(\sigma \in C(\mathbb{R};\mathbb{R})\) to be the ReLU function \eqref{ReLU-def} and consider the set of neural networks mapping $\mathbb{R}^n$ to $\mathbb{R}$ \begin{equation*} \mathcal{M}(n; t, r) := \left \{ \begin{aligned} & \chi(s) = W_t \sigma ( \dots \sigma ( W_2 \sigma (W_1 s + b_1) + b_2)) + b_t \in \mathbb{R}, \\ & \text{for all } s\in \mathbb{R}^n \text{ and such that } \sum_{k =1}^t | W_k|_0 + | b_k|_0 \le r. \end{aligned} \right \} \end{equation*} Here $| \cdot |_0$ gives the number of non-zero entries in a matrix or vector, so that $r \ge 0$ bounds the number of active weights and biases in the network, while $t \ge 1$ is the total number of layers. Moreover, we define the class of stacked neural networks mapping $\mathbb{R}^n$ to $\mathbb{R}^m$: \begin{equation*} \mathcal{M}(n, m; t, r) := \left \{ \begin{aligned} & \chi(s) = (\chi^{(1)}(s), \dots, \chi^{(m)}(s) )^T \in \mathbb{R}^m, \\ & \text{where }\chi^{(j)} \in \mathcal{M}(n; t^{(j)}, r^{(j)}), \text{ with } t^{(j)} \le t, r^{(j)} \le r.
\end{aligned} \right \} \end{equation*} From this, we build the set of \textit{zero-extended} neural networks \begin{equation*} \mathcal{M}(n, m; t, r, M) := \left \{ \chi = \left\{ \begin{aligned} \tilde{\chi}(s), \quad & s \in [-M,M]^{n} \\ 0, \quad & s \not \in [-M,M]^{n} \end{aligned} \right\}, \text{ for some } \tilde{\chi} \in \mathcal{M}(n, m; t, r) \right \}, \end{equation*} where the new parameter \(M > 0\) is the half side-length of the hypercube $[-M,M]^n$ within which $\chi$ can be non-zero. This construction is essential to our approximation as it allows us to handle non-compactness of the latent spaces after PCA dimension reduction. \subsection{Comparison to Existing Methods} \label{sec:representation} In the general setting of arbitrary encoders, the formula \eqref{eq:approxpsi} for the approximation of $\Psi$ yields a complicated map, the representation of which depends on the dimension reduction methods being employed. However, in the setting where PCA is used, a clear representation emerges, which we now elucidate in order to highlight similarities and differences between our methodology and existing methods appearing in the literature. Let \(F_\X :\mathcal{X} \to \mathbb{R}^{d_\X}\) be the PCA encoder w.r.t. the data \(\{x_j\}_{j=1}^N\) given by \eqref{eq:encoder} and, in particular, let \(\phi^\mathcal{X}_{1,N},\dots,\phi^\mathcal{X}_{d_\X,N}\) be the eigenvectors of the resulting empirical covariance. Similarly let \(\phi^\mathcal{Y}_{1,N},\dots,\phi^\mathcal{Y}_{d_\Y,N}\) be the eigenvectors of the empirical covariance w.r.t. the data \(\{y_j\}_{j=1}^N\). For the function $\varphi$ defined in \eqref{eq:approxphi}, or similarly for approximations $\chi$ thereof found through the use of neural networks, we denote the components by \(\varphi(s) = (\varphi_1(s), \dots, \varphi_{d_\Y}(s))\) for any \(s \in \mathbb{R}^{d_\X}\). Then \eqref{eq:approxpsi} becomes $\Psi(x) \approx \sum_{j=1}^{d_\Y} \alpha_j(x) \phi^\mathcal{Y}_{j,N}$ with coefficients \[\alpha_j(x) = \varphi_j \big ( F_\X(x) \big ) = \varphi_j \big ( \langle x, \phi^\mathcal{X}_{1,N} \rangle_\mathcal{X},\dots, \langle x, \phi^\mathcal{X}_{d_\X,N} \rangle_\mathcal{X} \big ), \qquad \forall x \in \mathcal{X}.\] The solution data \(\{y_j\}_{j=1}^N\) fixes a basis for the output space, and the dependence of \(\Psi(x)\) on \(x\) is captured solely via the scalar-valued coefficients \(\alpha_j\). This parallels the formulation of the classical reduced basis method \cite{DeVoreReducedBasis} where the approximation is written as \begin{equation} \label{eq:thisa} \Psi(x) \approx \sum_{j=1}^{m} \alpha_j(x) \phi_j. \end{equation} Many versions of the method exist, but two particularly popular ones are: (i) when \(m = N\) and \(\phi_j = y_j\); and (ii) when, as is done here, \(m=d_\Y\) and \(\phi_j = \phi^\mathcal{Y}_{j,N}\). The latter choice is also referred to as the reduced basis method with a proper orthogonal decomposition. The crucial difference between our method and the RBM is in the formation of the coefficients \(\alpha_j\). In RBM these functions are obtained in an intrusive manner by approximating the PDE within the finite-dimensional reduced basis and, as a consequence, the method cannot be used in a setting where a PDE relating inputs and outputs is not known, or may not exist. In contrast, our proposed methodology approximates \(\varphi\) by regressing or interpolating the latent representations $\{F_\X (x_j), F_\Y(y_j)\}_{j=1}^N$.
Thus our proposed method makes use of the entire available dataset and does not require explicit knowledge of the underlying PDE mapping, making it a non-intrusive method applicable to black-box models. The form \eqref{eq:thisa} of the approximate solution operator can also be related to the Taylor approximations developed in \cite{cohenalgo,cohenconv}, where a particular form of the input $x$ is considered, namely $x = \bar{x} + \sum_{j \geq 1} a_j \tilde{x}_j$ where \(\bar{x} \in \mathcal{X}\) is fixed, \(\{a_j\}_{j \geq 1} \in \ell^\infty(\mathbb{N};\mathbb{R})\) are uniformly bounded, and \(\{\tilde{x}_j\}_{j \geq 1} \in \mathcal{X}\) have some appropriate norm decay. Then, assuming that the solution operator \(\Psi : \mathcal{X} \to \mathcal{Y}\) is analytic \cite{cohenanalytic}, it is possible to make use of the Taylor expansion \[\Psi(x) = \sum_{h \in \mathcal{F}} \alpha_h(x) \psi_h, \] where \(\mathcal{F} = \{h \in \mathbb{N}^\infty : |h|_0 < \infty\}\) is the set of multi-indices and \[\alpha_h(x) = \prod_{j \geq 1} a_j^{h_j} \in \mathbb{R}, \qquad \psi_h = \frac{1}{h!} \partial^h \Psi(0) \in \mathcal{Y};\] here the differentiation \(\partial^h\) is with respect to the sequence of coefficients \(\{a_j\}_{j \geq 1}\). Then $\Psi$ is approximated by truncating the Taylor expansion to a finite subset of \(\mathcal{F}\). For example, this may be done recursively, by starting with \(h = 0\) and building up the index set in a greedy manner. The method is not data-driven, and requires knowledge of the PDE to define the equations to be solved for the $\psi_h$. \section{Approximation Theory} \label{sec:analysis} In this section, we prove our main approximation result: given any $\epsilon>0$, we can find an $\epsilon$-approximation $\Psi_{\scriptscriptstyle{NN}}$ of $\Psi$. We achieve this by making the appropriate choice of PCA truncation parameters, by choosing sufficient amounts of data, and by choosing a sufficiently rich neural network architecture to approximate $\varphi$ by $\chi.$ In what follows we define \(F_\X\) to be a PCA encoder given by \eqref{eq:encoder}, using the input data $\{ x_j\}_{j=1}^N$ drawn i.i.d. from $\mu$, and \(G_\Y\) to be a PCA decoder given by \eqref{eq:decoder}, using the data $\{ y_j = \Psi(x_j) \}_{j=1}^N$. We also define \begin{align*} e_{\scriptscriptstyle{NN}}(x) &= \|(G_\Y \circ \chi \circ F_\X)(x) - \Psi(x)\|_\mathcal{Y}\\ &=\|\Psi_{\scriptscriptstyle{NN}}(x) - \Psi(x)\|_\mathcal{Y}. \end{align*} We prove the following theorem: \begin{theorem} \label{thm:limit} Let \(\mathcal{X}\), \(\mathcal{Y}\) be real, separable Hilbert spaces and let \(\mu\) be a probability measure supported on \(\mathcal{X}\) such that \(\mathbb{E}_{x \sim \mu} \|x\|_\mathcal{X}^4 < \infty\). Suppose \(\Psi : \mathcal{X} \to \mathcal{Y}\) is a $\mu$-measurable, globally Lipschitz map. For any \(\epsilon > 0\), there are dimensions \(d_\X = d_\X(\epsilon) \in \mathbb{N}\), \(d_\Y = d_\Y(\epsilon) \in \mathbb{N}\), a requisite amount of data \(N = N(d_\X, d_\Y) \in \mathbb{N}\), parameters $t, r, M$ depending on $d_\X,d_\Y$ and $\epsilon$, and a zero-extended stacked neural network $\chi \in \mathcal{M}(d_\X, d_\Y; t, r, M)$ such that \[\mathbb{E}_{\{x_j\} \sim \mu} \mathbb{E}_{x \sim \mu}\bigl(e_{\scriptscriptstyle{NN}}(x)^2\bigr) < \epsilon.\] \end{theorem} \begin{remark} This theorem is a consequence of Theorem \ref{thm:approximation} which we state and prove below.
For clarity and ease of exposition we state and prove Theorem \ref{thm:approximation} in a setting where \(\Psi\) is globally Lipschitz. With a more stringent moment condition on \(\mu\), the result can also be proven when $\Psi$ is locally Lipschitz; we state and prove this result in Theorem \ref{thm:approximation-local-lip}. \end{remark} \begin{remark} The neural network \(\chi \in \mathcal{M}(d_\X, d_\Y; t, r, M)\) has at most $t \le c [ \log (M^2 d_\Y/ \epsilon) +1 ]$ layers and at most $r \le c (\epsilon/4M^2)^{-d_\X/2}[ \log ( M^2 d_\Y/\epsilon) + 1]$ active weights and biases in each component of the network, for an appropriate constant $c = c( d_\X, d_\Y) \ge 0$ and support parameter \(M = M(d_\X,d_\Y) > 0\). These bounds on $t$ and $r$ follow from Theorem \ref{thm:approximation} with $\tau=\epsilon^{\frac12}.$ Note, however, that in order to achieve error $\epsilon$, the dimensions $d_\X, d_\Y$ must be chosen to grow as $\epsilon \to 0$; thus the preceding statements do not explicitly quantify the needed number of parameters, and depth, for error $\epsilon$; to do so would require quantifying the dependence of $c,M$ on $d_\X,d_\Y$ (a property of neural networks) and the dependence of $d_\X,d_\Y$ on $\epsilon$ (a property of the measure $\mu$ and spaces $\mathcal{X},\mathcal{Y}$ -- see Theorem \ref{thm:pca_generalization_bound}). The theory in \cite{yarotsky}, which we employ for the existence result for the neural network, produces the constant \(c\), which depends on the dimensions \(d_\X\) and \(d_\Y\) in an unspecified way. \end{remark} The double expectation reflects averaging over all possible new inputs $x$ drawn from $\mu$ (inner expectation) and over all possible realizations of the i.i.d. dataset $\{ x_j, y_j = \Psi(x_j) \}_{j=1}^N$ (outer expectation). The theorem as stated above is a consequence of Theorem \ref{thm:approximation} in which the error is broken into multiple components that are then bounded separately. Note that the theorem does not address the question of whether the optimization technique used to fit the neural network actually finds the choice which realizes the theorem; this gap between theory and practice is difficult to overcome, because of the non-convex nature of the training problem, and is a standard feature of theorems in this area \cite{kutyniok2019theoretical,schwab2019deep}. The idea of the proof is to quantify the approximations \(G_\X \circ F_\X \approx I_\mathcal{X}\) and \(G_\Y \circ F_\Y \approx I_\mathcal{Y}\) and $\chi \approx \varphi$ so that $\Psi_{\scriptscriptstyle{NN}}$ given by \eqref{eq:apsipcn} is close to $\Psi.$ The first two approximations, which show that $\Psi_{\scriptscriptstyle{PCA}}$ given by \eqref{eq:apsipca} is close to $\Psi,$ are studied in Subsection \ref{sec:pcaanalysis} (see Theorem \ref{thm:pca_generalization_bound}). Then, in Subsection \ref{sec:approxanalysis}, we find a neural network \(\chi\) able to approximate \(\varphi\) to the desired level of accuracy; this fact is part of the proof of Theorem \ref{thm:approximation}. The zero-extension of the neural network arises from the fact that we employ a density theorem for a class of neural networks within continuous functions defined on compact sets. Since we cannot guarantee that \(F_\X\) is bounded, we simply set the neural network output to zero outside a hypercube with side-length \(2M\).
We then use the fact that this set has small $\mu$-measure, for sufficiently large $M.$ \subsection{PCA And Approximation} \label{sec:pcaanalysis} We work in the general notation and setting of Subsection \ref{sec:functionalpca} so as to obtain approximation results that are applicable to the use of PCA on both the inputs and the outputs. In addition, denote by \((\text{HS}(\mathcal{H}), \langle \cdot, \cdot \rangle_{HS}, \|\cdot\|_{HS})\) the space of Hilbert-Schmidt operators over \(\mathcal{H}\). We are now ready to state the main result of this subsection. Our goal is to control the projection error \(R(V_{d,N})\) when using the finite-data PCA subspace in place of the optimal projection space, since the PCA subspace is what is available in practice. Theorem \ref{thm:pca_generalization_bound} accomplishes this by bounding the error \(R(V_{d,N})\) by the optimal error \(R(V_d)\) plus a term related to the approximation \(V_{d,N} \approx V_d\). While previous results such as \cite{Blanchard2007,ShaweTaylor2,ShaweTaylor1} focused on bounds for the excess error in probability w.r.t. the data, we present bounds in expectation, averaging over the data. Such bounds are weaker, but allow us to remove strict conditions on the data distribution and obtain more general results; for example, our theory allows \(\nu\) to be a Gaussian measure. \begin{theorem} \label{thm:pca_generalization_bound} Let \(R\) be given by \eqref{eq:projectionerror} and \(V_{d,N}\), \(V_d\) by \eqref{eq:pcasubspace}, \eqref{eq:optprojectionspace} respectively. Then there exists a constant \(Q \geq 0\), depending only on the data generating measure \(\nu\), such that \[\mathbb{E}_{\{u_j\} \sim \nu} [R(V_{d,N})] \leq \sqrt{\frac{Qd}{N}} + R(V_d),\] where the expectation is over the dataset $\{ u_j \}_{j=1}^N \stackrel{iid}{\sim} \nu$. \end{theorem} The proof generalizes that employed in \cite[Thm. 3.1]{Blanchard2007}. We first find a bound on the average excess error \(\mathbb{E} [R(V_{d,N}) - R_N (V_{d,N})]\) using Lemma \ref{lemma:convariancemontecarlo}. Then, using Fan's Theorem \cite{Fan} (Lemma~\ref{thm:fan}), we bound the average sum of the tail eigenvalues of \(C_N\) by the sum of the tail eigenvalues of \(C\); in particular, \(\mathbb{E} [R_N (V_{d,N})] \leq R(V_d) \). \begin{proof} For brevity we simply write $\mathbb{E}$ instead of $\mathbb{E}_{\{u_j\} \sim \nu}$ throughout the proof.
For any subspace \(V \subseteq \mathcal{H}\), we have \begin{align*} R(V) &= \mathbb{E}_{u \sim \nu} [\|u\|^2 - 2 \langle u, \Pi_V u \rangle + \langle \Pi_V u, \Pi_V u \rangle]= \mathbb{E}_{u \sim \nu} [\tr (u \otimes u) - \langle \Pi_V u , \Pi_V u \rangle ] \\ &= \mathbb{E}_{u \sim \nu} [\tr (u \otimes u)- \langle \Pi_V, u \otimes u \rangle_{HS}] = \tr C - \langle \Pi_V, C \rangle_{HS} \end{align*} where we used the fact that \(\Pi_V\) is an orthogonal projection operator, namely \(\Pi_V^2 = \Pi_V = \Pi_V^*\) and \[\langle \Pi_V, v \otimes z \rangle_{HS} = \langle v, \Pi_V z \rangle = \langle \Pi_V v, \Pi_V z \rangle \quad \forall v,z \in \mathcal{H}.\] Repeating the above arguments for $R_N(V)$ in place of $R(V)$, with the expectation replaced by the empirical average, yields $R_N(V) = \tr C_N - \langle \Pi_V, C_N \rangle_{HS}.$ By noting that \(\mathbb{E}[C_N] = C\) we then write \begin{align*} \mathbb{E} [R(V_{d,N}) - R_N(V_{d,N})] & = \mathbb{E} \langle \Pi_{V_{d,N}}, C_N - C \rangle_{HS} \leq \sqrt{d} \: \mathbb{E} \|C_N - C\|_{HS} \\ & \leq \sqrt{d} \sqrt{\mathbb{E} \|C_N - C\|^2_{HS}} \end{align*} where we used Cauchy-Schwarz twice along with the fact that \(\|\Pi_{V_{d,N}}\|_{HS} = \sqrt{d}\) since \(V_{d,N}\) is \(d\)-dimensional. Now by Lemma \ref{lemma:convariancemontecarlo}, which quantifies the Monte Carlo error between $C$ and $C_N$ in the Hilbert-Schmidt norm, we have that \[\mathbb{E} [R(V_{d,N}) - R_N(V_{d,N})] \leq \sqrt{\frac{Qd}{N}},\] for a constant \(Q \geq 0\). Hence by \eqref{eq:nt}, \begin{align*} \mathbb{E} [R(V_{d,N})]&\leq \sqrt{\frac{Qd}{N}} + \mathbb{E} \sum_{j=d+1}^N \lambda_{j,N}. \end{align*} It remains to estimate the second term above. Letting \(S_d\) denote the set of families of \(d\) orthonormal elements in \(\mathcal{H}\), Fan's Theorem (Lemma~\ref{thm:fan}) gives \begin{align*} \sum_{j=1}^d \lambda_j &= \max_{\{v_1,\dots,v_d\} \in S_d} \sum_{j=1}^d \langle Cv_j, v_j \rangle = \max_{\{v_1,\dots,v_d\} \in S_d} \mathbb{E}_{u \sim \nu} \sum_{j=1}^d |\langle u, v_j \rangle|^2 \\ &= \max_{\{v_1,\dots,v_d\} \in S_d} \mathbb{E}_{u \sim \nu} \sum_{j=1}^d \|\Pi_{\text{span}\{v_j\}} u\|^2 = \max_{V \in \mathcal{V}_d} \mathbb{E}_{u \sim \nu} \|\Pi_V u\|^2 \\ &= \mathbb{E}_{u \sim \nu} \|u\|^2 - \min_{V \in \mathcal{V}_d} \mathbb{E}_{u \sim \nu} \|\Pi_{V^\perp} u\|^2. \end{align*} Observe that $\sum_{j=1}^\infty \lambda_j = \tr C = \mathbb{E}_{u \sim \nu} \|u\|^2$ and so \[\sum_{j=d+1}^\infty \lambda_j = \mathbb{E}_{u \sim \nu} \|u\|^2 - \sum_{j=1}^d \lambda_j = \min_{V \in \mathcal{V}_d} \mathbb{E}_{u \sim \nu} \|\Pi_{V^\perp} u\|^2.\] We now repeat the above calculations for $\lambda_{j,N}$, the eigenvalues of $C_N$, by replacing the expectation with the empirical average to obtain \[\sum_{j=d+1}^N \lambda_{j,N} = \min_{V \in \mathcal{V}_d} \frac{1}{N} \sum_{k=1}^N \|\Pi_{V^\perp} u_k\|^2,\] and so \begin{align*} \mathbb{E} \sum_{j=d+1}^N \lambda_{j,N} &\leq \min_{V \in \mathcal{V}_d} \mathbb{E} \frac{1}{N} \sum_{k=1}^N \|\Pi_{V^\perp} u_k\|^2 = \min_{V \in \mathcal{V}_d} \mathbb{E}_{u \sim \nu} \|\Pi_{V^\perp} u\|^2 = \sum_{j=d+1}^\infty \lambda_j. \end{align*} Finally, we conclude that \begin{align*} \mathbb{E} [R(V_{d,N})] \leq \sqrt{\frac{Qd}{N}} + \sum_{j=d+1}^\infty \lambda_j = \sqrt{\frac{Qd}{N}} + R(V_d).
\end{align*} \end{proof} \subsection{Neural Networks And Approximation} \label{sec:approxanalysis} In this subsection we study the approximation of $\varphi$ given in \eqref{eq:approxphi} by neural networks, combining the analysis with results from the preceding subsection to prove our main approximation result, Theorem \ref{thm:approximation}. We will work in the notation of Section \ref{sec:method}. We assume that \((\mathcal{X}, \langle \cdot, \cdot \rangle_\mathcal{X}, \|\cdot\|_\mathcal{X}) \) and \((\mathcal{Y}, \langle \cdot, \cdot \rangle_\mathcal{Y}, \|\cdot\|_\mathcal{Y}) \) are real, separable Hilbert spaces; \(\mu\) is a probability measure supported on \(\mathcal{X}\) with a finite fourth moment \(\mathbb{E}_{x \sim \mu} \|x\|_\mathcal{X}^4 < \infty\), and \(\Psi: \mathcal{X} \to \mathcal{Y}\) is measurable and globally Lipschitz: there exists a constant $L>0$ such that \[\forall x,z \in \mathcal{X}\quad \|\Psi(x) - \Psi(z)\|_\mathcal{Y} \leq L \|x-z\|_\mathcal{X}.\] Note that this implies that \(\Psi\) is linearly bounded: for any \(x \in \mathcal{X}\) \[\|\Psi(x)\|_\mathcal{Y} \le \|\Psi(0)\|_\mathcal{Y} + \|\Psi(x) - \Psi(0)\|_\mathcal{Y} \leq \|\Psi(0)\|_\mathcal{Y} + L\|x\|_\mathcal{X}.\] Hence we deduce existence of the fourth moment of the pushforward \(\Psi_\sharp \mu\): \[ \mathbb{E}_{y \sim \Psi_\sharp \mu} \|y\|^4_\mathcal{Y} = \int_\mathcal{X} \|\Psi(x)\|_\mathcal{Y}^4 d\mu(x) \leq \int_\mathcal{X} (\|\Psi(0)\|_\mathcal{Y} + L \|x\|_\mathcal{X})^4 d\mu(x) < \infty \] since we assumed \(\mathbb{E}_{x \sim \mu} \|x\|^4_\mathcal{X} < \infty\). Let us recall some of the notation from Subsections \ref{sec:functionalpca} and \ref{sec:representation}. Let \(V^\mathcal{X}_{d_\X}\) be the \(d_\X\)-dimensional optimal projection space given by \eqref{eq:optprojectionspace} for the measure \(\mu\) and \(V^\mathcal{X}_{d_\X,N}\) be the \(d_\X\)-dimensional PCA subspace given by \eqref{eq:pcasubspace} with respect to the input dataset \(\{x_j\}_{j=1}^N\). Similarly let \(V^\mathcal{Y}_{d_\Y}\) be the \(d_\Y\)-dimensional optimal projection space for the pushforward measure \(\Psi_\sharp \mu\) and \(V^\mathcal{Y}_{d_\Y,N}\) be the \(d_\Y\)-dimensional PCA subspace with respect to the output dataset \(\{y_j = \Psi(x_j)\}_{j=1}^N\). We then define the input PCA encoder \(F_\X : \mathcal{X} \to \mathbb{R}^{d_\X}\) by \eqref{eq:encoder} and the input PCA decoder \(G_\X: \mathbb{R}^{d_\X} \to \mathcal{X}\) by \eqref{eq:decoder}, both with respect to the orthonormal basis used to construct \(V^\mathcal{X}_{d_\X,N}\). Similarly we define the output PCA encoder \(F_\Y : \mathcal{Y} \to \mathbb{R}^{d_\Y}\) and decoder \(G_\Y : \mathbb{R}^{d_\Y} \to \mathcal{Y}\) with respect to the orthonormal basis used to construct \(V^\mathcal{Y}_{d_\Y,N}\). Finally we recall the map \(\varphi : \mathbb{R}^{d_\X} \to \mathbb{R}^{d_\Y}\) connecting the two latent spaces, defined in \eqref{eq:approxphi}. The approximation \(\Psi_{\scriptscriptstyle{PCA}}\) to \(\Psi\) based only on the PCA encoding and decoding is given by \eqref{eq:apsipca}. In the following theorem, we prove the existence of a neural network approximating \(\varphi\) to any desired accuracy on a large hypercube, for fixed latent code dimensions $d_\X, d_\Y$, and quantify the error of the full approximation \(\Psi_{\scriptscriptstyle{NN}}\), given in \eqref{eq:apsipcn}, to \(\Psi\). We will be explicit about which measure the projection error is defined with respect to.
In particular, we will write \eqref{eq:projectionerror} as \[R^\mu(V) = \mathbb{E}_{x \sim \mu} \|x - \Pi_V x\|^2_\mathcal{X}\] for any subspace \(V \subseteq \mathcal{X}\) and similarly \[R^{\Psi_\sharp \mu}(V) = \mathbb{E}_{y \sim \Psi_\sharp \mu} \|y - \Pi_V y\|^2_\mathcal{Y}\] for any subspace \(V \subseteq \mathcal{Y}\). \begin{theorem} \label{thm:approximation} Let \(\mathcal{X}\), \(\mathcal{Y}\) be real, separable Hilbert spaces and let \(\mu\) be a probability measure supported on \(\mathcal{X}\) such that \(\mathbb{E}_{x \sim \mu} \|x\|_\mathcal{X}^4 < \infty\). Suppose \(\Psi : \mathcal{X} \to \mathcal{Y}\) is a $\mu$-measurable, globally Lipschitz map. Fix $d_\X, d_\Y$, $N \ge \max\{d_\X, d_\Y\},$ $\delta \in (0,1)$ and $\tau>0$. Define \(M = \sqrt{\mathbb{E}_{x \sim \mu} \|x\|_\mathcal{X}^2 / \delta}\). Then there exists a constant $c = c(d_\X, d_\Y) \ge 0$ and a zero-extended stacked neural network \(\chi \in \mathcal{M}(d_\X, d_\Y; t, r,M)\) with $t \le c(d_\X,d_\Y) [ \log ( M \sqrt{d_\Y}/ \tau) +1 ]$ and $r \le c(d_\X,d_\Y) (\tau/2M)^{-d_\X}[ \log (M \sqrt{d_\Y}/\tau) + 1]$, so that \begin{equation} \label{eq:approximationbound} \begin{aligned} \mathbb{E}_{\{x_j\} \sim \mu} \mathbb{E}_{x \sim \mu}\bigl(e_{\scriptscriptstyle{NN}}(x)\bigr)^2 \le C \Bigg( \tau^2 + \sqrt{\delta} + \sqrt{\frac{d_\X}{N}} + R^{\mu}(V_{d_\X}^{\mathcal{X}}) + \sqrt{\frac{d_\Y}{N}} + R^{\Psi_\sharp \mu}(V_{d_\Y}^{\mathcal{Y}}) \Bigg), \end{aligned} \end{equation} where $C > 0$ is independent of $d_\X, d_\Y, N, \delta$ and $\tau$. \end{theorem} The first two terms on the r.h.s. arise from the neural network approximation of \(\varphi\), while the last two pairs of terms come from the finite-dimensional approximation of \(\mathcal{X}\) and \(\mathcal{Y}\) respectively, as prescribed by Theorem \ref{thm:pca_generalization_bound}. The way to interpret the result is as follows: first, choose $d_\X, d_\Y$ so that $R^{\mu}(V_{d_\X}^{\mathcal{X}})$ and $R^{\Psi_\sharp \mu}(V_{d_\Y}^{\mathcal{Y}})$ are small -- these are intrinsic properties of the measures $\mu$ and $\Psi_\sharp \mu$; second, choose the amount of data $N$ large enough to make $\max\{d_\X, d_\Y\}/N$ small, essentially controlling how well we approximate the intrinsic covariance structure of $\mu$ and $\Psi_\sharp \mu$ using samples; third, choose $\delta$ small enough to control the error arising from restricting the domain of $\varphi$; and finally, choose $\tau$ sufficiently small to control the approximation of $\varphi$ by a neural network restricted to a compact set. Note that the size and values of the parameters of the neural network \(\chi\) will depend on the choice of \(\delta\) as well as $d_\X, d_\Y$ and $N$ in a manner which we do not specify. In particular, the dependence of $c$ on $d_\X, d_\Y$ is not explicit in the theorem of \cite{yarotsky}, which furnishes the existence of the requisite neural network $\chi.$ The parameter \(\tau\) specifies the error tolerance between \(\chi\) and \(\varphi\) on \([-M,M]^{d_\X}\). Intuitively, as \((\delta,\tau) \to 0\), we expect the number of parameters in the network to grow \cite{pinkus}. Quantifying this growth would be needed to fully understand the computational complexity of our method. \begin{proof} Recall the constant $Q$ from Theorem \ref{thm:pca_generalization_bound}.
In what follows we take $Q$ to be the maximum of the two such constants arising from application of the theorem to the two different probability spaces $(\mathcal{X},\mu)$ and $(\mathcal{Y}, \Psi_\sharp \mu).$ Throughout the proof we use $\mathbb{E}$ to denote $\mathbb{E}_{\{x_j \} \sim \mu}$, the expectation with respect to the dataset $\{ x_j\}_{j=1}^N$. We begin by estimating the error incurred by using \(\Psi_{\scriptscriptstyle{PCA}}\) given by \eqref{eq:apsipca}: \begin{align} \label{eq:truephiapprox} \begin{split} \mathbb{E} & \mathbb{E}_{x \sim \mu} \|\Psi_{\scriptscriptstyle{PCA}}(x) - \Psi(x)\|^2_\mathcal{Y} \\ &= \mathbb{E} \mathbb{E}_{x \sim \mu} \|(G_\Y \circ F_\Y \circ \Psi \circ G_\X \circ F_\X)(x) - \Psi(x)\|^2_\mathcal{Y} \\ &= \mathbb{E} \mathbb{E}_{x \sim \mu} \left\|\Pi_{V_{d_\Y,N}^\mathcal{Y}} \Psi (\Pi_{V_{d_\X,N}^\mathcal{X}} x) - \Psi(x) \right\|^2_\mathcal{Y} \\ &\leq 2 \mathbb{E} \mathbb{E}_{x \sim \mu} \left\|\Pi_{V_{d_\Y,N}^\mathcal{Y}} \Psi(\Pi_{V_{d_\X,N}^\mathcal{X}} x) - \Pi_{V_{d_\Y,N}^\mathcal{Y}} \Psi(x) \right\|^2_\mathcal{Y} \\ &\:\:\:\:+ 2 \mathbb{E} \mathbb{E}_{x \sim \mu}\left \|\Pi_{V_{d_\Y,N}^\mathcal{Y}} \Psi(x) - \Psi(x) \right\|^2_\mathcal{Y} \\ & \leq 2 L^2 \mathbb{E} \mathbb{E}_{x \sim \mu} \left\|\Pi_{V_{d_\X,N}^\mathcal{X}} x - x \right\|^2_\mathcal{X} + 2 \mathbb{E} \mathbb{E}_{y \sim \Psi_\sharp \mu} \left\|\Pi_{V_{d_\Y,N}^\mathcal{Y}} y - y \right\|^2_\mathcal{Y} \\ &= 2 L^2 \mathbb{E} [R^{\mu}(V_{d_\X,N}^\mathcal{X})] + 2 \mathbb{E} [R^{\Psi_\sharp \mu}(V_{d_\Y,N}^\mathcal{Y})] \end{split} \end{align} noting that the operator norm of an orthogonal projection is \(1\). Theorem \ref{thm:pca_generalization_bound} allows us to control this error, and leads to \begin{align} \label{eq:truephiapprox2} \begin{split} \mathbb{E} \mathbb{E}_{x \sim \mu} \bigl(e_{\scriptscriptstyle{NN}}(x)^2\bigr) &=\mathbb{E} \mathbb{E}_{x \sim \mu} \|\Psi_{\scriptscriptstyle{NN}}(x) - \Psi(x)\|^2_\mathcal{Y}\\ & \le 2\mathbb{E} \mathbb{E}_{x \sim \mu}\|\Psi_{\scriptscriptstyle{NN}}(x) - \Psi_{\scriptscriptstyle{PCA}}(x)\|^2_\mathcal{Y}+2\mathbb{E} \mathbb{E}_{x \sim \mu} \|\Psi_{\scriptscriptstyle{PCA}}(x) - \Psi(x)\|^2_\mathcal{Y}\\ & \le 2 \mathbb{E} \mathbb{E}_{x \sim \mu}\|\Psi_{\scriptscriptstyle{NN}}(x) - \Psi_{\scriptscriptstyle{PCA}}(x)\|^2_\mathcal{Y}\\ &\quad\quad+4L^2\left( \sqrt{\frac{Q d_\X}{N}} + R^{\mu}(V_{d_\X}^\mathcal{X}) \right)+ 4\left(\sqrt{\frac{Q d_\Y}{N}} + R^{\Psi_\sharp \mu}(V_{d_\Y}^\mathcal{Y})\right). \end{split} \end{align} We now approximate \(\varphi\) by a neural network \(\chi\) as a step towards estimating $\|\Psi_{\scriptscriptstyle{NN}}(x) - \Psi_{\scriptscriptstyle{PCA}}(x)\|_\mathcal{Y}.$ To that end we first note from Lemma \ref{l:add} that \(\varphi\) is Lipschitz, and hence continuous, as a mapping from $\mathbb{R}^{d_\X}$ into $\mathbb{R}^{d_\Y}.$ Identify the components \(\varphi(s) = (\varphi^{(1)}(s), \dots, \varphi^{(d_\Y)}(s))\) where each function \(\varphi^{(j)} \in C(\mathbb{R}^{d_\X};\mathbb{R})\). We consider the restriction of each component function to the set \([-M,M]^{d_\X}\). Let us now change variables by defining \(\tilde{\varphi}^{(j)} : [0,1]^{d_\X} \to \mathbb{R}\) by \(\tilde{\varphi}^{(j)}(s) = (1/2M) \varphi^{(j)}(2Ms - M)\) for any \(s \in [0,1]^{d_\X}\). Note that equivalently we have \(\varphi^{(j)}(s) = 2M \tilde{\varphi}^{(j)}((s+M)/2M)\) for any \(s \in [-M,M]^{d_\X}\), and further \(\varphi^{(j)}\) and \(\tilde{\varphi}^{(j)}\) have the same Lipschitz constants on their respective domains.
Applying \cite[Thm.~1]{yarotsky} to each of the $\tilde{\varphi}^{(j)}$ then yields existence of neural networks \(\tilde{\chi}^{(1)}, \dots, \tilde{\chi}^{(d_\Y)} : [0, 1]^{d_\X} \to \mathbb{R}\) such that \[|\tilde{\chi}^{(j)}(s) - \tilde{\varphi}^{(j)}(s)| < \frac{\tau}{2 M \sqrt{d_\Y}} \quad \forall s \in [0,1]^{d_\X},\] for any \(j \in \{1,\dots, d_\Y\}\). In fact, each neural network $\tilde{\chi}^{(j)} \in \mathcal{M}(d_\X; t^{(j)}, r^{(j)})$ with parameters $t^{(j)}$ and $r^{(j)}$ satisfying \begin{equation*} t^{(j)} \le c^{(j)} \left[ \log( M \sqrt{d_\Y} / \tau) + 1 \right], \qquad r^{(j)} \le c^{(j)} \left(\frac{\tau}{2M}\right)^{- d_\X} \left[ \log( M \sqrt{d_\Y} /\tau ) +1 \right], \end{equation*} with constants $c^{(j)} = c^{(j)}(d_\X) > 0$. Hence defining $\chi^{(j)}: \mathbb{R}^{d_\X} \to \mathbb{R}$ by $\chi^{(j)}(s) := 2M \tilde{\chi}^{(j)}((s+M)/(2M))$ for any $s \in [-M,M]^{d_\X}$, we have that \begin{equation*} \big| \big( \chi^{(1)}(s), \dots, \chi^{(d_\Y)}(s) \big) - \varphi(s) \big|_2 < \tau \quad \forall s \in [-M,M]^{d_\X}. \end{equation*} We can now simply define \(\chi: \mathbb{R}^{d_\X} \to \mathbb{R}^{d_\Y}\) as the stacked network \((\chi^{(1)},\dots,\chi^{(d_\Y)})\) extended by zero outside of $[-M, M]^{d_\X}$ to immediately obtain \begin{equation} \label{eq:nnepsilonclose} \sup_{ s \in [-M,M]^{d_\X}} \big| \chi(s) - \varphi(s) \big|_2 < \tau. \end{equation} Thus, by construction $\chi \in \mathcal{M}(d_\X, d_\Y; t, r, M)$ with $t \le \max_j t^{(j)}$ many layers and $r \le \max_j r^{(j)}$ many active weights and biases in each of its components. Let us now define the set $A = \{x \in \mathcal{X} : F_\X(x) \in [-M,M]^{d_\X}\}.$ By Lemma \ref{lemma:bounded_projection}, \(\mu(A) \geq 1 - \delta\) and hence \(\mu(A^c) \leq \delta\).
Define the approximation error \[e_{\scriptscriptstyle{PCA}}(x) = \|\Psi_{\scriptscriptstyle{NN}}(x) - \Psi_{\scriptscriptstyle{PCA}}(x)\|_\mathcal{Y}\] and decompose its expectation as \[\mathbb{E}_{x \sim \mu}\bigl(e_{\scriptscriptstyle{PCA}}(x)^2\bigr) = \underbrace{\int_A e_{\scriptscriptstyle{PCA}}(x)^2 d\mu(x)}_{\coloneqq I_A} + \underbrace{\int_{A^c} e_{\scriptscriptstyle{PCA}}(x)^2 d\mu(x)}_{\coloneqq I_{A^c}}.\] For the first term, \begin{align} \label{eq:erroronA} \begin{split} I_A &\leq \int_A \|(G_\Y \circ \chi \circ F_\X)(x) - (G_\Y \circ \varphi \circ F_\X)(x)\|^2_\mathcal{Y} d\mu(x)\leq \tau^2, \end{split} \end{align} by using the fact, established in Lemma \ref{l:add}, that \(G_\Y\) is Lipschitz with Lipschitz constant $1$, the \(\tau\)-closeness of \(\chi\) to \(\varphi\) from \eqref{eq:nnepsilonclose}, and \(\mu(A) \leq 1\). For the second term we have, using that $G_\Y$ has Lipschitz constant $1$ and that $\chi$ vanishes on $A^c$, \begin{align} \label{eq:erroroutsideA} \begin{split} I_{A^c} &\leq \int_{A^c} \|(G_\Y \circ \chi \circ F_\X)(x) - (G_\Y \circ \varphi \circ F_\X)(x)\|^2_\mathcal{Y} d\mu(x) \\ &\leq \int_{A^c} | \chi(F_\X(x)) - \varphi(F_\X(x))|^2_2 d\mu(x) =\int_{A^c} | \varphi(F_\X(x))|^2_2 d\mu(x). \end{split} \end{align} Once more from Lemma \ref{l:add}, we have that \[|F_\X(x)|_2 \leq \|x\|_{\mathcal{X}} \quad \forall x \in \mathcal{X}; \qquad |\varphi(s)|_2 \leq |\varphi(0)|_2 + L|s|_2 \quad \forall s \in \mathbb{R}^{d_\X},\] so that, by the Cauchy--Schwarz inequality, \begin{align} \label{eq:erroroutsideA2} \begin{split} I_{A^c} & \leq 2\bigl(\mu(A^c)|\varphi(0)|_2^2+\mu(A^c)^{\frac12}L^2 (\mathbb{E}_{x \sim \mu } \|x\|_\mathcal{X}^4)^{\frac12}\bigr)\\ & \leq 2 \bigl(\delta |\varphi(0)|_2^2+\delta^{\frac12}L^2 (\mathbb{E}_{x \sim \mu } \|x\|_\mathcal{X}^4)^{\frac12}\bigr). \end{split} \end{align} Combining \eqref{eq:truephiapprox2}, \eqref{eq:erroronA} and \eqref{eq:erroroutsideA2}, we obtain the desired result. \end{proof} \section{Numerical Results} \label{sec:numerics} \begin{figure}[t] \centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{gauss_ex} \caption{\(\mu_{\text{G}}\)} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{lognormal_ex} \caption{\(\mu_{\text{L}}\)} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{piececonst_ex} \caption{\(\mu_{\text{P}}\)} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{example_mub} \caption{\(\mu_{\text{B}}\)} \end{subfigure} \caption{Representative samples for each of the probability measures $\mu_{\text{G}}, \mu_{\text{L}}, \mu_{\text{P}}, \mu_{\text{B}}$ defined in Subsection \ref{ssec:PDES}. \(\mu_{\text{G}}\) and \(\mu_{\text{P}}\) are used in Subsection~\ref{sec:numlip} to model the inputs, \(\mu_{\text{L}}\) and \(\mu_{\text{P}}\) are used in Subsection~\ref{sec:numdarcy}, and $\mu_{\text{B}}$ is used in Subsection~\ref{sec:burgers}.} \label{fig:samples} \end{figure} \begin{figure}[h!]
\centering \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{elliptic_in} \caption{Input} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{elliptic_truth} \caption{Ground Truth} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{elliptic_approx} \caption{Approximation} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{elliptic_error} \caption{Error} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{poisson_in} \caption{Input} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{poisson_truth} \caption{Ground Truth} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{poisson_approx} \caption{Approximation} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{poisson_error} \caption{Error} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{lognormal_in} \caption{Input} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{lognormal_truth} \caption{Ground Truth} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{lognormal_approx} \caption{Approximation} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{lognormal_error} \caption{Error} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{piececonst_in} \caption{Input} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{piececonst_truth} \caption{Ground Truth} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{piececonst_approx} \caption{Approximation} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{piececonst_error} \caption{Error} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{burgers_ex_x} \caption{Input} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{burgers_ex_yt} \caption{Ground Truth} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{burgers_ex_yp} \caption{Approximation} \end{subfigure} \begin{subfigure}[b]{0.24\textwidth} \includegraphics[width=\textwidth]{burgers_ex_err} \caption{Error} \end{subfigure} \caption{Randomly chosen examples from the test set for each of the five considered problems. Each row is a different problem: linear elliptic, Poisson, Darcy flow with log-normal coefficients, Darcy flow with piecewise constant coefficients, and Burgers' equation respectively from top to bottom. The approximations are constructed with our best performing method (for \(N=1024\)): Linear \(d=150\), Linear \(d=150\), NN \(d=70\), NN \(d=70\), NN \(d=15\) respectively from top to bottom.} \label{fig:examples} \end{figure} We now present a series of numerical experiments that demonstrate the effectiveness of our proposed methodology in the context of the approximation of parametric PDEs. We work in settings which both verify our theoretical results and show that the ideas work outside the confines of the theory. 
The key idea underlying our work is to construct the neural network architecture so that it is defined \emph{as a map between Hilbert spaces} and only then to discretize and obtain a method that is implementable in practice; prevailing methodologies first discretize and then apply a standard neural network. Our approach leads, when discretized, to methods with properties that are uniform with respect to the mesh size used. We demonstrate this through our numerical experiments. In practice, we obtain an approximation $\Psi_{\scriptscriptstyle{num}}$ to $\Psi_{\scriptscriptstyle{NN}}$, reflecting the numerical discretization used and the fact that $\mu$ and its pushforward under $\Psi$ are only known to us through samples and, in particular, samples of the pushforward of $\mu$ under the numerical approximation of the input-output map. However, since, as we will show, our method is robust to the discretization used, we will not explicitly reflect the dependence on the numerical method in the notation that appears in the remainder of this section. In Subsection \ref{ssec:PDES} we introduce a class of parametric elliptic PDEs arising from the Darcy model of flow in porous media, as well as the time-dependent, parabolic Burgers' equation, which together define a variety of input-output maps for our numerical experiments; we also introduce the probability measures that we use on the input spaces. Subsection \ref{sec:numlip} presents numerical results for a Lipschitz map. Subsections \ref{sec:numdarcy} and \ref{sec:burgers} present numerical results for the Darcy flow problem and the flow map for the Burgers' equation; these lead to non-Lipschitz input-output maps, beyond our theoretical developments. We emphasize that while our method is designed for approximating nonlinear operators \(\Psi\), we include some numerical examples where \(\Psi\) is linear. Doing so is helpful for confirming some of our theory and comparing against other methods in the literature. Note that when \(\Psi\) is linear, each piece in the approximate decomposition \eqref{eq:apsipca} is also linear; in particular, \(\varphi\) is linear. Therefore it is sufficient to parameterize \(\varphi\) as a linear map (a matrix of unknown coefficients) instead of a neural network. We include such experiments in Subsection \ref{sec:numlip}, revealing that, while a neural network approximating \(\varphi\) arbitrarily well exists, the optimization methods used for training the neural network fail to find it. It may therefore be beneficial to build known properties of \(\varphi\), such as linearity, directly into the parametrization. We emphasize that, for general nonlinear maps, linear methods significantly underperform in comparison with our neural network approximation; we demonstrate this for the Darcy flow problem and for Burgers' equation. We use standard implementations of PCA, with dimensions specified for each computational example below. All computational examples use an identical neural network architecture: a $5$-layer dense network with layer widths $500, 1000, 2000, 1000, 500$, ordered from first to last layer, and the SELU nonlinearity \cite{selu}. We note that Theorem \ref{thm:approximation} requires greater depth for greater accuracy, but we have found our $5$-layer network to suffice for all of the examples described here. Thus we have not attempted to optimize the architecture of the neural network.
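For concreteness, the following Python sketch (using PyTorch) indicates one possible realization of this architecture; the function name is ours, and the placement of a final linear read-out layer reflects our reading of the description above rather than a verbatim excerpt of our implementation.
\begin{verbatim}
import torch.nn as nn

def make_latent_network(d_x: int, d_y: int) -> nn.Sequential:
    # Five dense hidden layers of widths 500, 1000, 2000, 1000, 500
    # with SELU activations, mapping the d_x PCA coordinates of the
    # input to the d_y PCA coordinates of the output.
    widths = [d_x, 500, 1000, 2000, 1000, 500]
    layers = []
    for w_in, w_out in zip(widths[:-1], widths[1:]):
        layers += [nn.Linear(w_in, w_out), nn.SELU()]
    layers.append(nn.Linear(widths[-1], d_y))  # linear read-out
    return nn.Sequential(*layers)
\end{verbatim}
Such a network would be trained, as described next, with stochastic gradient descent, e.g.\ via \texttt{torch.optim.SGD(model.parameters(), lr, momentum=0.99, nesterov=True)}.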
We use stochastic gradient descent with Nesterov momentum (\(0.99\)) to train the network parameters \cite{deeplearningbook}, each time picking the largest learning rate that does not lead to blow-up in the error. While the network must be re-trained for each new choice of reduced dimensions \(d_\mathcal{X}, d_\mathcal{Y}\), initializing the hidden layers with a pre-trained network can help speed up convergence. \subsection{PDE Setting} \label{ssec:PDES} We will consider a variety of solution maps defined by second order elliptic PDEs of the form \eqref{eq:darcy}, which are prototypical of many scientific applications. We take \(D = (0,1)^2\) to be the unit box, \(a \in L^\infty(D;\mathbb{R}_+), f \in L^2(D;\mathbb{R})\), and let \(u \in H^1_0(D;\mathbb{R})\) be the unique weak solution of \eqref{eq:darcy}. Note that, since $D$ is bounded, \(L^\infty(D;\mathbb{R}_+)\) is continuously embedded within the Hilbert space \(L^2(D;\mathbb{R}_+).\) We will consider two variations of the input-output map generated by the solution operator for \eqref{eq:darcy}; in one, it is Lipschitz and lends itself to the theory of Subsection \ref{sec:approxanalysis} and, in the other, it is not Lipschitz. We obtain numerical results which demonstrate our theory as well as demonstrating the effectiveness of our proposed methodology in the non-Lipschitz setting. Furthermore, we consider the one-dimensional viscous Burgers' equation on the torus given as \begin{align} \begin{split} \label{eq:burgers} \frac{\partial}{\partial t} u(s,t) + \frac{1}{2}\frac{\partial}{\partial s} (u(s,t))^2 &= \beta \frac{\partial^2}{\partial s^2} u(s,t), \qquad (s,t) \in \mathbb{T}^1 \times (0,\infty) \\ u(s,0) &= u_0(s), \qquad \qquad \:\:\:\: s \in \mathbb{T}^1 \end{split} \end{align} where \(\beta > 0\) is the viscosity coefficient and \(\mathbb{T}^1\) is the one dimensional unit torus obtained by equipping the interval \([0,1]\) with periodic boundary conditions. We take \(u_0 \in L^2(\mathbb{T}^1;\mathbb{R})\) and note that, for any \(t>0\), the unique weak solution \(u\) of \eqref{eq:burgers} satisfies \(u(\cdot,t) \in H^r(\mathbb{T}^1;\mathbb{R})\) for every \(r > 0\) \cite{temam2012infinite}. In Subsection \ref{sec:burgers}, we consider the input-output map generated by the flow map of \eqref{eq:burgers} evaluated at a fixed time, which is a locally Lipschitz operator. We make use of four probability measures which we now describe. The first, which will serve as a base measure in two dimensions, is the Gaussian \(\mu_{\text{G}} = \mathcal{N}(0, (-\Delta + 9I)^{-2})\) with a zero Neumann boundary condition on the operator \(\Delta\). Then we define \(\mu_{\text{L}}\) to be the log-normal measure defined as the push-forward of \(\mu_{\text{G}}\) under the exponential map, i.e.\ $\mu_{\text{L}} = \exp_\sharp \mu_{\text{G}}$. Furthermore, we define \(\mu_{\text{P}} = T_\sharp \mu_{\text{G}} \) to be the push-forward of \(\mu_{\text{G}}\) under the piecewise constant map \begin{equation*} T(s) = \left\{ \begin{aligned} &12 && s \ge 0,\\ & 3 && s < 0. \end{aligned} \right. \end{equation*} Lastly, we consider the Gaussian \(\mu_{\text{B}} = \mathcal{N}(0, 7^4(-\frac{d^2}{ds^2} + 7^2I)^{-2.5})\) defined on \(\mathbb{T}^1\). Figure \ref{fig:samples} shows a representative sample drawn from each of the above measures. We will use as $\mu$ one of these four measures in each experiment we conduct. Such probability measures are commonly used in the stochastic modeling of physical phenomena \cite{Lord}.
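As an illustration, samples from \(\mu_{\text{G}}\), and hence, by pointwise transformation, from \(\mu_{\text{L}}\) and \(\mu_{\text{P}}\), can be drawn via a truncated Karhunen--Lo\`eve expansion in the cosine eigenbasis of the Neumann Laplacian on \((0,1)^2\). The following Python sketch is illustrative only and is not the code used for our experiments.
\begin{verbatim}
import numpy as np

def sample_mu_G(n=64, K=32, rng=None):
    # mu_G = N(0, (-Laplacian + 9 I)^{-2}) with zero Neumann boundary:
    # the KL coefficients of the covariance square root are
    # (pi^2 (j^2 + k^2) + 9)^{-1} in the normalized cosine basis.
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    g = np.zeros((n, n))
    for j in range(K):
        for k in range(K):
            coeff = 1.0 / (np.pi**2 * (j**2 + k**2) + 9.0)
            phi = ((np.sqrt(2.0) if j else 1.0) * np.cos(j * np.pi * X)
                   * (np.sqrt(2.0) if k else 1.0) * np.cos(k * np.pi * Y))
            g += coeff * rng.standard_normal() * phi
    return g

g = sample_mu_G()
a_lognormal = np.exp(g)                    # a sample from mu_L
a_piecewise = np.where(g >= 0, 12.0, 3.0)  # a sample from mu_P
\end{verbatim}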
For example, \(\mu_{\text{P}}\) may be thought of as modeling the permeability of a porous medium containing two different constituent parts \cite{darcyref}. Note that it is to be expected that a good choice of architecture will depend on the probability measure used to generate the inputs. Indeed, good choices of the reduced dimensions $d_\X$ and $d_\Y$ are determined by the input measure and its pushforward under $\Psi$, respectively. For each subsequently described problem we use, unless stated otherwise, \(N=1024\) training examples from $\mu$ and its pushforward under $\Psi$, from which we construct $\Psi_{\scriptscriptstyle{NN}}$, and then \(5000\) unseen testing examples from $\mu$ in order to obtain a Monte Carlo estimate of the relative test error: \[\mathbb{E}_{x \sim \mu} \frac{\|(G_\Y \circ \chi \circ F_\X)(x) - \Psi(x)\|_\mathcal{Y}}{\|\Psi(x)\|_\mathcal{Y}}.\] For problems arising from \eqref{eq:darcy}, all data is collected on a uniform \(421 \times 421\) mesh and the PDE is solved with a second order finite-difference scheme. For problems arising from \eqref{eq:burgers}, all data is collected on a uniform \(4096\) point mesh and the PDE is solved using a pseudo-spectral method. Data for all other mesh sizes is sub-sampled from the original. We refer to the size of the discretization in one direction, e.g.\ 421, as the \textit{resolution}. We fix \(d_{\mathcal{X}} = d_{\mathcal{Y}}\) (the dimensions after PCA in the input and output spaces) and refer to this as \textit{the reduced dimension}. We experiment with using a linear map as well as a dense neural network for approximating \(\varphi\); in all figures we distinguish between these by referring to Linear or NN approximations respectively. When parameterizing with a neural network, we use the aforementioned stochastic gradient based method for training, while, when parameterizing with a linear map, we simply solve the linear least squares problem via the standard normal equations. We also compare all of our results to the work of \cite{surrogatemodeling}, which utilizes a $19$-layer fully-convolutional neural network; we reference this approach as Zhu within the text. This is done to show that the image-to-image regression approach that many such works utilize yields approximations that are not consistent in the continuum, and hence across different discretizations; in contrast, our methodology is designed as a mapping between Hilbert spaces and as a consequence is robust across different discretizations. For some problems in Subsection \ref{sec:numlip}, we compare to the method developed in \cite{cohenalgo}, which we refer to as Chkifa. For the problems in Subsection \ref{sec:numdarcy}, we also compare to the reduced basis method \cite{DeVoreReducedBasis,quarteroni2015reduced} when instantiated with PCA. We note that both Chkifa and the reduced basis method are intrusive, i.e., they need knowledge of the governing PDE. Furthermore, the method of Chkifa needs full knowledge of the generating process of the inputs. We re-emphasize that our proposed method is fully data-driven.
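The relative test error above can be assembled as in the following sketch; here \texttt{norm} stands in for a quadrature approximation of the \(\mathcal{Y}\)-norm on the computational mesh, and the function names are illustrative.
\begin{verbatim}
import numpy as np

def relative_test_error(psi_approx, psi_true, test_inputs,
                        norm=np.linalg.norm):
    # Monte Carlo estimate of E_{x ~ mu} ||Psi_approx(x) - Psi(x)|| /
    # ||Psi(x)|| over a collection of unseen test inputs.
    errs = [norm(psi_approx(x) - psi_true(x)) / norm(psi_true(x))
            for x in test_inputs]
    return float(np.mean(errs))
\end{verbatim}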
\subsection{Globally Lipschitz Solution Map} \label{sec:numlip} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{gaussian_elliptic1} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{gaussian_elliptic1_r421} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{gaussianelliptic_moredata} \caption{} \end{subfigure} \caption{Relative test errors on the linear elliptic problem. Using \(N=1024\) training examples, panel (a) shows the errors as a function of the resolution while panel (b) fixes a \(421 \times 421\) mesh and shows the error as a function of the reduced dimension. Panel (c) only shows results for our method using a neural network, fixing a \(421 \times 421\) mesh and showing the error as a function of the reduced dimension for different amounts of training data. } \label{fig:ellipticproblem} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{coeff_poisson1} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{coeff_poisson1_r421} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{coeffpoisson_moredata} \caption{} \end{subfigure} \caption{Relative test errors on the Poisson problem. Using \(N=1024\) training examples, panel (a) shows the errors as a function of the resolution while panel (b) fixes a \(421 \times 421\) mesh and shows the error as a function of the reduced dimension. Panel (c) only shows results for our method using a neural network, fixing a \(421 \times 421\) mesh and showing the error as a function of the reduced dimension for different amounts of training data.} \label{fig:poissonproblem} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{cohen_inputsample} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{cohen_outputsample} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{poisson_pcavoptimal} \caption{} \end{subfigure} \caption{Panel (a) shows a sample drawn from the model \eqref{eq:cohenmodel} while panel (b) shows the solution of the Poisson equation with the sample from (a) as the r.h.s. Panel (c) shows the relative test error as a function of the number of PDE solves (equivalently, the amount of training data) for the method of Chkifa and our method, respectively. We use the reduced dimension \(d=N\).} \label{fig:cohen} \end{figure} We consider the input-output map \(\Psi: L^2(D;\mathbb{R}) \to H^1_0(D;\mathbb{R})\) mapping \(f \mapsto u\) in \eqref{eq:darcy} with the coefficient \(a\) fixed. Since \eqref{eq:darcy} is a linear PDE, \(\Psi\) is linear and therefore Lipschitz. We study two instantiations of this problem. In the first, we draw a single \(a \sim \mu_{\text{P}}\) and fix it. We then solve \eqref{eq:darcy} with data \(f \sim \mu_{\text{G}}\). We refer to this as the \textit{linear elliptic} problem. See row 1 of Figure \ref{fig:examples} for an example. In the second, we set \(a(s) = 1\) \(\forall s \in D\), in which case \eqref{eq:darcy} becomes the Poisson equation which we solve with data \(f \sim \mu = \mu_{\text{G}}\). We refer to this as the \textit{Poisson} problem. See row 2 of Figure \ref{fig:examples} for an example.
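For reference, the following is a minimal sketch of a second-order finite-difference solver for \eqref{eq:darcy} on the unit box with homogeneous Dirichlet boundary conditions; taking \(a \equiv 1\) recovers the Poisson problem. The arithmetic averaging of \(a\) on cell faces is one possible choice, and our production solver may differ in such details.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_darcy(a, f):
    # -div(a grad u) = f on (0,1)^2 with u = 0 on the boundary.
    # `a` is given on the full (n+2) x (n+2) grid (boundary included),
    # `f` on the n x n interior grid.
    n = f.shape[0]
    h = 1.0 / (n + 1)
    rows, cols, vals = [], [], []
    for i in range(n):
        for j in range(n):
            p = i * n + j
            diag = 0.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                # face coefficient between node (i, j) and its neighbour
                af = 0.5 * (a[i + 1, j + 1] + a[i + 1 + di, j + 1 + dj])
                diag += af
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    rows.append(p)
                    cols.append(ni * n + nj)
                    vals.append(-af)
            rows.append(p)
            cols.append(p)
            vals.append(diag)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n * n, n * n)) / h**2
    return spsolve(A, f.ravel()).reshape(n, n)
\end{verbatim}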
Figure \ref{fig:ellipticproblem} (a) shows the relative test errors as a function of the resolution on the linear elliptic problem, while Figure \ref{fig:poissonproblem} (a) shows them on the Poisson problem. The primary observation about panel (a) of these two figures is that the error of our proposed method does not change as the resolution changes. In contrast, the image-to-image regression approach of Zhu \cite{surrogatemodeling}, whilst accurate at low mesh resolution, fails to be invariant to the size of the discretization, and errors increase in an uncontrolled fashion as greater resolution is used. The fact that our dimension reduction approach achieves constant error as we refine the mesh reflects its design as a method on Hilbert space which may be approximated consistently on different meshes. Since the operator \(\Psi\) here is linear, the true map of interest \(\varphi\) given by \eqref{eq:approxphi} is also linear, because \(F_{\mathcal{Y}}\) and \(G_{\mathcal{X}}\) are, by the definition of PCA, linear. It is therefore unsurprising that the linear approximation consistently outperforms the neural network, a fact also demonstrated in panel (a) of the two figures. While it is theoretically possible to find a neural network that can, at least, match the performance of the linear map, in practice, the non-convexity of the associated optimization problem can cause non-optimal behavior. Panels (b) of Figures \ref{fig:ellipticproblem} and \ref{fig:poissonproblem} show the relative error as a function of the reduced dimension for a fixed mesh size. We see that while the linear maps consistently improve with the reduced dimension, the neural networks struggle as the complexity of the optimization problem is increased. This problem can usually be alleviated with the addition of more data as shown in panels (c), but there are still no guarantees that the optimal neural network is found. Since we use a highly-nonlinear 5-layer network to represent the linear \(\varphi\), this issue is exacerbated for this problem and the addition of more data only slightly improves the accuracy as seen in panels (c). In Appendix~\ref{app:error}, we show the relative test error during the training process and observe that some overfitting occurs, indicating that the optimization problem is stuck in a local minimum away from the optimal linear solution. This is an issue that is inherent to most deep neural network based methods. Our results suggest that building in \textit{a priori} information about the solution map, such as linearity, can be very beneficial for the approximation scheme as it can help reduce the complexity of the optimization. To compare to the method of Chkifa \cite{cohenalgo}, we will assume the following model for the inputs, \begin{equation} \label{eq:cohenmodel} f = \sum_{j=1}^\infty \xi_j \phi_j\,, \end{equation} where the \(\xi_j \sim U(-1,1)\) form an i.i.d. sequence, and \(\phi_j = \sqrt{\lambda_j} \psi_j \) where \(\lambda_j, \psi_j\) are the eigenvalues and eigenfunctions of the operator \((-\Delta + 100I)^{-4.1}\) with a zero Neumann boundary. This construction ensures that there exists \(p \in (0,1)\) such that \((\|\phi_j\|_{L^\infty})_{j \geq 1} \in \ell^p(\mathbb{N};\mathbb{R})\), as required for the theory in \cite{cohenanalytic}. We assume this model for \(f\), the r.h.s. of the Poisson equation, and consider the solution operator \(\Psi : \ell^\infty(\mathbb{N};\mathbb{R}) \to H_0^1(D; \mathbb{R})\) mapping \((\xi_j)_{j \geq 1} \mapsto u\).
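A draw from the model \eqref{eq:cohenmodel} can be sketched in the same way as the sampler for \(\mu_{\text{G}}\) above, replacing the Gaussian coefficients by uniform ones and adjusting the spectral decay; again, this is illustrative only.
\begin{verbatim}
import numpy as np

def sample_f_model(n=64, K=32, rng=None):
    # f = sum_j xi_j phi_j with xi_j ~ U(-1, 1) i.i.d. and
    # phi_{jk} = sqrt(lambda_{jk}) psi_{jk}, where
    # lambda_{jk} = (pi^2 (j^2 + k^2) + 100)^{-4.1} and psi_{jk} are
    # the normalized Neumann cosine eigenfunctions on (0,1)^2.
    rng = np.random.default_rng() if rng is None else rng
    x = np.linspace(0.0, 1.0, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = np.zeros((n, n))
    for j in range(K):
        for k in range(K):
            lam = (np.pi**2 * (j**2 + k**2) + 100.0) ** (-4.1)
            psi = ((np.sqrt(2.0) if j else 1.0) * np.cos(j * np.pi * X)
                   * (np.sqrt(2.0) if k else 1.0) * np.cos(k * np.pi * Y))
            f += rng.uniform(-1.0, 1.0) * np.sqrt(lam) * psi
    return f
\end{verbatim}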
Figure \ref{fig:cohen} panels (a)-(b) show an example input from \eqref{eq:cohenmodel} and its corresponding solution \(u\). Since this operator is linear, its Taylor series representation simply amounts to \begin{equation} \label{eq:taylorsum} \Psi((\xi_j)_{j \geq 1}) = \sum_{j=1}^\infty \xi_j \eta_j \end{equation} where \(\eta_j \in H_0^1(D;\mathbb{R})\) satisfy \[-\Delta \eta_j = \phi_j.\] This is easily seen by plugging in our model \eqref{eq:cohenmodel} for \(f\) into the Poisson equation and formally inverting the Laplacian. We further observe that the \(\ell^1(\mathbb{N};\mathbb{R})\) summability of the sequence \((\|\eta_j\|_{H_0^1})_{j \geq 1}\) (inherited from \((\|\phi_j\|_{L^\infty})_{j \geq 1} \in \ell^p(\mathbb{N};\mathbb{R})\)) implies that our power series \eqref{eq:taylorsum} is summable in \(H_0^1(D;\mathbb{R})\). Combining the two observations yields analyticity of \(\Psi\) with the same rates as in \cite{cohenanalytic} obtained via Stechkin's inequality. For a proof, see Theorem \ref{thm:poissonanalytic}. We employ the method of Chkifa simply by truncation of \eqref{eq:taylorsum} to \(d\) elements, noting that in this simple linear setting there is no longer a need for greedy selection of the index set. We note that this truncation requires \(d\) PDE solves of the Poisson equation; hence we compare to our method when using \(N=d\) data points, since this also counts the number of PDE solves. Since the problem is linear, we use a linear map to interpolate the PCA latent spaces and furthermore set the reduced dimension of our PCAs to \(N\). Panel (c) of Figure \ref{fig:cohen} shows the results. We see that the method of Chkifa outperforms our method for any fixed number of PDE solves, although the empirical rate of convergence appears very similar for both methods. Furthermore, we highlight that while our method appears to have a larger error constant than that of Chkifa, it has the advantage that it requires no knowledge of the model \eqref{eq:cohenmodel} or of the Poisson equation; it is driven entirely by the training data. \subsection{Darcy Flow} \label{sec:numdarcy} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{logdarcy1} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{logdarcy1_r421} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{logdarcy_moredata} \caption{} \end{subfigure} \caption{Relative test errors on the Darcy flow problem with log-normal coefficients. Using \(N=1024\) training examples, panel (a) shows the errors as a function of the resolution while panel (b) fixes a \(421 \times 421\) mesh and shows the error as a function of the reduced dimension. Panel (c) only shows results for our method using a neural network, fixing a \(421 \times 421\) mesh and showing the error as a function of the reduced dimension for different amounts of training data.} \label{fig:lognormaldarcy} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{piecedarcy1} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{piecedarcy1_r421} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{piecedarcy_moredata} \caption{} \end{subfigure} \caption{Relative test errors on the Darcy flow problem with piecewise constant coefficients.
Using \(N=1024\) training examples, panel (a) shows the errors as a function of the resolution while panel (b) fixes a \(421 \times 421\) mesh and shows the error as a function of the reduced dimension. Panel (c) only shows results for our method using a neural network, fixing a \(421 \times 421\) mesh and showing the error as a function of the reduced dimension for different amounts of training data.} \label{fig:piececonstdarcy} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{timing1} \caption{Online} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{timing2} \caption{Offline} \end{subfigure} \caption{The online and offline computation times for the Darcy flow problem with piecewise constant coefficients. The number of training examples \(N=1024\) and grid resolution \(421 \times 421\) are fixed. The results are reported in seconds and all computations are done on a single GTX 1080 Ti GPU. } \label{fig:timing} \end{figure} \begin{figure}[t] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{logdarcy1_interp} \caption{\(\mu_{\text{L}}\)} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{piecedarcy1_interp} \caption{\(\mu_{\text{P}}\)} \end{subfigure} \caption{Relative test errors on both Darcy flow problems with reduced dimension \(d=70\), training on a single mesh and transferring the solution to other meshes. When the training mesh is smaller than the desired output mesh, the PCA bases are interpolated using cubic splines. When the training mesh is larger than the desired output mesh, the PCA bases are sub-sampled.} \label{fig:meshinterp} \end{figure} We now consider the input-output map \(\Psi: L^\infty(D;\mathbb{R}_+) \to H^1_0(D;\mathbb{R})\) mapping \(a \mapsto u\) in \eqref{eq:darcy} with \(f(s) = 1\) \(\forall s \in D\) fixed. In this setting, the solution operator is nonlinear and is locally Lipschitz as a mapping from \(L^\infty(D;\mathbb{R}_+)\) to \(H^1_0(D;\mathbb{R})\) \cite{DHS12}. However, our results require a Hilbert space structure, and we view the solution operator as a mapping from \(L^2(D;\mathbb{R}_+) \supset L^\infty(D;\mathbb{R}_+)\) into \(H^1_0(D;\mathbb{R})\), noting that we will choose the probability measure $\mu$ on \(L^2(D;\mathbb{R}_+)\) to satisfy $\mu(L^\infty(D;\mathbb{R}_+))=1.$ In this setting, $\Psi$ is not locally Lipschitz and hence Theorem \ref{thm:approximation} is not directly applicable. Nevertheless, our methodology exhibits competitive numerical performance. See rows 3 and 4 of Figure \ref{fig:examples} for examples. Figure \ref{fig:lognormaldarcy} (a) shows the relative test errors as a function of the resolution when \(a \sim \mu = \mu_{\text{L}}\) is log-normal while Figure \ref{fig:piececonstdarcy} (a) shows them when \(a \sim \mu = \mu_{\text{P}}\) is piecewise constant. In both settings, we see that the error in our method is invariant to mesh-refinement. Since the problem is nonlinear, the neural network outperforms the linear map. However, we see the same issue as in Figure \ref{fig:ellipticproblem} where increasing the reduced dimension does not necessarily improve the error due to the increased complexity of the optimization problem. Panels (b) of Figures \ref{fig:lognormaldarcy} and \ref{fig:piececonstdarcy} confirm this observation. This issue can be alleviated with additional training data.
Indeed, panels (c) of Figures \ref{fig:lognormaldarcy} and \ref{fig:piececonstdarcy} show that the error curve is flattened with more data. We highlight that these results are consistent with our interpretation of Theorem~\ref{thm:limit}: the reduced dimensions $d_\X, d_\Y$ are determined first by the properties of the measure $\mu$ and its pushforward, and the amount of data is then chosen to ensure that the finite data approximation error is of the same order of magnitude as the finite-dimensional approximation error. In summary, the size of the training dataset $N$ should increase with the reduced dimension. For this problem, we also compare to the reduced basis method (RB) when instantiated with PCA. We implement this by a standard Galerkin projection, expanding the solution in the PCA basis and using the weak form of \eqref{eq:darcy} to find the coefficients. We note that the errors of both methods are very close, but we find that the online runtime of our method is significantly better. Letting \(K\) denote the mesh-size and \(d\) the reduced dimension, the reduced basis method has a runtime of \(\mathcal{O}(d^2K + d^3)\) while our method has the runtime \(\mathcal{O}(dK)\) plus the runtime of the neural network which, in practice, we have found to be negligible. We show the online inference time as well as the offline training time of the methods in Figure \ref{fig:timing}. While the neural network has the highest offline cost, its small online cost makes it a more practical method. Indeed, without parallelization when \(d=150\), the total time (online and offline) to compute all 5000 test solutions is around 28 hours for the RB method. On the other hand, for the neural network, it is 28 minutes. The difference is pronounced when needing to compute many solutions in parallel. Since most modern architectures are able to internally parallelize matrix-matrix multiplication, the total time to train and compute the 5000 examples for the neural network is only 4 minutes. This issue can, however, be slightly alleviated for the reduced basis method with more aggressive multi-core parallelization. We note that the linear map has the lowest online cost and only a slightly worse offline cost than the RB method. This makes it the most suitable method for linear operators such as those presented in Subsection \ref{sec:numlip} or for applications where larger levels of approximation error can be tolerated. We again note that the image-to-image regression approach of \cite{surrogatemodeling} does not scale with the mesh size. We do however acknowledge that for the small meshes for which the method was designed, it does outperform all other approaches. This raises the question of whether one can design neural networks that match the performance of image-to-image regression but remain invariant with respect to the size of the mesh. The contemporaneous work \cite{neuralopour} takes a step in this direction. Lastly, we show that our method also has the ability to transfer a solution learned on one mesh to another. This is done by interpolating or sub-sampling both the input and output PCA bases from the training mesh to the desired mesh. Justifying this requires a smoothness assumption on the PCA basis; we are, however, not aware of any such results and believe this is an interesting future direction. The neural network is fixed and does not need to be re-trained on a new mesh. We show this in Figure \ref{fig:meshinterp} for both Darcy flow problems.
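A minimal sketch of the interpolation step is given below; the function name is ours, and any re-orthonormalization of the interpolated basis is omitted. For transfer to a coarser mesh, the basis is instead sub-sampled directly.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RectBivariateSpline

def transfer_pca_basis(basis, n_new):
    # basis: array of shape (d, n, n) holding d PCA basis functions on
    # a uniform n x n training grid; returns their cubic-spline
    # interpolants on a uniform n_new x n_new grid.
    d, n, _ = basis.shape
    x_old = np.linspace(0.0, 1.0, n)
    x_new = np.linspace(0.0, 1.0, n_new)
    out = np.empty((d, n_new, n_new))
    for k in range(d):
        spline = RectBivariateSpline(x_old, x_old, basis[k])  # cubic
        out[k] = spline(x_new, x_new)
    return out
\end{verbatim}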
We note that when training on a small mesh, the error increases as we move to larger meshes, reflecting the interpolation error of the basis. Nevertheless, this increase is rather small: as shown in Figure \ref{fig:meshinterp}, we obtain a \(3\%\) and a \(1\%\) relative error increase when transferring solutions trained on a \(61 \times 61\) grid to a \(421 \times 421\) grid on each respective Darcy flow problem. On the other hand, when training on a large mesh, we see almost no error increase on the small meshes. This indicates that the neural network learns a property that is intrinsic to the solution operator and independent of the discretization. \subsection{Burgers' Equation} \label{sec:burgers} \begin{figure}[t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{burgers1} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{burgers2} \caption{} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{burgers3} \caption{} \end{subfigure} \caption{Relative test errors on the Burgers' Equation problem. Using \(N=1024\) training examples, panel (a) shows the errors as a function of the resolution while panel (b) fixes a \(4096\) mesh and shows the error as a function of the reduced dimension. Panel (c) only shows results for our method using a neural network, fixing a \(4096\) mesh and showing the error as a function of the reduced dimension for different amounts of training data.} \label{fig:burgers} \end{figure} We now consider the input-output map \(\Psi : L^2(\mathbb{T}^1;\mathbb{R}) \to H^r(\mathbb{T}^1;\mathbb{R})\) mapping \(u_0 \mapsto u|_{t=1}\) in \eqref{eq:burgers} with \(\beta = 10^{-2}\) fixed. In this setting, \(\Psi\) is nonlinear and locally Lipschitz but, since we do not know the precise Lipschitz constant as defined in Appendix \ref{app:approxanalysis-local-Lipschitz}, we cannot verify that the assumptions of Theorem \ref{thm:approximation-local-lip} hold; nevertheless the numerical results demonstrate the effectiveness of our methodology. We take \(u_0 \sim \mu = \mu_{\text{B}}\); see row 5 of Figure \ref{fig:examples} for an example. Figure \ref{fig:burgers} (a) shows the relative test errors as a function of the resolution, again demonstrating that our method is invariant to mesh-refinement. We note that, for this problem, the linear map does significantly worse than the neural network, in contrast to the Darcy flow problem, where the results were comparable. This is likely attributable to the fact that the solution operator for Burgers' equation is more strongly nonlinear. As before, we observe from Figure \ref{fig:burgers} panel (b) that increasing the reduced dimension does not necessarily improve the error due to the increased complexity of the optimization problem. This can again be mitigated by increasing the volume of training data, as indicated in Figure \ref{fig:burgers}(c); the curve of error versus reduced dimension is flattened as \(N\) increases. \section{Conclusion} \label{sec:conclusion} In this paper, we proposed a general data-driven methodology that can be used to learn mappings between separable Hilbert spaces. We proved consistency of the approach when instantiated with PCA in the setting of globally Lipschitz forward maps. We demonstrated the desired mesh-independent properties of our approach on parametric PDE problems, showing good numerical performance even on problems outside the scope of the theory.
This work leaves many interesting directions open for future research. Understanding the interplay between the reduced dimension and the amount of data needed requires a deeper understanding of neural networks and their interaction with the optimization algorithms used to produce the approximation architecture. Even if the optimal neural network is found by that optimization procedure, the question of the number of parameters needed to achieve a given level of accuracy, and how this interacts with the choice of reduced dimensions $d_\X$ and $d_\Y$ (the choice of which is determined by the input space probability measure), warrants analysis in order to reveal the computational complexity of the proposed approach. Furthermore, the use of PCA limits the scope of problems that can be addressed to Hilbert spaces, rather than general Banach spaces; even in Hilbert space, PCA may not be the optimal choice of dimension reduction. The development of autoencoders on function space is a promising direction that has the potential to address these issues; it also has many potential applications that are not limited to deployment within the methodology proposed here. Finally, we also wish to study the use of our methodology in more challenging PDE problems, such as those arising in materials science, as well as for time-dependent problems such as multi-phase flow in porous media. Broadly speaking we view our contribution as a first step in the development of methods that generalize the ideas and applications of neural networks by operating on, and between, spaces of functions. \section*{Acknowledgments} The authors are grateful to Anima Anandkumar, Kamyar Azizzadenesheli, Zongyi Li and Nicholas H. Nelsen for helpful discussions in the general area of neural networks for PDE-defined maps between Hilbert spaces. The authors thank Matthew M. Dunlop for sharing his code for solving elliptic PDEs and generating Gaussian random fields. The work is supported by MEDE-ARL funding (W911NF-12-0022). AMS is also partially supported by NSF (DMS 1818977) and AFOSR (FA9550-17-1-0185). BH is partially supported by a Von K{\'a}rm{\'a}n instructorship at the California Institute of Technology. \bibliographystyle{plain}
\section{Introduction} Axions are pseudo-Nambu-Goldstone bosons that arise as a consequence of the spontaneous breaking of a chiral global $U(1)$ symmetry. Such a symmetry, if anomalous under $SU(3)_c$ in quantum chromodynamics (QCD), may play an important role in resolving the strong CP problem \cite{Peccei:1977hh,Peccei:1977ur,Weinberg:1977ma,Wilczek:1977pj}. Nonetheless, the interest in axions extends well beyond the QCD axion; axions or ALPs arise in a variety of theories (e.g.~\cite{Dienes:1999gw,Gelmini:1980re,Davidson:1981zd,Wilczek:1982rv,Cicoli:2013ana}). Furthermore, axions are attractive dark-matter candidates, which can be nonthermally produced via the vacuum misalignment mechanism \cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah} or the decay of topological defects \cite{Kibble:1976sj,Kibble:1980mv,Davis:1985pt,Davis:1986xc,Harari:1987ht,Battye:1993jv}. The former mechanism proceeds as follows. In the early Universe, the axion field is frozen at an initial field value due to Hubble friction. At later times, when the Hubble friction becomes comparable to the axion mass, the axion begins to roll, and its subsequent oscillations around the minimum of the potential are characterized by an energy density that is redshifted in a matter-like manner. This behavior is expected to persist until the present moment, thereby providing a natural mechanism for dark matter. This naive picture of the vacuum misalignment mechanism may be altered if the axion $a$ couples to a non-abelian gauge sector in a thermal bath via the operator, \begin{align} {\cal L}\supset \frac{\alpha}{8\pi} \frac{a}{f_a} F^b_{\mu\nu}\widetilde{F}^{b\mu\nu}\,, \label{Eq: Lagrangian first} \end{align} where $F^b_{\mu\nu}$ is a gauge field strength tensor and $f_a$ the axion decay constant. Indeed, in the seminal work in Ref.~\cite{McLerran:1990de}, the existence of non-perturbative transitions in the high-temperature QCD plasma was demonstrated and their impact on the axion evolution was studied. These transitions describe thermal fluctuations in the topological charge that are reminiscent of the sphaleron processes in the electroweak sector. Despite some differences at the technical level, we will follow standard practice and refer to these thermal fluctuations as \textit{strong sphalerons} for simplicity. The main effect of the strong sphalerons consists in the appearance of an extra friction term $\Upsilon(T)$ in the axion equation of motion (EOM), \begin{align} {\ddot{a}}+\left[3 H + \Upsilon(T)\right]{\dot{a}}+ V'(a)=0\,, \end{align} where $H$ is the Hubble parameter. In the context of QCD, the authors of Ref.~\cite{McLerran:1990de} were able to show that the net effect of this new friction term turns out to be very weak. It ends up being suppressed by small fermion Yukawa couplings in the Standard Model (SM), and thus the friction term is only active at high temperatures when the axion field is still frozen, leaving no significant impact on the prediction for the QCD axion relic density. Nonetheless, thermal sphaleron transitions may play an important role in the axion evolution if they arise from a dark/hidden thermal bath, different from that of QCD and not containing any light fermions. The main aim of the present work is precisely to study in a general context how the axion dark-matter relic density may be modified due to thermal friction arising from such a dark sector.
As we will show, in most cases, sphalerons in a hidden sector result in a damping of the coherent axion oscillations and thus in a suppression of the axion abundance; in other scenarios, however, the friction can delay the onset of oscillations and thus enhance the axion abundance. We shall in particular derive an analytical formula for the adiabatic invariant that remains constant in the presence of thermal friction\,---\,the \emph{frictional adiabatic invariant}\,---\,taking into account the one-loop running of the gauge coupling. This will allow us to apply our novel mechanism\,---\,the \textit{frictional misalignment mechanism}\,---\,to a broad range of scenarios. \definecolor{blueUnder}{HTML}{4169e1} \definecolor{blueOver}{HTML}{20b2aa} \begin{figure*}[ht] \centering \includegraphics[width=0.65\textwidth]{AxionPhoton-Friction.pdf} \caption{Axion coupling to photons $g_{a\gamma\gamma}\sim {\alpha}/({2\pi f_a})$ versus its mass for the case where the axion couples to two separate non-abelian gauge groups as studied in \cref{Sec:ALP2GaugeGroups}. The presence of friction can open up the ALP-dark-matter parameter space: For different values of the enhancement parameter $\lambda$, the correct ALP DM relic density is obtained for the traditionally underabundant region along the {\color{blueUnder} \bf blue} lines and for the traditionally overabundant region along the {\color{blueOver} \bf cyan} lines (assuming $\alpha_{\rm thr}=0.1$ and $g_{\rho,s}\sim {\cal O}(10)$, see \cref{Sec:ALP2GaugeGroups} for details). Experimental bounds adapted from \href{https://cajohare.github.io/AxionLimits/}{\color{black} \texttt{AxionLimits}}~\cite{AxionLimits}.} \label{fig:ALP-lambda} \end{figure*} Our work builds upon earlier work on thermal friction effects in cosmological axion models. In the past, such effects have been largely explored~\cite{Moore:2010jd,Laine:2016hma,Altenkort:2020axj} in other contexts, such as warm inflation \cite{Berera:1995ie,Berera:1995wh,Berera:1999ws,Berera:1998px,Berera:2008ar,Bastero-Gil:2016qru,Berghaus:2019whh,Kamali:2021ugx,Berera:2020dvn,Yokoyama:1998ju,Visinelli:2011jy,Kamali:2019ppi,Mirbabayi:2022cbt}, late-time quintessence \cite{Berghaus:2020ekh}, leptogenesis \cite{Buchmuller:2005eh,Domcke:2020kcp} and early dark energy \cite{Berghaus:2019cls,Berghaus:2020ekh}; see Section~VI in Ref.~\cite{Agrawal:2022yvu} for a recent review. The present work also contributes to the larger effort in the community to explore the different possibilities for the cosmological axion evolution which open up once the assumptions of the canonical misalignment picture are relaxed or modified.
Indeed, a variety of such scenarios have been proposed in the literature, e.\,g., large~\cite{Co:2018mho,Takahashi:2019pqf,Arvanitaki:2019rax,Huang:2020etx} or small~\cite{Dvali:1995ce,Banks:1996ea,Choi:1996fs,Co:2018phi} misalignment angles, parametric resonance~\cite{Co:2017mop,Harigaya:2019qnl,Co:2020dya}, kinetic misalignment mechanism~\cite{Co:2019jts,Chang:2019tvx,Co:2019wyp,Domcke:2020kcp,Co:2020jtv,Harigaya:2021txz,Chakraborty:2021fkp,Kawamura:2021xpu,Co:2021qgl,Gouttenoire:2021wzu,Gouttenoire:2021jhk}, trapped misalignment~\cite{DiLuzio:2021gos}, axion fragmentation~\cite{Fonseca:2019ypl,Eroncel:2022abd,Eroncel:2022abc}, varying axion decay constant~\cite{Allali:2022yvx}, interaction with monopoles~\cite{Fischler:1983sc,Nakagawa:2020zjr} or modifications in the cosmological history of the Universe~\cite{Visinelli:2009kt,Arias:2021rer}, including entropy injection~\cite{Dine:1982ah,Steinhardt:1983ia} or non-standard inflation scenarios~\cite{Dimopoulos:1988pw,Davoudiasl:2015vba,Hoof:2017ibo,Graham:2018jyp,Takahashi:2018tdu,Kitajima:2019ibn}. The rest of the paper is organized as follows. In \cref{Sec:TheoreticalFramework}, we will provide the necessary foundation by reviewing the standard axion misalignment mechanism. We will also explore the constraints that apply to a cosmological hidden thermal bath. In \cref{Sec:FrictionalMis}, we will then analyze the consequences of the existence of a hidden thermal bath in the context of the vacuum misalignment mechanism. In \cref{Sec:MinimalALP}, we will apply the machinery developed in the previous sections to the most minimal ALP dark matter model, in which the hidden strong dynamics that generate the axion mass via instanton effects also yield the thermal friction. In \cref{Sec:ALP2GaugeGroups}, we will generalize these results by assuming that the gauge group that provides the friction is distinct from the one that provides the axion mass. \cref{Sec:QCDAxion}, finally, is dedicated to applying our results to the special case of the QCD axion. \cref{sec:conclusions} contains our conclusions and a comparison to other results in the literature. \section{Framework and Assumptions} \label{Sec:TheoreticalFramework} \noindent\textbf{Vacuum misalignment mechanism\,---\,}% Let us briefly review the prediction for the axion relic density in terms of the misalignment mechanism in the absence of any extra sources of friction. In general, there are also other non-thermal production mechanisms for the axion such as the decay of topological defects. In this paper, we will, however, assume the pre-inflationary scenario and therefore focus solely on the misalignment mechanism. The EOM for a classical, non-relativistic and homogeneous scalar field $\theta_a\equiv a/f_a$ in an expanding Friedmann-Lema\^itre-Robertson-Walker Universe reads \begin{align} \ddot{\theta}_a+3 H \dot{\theta}_a+\frac{1}{f_a^2}V'(\theta_a)=0\,, \end{align} where $V'(\theta_a)={dV(\theta_a)}/{d\theta_a}$ and the spatial gradients have been neglected. For a field oscillating near its minimum, $V'(\theta_a)\simeq m_a^2(T)f_a^2\, \theta_a$ is a good approximation, and the differential equation resembles that of a damped harmonic oscillator whose solution depends on the interplay between the friction term\,---\,the Hubble parameter $H(T)$\,---\,and the oscillator frequency\,---\,the axion mass $m_a(T)$. At high temperatures $T\gg \sqrt{m_a M_p}$, the axion field is frozen at an arbitrary initial misalignment angle $\theta_i$ due to Hubble friction, since $H(T)\gg m_a(T)$.
As the Universe cools down and the Hubble parameter decreases, the axion mass overcomes the friction and the field starts to oscillate at a temperature $T_{\rm osc}$ defined as $ H(T_{\rm osc})\sim m_a(T_{\rm osc})\equiv m_{\rm osc}$, where the exact numerical prefactors depend on the temperature scaling of the mass. Using the Wentzel–Kramers–Brillouin (WKB) approximation, the final relic density can be expressed as \begin{align} \frac{\rho_{a,0}}{\rho_{\rm DM}} \simeq 28 \sqrt{\frac{m_{a}}{\mathrm{eV}}}\sqrt{\frac{m_a}{m_{\rm osc}}} \left(\frac{\theta_{i}\,f_a}{ 10^{12}\, \mathrm{GeV}}\right)^{2} \mathcal{F}(T_{\rm osc})\,, \label{Eq:simple ALP relic dansity ratio CMB} \end{align} where $m_a\equiv m_a(T=0)$ is the axion mass at zero temperature, $\mathcal{F}(T_{\rm osc}) \equiv\left(g_{\epsilon}(T_{\rm osc}) / 3.38\right)^{3/4}\left(3.93/g_{s}(T_{\rm osc})\right)$ is an $\mathcal{O}(1)$ factor, and $\rho_{\rm DM}\simeq 1.26\, {\mathrm{keV}}/{\mathrm{cm}^{3}}$ from Planck 2018 data~\cite{Aghanim:2018eyx}. Through the factor $\sqrt{m_a/m_{\rm osc}}$, the relic density is affected by the temperature dependence of the axion mass, which may be constant $m_a(T)=m_a$, such that $\sqrt{m_a/m_{\rm osc}}=1$, or present a power-like dependence if the axion obtains its mass from a confining gauge group, \begin{align} m_a(T)\simeq \left\{\begin{array}{lll} \displaystyle m_a &\qquad \text{ for } \ \ \ T<T_{c}\\ \displaystyle m_{a}\,\left(\frac{ T_{c}}{T}\right)^{\beta} & \qquad \text{ for } \quad T>T_c \,, \end{array}\right. \label{Eq: axion mass temp} \end{align} which leads to \begin{align} \sqrt{\frac{m_a}{m_{\rm osc}}}\simeq \left(\frac{\sqrt{m_a M_p}}{T_c}\right)^{\frac{\beta}{\beta+2}} \,. \end{align} \medskip \noindent\textbf{Hidden thermal bath\,---\,}% We shall assume the existence of a dark thermal bath characterized by a temperature $T'$ and composed of non-abelian $SU(N)$ gauge bosons in the absence of fermions. This hidden thermal bath is secluded from the SM one, and we remain agnostic as to the means of its generation. Eventually, the dark gauge group either confines or becomes spontaneously broken. We will explore both options in this work and analyze the possible outcomes. In the case of confinement of a pure non-abelian gauge field, one generally expects the energy of the dark sector to be converted into glueballs. The glueballs subsequently evolve as dark matter and may in principle overclose the Universe prematurely~\cite{Soni:2016gzf,Boddy:2014yra}. In order to avoid this, it is necessary to assume the decay of the glueballs to some light degrees of freedom such as moduli \cite{Halverson:2016nfq}. On the other hand, in the case of spontaneous symmetry breaking, the massive degrees of freedom are assumed to decay rapidly into the radiation of a remaining unbroken $U(1)$ subgroup of the initial $SU(N)$ gauge group. In either case, we assume that this process takes place rapidly so that from temperatures greater than the electroweak scale all the way down to recombination the energy density of the dark sector redshifts like radiation.
The temperature of this leftover radiation is constrained by limits on the effective number of neutrino species that can be inferred from the cosmic microwave background (CMB), \begin{equation} \Delta N_{\rm eff}= \frac{8}{7}\left(\frac{11}{4}\right)^\frac{4}{3}\frac{\rho_X}{\rho_\gamma}\Bigg|_{T=T_{\rm rec}} < 0.3 \text{ at } 95\% \text{ C.L.} \,, \label{eq:CMB-bound} \end{equation} where $\rho_\gamma$ is the energy density of photons and $\rho_X$ is the energy density of the byproducts of the dark gauge field, and the bound comes from the TT,\,TE,\,EE,\,lowE\,$+$\,lensing\,$+$\,BAO Planck 2018 data~\cite{Aghanim:2018eyx}. We use $\rho^{(\prime)} = \pi^2/30 \,g_{\rho}(T^{(\prime)})\, T^{(\prime)4}$ for the energy density and $s^{(\prime)}= 2\pi^2/45\,g_{s}(T^{(\prime)})\,T^{(\prime)3}$ for the entropy density of each of the thermal baths. Here and in the rest of the paper, primed variables refer to the dark thermal bath. Assuming that the entropy of the SM and the entropy of the dark sector are separately conserved (since they are not interacting with each other), one can relate the temperature ratio at recombination to the temperature ratio at some high reference temperature, $\xi \equiv T'_0/T_0$. Under these assumptions, we find that \begin{equation} \Delta N_{\rm eff}\simeq 4.4\frac{g'_{\rm \rho}(T_{\rm rec})}{g_{\rm \rho,SM}(T_{\rm rec})}\left[\frac{g'_{\rm s}(T_0)g_{\rm s,SM}(T_{\rm rec})}{g'_{\rm s}(T_{\rm rec})g_{\rm s,SM}(T_0)}\right]^{4/3}\,\xi^4\,, \end{equation} with $g_{\rm s, SM}(T_{\rm rec})= 3.91$, $g_{\rm \rho, SM }(T_{\rm rec})=3.36$, $g_{\rm s,SM}(T_0)=106.75$ and assuming $g'_{\rm s}=g'_{\rm \rho}$, we thus obtain \begin{equation} \Delta N_{\rm eff}=0.016 \times n^{-1/3}\,\left(2N_c^2-2\right)^{4/3}\xi^4\,, \end{equation} where $n$ is the number of degrees of freedom of the byproducts and $N_c$ is the number of colors of the gauge field. With the simplest assumption, $n=2$, we can identify the dark thermal bath temperature with the SM temperature only for the $N_c=2$ case, while the $N_c=3$ case requires a small suppression of $T'_{0}$ compared to $T_{0}$ by about ten percent in order to be consistent with the upper limit set by CMB observations. Larger gauge groups would require a further suppression of the temperature ratio. This limit is also expected to be improved by CMB Stage-4 observations with a projected sensitivity of $\Delta N_{\rm eff}< 0.03$ \cite{CMB-S4:2016ple}. For concreteness, we will display our results for $N_c=3$ in the following and work with $\xi = 0.86$ which is the value that saturates the bound in \cref{eq:CMB-bound}. Note that, depending on the strength of the coupling between the axion and dark sector, it is possible that the axion may thermalize with the dark gauge field. In this case, one would need to add one degree of freedom for the axion in the dark thermal bath. This, however, only marginally changes the bound on the temperature ratio and hence, for simplicity, we disregard the thermalization of the axion with the dark sector.
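As an illustrative numerical check of these statements (assuming \(n=2\) and the rounded prefactor \(0.016\)):
\begin{verbatim}
# Solve Delta N_eff = 0.016 n^{-1/3} (2 Nc^2 - 2)^{4/3} xi^4 = 0.3
# for the maximal temperature ratio xi, with n = 2 byproduct dof.
for Nc in (2, 3, 4):
    pref = 0.016 * 2.0 ** (-1.0 / 3.0) * (2 * Nc**2 - 2) ** (4.0 / 3.0)
    xi_max = min(1.0, (0.3 / pref) ** 0.25)
    print(Nc, round(xi_max, 2))
# Nc = 2 allows xi = 1, while Nc = 3 gives xi_max ~ 0.87, consistent
# with the value xi = 0.86 used above once the rounding of the 0.016
# prefactor is taken into account; Nc = 4 requires xi_max ~ 0.71.
\end{verbatim}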
At lower temperatures, the relation between the temperature of the dark thermal bath and the temperature of the SM is well described by \begin{align} T'=\xi\left(\frac{g_{\rm s, SM}(T)\;g'_{\rm s}(T'_0)}{g_{\rm s, SM}(T_0)\;g'_{\rm s}(T')}\right)^{1/3} T \,, \label{eq:temperature-relation} \end{align} where $g_{\rm s,SM}(T)$ describes the evolution of the entropic degrees of freedom of the SM, while the function $g'_{\rm s}(T')$ is a step function equal to $2(N_c^2-1)$ at temperatures higher than the confinement scale or the temperature of spontaneous symmetry breaking (whichever comes first) and equal to 2 at lower temperatures. \section{Frictional misalignment} \label{Sec:FrictionalMis} We are now equipped to study the effect of friction due to the dark thermal bath on the axion evolution. The EOMs for the axion-gauge field system take the form \begin{align} \ddot{\theta}_a+\left[3 H + \Upsilon(T')\right]\dot{\theta}_a&=-\frac{1}{f_a^2}V'(\theta_a) \,,\label{eq:eom}\\ \dot{\rho}_{\rm dr}+4H \rho_{\rm dr}&=f_a^2\,\Upsilon(T')\, \dot{\theta}_a^2 \,.\label{eq:eom2} \end{align} At weak gauge coupling, $\alpha \lesssim 0.1$, and if the Hubble rate is small compared to the rate of thermalization, $H< \alpha^2 T'$, the sphaleron transitions induce an effective friction $\Upsilon(T')$ in the axion EOM that depends on the sphaleron rate $\Gamma_{\rm sph}$, whose general expression can be found in \cref{App:SphlaeronValerie}. For our purposes, the friction term is well approximated by \begin{equation} \Upsilon(T')=\frac{\Gamma_{\rm sph}}{2 T' f_a^2}\simeq1.8\times\frac{N_c^2-1}{N_c^2}\frac{\left(N_c \alpha\right)^5 T'^3}{2 f_a^2}\,. \label{eq:friction} \end{equation} The axion potential will be assumed to be $V(\theta_a)=m_a^2(T) f_a^2 \left[1-\cos(\theta_a)\right]$, with a temperature-dependent axion mass as in \cref{Eq: axion mass temp} with $\beta=4$, which is commonly expected in the dilute instanton gas approximation if the axion mass is generated by strong dynamics. For the application at hand, the energy of the gauge field is always much greater than the energy of the axion, and hence the axion-induced backreaction in \cref{eq:eom2} will always be negligible. In practice, we therefore neglect \cref{eq:eom2} and only solve \cref{eq:eom} in the presence of a dark plasma that redshifts like radiation, $\rho_{\rm dr}\propto a^{-4}$ (where $a$ is the scale factor of the Universe). This assumption always holds in our scenario, as we checked a posteriori. \medskip \noindent\textbf{Running gauge coupling constant\,---\,}% The dynamics in our mechanism span a substantial range of energies and therefore the running of the dark gauge coupling constant cannot be neglected. The coupling of the dark sector as a function of temperature may be approximated at one loop by \begin{align} \alpha\left(T'\right)=\frac{4\pi}{\bar{b}_0 N_c}\,\frac{1}{\ln\left(T'^2/\Lambda^2\right)} \,, \label{eq:running} \end{align} where $\Lambda$ represents the confinement scale in the strongly coupled case and the \textit{would-be} confinement scale if the gauge group becomes spontaneously broken at energies above $\Lambda$. The factor $\bar{b}_0$ is related to the one-loop $\beta$-function coefficient $b_0$ as $\bar{b}_0\equiv 4\pi b_0/N_c$ and takes the value $\bar{b}_0 = 11/3$ in the confining case, which only contains gauge bosons. On the other hand, in the case of spontaneous symmetry breaking, we assume the minimal Higgs content that allows us to break the $SU(N_c)$ down to $U(1)$.
As outlined in \cite{Buccella:1979sk}, this can be achieved with $N_c-2$ complex Higgses in the fundamental representation and one Higgs in the real adjoint representation of $SU(N_c)$. In that case, in the large $N_c$ limit, the beta function coefficient takes the value $\bar{b}_0 = 10/3$. \medskip \noindent\textbf{Onset of oscillations under thermal friction\,---\,}% The first effect of the introduction of friction that we will study consists of a delay of the onset of oscillations. If at early times the friction is dominant, $\Upsilon(T'),\,3H\gg m_a(T)$, the motion of the axion field corresponds to an overdamped oscillator with approximate solution \cite{Berghaus:2019cls} \begin{align} \theta_a(T)\simeq \theta_i {\rm e}^{-\frac{m_a(T)^2}{(5+2\beta)\Upsilon(T')H(T)}}\,, \end{align} in the case where $\Upsilon(T')\gg 3H$, whereas it takes the form \begin{align} \theta_a(T)\simeq \theta_i {\rm e}^{-\frac{m_a(T)^2}{6\left(2+\beta\right) H(T)^2}}\,, \end{align} when $3H\gg \Upsilon(T')$, where we approximate $g_s,\,g_\rho,\,\alpha\simeq {\rm const}$ and $\beta$ is defined in \cref{Eq: axion mass temp}. The expressions above indicate that the onset of oscillations takes place when the exponent is ${\cal O}(1)$. The precise value is best found by comparing with the numerical solution and identifying the prefactor that yields the most accurate results. Using the above, we may write the condition for the onset of oscillations in the presence of friction as \begin{align} m_a(T_{\rm osc})\simeq \left\{\begin{array}{lll} \displaystyle 4 \,H(T_{\rm osc}) &\;,\; 3 H> \Upsilon \\ \displaystyle \frac{10 \Upsilon(T'_{\rm osc})\,H(T_{\rm osc})}{m_a(T_{\rm osc})} & \;,\; 3 H < \Upsilon \end{array}\right.\,, \label{eq:oscillation-temperature} \end{align} depending on whether the thermal friction dominates over the Hubble friction at the onset of oscillations or vice versa. The numerical prefactors on the right-hand side correspond to the values that yield the best agreement with the numerics regarding the late-time comoving number of axions and assume a QCD-like temperature dependence of the mass ($\beta=4$ in \cref{Eq: axion mass temp}). For other values of $\beta$, the prefactors are modified by ${\cal O}(1)$ factors. \medskip \noindent\textbf{Frictional adiabatic invariant\,---\,}% The second and most relevant effect of thermal friction on the axion evolution is the damping of the axionic oscillations, which results in a depletion of its relic abundance. In order to estimate the impact of this effect, we will now derive the quantity that remains constant during the axion evolution after the onset of oscillations: the \emph{frictional adiabatic invariant}. For some general time-dependent friction $\Gamma(t)$, the EOM of the axion takes the form \begin{align} \ddot{\theta}_a+\Gamma(t)\dot{\theta}_a+m_a^2(t)\theta_a=0\,, \end{align} where we expand the cosine of the potential to quadratic order. Via the change of variables \begin{align} \tilde{\theta}_a\equiv g(t)\,\theta_a\;\;\;,\;\;\;g(t) \equiv \;{\rm exp}\left[-\frac{1}{2}\int^t_{t_{\rm osc}}\Gamma(\tilde{t})d\tilde{t}\right]\,, \end{align} the EOM may be transformed into the standard form of a harmonic oscillator with a time-dependent frequency, \begin{align} \ddot{\tilde{\theta}}_a+\omega^2(t) \tilde{\theta}_a=0\,, \end{align} with $\omega^2=m_a(t)^2+\ddot{g}/g +\Gamma \,\dot{g}/g$.
Provided that $\dot{\omega}/\omega \ll \omega$, which will always be true in our case after the onset of oscillations as defined in \cref{eq:oscillation-temperature}, one may use the WKB approximation to write the solution at first order as \begin{align} \tilde{\theta}_a(t)\simeq\frac{\tilde{\theta}_0}{\sqrt{\omega}}\cos\left(\int^t_{t_{\rm osc}}\omega(\tilde{t})\;d \tilde{t}\right)\,, \end{align} and, most importantly, to obtain the adiabatic invariant, \begin{align} A & = \frac{\rho_\theta\left(t\right)}{\omega(t)}\,\exp\left[\int^td\tilde{t}\:\Gamma(\tilde{t})\right] = {\rm const} \,, \end{align} where $\rho_\theta$ is the energy density, $\rho_\theta/f_a^2 \equiv\frac{1}{2} \dot{\theta}_{a}^{2} +\frac{1}{2} m_{a}^{2} \theta_{a}^{2}$. If this general formula is applied to the standard case, in which the only friction is the Hubble expansion, $\Gamma(t)=3H(t)$, then for $m_a\gg H$ and thus $\omega\simeq m_a$, we obtain that the adiabatic invariant corresponds to the comoving number of axions $N_a$, \begin{align} N_a = \frac{\rho_\theta\left(T\right)}{m_a(T)}\,\exp\left[\int^t_{t_{\rm osc}}d\tilde{t}\, 3 H(\tilde{t})\right] = \frac{\rho_\theta a^3}{m_a} = {\rm const} \,. \end{align} In the presence of thermal friction, $\Gamma=\Upsilon+3H$, and the adiabatic invariant becomes \begin{align} \label{eq:adiabatic} A_{\rm fr} = \frac{\rho_\theta\left(T\right)a^3\left(T\right)}{m_a(T)} \exp\left[\int^t d\tilde{t}\:\Upsilon(\tilde{t})\right] = {\rm const} \,. \end{align} Notice that we are still assuming that the frequency of oscillations can be approximated by $\omega\simeq m_a$, even though the onset of oscillations as defined in \cref{eq:oscillation-temperature} takes place before this assumption is satisfied. This generally introduces only an ${\cal O}(1)$ error in the final abundance, as long as the thermal friction does not exceed the Hubble friction by more than an ${\cal O}(10)$ factor, since in those cases the onset of oscillations as defined in \cref{eq:oscillation-temperature} and the moment when $\omega\simeq m_a$ is satisfied are very close to each other, due to the rapid decay of the ratio $\Upsilon(T')/m_a(T)\propto a^{-7}$. For the cases in which there is a greater-than-${\cal O}(10)$ hierarchy between the thermal and the Hubble friction, we will never use this formula to compute the abundance. Instead, we will simply be interested in the maximum amount by which the onset of oscillations can be delayed before $SU(N_c)$ is spontaneously broken. The integral in the exponent, $D\equiv \int d\tilde{t}\:\Upsilon$, can be computed analytically by plugging in the expression for the thermal friction in \cref{eq:friction} and the one-loop running of the gauge coupling in \cref{eq:running}, as shown in \cref{sec:analytical_derivation_of_the_frictional_invariant}. The final result reads \begin{align} D &= C \frac{M_p \Lambda}{f^2}\left[\frac{\tau^3 + \tau^2 + 2 \tau + 6}{\tau^4}\,e^\tau - \textrm{Ei}\left(\tau\right)\right]\,,\nonumber\\ C &\simeq 10\frac{\pi^4}{\bar{b}_0^5}\left(\frac{g'_{\rm s}(T'_{0})}{g_{\rm s,SM}\left(T_0\right)g'_{\rm s}\left(T'_{\rm osc}\right)}\right)^{2/3}\frac{\sqrt{\bar{g}_{\rm \rho,SM}}}{\bar{g}_{\rm s,SM}^{1/3}}\,,\nonumber\\ \tau &= \ln\bigg(\frac{T'}{\Lambda}\bigg)\,, \label{Eq:analytical integral} \end{align} where ${\rm Ei}(z)=-\int^\infty_{-z}dt\;{\rm e}^{-t}/t$ is the exponential integral function and $M_p$ denotes the reduced Planck mass.
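For numerical work, the bracketed function in \cref{Eq:analytical integral} can be evaluated directly with the exponential integral from SciPy. The following sketch (the function names are ours; the limits follow the bracket notation used in the text) computes the exponent $D$ for a given prefactor $C\,M_p\Lambda/f^2$:
\begin{verbatim}
import numpy as np
from scipy.special import expi  # the exponential integral Ei(x)

def bracket(tau):
    # (tau^3 + tau^2 + 2 tau + 6)/tau^4 * e^tau - Ei(tau)
    return ((tau**3 + tau**2 + 2 * tau + 6) / tau**4 * np.exp(tau)
            - expi(tau))

def D_exponent(prefactor, tau_osc, tau_end):
    # D = prefactor * [bracket]^{tau_end}_{tau_osc},
    # with prefactor = C * M_p * Lambda / f^2
    return prefactor * (bracket(tau_end) - bracket(tau_osc))
\end{verbatim}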
The derived \emph{frictional adiabatic invariant} in \cref{eq:adiabatic,Eq:analytical integral} remains constant from the onset of oscillations at $T'_{\rm osc}$ until the effects of friction are turned off at $T'_{\rm end}$, due to either confinement or spontaneous symmetry breaking. Therefore, in order to compute the axion relic density, the result of the integral in \cref{Eq:analytical integral} needs to be evaluated with the corresponding limits. The result can be collected and re-expressed in terms of physical scales as follows, \begin{multline} D \simeq 6.3 \left(\frac{10^8\;{\rm GeV}}{f}\right)^2\left(\frac{\Lambda}{150\;{\rm MeV}}\right)\\ \times \left[\frac{\tau^3 + \tau^2 + 2 \tau + 6}{\tau^4}\,e^\tau - \textrm{Ei}\left(\tau\right)\right]^{\tau_{\rm end}}_{\tau_{\rm osc}}\,, \label{numerical-result} \end{multline} where the degrees of freedom were assumed to be approximately $g_{\rm \rho}\sim g_{\rm s}\sim {\cal O}(10)$, since our mechanism takes place mostly at high temperatures, and, for concreteness, we selected $\bar{b}_0 = 11/3$. Different choices of these parameters change the overall prefactor by an ${\cal O}(1)$ factor. If the running of the coupling constant is mild at the temperatures at which the friction is active, i.e.\ $T\gg \Lambda$, then the formula for the frictional invariant in \cref{Eq:analytical integral} can be further simplified to \begin{align} D &= 24\, C \frac{M_p \Lambda}{f^2}\left[\frac{T'/\Lambda}{\left[\ln(T'/\Lambda)\right]^5}\right]^{T'_{\rm end}}_{T'_{\rm osc}}\,, \end{align} as derived at the end of \cref{App:SphlaeronValerie}. \cref{eq:oscillation-temperature,Eq:analytical integral} constitute the main result of this paper and can be used to compute the axion dark-matter abundance for the various scenarios we will explore in the subsequent sections. The only case that is not covered by the formula above is the one in which the gauge group is spontaneously broken at some temperature before the onset of oscillations, $T'_{\rm end}> T'_{\rm osc}$. In that case, there is no suppression and we simply set $D=0$. Our final result for the axion dark-matter abundance takes the form \begin{align} \frac{\rho_{a,0}}{\rho_{\rm DM}}\simeq 28 \sqrt{\frac{m_{a}}{{\rm eV}}} \sqrt{\frac{m_{a}}{m_{\rm osc}}}\left(\frac{\theta_i\;f_a}{10^{12}\,{\rm GeV} }\right)^2 {\rm e}^{-D} \left(\frac{m_{\rm osc}}{4\,H_{\rm osc}}\right)^{3/2} {\cal F}\,. \label{eq:relic-abundance} \end{align} The various factors have been rearranged so that the result matches the standard result in \cref{Eq:simple ALP relic dansity ratio CMB} in the absence of the factors ${\rm e}^{-D}$ and $\left(\frac{m_{\rm osc}}{ 4 H_{\rm osc}}\right)^{3/2}$. These factors correspond, respectively, to an overall suppression of the abundance due to friction and an enhancement due to the delayed onset of oscillations. It will be shown in the various realizations of our mechanism that either of these factors may dominate, depending on the scenario. The rest of the paper is devoted to applying the main results in \cref{eq:temperature-relation,eq:friction,eq:running,eq:oscillation-temperature,eq:relic-abundance} to the computation of the relic density in various scenarios.
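As an illustration, \cref{eq:relic-abundance} can be assembled as follows; this is a sketch with our own function names, which takes the suppression exponent $D$ (e.g.\ from the previous snippet) and the oscillation quantities as inputs:
\begin{verbatim}
import numpy as np

def relic_ratio(m_a_eV, f_a_GeV, theta_i, m_osc_eV, H_osc_eV, D, F=1.0):
    # rho_{a,0}/rho_DM of eq. (relic-abundance):
    # 28 sqrt(m_a/eV) sqrt(m_a/m_osc) (theta_i f_a / 1e12 GeV)^2
    #    * exp(-D) * (m_osc / (4 H_osc))^(3/2) * F
    return (28 * np.sqrt(m_a_eV) * np.sqrt(m_a_eV / m_osc_eV)
            * (theta_i * f_a_GeV / 1e12)**2
            * np.exp(-D) * (m_osc_eV / (4 * H_osc_eV))**1.5 * F)
\end{verbatim}
In the frictionless limit, $D=0$ and $m_{\rm osc}=4H_{\rm osc}$, and the expression reduces to \cref{Eq:simple ALP relic dansity ratio CMB}.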
\begin{figure} \centering \includegraphics[scale=0.4]{DI-function-plot.pdf} \caption{Contour plot of the overall suppression factor ${\rm e}^{-D}$, where $D$ is given by \cref{numerical-result}, for $f=10^8\;{\rm GeV}$ and $\Lambda=150\; {\rm MeV}$.} \label{fig:exponent} \end{figure} \section{Minimal ALP Dark Matter Model} \label{Sec:MinimalALP} The first scenario we will focus on is the most minimal ALP dark-matter model. In this scenario, there is a single non-abelian gauge group that provides a mass to the ALP via the anomaly while simultaneously introducing the thermal friction term in the axion EOM described in the preceding section. We will investigate its impact on the axion relic abundance. The relevant Lagrangian corresponds to that in \cref{Eq: Lagrangian first}, and the temperature-dependent mass generated by this anomalous coupling corresponds to that in \cref{Eq: axion mass temp}, where the critical temperature corresponds to the confinement scale of the gauge group, $T_c=\Lambda$, the zero-temperature mass of the axion is $m_a\simeq {\Lambda^2}/{f_a}$, and the power-law coefficient predicted by the dilute instanton gas approximation (DIGA) for an $SU(N_c)$ gauge sector with no fermions reads $\beta=\frac{1}{2}(\frac{11}{3}N_c-4)$~\cite{Gross:1980br,Borsanyi:2015cka}. For concreteness, we will show the results for $N_c=3$ and $\beta\sim 4$, as one would expect for QCD within the DIGA.% \footnote{Note that, while the power-law behaviour predicted by DIGA for QCD, $\beta\sim 4$, agrees with some lattice simulations \cite{Borsanyi:2016ksw}, alternative approximations like the interacting instanton liquid model (IILM) suggest $\beta \sim 3.3$, and other lattice computations result in smaller values, $\beta \sim 1$ \cite{Trunin:2015yda}, for QCD.} The final ingredient that needs to be specified in order to apply the machinery developed in the preceding section is the temperature $T'_{\rm end}$ at which the friction term turns off. For $T'\ll \Lambda$, the sphaleron rate is exponentially suppressed, but the expression in \cref{eq:friction} is no longer valid for these temperatures (the approximations break down for $\alpha (T)\gtrsim 0.1$). There is thus some ambiguity regarding the value of the gauge coupling that signals the end of the effect of friction, $\alpha_{\rm thr}\equiv \alpha(T'_{\rm end})$. One may be tempted to be conservative and choose $\alpha_{\rm thr}=0.1$, so that the friction term is completely turned off whenever we are outside the validity of the formula for the sphaleron rate. In this case, thermal friction is only active far above the confinement scale, and even a large friction at such early times would not be imprinted in the final abundance, because at that point the axion has not yet started to oscillate. Nevertheless, we argue that this conservative option underestimates the total effect. Indeed, we expect a non-zero sphaleron rate for larger values of the gauge coupling, up to $\alpha_{\rm thr}\sim 1/3$, even though the accuracy of the sphaleron rate formulas is lost. This approach is consistent with other attempts in the literature to extrapolate the sphaleron rate expression to couplings greater than $0.1$, such as in the case of heavy-ion collisions. Ref.~\cite{Kapusta:2020qdk} concluded that the sphaleron rate remains significant at least up to $\alpha\sim 1/3$, although the rate may be suppressed by up to an order of magnitude with respect to \cref{eq:friction}.
For this reason, we compute the overall axion abundance for two distinct values, $\alpha_{\rm thr}=0.2,\,0.4$, thus demonstrating that the result is quite sensitive to this choice. \begin{figure} \centering \includegraphics[scale=0.30]{Plotfma2.pdf} \includegraphics[scale=0.30]{Plotfma4.pdf} \caption{ Predictions for the values of $\{m_a,\,1/f_a\}$ that reproduce the correct axion DM relic density (black solid lines), taking into account thermal friction for $\alpha_{\rm thr}=0.2$ (\textit{top panel}) and $\alpha_{\rm thr}=0.4$ (\textit{bottom panel}). They deviate from the standard frictionless computation, which is shown as dashed black lines. The red lines are lines of constant depletion factor ${\rm e}^{-D}$, with $D$ defined in \cref{Eq:analytical integral}, and correspond to $10^{-20}$, $10^{-10}$, $10^{-1}$ from top to bottom. The cyan lines are lines of constant enhancement factor $\left(m_{\rm osc}/4 H_{\rm osc}\right)^{3/2}$ and correspond to 100, 10, 2, respectively, from top to bottom.} \label{fig:ALPDM} \end{figure} The results are shown in \cref{fig:ALPDM}, where the points in the $\{m_a,\,1/f_a\}$ plane that can successfully account for the observed dark matter are displayed. Interestingly, depending on the value of $\alpha_{\rm thr}$, the resulting relic density is either suppressed or enhanced with respect to the frictionless case. For $\alpha_{\rm thr}=0.2$, the dominant effect of the friction is the delay of the onset of oscillations, which results in an enhancement of the relic density and thus requires smaller decay constants $f_a$ to reproduce the observed DM density (the black solid line in the top panel of \cref{fig:ALPDM} bends upwards). By contrast, for $\alpha_{\rm thr}=0.4$, the dominant effect is the damping of the oscillations, which reduces the relic density, requiring larger values of $f_a$ (the black solid line in the bottom panel of \cref{fig:ALPDM} bends downwards). Regarding the validity of our results, even though we are extrapolating the sphaleron rate, it is important to note that suppressing it by an order of magnitude would only marginally affect our results. The important point here is that allowing for the extrapolation of the sphaleron rate to higher couplings allows us to capture the effect of friction for temperatures closer to the confinement scale, while the results are only mildly sensitive to the exact value of the friction coefficient. This effect is displayed in \cref{fig:YvsH}. One may observe that by extrapolating our expression to higher values of the gauge coupling, the effects of friction last for a much longer period, whereas if we switch off the friction at $\alpha_{\rm thr}=0.1$, it turns off at too high a temperature for the friction to act on the axion after it has started to roll. Of course, the sphaleron rate (green line) is inaccurate at temperatures lower than the one corresponding to the point $\alpha_{\rm thr}=0.1$, but since one does not expect it to be exponentially suppressed yet, any ${\cal O}(10)$ suppression of the sphaleron rate would only imply a small modification of the lines in \cref{fig:ALPDM}. \begin{figure} \centering \includegraphics[scale=0.45]{plot-Y.pdf} \caption{Gauge friction versus Hubble friction for some example parameter values.
The red dots denote the instant at which we switch off the friction effects, depending on the largest acceptable value of the gauge coupling $\alpha_{\rm thr}$.} \label{fig:YvsH} \end{figure} The main conclusion from our analysis is that the predictions for the dark-matter abundance of the standard calculation in the Minimal ALP scenario are only reliable up to a mass of approximately $m_a\simeq 10^2\;{\rm eV}$; for greater masses, the friction plays an important role and we find a deviation from the standard prediction. This conclusion holds true as long as the axion obtains its mass from instanton effects of a dark non-abelian gauge sector. Further studies on the extrapolation of the sphaleron rate close to the confinement scale are needed in order to elucidate the maximum value of the coupling constant at which the friction is still active, which strongly impacts the axion relic density prediction. Finally, regarding the phenomenological consequences of this minimal model, it is important to note that this minimal ALP cannot couple to photons in the parameter region where the friction terms are important; the reason being that for those values of $\{m_a,\,1/f_a\}$, and assuming an electromagnetic anomaly coefficient $E\sim \mathcal{O}(1)$ \cite{Georgi:1986df}, the corresponding value of the axion coupling to photons is excluded by several cosmological considerations (see e.g.\ Ref.~\cite{Cadamuro:2011fd}). \section{ALP coupled to two Gauge groups} \label{Sec:ALP2GaugeGroups} As another application of the thermal friction effects, we will consider the case in which the potential of the axion and the friction are provided by two separate gauge groups. We will allow for a possible coupling hierarchy between them that could arise in the context of clockwork axions or alignment scenarios \cite{Kim:2004rp,Choi:2014rja,Kaplan:2015fuy,Giudice:2016yja,Long:2018nsl}. The corresponding axion interaction Lagrangian reads \begin{align} {\cal L_{\rm int}}= \frac{\alpha_G}{8\pi}\theta_a G^b_{\mu\nu}\widetilde{G}^{b\mu\nu} + \lambda \frac{\alpha}{8\pi}\theta_a F^b_{\mu\nu}\widetilde{F}^{b\mu\nu}\,, \label{Eq: Lagrangian first2} \end{align} where $G^b_{\mu\nu}$ is the field strength tensor of the gauge group that confines and provides the potential through instanton effects, and $F^b_{\mu\nu}$ is the field strength tensor of the gauge group that provides the friction and will be assumed to be spontaneously broken. The spontaneous breaking of the gauge field $F^b_{\mu\nu}$ allows us to have large friction at early times, while large contributions to the potential are avoided at late times, since the instanton effects are exponentially suppressed after the gauge group is spontaneously broken. We will also assume a hierarchy between the couplings, which is encoded in the parameter $\lambda\gtrsim1$. Such a value of the enhancement parameter may be justified by the alignment mechanism if the enhancement is relatively small \cite{Kim:2004rp}. However, we will also explore very large hierarchies, $\lambda \gg 1$, which are possible in the context of the clockwork mechanism \cite{Choi:2014rja,Kaplan:2015fuy,Giudice:2016yja,Long:2018nsl}, since $\lambda$ grows exponentially with the number $N$ of scalar fields of the full theory, $\lambda=3^N$. In principle, there is no bound on $N$, and hence we will also explore very large $\lambda$ values.
Upon confinement, the axion mass exhibits the temperature dependence in \cref{Eq: axion mass temp} for $T_c=\Lambda_G$ and $m_a\simeq{\Lambda_G^2}/{f_a}$, where $\Lambda_G$ is the confinement scale. Note that this scale differs from $\Lambda$, which corresponds to the would-be confinement scale of the gauge group that causes the friction and which becomes spontaneously broken, see \cref{eq:running}. Taking the above into account and assuming that the onset of oscillations corresponds to $m_{\rm osc}\equiv m_a(T_{\rm osc})\simeq 4 H(T_{\rm osc})$, the prediction for the relic density in \cref{Eq:simple ALP relic dansity ratio CMB} for the frictionless case can be simplified to \begin{align} \frac{\rho_{a,0}}{\rho_{\rm DM}} \simeq \left(\frac{f_a}{7.57 \cdot 10^{9}\; {\rm GeV}}\right)^{5/3}\,\theta_i^2\,\sqrt{\frac{m_a}{{\rm eV}}}\,\frac{\mathcal{F}(T_{\rm osc})}{g_{\rm \rho, SM}(T_{\rm osc})^{1/6}}\,. \label{eq:ALPDM} \end{align} This implies that, for initial misalignments $\theta_i\sim\mathcal{O}(1)$, the ALP abundance correctly accounts for dark matter only for axion scales $f_a \sim 7\cdot 10^{9}\; {\rm GeV}\; ({\rm eV}/m_a)^{3/10}$, while it is underproduced (overproduced) for smaller (larger) $f_a$. The thermal friction effects studied in this work may in principle allow us to open up the parameter space for ALP dark matter. In the traditionally underabundant case, this is possible if the onset of oscillations is delayed with respect to the standard case, while in the traditionally overabundant case, this may happen if the ALP experiences the gauge friction after it has started rolling. We will now investigate both possibilities separately. \medskip \noindent\textbf{Underabundant ALP\,---\,}% In the presence of thermal friction, the true temperature at which the onset of oscillations occurs might be distinct from the standard one and correspond to the lower option in \cref{eq:oscillation-temperature}. Treating $T_{\rm osc}$ as a free parameter, we may derive a new expression for the relic abundance, \begin{align} \frac{\rho_{\rm a,0}}{\rho_{\rm DM}}\simeq \left(\frac{m_a f_a}{T_{\rm osc}^2}\right)^4 \,\theta_i^2\,\left(\frac{T_{\rm osc}}{4.53\cdot10^{-10}\,{\rm GeV}}\right) \frac{{\cal F}}{g_{\rm \rho,SM}(T_{\rm osc})^{3/4}}\,. \label{eq:relic-abundance-ALP} \end{align} For $\theta_i\simeq 1$, it follows that the required temperature $T_{\rm crit}$ for the onset of oscillations that predicts the correct dark matter density reads \begin{align} T_{\rm crit}\simeq 21.6\,{\rm GeV}\left(\frac{m_a\,f_a}{{\rm GeV}^2}\right)^{4/7} \frac{{\cal F}^{1/7}}{g_{\rm \rho,SM}(T_{\rm crit})^{3/28}}\,. \label{eq:critical-temperature} \end{align} In the presence of thermal friction, it is possible to delay the onset of oscillations until this critical temperature, provided the friction is strong enough to prevent the axion from rolling until that moment. The relevant scales of the problem must then obey the following hierarchy, \begin{align} \Lambda\leq T_2 \leq T_{\rm end} &= T_{\rm crit} \leq T_1\,, \label{eq:temperature-conditionsALP} \end{align} where $T_1$ and $T_2$ are the solutions to the upper and lower cases in \cref{eq:oscillation-temperature}, respectively. Our scenario assumes that the spontaneous breaking occurs at just the right moment and that from that moment on the ALP undergoes oscillations as usual. Aside from the aforementioned scenario, there is also the possibility that the scale of spontaneous symmetry breaking is lower than $T_2$.
In that case, the axion abundance first overshoots the desired one, since the axion remains frozen beyond the temperature $T_{\rm crit}$, but that additional abundance is subsequently diluted after the onset of oscillations. We exclude this fringe possibility because it always requires severe fine-tuning of the proximity between $T_2$ and $T_{\rm crit}$. We can now use \cref{eq:oscillation-temperature} to write \begin{align} T_2<T_{\rm crit} \;\;\longleftrightarrow \;\;\frac{10 \Upsilon(T'_{\rm crit})\,H(T_{\rm crit})}{m_{a}(T_{\rm crit})^2}>1\,, \end{align} which can be recast as \begin{align} \frac{{\cal F}_a\left(\frac{ m_a f_a }{17.0 \,{\rm GeV}^2}\right)^{10/7} \lambda^2}{\left[1+0.17\left(\ln\left[{\cal F}_b\left(\frac{ m_a f_a}{{\rm GeV}^2}\right)^{1/7}\right]+\ln\left[{\Lambda_G^2}/{\Lambda^2}\right]\right)\right]^5} > 1\,, \label{eq:enhancement-parameter} \end{align} where ${\cal F}_a$ and ${\cal F}_b$ depend on the effective number of degrees of freedom and are given by \begin{align} {\cal F}_a&=\frac{g_{\rm s,SM}(T_{\rm crit})\,g'_{\rm s}\left(T'_{0}\right){\cal F}^{13/7}}{g_{\rm s,SM}(T_0)\,g'_{\rm s}\left(T'_{\rm crit}\right)\,g_{\rm \rho,SM}(T_{\rm crit})^{25/28}}\,,\\ {\cal F}_b&=\left(\frac{g_{\rm s,SM}\left(T_{\rm crit}\right)\,g'_{\rm s}\left(T'_{0}\right)}{g_{\rm s,SM}(T_0)\,g'_{\rm s}\left(T'_{\rm crit}\right)}\right)^{2/3}\frac{{\cal F}^{2/7}}{g_{\rm \rho,SM}(T_{\rm crit})^{3/14}}\,.\nonumber \end{align} The expression in \cref{eq:enhancement-parameter} gives a necessary condition for the delay of the onset of oscillations to lead to the correct DM abundance. Aside from the small dependence on the degrees of freedom, it depends on the enhancement parameter $\lambda$, the ratio of the confinement scales $\Lambda_G/ \Lambda$, and the product $m_af_a$. It is important to note that this analysis has focused only on the axion zero mode as a first step. Due to the delay of the onset of oscillations, at $T_{\rm osc}$ the Hubble parameter is significantly smaller than the axion mass, and thus the axion field experiences large-amplitude oscillations before being significantly damped. As a consequence, the anharmonicities of the potential will induce the production of higher-momentum axion quanta, i.e.\ axion fragmentation \cite{Fonseca:2019ypl,Eroncel:2022abd}. This process is not expected to significantly modify the prediction for the axion relic density (and thus will not be discussed further in the present work) but could give rise to observational consequences pointing to a mechanism responsible for the delay of the onset of oscillations \cite{Eroncel:2022abc}. The main result of this section is provided in \cref{fig:ALP-lambda}, where we show that the region of the parameter space which traditionally results in axion underabundance may now account for all the dark matter, provided that the enhancement parameter $\lambda$ takes at least the indicated value (which is required to guarantee that the axion remains frozen until the critical temperature). For the purpose of producing this plot, we followed a conservative approach and assumed that our approximate expressions for the friction term are valid until $\alpha_{\rm thr}=0.1$. If this assumption is relaxed, a smaller enhancement is required. At this point, it is important to mention that the results shown above do not necessarily apply to the QCD axion, which will be treated separately in the next section. We simply added the QCD axion line in \cref{fig:ALP-lambda} as a visual reference point.
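For orientation, the necessary condition in \cref{eq:enhancement-parameter} is easy to scan numerically; in the sketch below we set the ${\cal O}(1)$ factors ${\cal F}_a={\cal F}_b=1$, which is an assumption made purely for illustration, since they depend mildly on the degrees of freedom:
\begin{verbatim}
import numpy as np

def delay_possible(ma_fa_GeV2, lam, LamG_over_Lam):
    # eq. (enhancement-parameter) with F_a = F_b = 1
    num = (ma_fa_GeV2 / 17.0)**(10 / 7) * lam**2
    den = (1 + 0.17 * (np.log(ma_fa_GeV2**(1 / 7))
                       + 2 * np.log(LamG_over_Lam)))**5
    return num / den > 1
\end{verbatim}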
To summarize, our analysis shows that in the presence of thermal friction it is possible to delay the onset of oscillations sufficiently to enhance the overall ALP relic abundance. This enhancement may be enough to account for all of the dark matter, or a fraction of it, and it is in principle possible to open up the entirety of the parameter space in which the ALP relic abundance from the misalignment mechanism is too small, provided that the coupling to the dark non-abelian gauge field is strong enough. \medskip \noindent\textbf{Overabundant ALP\,---\,}% In contrast to the previous case, in the traditionally overabundant case one requires the axion to experience the effects of friction after the onset of oscillations. Exploring the parameter space for this scenario is a straightforward application of the general formulas derived in the previous section. It is, however, difficult to express the results in closed analytical form because of the complexity involved in solving the relevant transcendental equations. For clarity, we will simplify the equations of the previous section by approximating the degrees of freedom at $T_{\rm osc}$ by an ${\cal O}(10)$ number and assuming that $H_{\rm osc}$ corresponds to the lower branch of \cref{eq:oscillation-temperature}. The axion relic density in \cref{eq:relic-abundance} then translates into \begin{align} \frac{\rho_{\rm a,0}}{\rho_{\rm DM}}&=0.11 \,{\rm e}^{-D}\left(\frac{m_a^{10}f_a^{10}\lambda^{14}}{{\rm GeV}^{20}\,\tau_{2}^{35}}\right)^{1/13}\,,\label{eq:ALPoverRHO} \end{align} where the exponential suppression parameter $D$ in \cref{numerical-result} now reads \begin{multline} D \simeq 6.3 \left(\frac{\lambda\,10^8\;{\rm GeV}}{f_a}\right)^2\left(\frac{\Lambda}{150\;{\rm MeV}}\right)\\ \times \left[\frac{\tau^3 + \tau^2 + 2 \tau + 6}{\tau^4}\,e^\tau - \textrm{Ei}\left(\tau\right)\right]^{\tau_{\rm end}}_{\tau_{2}}\,, \label{eq:numerical-result2} \end{multline} and the temperature at the onset of oscillations is \begin{align} \tau_{2}\equiv \ln\left({T'_2}/{\Lambda}\right) &=\ln\left[\frac{13.2 \;{\rm GeV}}{\Lambda}\left(\frac{ m_a^6 f_a^6\,\tau_{2}^{5}}{{\rm GeV}^{12}\lambda^2}\right)^{1/13}\right]\,.\label{eq:ToscY} \end{align} For convenience, let us also define the variables $\tau_1$ and $\tau_{\rm thr}$ as \begin{align} \tau_{1}&\equiv \ln\left({T'_1}/{\Lambda}\right)=\ln\left(\frac{600\,{\rm GeV}^{1/6}\,f_a^{1/3}\,m_a^{1/2}}{\Lambda}\right)\,,\label{eq:ToscH}\\ \tau_{\rm thr}&=\frac{\pi}{5\,\alpha_{\rm thr}}\,,\label{eq:Tendalpha} \end{align} where $\tau_1$ corresponds to the solution to the upper branch of \cref{eq:oscillation-temperature} and $\tau_{\rm thr}$ corresponds to the value of $\tau$ at which the coupling constant reaches the threshold value $\alpha_{\rm thr}$. The required values of $\lambda$ and $\Lambda$ that yield the right dark-matter relic abundance can then be determined using the following algorithm (a numerical sketch of the search is given below). \begin{itemize} \item We select a point of interest in the $\{m_a,f_a\}$ plane in the overabundant regime, $f_a > 7.57\cdot 10^{9}\; {\rm GeV}\; ({\rm eV}/m_a)^{3/10}$. \item We evaluate the first line of \cref{eq:numerical-result2} for the selected point and determine the range of values in the $\{\lambda,\Lambda\}$ plane that yield a value that is at most of order $\sim 10$. This is required in order to avoid fine-tuning of $\tau_{\rm end}$.
\item Next, for a choice of $\{\lambda,\Lambda\}$, we evaluate both \cref{eq:ToscY} and \cref{eq:ToscH} and demand that $\tau_{1}>\tau_{2}$, so that the gauge friction indeed dominates over the Hubble friction at the onset of oscillations. We repeat the search in the $\{\lambda,\Lambda\}$ plane until an acceptable pair of values is found. \item Next, we set the left-hand side of \cref{eq:ALPoverRHO} equal to one and solve for $\tau_{\rm end}$. \item Finally, if $\tau_{\rm end}\geq\tau_{\rm thr}$, then the chosen values $\{\lambda,\Lambda\}$ are acceptable and the correct dark-matter abundance is obtained. Otherwise, we go back to the third step and try a different choice of $\{\lambda,\Lambda\}$. \end{itemize} Following this algorithm, we have produced \cref{table:1}, where some choices of parameters which yield the correct dark-matter abundance are displayed. \begin{table}[h!] \centering \begin{tabular}{|c| c| c| c| c| c| c|} \hline No & $f_a$ (GeV) & $m_a$ (eV) & $\lambda$ & $\Lambda$ (GeV) & $\tau_{2}$ & $\tau_{\rm end}$ \\ \hline 1 & $10^{15}$ & 1 & $6.3\times10^{6}$ & 1 & 7.31 & 6.54 \\ 2 & $10^{14}$ & $10^{-3}$ & $4.2\times 10^{6}$ & $5\times 10^{-3}$ & 8.48 & 6.85 \\ 3 & $10^{17}$ & $10^{-11}$ & $2\times 10^{11}$ & $3\times10^{-6}$ & 8.95 & 7.82\\ \hline \end{tabular} \caption{Sample values of $\{\lambda,\Lambda\}$ that may induce sufficient friction to deplete the axion relic abundance to the observed one for a given choice of $\{m_a,f_a\}$.} \label{table:1} \end{table} One may proceed in an analogous manner to study any value in the $\{m_a,f_a\}$ plane and determine the coupling strength and confinement scale required to account for the observed dark-matter abundance. As one may expect from \cref{eq:numerical-result2}, large values of $f_a$ require a large enhancement parameter $\lambda$. This fact is manifest in the values displayed in \cref{table:1}. In order to get a more complete picture of the relevant parameter space, we may also find the lowest value of $\lambda$ that sufficiently dilutes the abundance to the observed value. The lowest possible value of the enhancement parameter corresponds to the minimum friction acting on the axion for the maximum time period while still accounting for the observed dark-matter abundance. With this in mind, we select the latest possible moment to turn off the friction by setting $\tau_{\rm end}=\tau_{\rm thr}$, and we also set $\Lambda$ to be much lower than the confinement scale of the group that provides the mass, \begin{equation} \Lambda\ll \Lambda_G\sim\sqrt{m_a f_a}\,.\label{eq:condALP} \end{equation} As long as \cref{eq:condALP} is satisfied, the exact choice of $\Lambda$ changes the result for the enhancement parameter by an ${\cal O}(1)$ factor. At this point, we are mainly interested in the minimum order of magnitude for the enhancement parameter and hence, for concreteness, we select the value \begin{equation} \Lambda={\rm e}^{-10}\times 600\,{\rm GeV}^{1/6}\,m_a^{1/2}\,f_a^{1/3}\,, \end{equation} which automatically satisfies the condition in \cref{eq:condALP} in the parameter space of interest. These choices considerably simplify the algorithm mentioned above and yield a unique solution for $\lambda$, which is roughly the minimum required to dilute the dark-matter abundance to the observed one. The parameter space is displayed in \cref{fig:ALP-lambda}. Our results demonstrate that the traditionally overabundant region of the ALP dark matter parameter space may be opened up in the presence of thermal friction.
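The search described in the algorithm above can be sketched in a few lines of Python. The function names and the fixed-point/root-finding choices are ours, masses and scales are in GeV, and \texttt{bracket} is the function defined after \cref{Eq:analytical integral}; this is an illustration under those assumptions, not a substitute for the full scan:
\begin{verbatim}
import numpy as np
from scipy.special import expi
from scipy.optimize import brentq

def bracket(tau):
    return (tau**3 + tau**2 + 2*tau + 6) / tau**4 * np.exp(tau) - expi(tau)

def tau_2(m_a, f_a, lam, Lam):
    tau = 8.0                       # eq. (ToscY) by fixed-point iteration
    for _ in range(100):
        tau = np.log(13.2 / Lam * (m_a**6 * f_a**6 * tau**5 / lam**2)**(1/13))
    return tau

def tau_1(m_a, f_a, Lam):
    return np.log(600 * f_a**(1/3) * np.sqrt(m_a) / Lam)   # eq. (ToscH)

def acceptable(m_a, f_a, lam, Lam, alpha_thr=0.1):
    t2 = tau_2(m_a, f_a, lam, Lam)
    if tau_1(m_a, f_a, Lam) <= t2:  # friction must dominate at the onset
        return False
    # setting eq. (ALPoverRHO) equal to one gives the required exponent D
    D_req = np.log(0.11 * (m_a**10 * f_a**10 * lam**14 / t2**35)**(1/13))
    if D_req <= 0:
        return False                # no dilution needed (or possible)
    pref = 6.3 * (lam * 1e8 / f_a)**2 * (Lam / 0.150)  # eq. (numerical-result2)
    tau_end = brentq(lambda t: pref * (bracket(t) - bracket(t2)) - D_req,
                     1.5, t2 - 1e-6)
    return tau_end >= np.pi / (5 * alpha_thr)          # eq. (Tendalpha)

# row 1 of Table 1: m_a = 1 eV, f_a = 1e15 GeV
print(acceptable(1e-9, 1e15, lam=6.3e6, Lam=1.0))
\end{verbatim}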
Unlike the traditionally underabundant regime, where the minimum value of the enhancement parameter depends on the product $m_a f_a$ (and thus the constant-$\lambda$ lines in \cref{fig:ALP-lambda} are parallel to the QCD axion band), in the traditionally overabundant regime the required $\lambda$ depends strongly on the value of the axion decay constant $f_a$ and to a lesser degree on the axion mass. \section{QCD Axion} \label{Sec:QCDAxion} In this section, we study the effect of thermal friction on the QCD axion and apply the results of the previous sections to this particular case. The QCD axion is particularly interesting due to its possible role in resolving the strong CP problem. Some of the results of the previous section are not directly applicable to this case since, for the canonical QCD axion, the mass has an extra suppression factor due to the existence of light quarks, \begin{align} m_a^{\rm QCD} f_a=m_\pi f_\pi \frac{\sqrt{z}}{1+z}\,,\;\;{\rm where}\; z\equiv\frac{m_u}{m_d}\,, \label{eq:QCD-mass} \end{align} with $m_a^{\rm QCD}$, $f_\pi$, $m_\pi$, $m_u$ and $m_d$ denoting the QCD axion mass, the pion decay constant, and the pion, up-quark and down-quark masses, respectively. Since $\Lambda_{\rm QCD}\simeq 150 \,{\rm MeV}$, the mass is suppressed with respect to the generic expectation $m_a\sim\Lambda_{\rm QCD}^2/f_a$, which holds in the absence of light charged fermions, as assumed in the previous section.\footnote{Note that alternative QCD axion scenarios have been proposed where the axion mass is suppressed~\cite{Hook:2018jle, DiLuzio:2021pxd, DiLuzio:2021gos} or enhanced, e.g.~\cite{Rubakov:1997vp,Berezhiani:2000gh,Hook:2014cda,Fukuda:2015ana,Agrawal:2017ksf,Gaillard:2018xgk,Csaki:2019vte}.} Taking the above into account, we can rewrite the formula in \cref{Eq:simple ALP relic dansity ratio CMB} for the relic density in terms of the axion decay constant, \begin{align} \frac{\rho_{a,0}}{\rho_{\rm DM}} \simeq \left(\frac{f_a}{1.84\cdot 10^{11}\, {\rm GeV}}\right)^{7/6} \frac{\mathcal{F}}{g_{\rm \rho,SM}(T_{\rm osc})^{1/6}}\,, \label{eq:QCDDM} \end{align} where we used $T_c=\Lambda_{\rm QCD}$, $\beta=4$ and $\theta_i=1$. \Cref{eq:QCDDM} implies that for $f_a < 1.8\cdot 10^{11}\, {\rm GeV}$ the relic abundance is too low to account for the observed dark-matter abundance, while for $f_a > 1.8\cdot 10^{11}\, {\rm GeV}$ the relic abundance is too high. In the next few paragraphs, we will investigate how thermal friction could in principle open up both regimes for the QCD axion. We will treat the two regimes separately, in a manner that is analogous to the previous section. \medskip \noindent\textbf{Underabundant QCD Axion\,---\,}% The standard expression for the relic abundance in \cref{Eq:simple ALP relic dansity ratio CMB} implicitly assumes that the oscillation temperature is the solution of the upper branch of \cref{eq:oscillation-temperature}. However, in the presence of thermal friction, the true temperature at which the onset of oscillations occurs is given by the lower branch of \cref{eq:oscillation-temperature}. We may derive a general formula for the relic abundance given the parameters of the QCD axion, treating the moment of the onset of oscillations as a free parameter, \begin{align} \frac{\rho_{a,0}}{\rho_{\rm DM}} \simeq \left(\frac{ m_a f_a\theta_i}{T_{\rm osc}^2}\right)^2\left(\frac{104\,{\rm GeV}}{T_{\rm osc}}\right)^3 \frac{\mathcal{F}}{g_{\rm \rho,SM}(T_{\rm osc})^{3/4}}\,.
\label{eq:QCDDM2} \end{align} Assuming a non-fine-tuned initial misalignment angle, $\theta_i\simeq 1$, it is apparent from \cref{eq:QCD-mass} that the product $m_a f_a$ is a constant fixed by SM parameters, which implies that there is one precise onset-of-oscillations temperature that guarantees that the QCD axion relic abundance matches the observed dark-matter one. This temperature is \begin{align} T_{\rm crit}\simeq 1.08\;{\rm GeV}\;\;\;\longleftrightarrow\;\;\; \rho_{a,0}\simeq\rho_{\rm DM}\,. \end{align} In the presence of thermal friction, it is possible to delay the onset of oscillations until this critical temperature, provided the friction is strong enough to prevent the axion from rolling until then. Considering all of the above, we conclude that in order to open up the $f_a<1.8\cdot 10^{11}\, {\rm GeV}$ parameter space, we need the dark sector to be spontaneously broken at the temperature $T_{\rm crit}$. \Cref{fig:QCD-example} displays a concrete example of the relevant dimensionful quantities as functions of temperature. \begin{figure} \centering \includegraphics[scale=0.45]{QCDexample.pdf} \caption{An example of the scenario that could open up the parameter space for QCD axion dark matter. The vertical lines represent the relevant scales defined in \cref{eq:temperature-conditionsALP}, except for $\Lambda$, which is too low to be seen in the panel. Normally, the onset of oscillations occurs at $T_1$; however, the friction can delay the onset at most until $T_2$. If the dark sector is spontaneously broken at $T_{\rm crit}$, then the correct abundance is obtained. For this example we used $f_a=10^{10}\; {\rm GeV}$, $\lambda=10^5$ and $\Lambda=10^{-3}\; {\rm GeV}$.} \label{fig:QCD-example} \end{figure} Just as in the previous section, we require that the friction is strong enough so that the QCD axion remains frozen until at least $T_{\rm crit}$. This implies that \begin{align} \frac{1.04\cdot 10^{-6}\lambda^2}{\left[1+\ln\left(\frac{\Lambda_{\rm QCD}^2}{\Lambda^2}\right)/3.420\right]^5}>1\,. \label{eq:ineq} \end{align} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{QCDRange.pdf} \caption{Region of the parameter space $\{\lambda,\Lambda\}$ that allows one to reproduce the correct axion DM relic density. The blue shaded region ensures that the QCD axion does not roll until $T_{\rm crit}$, at which time the dark sector is spontaneously broken. Finally, the gray shaded regions signal the parameter space for which our expressions for the thermal friction break down before $T_{\rm crit}$ is reached, for different choices of the threshold value of the running coupling.} \label{fig:QCDRange} \end{figure} In \cref{fig:QCDRange}, the acceptable parameter space is displayed. It is important to note that this result is independent of the exact QCD axion mass or decay constant, since it only depends on the SM parameters through the product $m_af_a$, see \cref{eq:QCD-mass}. It only assumes that $T_{\rm end}\simeq T_{\rm crit}$, and hence we consider the entire range $ 10^8\,{\rm GeV} < f_a < 1.8 \cdot 10^{11}\, {\rm GeV}$ to be viable. \medskip \noindent\textbf{Overabundant QCD Axion\,---\,}% The main tools for computing the abundance and determining the required $\{\lambda,\Lambda\}$ to open up the parameter space were explained in detail in the previous section.
Here we simply rewrite the relevant equations adapted for the QCD axion, \begin{align} \frac{\rho_{a,0}}{\rho_{\rm DM}}&=2.4\cdot 10^{-3}\,{\rm e}^{-D}\left(\frac{\lambda^{14}}{\tau_{2}^{35}}\right)^{1/13}\,,\label{eq:QCDoverRho}\\ \tau_{2}&=\ln\left[\frac{1.97\,{\rm GeV}}{\Lambda}\left(\frac{\tau_{2}^{5}}{\lambda^2}\right)^{1/13}\right]\,,\label{eq:ToscYQCD}\\ \tau_{1}&=\ln\left(\frac{169\,{\rm GeV}^{5/6}\,m_a^{1/6}}{\Lambda}\right)\,, \label{eq:ToscHQCD} \end{align} and the suppression parameter $D$ is given by the same expression as in the case of a generic ALP. Aside from these replacements, the algorithm for determining acceptable $\{\lambda,\Lambda\}$ pairs remains unchanged. Some indicative choices are found in \cref{table:2}. \begin{table}[h!] \centering \begin{tabular}{|c| c| c| c| c| c| c|} \hline No & $f_a$ (GeV) & $m_a$ (eV) & $\lambda$ & $\Lambda$ (GeV) & $\tau_{2}$ & $\tau_{\rm end}$ \\ \hline 1 & $10^{13}$ & $8.5\times 10^{-7}$ & $1.5\times10^{6}$ & $1\times 10^{-4}$ & 8.52 & 6.50 \\ 2 & $10^{15}$ & $8.5\times 10^{-9}$ & $4.3\times 10^{8}$ & $3\times 10^{-5}$ & 8.88 & 7.00 \\ 3 & $10^{17}$ & $8.5\times 10^{-11}$ & $1\times 10^{11}$ & $3\times10^{-6}$ & 10.40 & 6.50\\ \hline \end{tabular} \caption{Sample values of $\{\lambda,\Lambda\}$ that induce sufficient friction to deplete the axion relic abundance to the observed one for a given choice of $\{m_a,f_a\}$ for the QCD axion.} \label{table:2} \end{table} \section{Discussion and Conclusions} \label{sec:conclusions} The main aim of this work has been to extend the standard misalignment mechanism for the generation of axion dark matter to include sphaleron-induced thermal gauge friction. The coupling of the axion to a dark non-abelian gauge sector in a secluded thermal bath can significantly impact the prediction for the axion abundance. This ``frictional misalignment'' mechanism can result in an enhancement of the axion relic density by delaying the onset of oscillations, or in a depletion of the abundance if the friction is active during the oscillations. Taking into account the one-loop running of the dark coupling constant, we have derived an analytical expression for the \emph{frictional adiabatic invariant}, which can be used to compute the axion abundance in a variety of models, since it is a constant of motion in the presence of thermal friction. We have then applied this general mechanism to some particular models of interest. In the most minimal case, where a single non-abelian confining gauge group is responsible for both the ALP mass and the friction, we find that the prediction of axion dark matter departs from the standard result for masses $m_a\gtrsim 100\;{\rm eV}$. In order to obtain a reliable prediction of the abundance, further studies on the sphaleron rate close to the confinement scale are required. As a second example, we studied an axion that couples instead to two separate gauge groups: one confines and generates the axion mass, while the other is spontaneously broken and generates the friction. We find that, in order for the friction to have an impact, an enhancement of the coupling to the spontaneously broken group is required, which could arise from clockwork or alignment scenarios. In this case, the window for axion dark matter opens up, and for different values of the enhancement parameter both the traditionally under- and overabundant regions are populated within the frictional misalignment mechanism. Analogously, this also applies to the QCD axion.
Couplings between axions and gauge fields in a cosmological setting result in rich phenomenology that should be further studied and understood. Axion-gauge couplings may lead to a tachyonic, non-thermal enhancement of gauge modes, with consequences for many aspects of cosmology, such as inflation \cite{Anber:2009ua,Barnaby:2011vw,Domcke:2019lxq,Domcke:2020zez,Gorbar:2021rlt}, dark matter \cite{Agrawal:2017eqm}, cosmological relaxation \cite{Hook:2016mqo,Domcke:2021yuz,Ji:2021mvg}, and dark energy \cite{DallAgata:2019yrr}. On the other hand, the cosmological implications of thermal backgrounds of gauge fields have only recently been explored, in a limited setting and mainly in the context of inflation rather than axion dark matter (see also \cite{Ferreira:2017lnd,Ferreira:2017wlx,Ji:2021mvg,DeRocco:2021rzv} for the transition from the non-thermal to the thermal regime). Our work fills this gap in the literature and provides a natural extension of the standard misalignment mechanism that is based on a set of reasonable assumptions which, however, have a strong impact on the size of the viable parameter space for axion dark matter. \medskip\noindent\textit{Acknowledgments\,---\,}% A.~P.\ and P.~Q.\ would like to thank the CERN Theory Division for the warm hospitality where this work was initiated. This work was supported by IBS under project code IBS-R018-D1. P.~Q.\ acknowledges support by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy\,--\,EXC 2121 Quantum Universe\,--\,390833306 and 491245950. This project has received funding and support from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreements No.\ 860881-HIDDeN and 796961 (``AxiBAU''). \bibliographystyle{JHEP}
\section{Introduction} \label{sec:introduction} The task of providing a good estimation of a high-resolution (HR) image from low-resolution (LR) input with minimal upsampling artifacts, such as ringing, noise, and blurring, has been studied extensively~\cite{free02,chang04,yang08,yang10}. In recent years, deep learning approaches have led to a significant increase in performance on the task of image super-resolution~\cite{dong16,kim16,kim16_2,ledig16}. Potentially, multiple frames of a video provide extra information that allows even higher-quality upsampling than just a single frame. However, the task of simultaneously super-resolving multiple frames is inherently harder and thus has not been investigated as extensively. The key difficulty from a learning perspective is to relate the structures from multiple frames in order to assemble their information into a new image. Kappeler~\etal~\cite{kapp16} were the first to propose a convolutional neural network (CNN) for video super-resolution. They excluded the frame registration from the learning problem and rather applied motion compensation (warping) of the involved frames using precomputed optical flow. Thus, only a small part of the video super-resolution task was learned by the network, whereas large parts of the problem rely on classical techniques. \pagebreak In this work, we provide for the first time an end-to-end network for video super-resolution that combines motion compensation and super-resolution into a single network with fast processing time. To this end, we make use of FlowNet2-SD for optical flow estimation~\cite{ilg17}, integrate it into the approach by Kappeler~\etal~\cite{kapp16}, and train the joint network end-to-end. The integration requires changing the patch-based training~\cite{dong16,kapp16} to an image-based training, and we show that this has a positive effect. We analyze the resulting approach and the one from Kappeler~\etal~\cite{kapp16} on single, multiple, and multiple motion-compensated frames in order to quantify the effect of using multiple frames and the effect of motion estimation. The evaluation reveals that with the original approach from Kappeler~\etal, both effects are surprisingly small. On the contrary, when switching to image-based training, we see an improvement when using motion-compensated frames, and we obtain the best results with the \mbox{FlowNet2-SD} motion compensation. The approach of Kappeler~\etal~\cite{kapp16} follows the common practice of first upsampling and then warping images. Both operations involve an interpolation by which high-frequency image information is lost. To avoid this effect, we then implement a motion compensation operation that performs upsampling and warping jointly in a single step. We compare to the closely related work of Tao~\etal~\cite{tao17} and also perform experiments with their network architecture. Finally, we show that with this configuration, CNNs for video super-resolution clearly benefit from optical flow. We obtain state-of-the-art results. \section{Related work} \subsection{Image super-resolution} The pioneering work on super-resolving an LR image dates back to Freeman~\etal~\cite{free02}, who used a database of LR/HR patch examples and nearest-neighbor search to perform restoration of an HR image. Chang~\etal~\cite{chang04} replaced the nearest-neighbor search by a manifold embedding, while Yang~\etal built upon sparse coding~\cite{yang08,yang10}. Dong~\etal~\cite{dong16} proposed a convolutional neural network (SRCNN) for image super-resolution.
They introduced an architecture consisting of the three steps patch encoding, non-linear mapping, and reconstruction, and showed that CNNs outperform previous methods. In Dong~\etal~\cite{dong16_2}, the three-layer network was replaced by a convolutional encoder-decoder network with improved speed and accuracy. Shi~\etal~\cite{shi16} showed that performance can be increased by computing features in the lower-resolution space. Recent work has extended SRCNN to deeper~\cite{kim16} and recursive~\cite{kim16_2} architectures. Ledig~\etal~\cite{ledig16} employed generative adversarial networks. \subsection{Video super-resolution} Performing super-resolution from multiple frames is a much harder task due to the additional alignment problem. Many approaches impose restrictions, such as the presence of HR keyframes~\cite{song11} or affine motion~\cite{baba11}. Only few general approaches exist. Liu~\etal~\cite{liu14} provided the most extensive approach by using a Bayesian framework to estimate motion, camera blur kernel, noise level, and HR frames jointly. Ma~\etal~\cite{ma15} extended this work to incorporate motion blur. Takeda~\etal~\cite{takeda09} followed an alternative approach by considering the video as a 3D spatio-temporal volume and by applying multidimensional kernel regression. A first learning approach to the problem was presented by Cheng~\etal~\cite{hui12}, who used block matching to find corresponding patches and applied a multi-layer perceptron to map the LR spatio-temporal patch volumes to HR pixels. Kappeler~\etal~\cite{kapp16} proposed a basic CNN approach for video super-resolution by extending SRCNN to multiple frames. Given the LR input frames and optical flow (obtained with the method from~\cite{drulea11}), they bicubically upsample and warp temporally distant frames to the current one and then apply a slightly modified SRCNN architecture (called VSR) on this stack. The motion estimation and motion compensation are provided externally and are not part of the training procedure. Caballero~\etal~\cite{jose16} proposed a spatio-temporal network with 3D convolutions and slow fusion to perform video super-resolution. They employ a multi-scale spatial transformer module for motion compensation, which they train jointly with the 3D network. Very recently, Tao~\etal~\cite{tao17} used the same motion compensation transformer module. Instead of a 3D network, they proposed a recurrent network with an LSTM unit to process multiple frames. Their work introduces an operation they call SubPixel Motion Compensation (SPMC), which performs forward warping and upsampling jointly. This is strongly related to the operation we propose here, though we use backward warping combined with a confidence instead of forward warping. Moreover, we use a simple feed-forward network instead of a recurrent network with an LSTM unit, which is advantageous for training. \subsection{Motion estimation} Motion estimation is a longstanding research topic in computer vision, and a survey is given in~\cite{sun10}. In this work, we aim to perform video super-resolution with a CNN-only approach. The pioneering FlowNet of Dosovitskiy~\etal~\cite{doso15} showed that motion estimation can be learned end-to-end with a CNN. Later works~\cite{ranjan16,ilg17} elaborated on this concept and provided multiscale and multistep approaches. The FlowNet2 by Ilg~\etal~\cite{ilg17} yields state-of-the-art accuracy but is orders of magnitude faster than traditional methods.
We use this network as a building block for end-to-end training of a video super-resolution network. \section{Video super-resolution with patch-based training} In this section, we revisit the work from Kappeler~\etal~\cite{kapp16}, which applies network-external motion compensation and then extends the single-image SRCNN~\cite{dong16} to operate on multiple frames. This approach is shown in Figure~\ref{fig:vsr_architecture}. \begin{figure}[H] \subfigure[\label{fig:vsr_architecture}Architecture as proposed by Kappeler~\etal \cite{kapp16}]{\includegraphics[width=\linewidth]{images/sr1.pdf}} \subfigure[\label{fig:vsr_joint_architecture}Architecture with integrated FlowNet2-SD from~\cite{ilg17}]{\includegraphics[width=\linewidth]{images/sr2.pdf}} \caption{Video super-resolution architectures used by the basic models tested in this paper. Optical flow is estimated from the center to the outer frames using either an external method or a CNN. The flow is used to warp all frames to the center frame. The frames are then input to the VSR network~\cite{kapp16}. The complete network in (b) can be trained end-to-end, including the motion estimation.} \vspace*{-2mm} \end{figure}% \noindent Kappeler~\etal~\cite{kapp16} compare different numbers of input frames and investigate early and late fusion by performing the concatenation of features from the different frames after different layers. They conclude that fusion after the first convolution works best. Here, we use this version and furthermore stick to three input frames and an upsampling factor of four throughout the whole paper. We performed an analysis of their code and model. The results are given in the first row of Table~\ref{tab:orig_code_results}. Using their original code, we conducted an experiment where we replaced the three frames from the image sequence by three copies of the center frame (column 4 of Table~\ref{tab:orig_code_results}), which corresponds to the information available for single-image super-resolution. We find that on the Myanmar validation set the result is still much better than \mbox{SRCNN}~\cite{dong16} but only marginally worse than VSR~\cite{kapp16} on real video information. Since, except for a concatenation, there is no difference between the VSR~\cite{kapp16} and SRCNN~\cite{dong16} architectures, this shows that, surprisingly, the improvement is mainly due to the training settings of VSR~\cite{kapp16} rather than the use of multiple frames. For training and evaluation, Kappeler~\etal~\cite{kapp16} used the publicly available Myanmar video~\cite{myanmardata}. We used the same training/validation split into 53 and 6 scenes and followed the patch sampling from~\cite{kapp16}. However, the publicly available data has changed in that the overlaid logo of the producing company at the bottom right corner is now bigger than before. Evaluating on the data with the different logo gives much worse results (row 2 of Table~\ref{tab:orig_code_results}), while when the logo is cropped off (column 3 of Table~\ref{tab:orig_code_results}), results are comparable. The remaining difference stems from a different implementation of the warping operation\footnote{We use the implementation from~\cite{ilg17}; it differs from~\cite{kapp16} in that it performs bilinear interpolation instead of bicubic.}. However, when we retrained the approach with our implementation and training data (row 3 of Table~\ref{tab:orig_code_results}), we achieved results very close to Kappeler~\etal~\cite{kapp16}.
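To make the fusion-after-the-first-convolution variant concrete, the following PyTorch sketch shows a three-frame VSR-style network. The 9-1-5 filter layout follows SRCNN, and giving each frame its own first-layer filter bank is our assumption; the exact hyperparameters of~\cite{kapp16} may differ.
\begin{verbatim}
import torch
import torch.nn as nn

class VSRNet(nn.Module):
    # Sketch of a multi-frame VSR-style network: the bicubically
    # upsampled, motion-compensated frames are filtered separately,
    # the features are concatenated after the first layer, and two
    # further convolutions reconstruct the HR center frame.
    def __init__(self, n_frames=3):
        super().__init__()
        self.conv1 = nn.ModuleList(
            nn.Conv2d(1, 64, 9, padding=4) for _ in range(n_frames))
        self.conv2 = nn.Conv2d(n_frames * 64, 32, 1)  # early fusion
        self.conv3 = nn.Conv2d(32, 1, 5, padding=2)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, frames):  # frames: list of (B,1,H,W) tensors
        feats = [self.relu(c(f)) for c, f in zip(self.conv1, frames)]
        x = self.relu(self.conv2(torch.cat(feats, dim=1)))
        return self.conv3(x)

# usage: hr = VSRNet()([prev_warped, center, next_warped])
\end{verbatim}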
To further investigate the effects of motion compensation, we retrained the approach using only the center frame, the original frames, and frames motion compensated using FlowNet2~\cite{ilg17} and FlowNet2-SD~\cite{ilg17} in addition to the method from Drulea~\cite{drulea11}. For details we refer to the supplemental material. Again we observed that including or excluding motion compensation with different optical flow methods has no effect on the Myanmar validation set. We additionally evaluated on the commonly used Videoset4 dataset \cite{liu14,kapp16}. In this case we do see a PSNR increment of $0.1$ with Drulea~\cite{drulea11} and a higher increment of $0.18$ with FlowNet2~\cite{ilg17} when using motion compensation. The Videoset4 dataset includes larger motion, and it seems that there is a small improvement when larger motion is involved. However, the effect of motion compensation is still very small when compared to the effect of changing other training settings. \section{Video super-resolution with image-based training\label{sec:image-based}} In contrast to Kappeler~\etal, we combine motion compensation and super-resolution in one network. For motion estimation, we used the FlowNet2-SD variant from \cite{ilg17}. We chose this network because FlowNet2 itself is too large to fit into GPU memory alongside the super-resolution network, and because FlowNet2-SD yields smooth flow predictions and accurate performance for small displacements. Figure~\ref{fig:vsr_joint_architecture} shows the integrated network. For the warping operation, we use the implementation from~\cite{ilg17}, which also allows a backward pass while training. The combined network is trained on complete images instead of patches. Thus, we repeated our experiments from the previous section for the case of image-based training. The results are given in Table~\ref{tab:image_and_joint}. In general, we find that image-based processing yields much higher PSNRs than patch-based processing. A detailed comparison of the network and training settings for both variants can be found in the supplemental material. \begin{table}[H] \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|@{}p{2pt}@{}|c|@{}p{2pt}@{}|c|c|c|c|} \cline{1-1}\cline{3-3}\cline{5-8} \multirow{2}{*}{Dataset/Model} && \multirow{2}{*}{SRCNN \cite{dong16}} && \multirow{2}{*}{VSR \cite{kapp16}} & VSR \cite{kapp16} & VSR \cite{kapp16} & VSR \cite{kapp16}\\ && && & (cropped) & (only center) & (no warp.)\\ \cline{1-1}\cline{3-3}\cline{5-8} \multicolumn{8}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-3}\cline{5-8} Myanmar validation from \cite{kapp16} && $31.26$ && $\textbf{31.81}$ & $32.95$ & $\textbf{31.71}$ & -\\ \cline{1-1}\cline{3-3}\cline{5-8} \multicolumn{8}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-3}\cline{5-8} Myanmar validation (ours) && \multirow{2}{*}{$31.30$} && $\textbf{31.30}$ & $32.88$ & $31.23$ & $31.19$\\ Myanmar validation (ours), retrained && && $\textbf{31.81}$ & $32.76$ & $31.74$ & $31.77$ \\ \cline{1-1}\cline{3-3}\cline{5-8} \end{tabular} } \vspace*{1mm} \caption{ Analysis of Kappeler~\etal~\cite{kapp16} on the different versions of the Myanmar dataset. Numbers show the PSNR in dB. The first row is with the original code and test data from \cite{kapp16}, while the second and third rows are with our re-implementation and the new test data that was recently downloaded. The third column shows results when the logo area is cropped off.
Fourth and fifth columns show the PSNR when motion compensation is disabled during testing, by using only the center frame or the original frames without warping. There is no significant improvement from either the use of multiple frames or the use of optical flow. } \label{tab:orig_code_results} \end{center} \vspace*{-3mm} \end{table} \begin{table}[H] \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|@{}p{2pt}@{}|c|@{}p{2pt}@{}|c|c|c|c|c|} \cline{1-1}\cline{3-3}\cline{5-9} Network && SRCNN \cite{dong16} && VSR \cite{kapp16} & VSR \cite{kapp16} & VSR \cite{kapp16} & VSR \cite{kapp16} & VSR \cite{kapp16} joint \\ \cline{1-1}\cline{3-3}\cline{5-9} Motion Compensation && - && only center & no warp. & Drulea \cite{drulea11} & FN2-SD \cite{ilg17} & FN2-SD \cite{ilg17} \\ \cline{1-1}\cline{3-3}\cline{5-9} \multicolumn{9}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-3}\cline{5-9} Myanmar validation (ours) && $32.42$ && $32.41$ & $32.55$ & $32.60$ & $32.62$ & $32.63$ \\ Videoset4 && $24.63$ && $24.66$ & $24.79$ & $24.91$ & $25.12$ & $25.21$ \\ \cline{1-1}\cline{3-3}\cline{5-9} \end{tabular} } \vspace*{1mm} \caption{ PSNR scores from Myanmar validation (ours) and Videoset4 for image-based training. For each column of the table we trained the architecture of \cite{dong16} and \cite{kapp16} by applying convolutions over the complete images. We used different types of motion compensation for training and testing (FN2-SD denotes FlowNet2-SD). For Myanmar, motion compensation still has no significant effect. However, on Videoset4 an effect for motion compensation using Drulea's method~\cite{drulea11} is noticeable and is even stronger for FlowNet2-SD~\cite{ilg17}. } \label{tab:image_and_joint} \end{center} \vspace*{-3mm} \end{table} Table~\ref{tab:image_and_joint} shows that motion compensation has no effect on the Myanmar validation set. For Videoset4 there is an increase of $0.12$ with motion compensation using Drulea's method~\cite{drulea11}. For FlowNet2 the increase of $0.42$ is even bigger. Since FlowNet2-SD is completely trainable, it is also possible to refine the optical flow for the task of video super-resolution by training the whole network end-to-end with the super-resolution loss. We do so by using a resolution of $256\times256$ to enable a batch size of $8$ and train for $100$k more iterations. The results from Table~\ref{tab:image_and_joint} again show that for Myanmar there is no significant change. However, for Videoset4 the joint training further improves the result by $0.1$, leading to a total PSNR increase of $0.52$. We show a qualitative evaluation in Figure~\ref{fig:qualitative}. On the enlarged building, one can see that bicubic upsampling introduces some smearing across the windows. This effect is also present in the methods without motion compensation and in the original VSR~\cite{kapp16} with motion compensation. When using image-based trained models, this effect is successfully removed. Motion compensation with FlowNet2~\cite{ilg17} seems to yield marginally sharper results than motion compensation with Drulea~\cite{drulea11}. We find that the joint training reduces ringing artifacts; an example is given in the supplemental material.
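All PSNR values reported in this paper are computed on the Y channel with a 12-pixel boundary crop, as detailed in the supplemental material. A minimal sketch of this metric is given below; the BT.601 luma coefficients are our assumption and may differ slightly from the exact conversion used.
\begin{verbatim}
import numpy as np

def rgb_to_y(img):
    # Standard BT.601 luma conversion (assumed; the exact coefficients
    # used in the evaluation code may differ).
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b

def psnr_y(estimate, ground_truth, border=12, peak=255.0):
    """PSNR on the Y channel, cropping `border` pixels at each side
    (the 12-pixel crop follows the evaluation protocol of the paper)."""
    y_est = rgb_to_y(estimate.astype(np.float64))
    y_gt = rgb_to_y(ground_truth.astype(np.float64))
    if border > 0:
        y_est = y_est[border:-border, border:-border]
        y_gt = y_gt[border:-border, border:-border]
    mse = np.mean((y_est - y_gt) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
\end{verbatim}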
\begin{figure} \resizebox{\linewidth}{!}{ \subfigure[ground truth]{\includegraphics[width=0.33\linewidth]{images/city-gt.png}} \subfigure[SRCNN \cite{dong16}]{\includegraphics[width=0.33\linewidth]{images/city-SRCNN.png}} \subfigure[VSR$^\dagger$ (only center)]{\includegraphics[width=0.33\linewidth]{images/city-3xcenter.png}} \subfigure[VSR$^\dagger$ (Drulea \cite{drulea11})]{\includegraphics[width=0.33\linewidth]{images/city-Druleas.png}} \subfigure[Bayesian \cite{liu14}]{\includegraphics[width=0.33\linewidth]{images/city-Baysian.png}} } \resizebox{\linewidth}{!}{ \subfigure[bicubic]{\includegraphics[width=0.33\linewidth]{images/city-Bicubic.png}} \subfigure[VSR \cite{kapp16}]{\includegraphics[width=0.33\linewidth]{images/city-VSR.png}} \subfigure[VSR$^\dagger$ (no warp)]{\includegraphics[width=0.33\linewidth]{images/city-unwarped.png}} \subfigure[VSR$^\dagger$ (FlowNet2-SD)]{\includegraphics[width=0.33\linewidth]{images/city-flownet2-sd.png}} \subfigure[VSR$^\dagger$ (FlowNet2-SD-joint)]{\includegraphics[width=0.33\linewidth]{images/city-flownet2-sd-joint.png}} } \caption{ Comparison of existing super-resolution methods to our trained models. $^\dagger$ indicates models retrained by us using image-based training. Note that b) and g) are patch-based, while c), d), e), h), i) and j) are image-based. } \label{fig:qualitative} \vspace*{-2mm} \end{figure} \section{Combined Warping and Upsampling Operation} The approach of Kappeler~\etal~\cite{kapp16} and the VSR architecture discussed so far follow the common practice of first upsampling and then warping the images. Both operations involve an interpolation during which image information is lost. Therefore, we propose a joint operation that performs upsampling and backward warping in a single step, which we name Joint Upsampling and Backward Warping (\emph{JUBW}). This operation does not perform any interpolation at all, but additionally outputs sub-pixel distances and leaves finding a meaningful interpolation to the network itself. Let us consider a pixel $p$ and let $x_p$ and $y_p$ denote the coordinates in high resolution space, while $x^{s}_p$ and $y^{s}_p$ denote the source coordinates in low resolution space. First, the mapping from the high resolution target coordinates to the low resolution source coordinates is computed using the high resolution flow estimations $(u_p, v_p)$ according to the following equation: \begin{equation} \left( \begin{array}{c} x^{s}_p \\ y^{s}_p \end{array} \right) = \frac{1}{\alpha} \left( \begin{array}{c} x_p + u_p + 0.5 \\ y_p + v_p + 0.5 \end{array} \right) - \left( \begin{array}{c} 0.5 \\ 0.5 \end{array} \right) \mathrm{,} \end{equation} \pagebreak \noindent where $\alpha = 4$ denotes the scaling factor and the subtraction/addition of $0.5$ places the origin at the top left corner of the first pixel. Then the warped image is computed as: \begin{equation} I_w(p)= \begin{cases} I(\left\lfloor x^{s}_p \right\rceil,\left\lfloor y^{s}_p \right\rceil) & \text{if } (\left\lfloor x^{s}_p \right\rceil,\left\lfloor y^{s}_p \right\rceil) \text{ is inside $I$,} \\ 0 & \text{otherwise} \mathrm{,} \end{cases} \end{equation} where $\lfloor \cdot \rceil$ denotes the round to nearest operation. Note that no interpolation between pixels is performed.
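For illustration, a minimal NumPy sketch of this coordinate mapping and nearest-neighbor lookup could read as follows. The code is our own illustrative version, not the actual layer implementation; \texttt{np.rint} serves as the round-to-nearest operation, and the sub-pixel distance outputs described next can be added analogously.
\begin{verbatim}
import numpy as np

def jubw_warp(lr_image, flow_hr, alpha=4):
    """Joint upsampling and backward warping without interpolation."""
    H, W = lr_image.shape
    ys, xs = np.mgrid[0:alpha * H, 0:alpha * W].astype(np.float64)
    u, v = flow_hr[..., 0], flow_hr[..., 1]
    # Map HR target coordinates to LR source coordinates.
    xs_src = (xs + u + 0.5) / alpha - 0.5
    ys_src = (ys + v + 0.5) / alpha - 0.5
    # Round-to-nearest lookup; no interpolation between pixels.
    xr = np.rint(xs_src).astype(int)
    yr = np.rint(ys_src).astype(int)
    inside = (xr >= 0) & (xr < W) & (yr >= 0) & (yr < H)
    warped = np.zeros((alpha * H, alpha * W), dtype=lr_image.dtype)
    warped[inside] = lr_image[yr[inside], xr[inside]]
    # Sub-pixel distances, e.g. dx = np.where(inside, xr - xs_src, 0.0),
    # are returned as additional channels in the actual operation.
    return warped
\end{verbatim}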
The operation then additionally outputs the following distances per pixel (see Figure~\ref{fig:bspmc_layer} for illustration): \begin{equation} \left( \begin{array}{c} d^{x}_p \\ d^{y}_p \end{array} \right) = \left( \begin{array}{c} \left\lfloor x^{s}_p \right\rceil - x^{s}_p \\ \left\lfloor y^{s}_p \right\rceil - y^{s}_p \end{array} \right) \text{if } (\left\lfloor x^{s}_p \right\rceil,\left\lfloor y^{s}_p \right\rceil) \text{ is inside $I$ and } \left(\begin{array}{c} 0 \\ 0 \end{array}\right) \text{otherwise.} \end{equation} \begin{figure} \begin{center} \includegraphics[width=0.5\linewidth]{images/jubw.pdf} \caption{ Illustration of the Joint Upsampling and Backward Warping operation (JUBW). The output is a dense image (kept sparse here for illustration purposes) and includes $x$/$y$ distances of the source locations to the source pixel centers. \label{fig:bspmc_layer} } \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\linewidth]{images/srjubw.pdf}\vspace*{-2mm} \caption{ Network setup with FlowNet2-SD and joint upsampling and warping operation (JUBW or SPMC-FW). Upsampling before feeding into FlowNet2-SD happens only for JUBW. The output of the upsampling and warping operation is stacked and then fed into the SPMC-ED network. \label{fig:jubw_setup} } \end{center} \vspace*{-2mm} \end{figure} We also implemented the joint upsampling and forward warping operation from Tao~\etal~\cite{tao17} for comparison and denote it as SPMC-FW. Contrary to our operation, SPMC-FW still involves two types of interpolation: 1.) subpixel interpolation for the target position in the high resolution grid, and 2.) interpolation between values if multiple flow vectors point to the same target location. For comparison, we replaced the architecture from the previous section by the \mbox{encoder-/decoder} part from Tao~\etal~\cite{tao17} (which we denote here as SPMC-ED). We also find that this architecture itself performs better than \mbox{SRCNN~\cite{dong16}/VSR~\cite{kapp16}} on the super-resolution-only task (see supplementary material for details). The resulting configuration is shown in Figure~\ref{fig:jubw_setup}. Furthermore, we extended the training set by downloading YouTube videos and downsampling them to create additional training data. The resulting larger dataset comprises 162k images and we call it MYT. \begin{table} \centering \resizebox{\linewidth}{!}{ \begin{tabular}{|c|@{}p{2pt}@{}|c|@{}p{2pt}@{}|c|c|c|@{}p{2pt}@{}|c|c|c|c|} \cline{3-3}\cline{5-7}\cline{9-12} \multicolumn{1}{c}{\multirow{2}{*}}&& \multicolumn{1}{c|}{SPMC}&& \multicolumn{3}{c|}{SPMC-FW}&& \multicolumn{4}{c|}{JUBW}\\ \cline{5-7}\cline{9-12} \multicolumn{1}{c}{} && original~\cite{tao17} && ours & only center & joint && ours & no dist. & only center & joint\\ \cline{3-3}\cline{5-7}\cline{9-12} \multicolumn{9}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-3}\cline{5-7}\cline{9-12} Myanmar (ours) && - && $32.90$ & $32.45$ & $33.05$ && $\textbf{33.13}$ & $33.02$ & $32.55$ & $32.69$ \\ \cline{1-1}\cline{3-3}\cline{5-7}\cline{9-12} Videoset4 && $25.52$ && $25.68$ & $24.94$ & $25.62$ && $\textbf{25.85}$ & $25.74$ & $24.96$ & $25.09$ \\ \cline{1-1}\cline{3-3}\cline{5-7}\cline{9-12} \end{tabular} } \vspace*{1mm} \caption{ PSNR values for different joint upsampling and warping approaches. The first column shows the original results from Tao~\etal~\cite{tao17} using the SPMC upsampling, forward warping, and the SPMC-ED architecture with an LSTM unit.
Columns two to four show our reimplementation of the SPMC-FW operation~\cite{tao17} without an LSTM unit. Columns five to eight show our joint upsampling and backward warping operation with the same encoder-decoder network on top. With \emph{ours} we denote our implementation according to Figure~\ref{fig:jubw_setup}. In \emph{only center} we input zero-flows and the duplicated center image three times (no temporal information). The entry \emph{joint} includes joint training of FlowNet2-SD and the super-resolution network. For columns two to eight, the networks are retrained on MYT and tested for each setting respectively. } \label{tab:warping_results} \vspace*{-2mm} \end{table} \begin{figure} \resizebox{\linewidth}{!}{ \subfigure[ground truth]{\includegraphics[width=0.33\linewidth]{images/city_or.png}} \subfigure[FN2-SD+VSR joint]{\includegraphics[width=0.33\linewidth]{images/city_joint.png}} \subfigure[FN2-SD+SPMC-FW]{\includegraphics[width=0.33\linewidth]{images/city_spmc.png}} \subfigure[FN2-SD+JUBW]{\includegraphics[width=0.33\linewidth]{images/city_bspmc.png}} }\vspace*{-2mm} \caption{ Example reconstructions of an image from Videoset4 using different warping methods. FN2-SD stands for FlowNet2-SD. Clearly, JUBW yields a sharper and more accurate reconstruction of the estimated frames compared to SPMC-FW~\cite{tao17} and the best VSR~\cite{kapp16} result. } \label{fig:qualitative_warping} \vspace*{-5mm} \end{figure} \pagebreak Results are given in Table~\ref{tab:warping_results}. First, we note that our feed-forward implementation of FlowNet2-SD with SPMC-ED, which simply stacks frames and does not include an LSTM unit, outperforms the original recurrent implementation from Tao~\etal~\cite{tao17}. Second, we see that our proposed JUBW operation generally outperforms SPMC-FW. We again performed experiments where we excluded temporal information, by inputting zero flows and duplicates of the center image. We now observe that including temporal information yields large improvements and increases the PSNR by $0.5$ to $0.9$. In contrast to the previous sections, we see such an increase also for the Myanmar dataset. This shows that the proposed motion compensation can also exploit small motion vectors. The qualitative results in Fig.~\ref{fig:qualitative_warping} confirm these findings. Including the sub-pixel distance outputs from the JUBW layer, which should enable the network to find a better interpolation, leads to a smaller improvement than expected. Notably, without these distances the JUBW operation degrades to a simple nearest neighbor upsampling and nearest neighbor warping, but it still outperforms SPMC-FW. We conclude from this that one should generally avoid any kind of interpolation and leave it to the network. Finally, fine-tuning FlowNet2-SD on the video super-resolution task decreases the PSNR in some cases and does not provide the best results. We conjecture that this is due to the nature of the optimization through the warping operation: the gradient is based on the reconstruction error and is prone to local minima. \section{Conclusions} In this paper, we performed an evaluation of different CNN-based video super-resolution approaches including motion compensation. We found that with the common practice of patch-based training and of separate upsampling and warping, the video super-resolution setting yields almost no improvement over the single-image setting.
We obtained a significant improvement over prior work by replacing the patch-based approach with a network that analyzes the whole image. As a remedy for the lacking effect of standard motion compensation, we proposed a joint upsampling and backward warping operation and combined it with FlowNet2-SD~\cite{ilg17} and the SPMC-ED~\cite{tao17} architecture. This combination outperforms all previous work on video super-resolution. In conclusion, our results show that: 1.) we can achieve the same or better performance with a formulation as a feed-forward instead of a recurrent network; 2.) performing joint upsampling and backward warping with no interpolation outperforms joint upsampling and forward warping and the common backward warping with interpolation; 3.) including sub-pixel distances yields a small additional improvement; and 4.) joint training with FlowNet2-SD so far does not lead to consistent improvements and we leave a more detailed analysis to future work. \section*{Acknowledgements} We acknowledge the DFG Grant BR-3815/7-1. \bibliographystyle{splncs03} \section{Computation of PSNR values} For all our evaluation results, the reported PSNR values are computed using only the Y channel of the estimated YCbCr image. In the case of RGB images, we first convert to YCbCr color space and then compute the PSNR on the Y channel. For all experiments using the SRCNN~\cite{dong16} or VSR~\cite{kapp16} architecture, we follow~\cite{kapp16} and for technical reasons crop away 12 pixels of the boundary from the estimated high-resolution images before computing PSNR values. \section{Displacement magnitudes} We have noted that improvements using motion compensation are generally smaller on Myanmar than on Videoset4. In Table~\ref{tab:disp_mags}, we compute the average motion magnitudes of the datasets and note that the displacements are also generally smaller in the Myanmar validation set. \begin{table} \centering \begin{tabular}{|c|c|} \hline Dataset & Avg. Mag. \\ \hline \hline Myanmar training & $1.50$px\\ Myanmar validation & $0.43$px \\ Videoset4 & $1.29$px \\ \hline \end{tabular} \vspace*{2mm} \caption{Average motion magnitudes computed using FlowNet2~\cite{ilg17}. The numbers show that the Myanmar validation set has the smallest displacements.} \label{tab:disp_mags} \end{table} \section{Video super-resolution with patch-based training} \newcolumntype{C}{>{\centering\arraybackslash}p{1.5cm}} Using patch-based training, we retrain and evaluate SRCNN~\cite{dong16} and VSR~\cite{kapp16} using different kinds of motion compensation. However, the resulting PSNR scores in Table~\ref{tab:retrain_results} are all similar and we conclude that motion compensation on Myanmar has no effect. We also evaluate on Videoset4 (Table~\ref{tab:patch_videoset4}) and there we see a small increment of $0.18$ for FlowNet2~\cite{ilg17} and FlowNet2-SD~\cite{ilg17}.
\begin{table} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|@{}p{2pt}@{}|c|@{}p{2pt}@{}|C|C|C|C|C|} \cline{1-1}\cline{3-3}\cline{5-9} \multirow{2}{*}{Arch.} && \multirow{2}{*}{\backslashbox{Trained on}{Tested on}} && only & no & Drulea & FlowNet2- & FlowNet2 \\ && && center & warp & \cite{drulea11} & SD \cite{ilg17} & \cite{ilg17} \\ \cline{1-1}\cline{3-3}\cline{5-9} \multicolumn{8}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-3}\cline{5-9} SRCNN && only center && $\textbf{31.62}$ & - & - & - & - \\ \cline{1-1}\cline{3-3}\cline{5-9} \multicolumn{8}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-3}\cline{5-9} \multirow{5}{*}{VSR} && only center && $\textbf{31.76}$ & - & - & - & - \\ \cline{3-3}\cline{5-9} && no warp && $31.80$ & $\textbf{31.83}$ & - & - & - \\ \cline{3-3}\cline{5-9} && Drulea \cite{drulea11} && $31.77$ & $31.74$ & $\textbf{31.81}$ & - & - \\ \cline{3-3}\cline{5-9} && FlowNet2-SD \cite{ilg17} && $31.75$ & $31.75$ & $31.79$ & $\textbf{31.77}$ & - \\ \cline{3-3}\cline{5-9} && FlowNet2 \cite{ilg17} && $31.76$ & $31.76$ & $31.80$ & $31.78$ & $\textbf{31.79}$ \\ \cline{1-1}\cline{3-3}\cline{5-9} \end{tabular} } \vspace*{2mm} \captionof{table}{PSNR scores for patch-based video super-resolution on the Myanmar validation set. We retrained the architecture of \cite{kapp16} using only the center frames (replicated three times), original images, and motion compensated frames. One can observe that all scores are nearly the same and motion compensation on the Myanmar validation set has no effect over providing original images or even only the center image. } \label{tab:retrain_results} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline Motion compensation & Videoset4 \\ during training and testing & PSNR \\ \hline \hline only center & $24.60$ \\ \hline no warp & $24.59$ \\ \hline Drulea \cite{drulea11} & $24.69$ \\ \hline FlowNet2-SD \cite{ilg17} & $24.77$ \\ \hline FlowNet2 \cite{ilg17} & $24.77$ \\ \hline \end{tabular} \vspace*{2mm} \caption{Evaluation of the different retrained VSR models from Table~\ref{tab:retrain_results} on Videoset4. Motion compensation shows a small performance improvement.} \label{tab:patch_videoset4} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|l|c|c|} \hline \multicolumn{1}{|c|}{Setting} & Patch-based & Image-based\\ \hline \hline Learning rate & $1e-05$ & $1e-05$ \\ Learning rate policy & fixed & multistep$^\dagger$ \\ Momentum & $0.9$ & $0.9$ \\ Weight decay & $0.0005$ & $0.0004$ \\ Batch size & $240$ & $2$ \\ Input resolution & $36\times36$ & $960\times540$ \\ Image pixels in batch & $311$k & $1$M \\ Training iterations & $200$k & $300$k \\ Training time & $7$ hours & $32$ hours \\ \hline \end{tabular} \vspace*{2mm} \caption{Different settings of patch- and image-based training. Settings are very similar, except that the number of pixels and the training time in image-based training are larger. Note that the effective number of pixels is further boosted by sliding the convolutions of the VSR architecture over the entire images with a stride of one. $^\dagger$multiplied by $0.5$ every $50$k iterations.} \label{tab:setting} \end{center} \end{table} \section{Video super-resolution with image-based training} We perform the same set of experiments as in the last section for image-based training. Comparing Table~\ref{tab:retrain_results} to Table~\ref{tab:image_retrain_results}, we find that PSNRs are generally around 1 point higher.
We also provide all the training settings in Table~\ref{tab:setting}. Image-based training in general processes more training data and sees a lot of similar data during training by sliding the convolutions over an entire image. Motion compensation on Myanmar (Table~\ref{tab:image_retrain_results}) still seems to have little effect, while motion compensation on Videoset4 does show better PSNR values (Table~\ref{tab:image_videoset4}). \begin{table} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|c|@{}p{2pt}@{}|c|@{}p{2pt}@{}|C|C|C|C|C|} \cline{1-1}\cline{3-3}\cline{5-9} \multirow{2}{*}{Arch.} && \multirow{2}{*}{\backslashbox{Trained on}{Tested on}} && only & no & Drulea & FlowNet2- & FlowNet2 \\ && && center & warp & \cite{drulea11} & SD \cite{ilg17} & \cite{ilg17} \\ \cline{1-1}\cline{3-3}\cline{5-9} \multicolumn{8}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-3}\cline{5-9} \multirow{5}{*}{VSR} && only center && $\textbf{32.41}$ & - & - & - & - \\ \cline{3-3}\cline{5-9} && no warp && $32.38$ & $\textbf{32.55}$ & - & - & - \\ \cline{3-3}\cline{5-9} && Drulea \cite{drulea11} && $32.37$ & $32.26$ & $\textbf{32.60}$ & - & - \\ \cline{3-3}\cline{5-9} && FlowNet2-SD \cite{ilg17} && $32.35$ & $32.37$ & $32.58$ & $\textbf{32.62}$ & - \\ \cline{3-3}\cline{5-9} && FlowNet2 \cite{ilg17} && $32.37$ & $32.36$ & $32.61$ & $32.61$ & $\textbf{32.63}$ \\ \cline{1-1}\cline{3-3}\cline{5-9} \end{tabular} } \vspace*{2mm} \captionof{table}{PSNR scores from Myanmar validation (ours). We now train the architecture of \cite{kapp16} by applying it as a convolution over the complete images. We again evaluate using only the center frame, original images and differently motion compensated frames. One can observe that scores are significantly better compared to the patch-based training, but motion compensation on the Myanmar validation set still has a negligible effect compared to training on original frames. } \label{tab:image_retrain_results} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline Training input & PSNR \\ \hline \hline only center & $24.66$ \\ \hline no warp & $24.79$ \\ \hline Drulea \cite{drulea11} & $24.91$ \\ \hline FlowNet2-SD \cite{ilg17} & $25.12$ \\ \hline FlowNet2 \cite{ilg17} & $25.13$ \\ \hline \end{tabular} \vspace*{2mm} \caption{Evaluation of the different image-based models on Videoset4. Motion compensation in this case also shows a performance improvement.} \label{tab:image_videoset4} \end{center} \end{table} \section{Joint Training} Since the FlowNet2-SD~\cite{ilg17} is completely trainable, we can refine the optical flow for the task of video super-resolution by training the whole network end-to-end with the super-resolution loss. This potentially allows the optical flow estimation to focus on aspects that are most relevant for the super-resolution task. As an initialization we took the VSR network trained with FlowNet2-SD~\cite{ilg17} from the last section and used the same settings from Table~\ref{tab:setting}, but now cropped the images to a resolution of $256\times256$ to enable a batch size of $8$. We then trained for $100$k more iterations. The result is given in Table~\ref{tab:joint_training} and Figures~\ref{fig:joint_img_b} and~\ref{fig:joint_flow_b}. We cannot see the flow itself improve, but we see a small improvement in the PSNR value on Videoset4, and from the images one can observe that the ringing artifacts disappear.
\begin{figure} \resizebox{\linewidth}{!}{ \subfigure[Initialization\label{fig:joint_img_a}]{\includegraphics[width=0.33\linewidth]{images/joint/flownet2.png}} \subfigure[After joint training\label{fig:joint_img_b}]{\includegraphics[width=0.33\linewidth]{images/joint/joint.png}} \subfigure[After joint training with smoothness\label{fig:joint_img_c}]{\includegraphics[width=0.33\linewidth]{images/joint/smooth.png}} } \resizebox{\linewidth}{!}{ \subfigure[Initialization\label{fig:joint_flow_a}]{\includegraphics[width=0.33\linewidth]{images/joint/flownet2-flow0.png}} \subfigure[After joint training\label{fig:joint_flow_b}]{\includegraphics[width=0.33\linewidth]{images/joint/joint-flow0.png}} \subfigure[After joint training with smoothness\label{fig:joint_flow_c}]{\includegraphics[width=0.33\linewidth]{images/joint/smooth-flow0.png}} } \caption{ Example super-resolved image after training FlowNet2-SD~\cite{ilg17} with VSR: initialization (a and d), after joint training (b and e), and after joint training with a smoothness constraint (c and f). } \label{fig:joint} \end{figure} \begin{table} \begin{center} \begin{tabular}{|l|c|c|c|} \hline \multicolumn{1}{|c|}{Test set} & After & After joint & After joint \\ & initialization & training & training with smoothness \\ \hline \hline Myanmar validation & 32.62 & 32.63 & 32.61 \\ \hline VideoSet4 & 25.12 & 25.21 & 25.19 \\ \hline \end{tabular} \vspace*{2mm} \caption{Evaluation of refining FlowNet2-SD~\cite{ilg17} on the super-resolution task.} \label{tab:joint_training} \end{center} \end{table} In Figure \ref{fig:joint_flow_b}, one can observe that many image details become flow artifacts. This is due to the nature of the gradient through the warping operation: it corrects the flow vector toward the best directly neighboring pixel, which is in most cases a local minimum. Following \cite{godard16}, we add a regularization loss that penalizes deviations from smoothness in the optical flow field, weighted with an image edge-aware term: \begin{equation} \mathcal{L}_R = \sum_{i,j} \left( e^{-||\partial_x I_{i,j}||}\left(|\partial_x u| + |\partial_x v|\right) + e^{-||\partial_y I_{i,j}||}\left(|\partial_y u| + |\partial_y v|\right)\right) \mathrm{,} \end{equation} where $I$ is the first image, $(i,j)$ is a pixel location, and $u,v$ are the $x,y$ components of the flow vector. The results of training with this additional smoothness term are given in Table~\ref{tab:joint_training} and Figures~\ref{fig:joint_img_c} and~\ref{fig:joint_flow_c}. The flow shows fewer artifacts than in Figure~\ref{fig:joint_flow_b}, but compared to Figure~\ref{fig:joint_img_b} some very slight ringing artifacts still remain. \section{Evaluating Architectures and Datasets\label{sec:arch_eval}} In the first part of the paper, the architecture from Dong~\etal~\cite{dong16} adapted to video super-resolution by Kappeler~\etal~\cite{kapp16} and the Myanmar training dataset were used. Here, we investigate the effect of architectures and training datasets. We extended the Myanmar training set by more high-resolution videos that we downloaded from YouTube. The resulting dataset has 162k frames of resolution $960\times540$ and we named it MYT. We evaluate and compare the SRCNN~\cite{dong16}, the FlowNet2-SD~\cite{ilg17} (here used for super-resolution, not flow) and the \mbox{encoder-/decoder} part of the architecture from Tao~\etal~\cite{tao17} (SPMC-ED) for single-image super-resolution on the old and new datasets. The results are given in Table~\ref{tab:arch_results}.
\begin{table} \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{|l|@{}p{2pt}@{}|c|c|c|c|} \cline{1-1}\cline{3-6} && SRCNN \cite{dong16} trained on & SRCNN \cite{dong16} & FlowNet2-SD~\cite{ilg17} & SPMC-ED~\cite{tao17} \\ && Myanmar training (ours) & trained on MYT & trained on MYT & trained on MYT \\ \cline{1-1}\cline{3-6} \multicolumn{6}{c}{} \\[-0.8\normalbaselineskip] \cline{1-1}\cline{3-6} Myanmar validation (ours) && $32.42$ & $31.98$ & $31.47$ & $32.63$ \\ Videoset4 && $24.63$ & $24.70$ & $24.93$ & $25.07$ \\ \cline{1-1}\cline{3-6} Number of parameters & & 57K & 57K & 14M & 491K \\ \cline{1-1}\cline{3-6} \end{tabular} } \vspace*{2mm} \caption{ PSNR values for different architectures and training datasets tested for single-image super-resolution. } \label{tab:arch_results} \end{center} \end{table} One can observe that SRCNN~\cite{dong16} tends to overfit on the Myanmar dataset. The much deeper FlowNet2-SD~\cite{ilg17} architecture performs worse on Myanmar, but can generalize better to Videoset4. The size of SPMC-ED~\cite{tao17} is between the former two and we observe that it performs best on Myanmar and also for generalization to Videoset4. It clearly gives better results than SRCNN~\cite{dong16} and for this reason we also use it for the final network in the paper. \bibliographystyle{splncs03}
\section{Introduction}\label{intro} The partition functions of three-dimensional Chern-Simons theories show various interesting aspects of M2-branes. The worldvolume theory of $N$ M2-branes on ${\mathbb R}^8/{\mathbb Z}_k$ is described by an ${\cal N}=6$ superconformal Chern-Simons theory called ABJM theory \cite{ABJM}, which has a gauge group U$(N)_k\times$U$(N)_{-k}$ (with the subscripts denoting the Chern-Simons levels) and two pairs of bifundamental matters connecting the two U$(N)$ factors. Due to the progress in the supersymmetric localization \cite{KWY}, the partition function on a sphere is reduced to a matrix model with a finite-dimensional multiple integral. One of the major developments is the full determination of the partition function of the ABJM theory in the large $N$ expansion, including the perturbative \cite{DMP1,DMP2,FHM,MP} and non-perturbative \cite{HMO2,CM,HMO3,HMMO} effects. In this study, it is interesting to find, among other things, that the matrix model has several interpretations. On one hand, it can be superficially regarded as the pure Chern-Simons matrix model with an unconventional super gauge group U$(N|N)$ \cite{DT}. On the other hand, the matrix model can be regarded as the partition function of a Fermi gas system \begin{align} Z^\text{ABJM}_k(N)=\frac{1}{N!}\sum_{\sigma\in S_N}(-1)^\sigma \int\frac{d^N\mu}{(2\pi)^N}\prod_{i=1}^N \langle\mu_i|\widehat\rho_{\text{U}(N|N)}|\mu_{\sigma(i)}\rangle, \label{trace} \end{align} with a non-trivial density matrix \begin{align} \widehat\rho_{\text{U}(N|N)} =\frac{1}{\sqrt{2\cosh\frac{\widehat q}{2}}} \frac{1}{2\cosh\frac{\widehat p}{2}} \frac{1}{\sqrt{2\cosh\frac{\widehat q}{2}}}, \label{density} \end{align} which is closely related to the quantum mechanical system associated to the local ${\mathbb P}^1\times{\mathbb P}^1$ geometry \cite{MPtop}. It is then interesting to ask whether we can generalize the results to theories with a large number of supersymmetries.\footnote{ For other generalizations whose exact large $N$ expansion is known, see \cite{MN1,MN2,MN3} for the $(2,2)$ model and \cite{GHM1,OZ,Ha} for the local ${\mathbb P}^2$ model.} One direction is the generalization to the matrix model with a superficial gauge group U$(N_1|N_2)$ \cite{HLLLP2,ABJ} where the two factors of the bosonic subgroup have different ranks and the physical interpretation of the difference is the introduction of fractional M2-branes \cite{ABJ}. In the study of the partition function with this deformation \cite{AHS,Ho1,MM,HoOk,HNS}, there are two formulations. The first one, called closed string formalism in \cite{PTEP}, changes the expression of the density matrix $\widehat\rho$ \eqref{density} while preserving the trace structure \eqref{trace}. This formalism was first conjectured in \cite{AHS} and then proved in \cite{Ho1}. Partially due to the lack of a proof of the formalism for a long time, in \cite{MM} another formalism, called open string formalism, was proposed. This formalism, on the other hand, keeps the expression of the density matrix \eqref{density}, while modifying the trace structure \eqref{trace} with an extra determinant factor. Another direction is the replacement of the unitary supergroup by the orthosymplectic supergroup \cite{HLLLP2,ABJ}, whose physical interpretation is the introduction of the orientifold plane in the type IIB description. The study of the partition function was initiated in\footnote{ Some works which may be related to a similar physical setup are \cite{MN4,ADF,Ok1}.
} \cite{MS1} by the case of OSp$(2N|2N)$ with equal sizes of bosonic submatrices from the expectation that the case without the fractional branes should play a fundamental role. Among other things, it was found that the density matrix for this theory is closely related to $\bigl[\widehat\rho_{\text{U}(N|N)}\bigr]_+$, the density matrix for the ABJM theory with a projection to the even chirality. Here the chirally projected density matrices \begin{align} \bigl[\widehat\rho_{\text{U}(N|N)}\bigr]_\pm =\widehat\rho_{\text{U}(N|N)}\frac{1\pm\widehat R}{2}, \end{align} were introduced in \cite{HMO1,MePu} with $\widehat R$ being the reflection operator changing the sign of the coordinate. It was also found that when we double the quivers following the prescription in \cite{HoMo}, the partition function schematically reduces to the ABJM partition function. Recently, there appeared an interesting paper \cite{Ho2}. In \cite{Ho2}, it was observed that the OSp$(2N+1|2N)$ theory, still having equal ranks and hence no fractional branes \cite{ABJ}, seems to serve an equally fundamental role. It was found that the density matrix for the OSp$(2N+1|2N)$ theory is exactly that of the ABJM theory with the projection to the odd chirality \begin{align} \widehat\rho_{\text{OSp}(2N+1|2N)} =\bigl[\widehat\rho_{\text{U}(N|N)}\bigr]_-. \label{oddproj} \end{align} It is then interesting to ask whether and how this relation holds in the deformation into the case of different ranks. The first part of this paper is devoted to answering this question. We have found that, when we deform the theory into that with a superficial gauge group OSp$(2N+1|2(N+M))$ (or OSp$(2(N+M)+1|2N)$ which shares the same partition function), the density matrix is again exactly the odd projection of the density matrix for the theory with a superficial unitary gauge group U$(N|N+2M)$: \begin{align} \widehat\rho_{\text{OSp}(2N+1|2(N+M))} =\bigl[\widehat\rho_{\text{U}(N|N+2M)}\bigr]_-. \label{oddprojM} \end{align} See figure \ref{oddpic} for a schematic picture explaining the relation. We stress that the relation \eqref{oddprojM} gives a Fermi gas formalism for the OSp$(2N+1|2(N+M))$ theory, which enables the study of the grand potential and its relation to topological string theory. Our manipulations start with an expression rather similar to the open string formalism \cite{MM}. It is useful to keep the determinant factor coming from the open string formalism to see many cancellations in the expressions. After performing a similarity transformation and an integration of delta functions, we can put the expression into the form of the closed string formalism and prove the relation \eqref{oddprojM}. In both of the U$(N|N+2M)$ and OSp$(2N+1|2(N+M))$ theories there is a physical bound \cite{ABJ} stating that $0\le 2M\le k$.\footnote{ Note that the level in the orthosymplectic matrix model is $k$ instead of $2k$. In other words, the number of D5-branes in the brane construction of \cite{ABJ} is $k$ in our convention.} It is interesting to find that our relation between these two theories is consistent with the bound. We note that, although we are influenced by the work of \cite{Ho1}, it seems difficult to arrive at our proof of the relation \eqref{oddprojM} if we simply follow the change of variables in \cite{Ho1}.
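Explicitly, since $\widehat R$ acts on position eigenstates as $\widehat R|q\rangle=|{-q}\rangle$, the chiral projections act on position-space kernels $\rho(q_1,q_2)=\langle q_1|\widehat\rho|q_2\rangle$ simply as
\begin{align}
\langle q_1|\bigl[\widehat\rho\bigr]_\pm|q_2\rangle
=\frac{\rho(q_1,q_2)\pm\rho(q_1,-q_2)}{2},
\end{align}
a form which will be useful below.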
\begin{figure} \centering\includegraphics[scale=0.6,bb=200 350 400 500]{ospodd.eps} \caption{A schematic relation between the density matrix for the orthosymplectic theory and that for the unitary theory.} \label{oddpic} \end{figure} Following the observation \eqref{oddproj}, in the second part, we turn to the study of the simplest $M=0$ case, the OSp$(2N+1|2N)$ theory, which is equivalent to the ABJM U$(N|N)$ theory with the odd chiral projection. We study the exact values of the partition functions constructed from the chirally projected density matrices and read off the grand potentials $J_{\pm,k}(\mu)$ from the numerical fitting. We find an interesting functional relation stating that the difference between $J_{+,k}(\mu)$ and $J_{-,k}(\mu)$ is extremely simple for integral $k$, with an explicit relation expressed in $k$ mod $8$ as in the case of the OSp$(2N|2N)$ theory \cite{MS1}. We further turn to the worldsheet instanton effects and identify the diagonal Gopakumar-Vafa invariants. This paper is organized as follows. In section \ref{odd}, we present a proof for \eqref{oddprojM}. After establishing this relation, we turn to the study of the grand potential in section \ref{functional}. Finally we conclude with some discussions. The appendix is devoted to a collection of the data needed for our claim in section \ref{functional}. \noindent {\bf Note Added}: After this work was done and while we were preparing the draft, \cite{Ok2} appeared on arXiv, which has some overlaps with our section \ref{functional} (especially \eqref{npdiff}). \section{Orthosymplectic matrix model as odd projection}\label{odd} In this section we shall prove that the density matrix for the orthosymplectic matrix model with the superficial gauge group OSp$(2N_1+1|2N_2)$ is equivalent to a chiral half of that for a matrix model with a suitable unitary super gauge group. Let us start with the partition function of the orthosymplectic theory\footnote{ Compared with the standard normalization in the literature such as \cite{MS1}, the integral variables $\mu_i$ and $\nu_k$ are rescaled by $k$ from the beginning.} \begin{align} &Z_{k}(N_1,N_2) =\int\frac{D^{N_1}\mu}{N_1!}\frac{D^{N_2}\nu}{N_2!} \frac{V_\text{O}V_\text{Sp}}{H}, \end{align} where the integration measure coming from the tree-level contribution is \begin{align} D\mu_i=\frac{d\mu_i}{4\pi k}e^{\frac{i}{4\pi k}\mu_i^2},\quad D\nu_k=\frac{d\nu_k}{4\pi k}e^{-\frac{i}{4\pi k}\nu_k^2}, \end{align} while the measures from the one-loop contributions of the vector multiplets and the hypermultiplets are \begin{align} V_\text{O}&=\prod_{i<j}^{N_1}\Bigl(2\sinh\frac{\mu_i-\mu_j}{2k}\Bigr)^2 \Bigl(2\sinh\frac{\mu_i+\mu_j}{2k}\Bigr)^2 \prod_{i=1}^{N_1}\Bigl(2\sinh\frac{\mu_i}{2k}\Bigr)^2,\nonumber\\ V_\text{Sp}&=\prod_{k<l}^{N_2}\Bigl(2\sinh\frac{\nu_k-\nu_l}{2k}\Bigr)^2 \Bigl(2\sinh\frac{\nu_k+\nu_l}{2k}\Bigr)^2 \prod_{k=1}^{N_2}\Bigl(2\sinh\frac{\nu_k}{k}\Bigr)^2,\nonumber\\ H&=\prod_{i=1}^{N_1}\prod_{k=1}^{N_2} \Bigl(2\cosh\frac{\mu_i-\nu_k}{2k}\Bigr)^2 \Bigl(2\cosh\frac{\mu_i+\nu_k}{2k}\Bigr)^2 \prod_{k=1}^{N_2}\Bigl(2\cosh\frac{\nu_k}{2k}\Bigr)^2. \end{align} After taking care of the trivial cancellation between $V_\text{Sp}$ and $H$, we find that the partition function is symmetric under the simultaneous exchange of $(N_1,N_2)$ and the sign change of $k$. Hereafter let us assume $N_1\le N_2$ and $k>0$ without loss of generality (otherwise we can simply consider the complex conjugate) and rewrite $Z_k(N_1,N_2)$ as $Z_{k,M}(N)$ by introducing $N=N_1$ and $M=N_2-N_1$.
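Here, the trivial cancellation between $V_\text{Sp}$ and $H$ mentioned above is a consequence of the double-angle identity
\begin{align}
2\sinh\frac{\nu_k}{k}
=\Bigl(2\sinh\frac{\nu_k}{2k}\Bigr)\Bigl(2\cosh\frac{\nu_k}{2k}\Bigr),
\end{align}
so that the factor $\prod_{k=1}^{N_2}(2\sinh\frac{\nu_k}{k})^2$ in $V_\text{Sp}$ combines with the factor $\prod_{k=1}^{N_2}(2\cosh\frac{\nu_k}{2k})^2$ in $H$, leaving $\prod_{k=1}^{N_2}(2\sinh\frac{\nu_k}{2k})^2$ in the same form as the corresponding factor of $V_\text{O}$.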
As in the case of the non-equal rank deformation of the ABJM theory \cite{MM}, let us first prepare a determinant formula suitable for the application to the current situation, \begin{align} &\det\begin{pmatrix} \Bigl[\frac{1} {(z_i+w_k)(1+1/(z_iw_k))}\Bigr] _{(i,k)\in Z_N\times Z_{N+M}}\\ \Bigl[\frac{w_k^{m-\frac12}-w_k^{-(m-\frac12)}} {w_k^{\frac12}-w_k^{-\frac12}} \Bigr] _{(m,k)\in Z_M\times Z_{N+M}}\\ \end{pmatrix} =(-1)^{MN+\frac12M(M-1)} \nonumber\\&\qquad\times \frac{\prod_{i<j}^{N}(z_i-z_j)(1-1/(z_iz_j)) \prod_{k<l}^{N+M}(w_k-w_l)(1-1/(w_kw_l))} {\prod_{i=1}^{N}\prod_{k=1}^{N+M}(z_i+w_k)(1+1/(z_iw_k))}, \label{Cauchy} \end{align} where $Z_L=\{1,2,\cdots,L\}$ is a set of $L$ elements in this ordering. This formula can be derived as follows. We start with the standard Cauchy determinant \cite{MePu,MS1} \begin{align} &\det\begin{pmatrix} \Bigl[\frac{1}{(z_i+w_k)(1+1/(z_iw_k))}\Bigr] _{(i,k)\in Z_{N+M}\times Z_{N+M}}\\ \end{pmatrix} \nonumber\\&\qquad =\frac{\prod_{i<j}^{N+M}(z_i-z_j)(1-1/(z_iz_j)) \prod_{k<l}^{N+M}(w_k-w_l)(1-1/(w_kw_l))} {\prod_{i=1}^{N+M}\prod_{k=1}^{N+M}(z_i+w_k)(1+1/(z_iw_k))}. \end{align} Then, we send $z_{N+1},z_{N+2},\cdots,z_{N+M}$ to infinity one after another using the series expansion in $z$, \begin{align} \frac{1}{(z+w)(1+1/(zw))}=\sum_{m=1}^\infty\frac{(-1)^{m-1}}{z^m} \frac{w^m-w^{-m}}{w-w^{-1}}. \end{align} Since in the determinant we can add a multiple of one row to another without changing its value, the leading contribution in the $m$-th row of the lower block is the $z^{-m}$ term, $(w^m-w^{-m})/(w-w^{-1})$. Note that this coefficient is a Laurent polynomial of $w$. Again, due to the same property of the row addition, we can keep only the top terms of the polynomials $w^{m-1}+w^{-(m-1)}$ or change the lower terms arbitrarily. We choose to replace this coefficient by a similar one whose powers decrease in steps of one instead of two, \begin{align} \frac{w^m-w^{-m}}{w-w^{-1}}\to \frac{w^{m-\frac12}-w^{-(m-\frac12)}} {w^{\frac12}-w^{-\frac12}}. \end{align} This proves the determinant formula \eqref{Cauchy}. Then, after substituting $z_i=e^{\mu_i}$ and $w_k=e^{\nu_k}$ into \eqref{Cauchy}, we can rewrite the measure as the product of two determinants \begin{align} \frac{V_\text{O}V_\text{Sp}}{H} =\det\begin{pmatrix} \Bigl[\frac{(2\sinh\frac{\mu_i}{2k})(2\sinh\frac{\nu_k}{2k})} {(2\cosh\frac{\mu_i-\nu_k}{2k})(2\cosh\frac{\mu_i+\nu_k}{2k})}\Bigr] _{(i,k)\in Z_N\times Z_{N+M}}\\ \Bigl[2\sinh\frac{(m-\frac12)\nu_k}{k}\Bigr] _{(m,k)\in Z_M\times Z_{N+M}}\\ \end{pmatrix}^2. \label{meadet} \end{align} As usual, it is useful to introduce the operators $\widehat{q}$ and $\widehat{p}$ satisfying the canonical commutation relation $[\widehat{q},\widehat{p}]=i\hbar$ with the Planck constant identified with $\hbar=2\pi k$. In terms of the eigenstates $|\mu\rangle$ of $\widehat{q}$ normalized by $\langle\mu|\nu\rangle=2\pi\delta(\mu-\nu)$, the entries in the upper block of the determinant can be rewritten using the matrix elements \begin{align} \langle\mu_i|\frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2} |\nu_k\rangle =\frac{1}{2k}\frac{(2\sinh\frac{\mu_i}{2k})(2\sinh\frac{\nu_k}{2k})} {(2\cosh\frac{\mu_i-\nu_k}{2k})(2\cosh\frac{\mu_i+\nu_k}{2k})}. \label{cosh} \end{align} For the lower entries, we introduce states $\llangle m|$, $|m\rrangle$ defined such that\footnote{ In terms of the suitably normalized zero-momentum eigenstate $|\widetilde 0\rangle$ introduced in \cite{MS1}, this can be expressed as $|m\rrangle=2\sinh\frac{(m-\frac{1}{2})\widehat q}{k}|\widetilde 0\rangle$.
Hence, this state is a linear combination of momentum eigenstates $|\widetilde{p}\rangle$ with imaginary momenta $p=\pm(2m-1)\pi i$. The subtlety of this state will need special care later in this section. } \begin{align} \llangle m|\nu_k\rangle=\langle\nu_k|m\rrangle =2\sinh\frac{(m-\frac{1}{2})\nu_k}{k}. \label{doubly} \end{align} We can trivialize one of the permutations coming from the determinants in \eqref{meadet} by relabeling the indices of $\nu_k$. After including the Gaussian factors $e^{\frac{i}{4\pi k}\mu_i^2}$ and $e^{-\frac{i}{4\pi k}\nu_k^2}$, the partition function becomes \begin{align} Z_{k,M}(N) &=\frac{1}{N!} \int\frac{d^{N}\mu}{(4\pi k)^{N}}\frac{d^{N+M}\nu}{(4\pi k)^{N+M}} \prod_{i=1}^{N}2k \langle\mu_i|e^{\frac i{2\hbar}\widehat{q}^2} \frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2} e^{-\frac i{2\hbar}\widehat{q}^2}|\nu_i\rangle \prod_{m=1}^M\llangle m| e^{-\frac i{2\hbar}\widehat{q}^2}|\nu_{N+m}\rangle \nonumber\\ &\quad\times\det\begin{pmatrix} \Bigl[2k\langle\nu_k|\frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2} |\mu_j\rangle\Bigr]_{(k,j)\in Z_{N+M}\times Z_N}& \Bigl[\langle\nu_k|n\rrangle\Bigr]_{(k,n)\in Z_{N+M}\times Z_M} \end{pmatrix}. \label{Z1} \end{align} In the case of equal ranks, it was a standard technique to perform a similarity transformation \cite{HHMO} \begin{align} \langle\mu_i|\to\langle\mu_i|e^{\frac i{2\hbar}\widehat{p}^2},\quad |\mu_i\rangle\to e^{-\frac i{2\hbar}\widehat{p}^2}|\mu_i\rangle,\quad \langle\nu_k|\to\langle\nu_k|e^{\frac i{2\hbar}\widehat{p}^2},\quad |\nu_k\rangle\to e^{-\frac i{2\hbar}\widehat{p}^2}|\nu_k\rangle, \label{similar} \end{align} which is allowed because all of these states appear only in \begin{align} 1=\int\frac{d\mu_i}{2\pi}|\mu_i\rangle\langle\mu_i| =\int\frac{d\nu_k}{2\pi}|\nu_k\rangle\langle\nu_k|. \end{align} Here we follow this similarity transformation and see the effects on each component. Roughly speaking, in the following we shall see that the matrix elements in the two products in \eqref{Z1} in front of the determinant become delta functions, which enable us to perform the $\nu_k$ integrations. First, let us consider the determinant part \begin{align} \det\begin{pmatrix} \Bigl[2k\langle\nu_k|e^{\frac i{2\hbar}\widehat{p}^2} \frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2} e^{-\frac i{2\hbar}\widehat{p}^2}|\mu_j\rangle\Bigr] _{(k,j)\in Z_{N+M}\times Z_N}& \Bigl[\langle\nu_k|e^{\frac i{2\hbar}\widehat{p}^2}|n\rrangle\Bigr] _{(k,n)\in Z_{N+M}\times Z_M} \end{pmatrix}. \label{det1} \end{align} It is trivial to see that the left block of the determinant is unchanged under the similarity transformation, while the right block can be easily computed as \begin{align} \langle\nu_k|e^{\frac{i}{2\hbar}\widehat p^2}|n\rrangle =e^{-\frac{i}{2\hbar}(2\pi(n-\frac{1}{2}))^2}\langle\nu_k|n\rrangle. \end{align} After taking care of the extra phase factors, the determinant \eqref{det1} can be written as \begin{align} e^{-\frac{\pi i}{12k}M(2M+1)(2M-1)}\det\begin{pmatrix} \Bigl[2k\langle\nu_k|\frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2} |\mu_j\rangle\Bigr]_{(k,j)\in Z_{N+M}\times Z_N}& \Bigl[\langle\nu_k|n\rrangle\Bigr]_{(k,n)\in Z_{N+M}\times Z_M} \end{pmatrix}. \label{det2} \end{align} Note that this is an odd function of both $\mu_i$ and $\nu_k$, which can be shown by the determinant formula \eqref{Cauchy}. Next, let us consider the matrix elements in \eqref{Z1} in front of the determinant.
For the first product, after the similarity transformation, which changes $(2\cosh\frac{\widehat p}{2})^{-1}$ into $(2\cosh\frac{\widehat q}{2})^{-1}$, we find \begin{align} 2k\langle\mu_i|e^{\frac i{2\hbar}\widehat{p}^2} e^{\frac i{2\hbar}\widehat{q}^2} \frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2} e^{-\frac i{2\hbar}\widehat{q}^2} e^{-\frac i{2\hbar}\widehat{p}^2}|\nu_i\rangle =\frac{2\pi k}{2\cosh\frac{\mu_i}{2}} (\delta(\mu_i-\nu_i)-\delta(\mu_i+\nu_i)), \label{prodi} \end{align} where we have explicitly spelled out the matrix element $\langle\mu_i|\widehat{\Pi}_-|\nu_i\rangle$. For the second product, we have \begin{align} \llangle m|e^{-\frac{i}{2\hbar}\widehat q^2} e^{-\frac{i}{2\hbar}\widehat p^2}|\nu_{N+m}\rangle =\int\frac{d\lambda}{2\pi} 2\sinh\frac{(m-\frac{1}{2})\lambda}{k} e^{-\frac{i}{2\hbar}\lambda^2}\frac{1}{\sqrt{ik}} e^{\frac{i}{2\hbar}(\lambda-\nu_{N+m})^2}. \label{subtle} \end{align} There is a subtlety in the definition of this integral which will be clarified at the end of this section. For the moment, we perform the Gaussian integral formally \begin{align} &\llangle m|e^{-\frac{i}{2\hbar}\widehat q^2} e^{-\frac{i}{2\hbar}\widehat p^2}|\nu_{N+m}\rangle =\frac{2\pi k}{\sqrt{ik}} e^{-\frac i{2\hbar}(2\pi(m-\frac12))^2} \nonumber\\&\quad\times (\delta(\nu_{N+m}+(2m-1)\pi i)-\delta(\nu_{N+m}-(2m-1)\pi i)). \label{prodm} \end{align} As a result, all the $\nu_k$ integrations can be done explicitly due to the delta functions in \eqref{prodi} and \eqref{prodm}. There are further simplifications. Since the remaining determinant \eqref{det2} in the integrand is an odd function of $\nu_k$, we can simply replace the matrix elements discussed above as \begin{align} &2k\langle\mu_i|e^{\frac i{2\hbar}\widehat{p}^2} e^{\frac i{2\hbar}\widehat{q}^2} \frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2} e^{-\frac i{2\hbar}\widehat{q}^2} e^{-\frac i{2\hbar}\widehat{p}^2}|\nu_i\rangle \to\frac{4\pi k}{2\cosh\frac{\mu_i}{2}}\delta(\mu_i-\nu_i),\nonumber\\ &\llangle m|e^{-\frac{i}{2\hbar}\widehat q^2} e^{-\frac{i}{2\hbar}\widehat p^2}|\nu_{N+m}\rangle \to\frac{4\pi k}{\sqrt{ik}} e^{-\frac{i}{2\hbar}(2\pi(m-\frac{1}{2}))^2} \delta\bigl(\nu_{N+m}+(2m-1)\pi i\bigr). \end{align} After substituting these replacements and taking care of the extra phase factors, the partition function is given by \begin{align} Z_{k,M}(N)&=e^{-\frac{\pi i}{6k}M(2M+1)(2M-1)}(ik)^{-\frac M2} \frac{1}{N!} \int\frac{d^N\mu}{(4\pi k)^N} \prod_{i=1}^N\frac{1}{2\cosh\frac{\mu_i}{2}} \nonumber\\ &\qquad\times\det\begin{pmatrix} \Bigl[2k\langle\mu_i| \frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2}|\mu_j\rangle\Bigr] _{(i,j)\in Z_N\times Z_N}& \Bigl[\langle\mu_i|n\rrangle\Bigr] _{(i,n)\in Z_N\times Z_M}\\ \Bigl[2k\langle\rho_m| \frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2}|\mu_j\rangle\Bigr] _{(m,j)\in Z_M\times Z_N}& \Bigl[\langle\rho_m|n\rrangle\Bigr] _{(m,n)\in Z_M\times Z_M} \end{pmatrix}, \label{Zdet} \end{align} where $\rho_m=-(2m-1)\pi i$.
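Note that for $M=0$ the relation \eqref{oddproj} can already be read off from \eqref{Zdet}: pulling the factor $2k$ out of each row and symmetrizing the factors $(2\cosh\frac{\mu_i}{2})^{-1}$ into the determinant, which is possible since $\widehat\Pi_-$ commutes with even functions of $\widehat q$, we obtain
\begin{align}
Z_{k,0}(N)=\frac{1}{N!}\int\frac{d^N\mu}{(2\pi)^N}
\det\biggl[\langle\mu_i|\frac{1}{\sqrt{2\cosh\frac{\widehat q}{2}}}
\frac{\widehat\Pi_-}{2\cosh\frac{\widehat p}{2}}
\frac{1}{\sqrt{2\cosh\frac{\widehat q}{2}}}|\mu_j\rangle\biggr]_{(i,j)\in Z_N\times Z_N},
\end{align}
which is the Fermi gas form \eqref{trace} with the density matrix $\bigl[\widehat\rho_{\text{U}(N|N)}\bigr]_-$.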
Using again the Cauchy determinant formula \eqref{Cauchy} for the determinant factor in \eqref{Zdet}, we finally find that the partition function is given by \begin{align} \frac{(-1)^{MN}Z_{k,M}(N)}{Z_{k,M}(0)} &=\frac{1}{N!}\int\frac{d^N\mu}{(4\pi k)^N} \prod_{i=1}^N\frac{(2\sinh\frac{\mu_i}{2k})^2V(\mu_i)} {4\cosh\frac{\mu_i}k} \prod_{i<j}^N \left(\tanh\frac{\mu_i-\mu_j}{2k}\tanh\frac{\mu_i+\mu_j}{2k}\right)^2, \label{ZN/Z0} \end{align} where we have defined $V(\mu)$ as \begin{align} V(\mu)=\frac1{2\cosh\frac{\mu}2}\prod_{m=1}^M \tanh\frac{\mu-\rho_m}{2k}\tanh\frac{\mu+\rho_m}{2k}, \label{V} \end{align} and the normalization factor as \begin{align} Z_{k,M}(0)&=(-1)^{\frac{1}{2}M(M-1)} e^{-\frac{\pi i}{6k}M(2M+1)(2M-1)} \left({i}{k}\right)^{-\frac M2}\nonumber\\ &\quad\times\prod_{m=1}^M2\sinh\frac{\rho_m}{2k} \prod_{m<n}^M4\sinh\frac{\rho_m-\rho_n}{2k}\sinh\frac{\rho_m+\rho_n}{2k}. \end{align} The expression \eqref{ZN/Z0} can be interpreted as the partition function of a Fermi gas system \begin{align} \frac{(-1)^{MN}Z_{k,M}(N)}{Z_{k,M}(0)} =\frac{1}{N!}\sum_{\sigma\in S_N}(-1)^\sigma \int\frac{d^N\mu}{(2\pi)^N} \prod_{i=1}^N\langle\mu_i|\widehat\rho|\mu_{\sigma(i)}\rangle, \end{align} with the density matrix \begin{align} \widehat\rho=\sqrt{V(\widehat q)} \frac{\widehat\Pi_-}{2\cosh\frac{\widehat p}{2}} \sqrt{V(\widehat q)}. \end{align} If we rewrite the function $V(\mu)$ \eqref{V} as \begin{align} V(\mu)=\frac{1}{2\cosh\frac{\mu}{2}} \prod_{s=-(M-\frac{1}{2})}^{M-\frac{1}{2}} \tanh\frac{\mu+2\pi is}{2k}, \end{align} and compare it with the result for U$(N_1|N_2)$, we easily find that this is nothing but (2.21) in \cite{PTEP} with $M$ replaced by $2M$. Let us now return to the subtlety in \eqref{subtle}. One way to regularize the integral is to insert $e^{-i\epsilon\lambda^2}$ into \eqref{subtle} with an infinitesimal parameter $\epsilon>0$ and rotate the integration contour clockwise. Then, the integration becomes \begin{align} \llangle m|e^{-\frac{i}{2\hbar}\widehat q^2} e^{-\frac{i}{2\hbar}\widehat p^2}|\nu_{N+m}\rangle =\frac{1}{\sqrt{ik}}e^{\frac{i}{2\hbar}\nu_{N+m}^2} \biggl[\Delta_\epsilon\Bigl(\nu_{N+m},\frac{2m-1}{2k}\Bigr) -\Delta_\epsilon\Bigl(\nu_{N+m},-\frac{2m-1}{2k}\Bigr)\biggr], \end{align} where $\Delta_\epsilon(\nu_{N+m},\alpha)$ is given by \begin{align} \Delta_\epsilon(\nu_{N+m},\alpha) =\int\frac{d\lambda}{2\pi}e^{\alpha\lambda} e^{-\frac{i}{\hbar}\lambda\nu_{N+m}}e^{-i\epsilon\lambda^2}, \end{align} which is vanishing in the limit $\epsilon\to 0$ for \begin{align} (\re\nu_{N+m})\Bigl(\alpha+\frac{\im\nu_{N+m}}{2\pi k}\Bigr)>0. \label{vanishcond} \end{align} In \eqref{prodm} we have formally rotated $\nu_{N+m}$ counterclockwise to a pure imaginary variable as well and found the integration reduces to a sum of delta functions in the limit $\epsilon\to 0$. Of course, such a manipulation is allowed only if the integration contour of $\nu_{N+m}$ does not pick up any finite residues in the rotation. Possible residues might come from poles of the matrix element $2k\langle\nu_{N+m}|\frac{\widehat{\Pi}_-}{2\cosh\frac{\widehat{p}}2}|\mu_j\rangle$ in the determinant in \eqref{Z1}, which are located at $\nu_{N+m}=\pm\mu_j+lk\pi i$ with odd integral $l$, or more concisely $|\im(\nu_{N+m})|\ge k\pi$, as can be seen from the expression \eqref{cosh}.
On the other hand, our computation \eqref{vanishcond} for the regularized expression shows that the residues in the region $\re(\nu_{N+m})>0,\im(\nu_{N+m})>(2m-1)\pi$ and $\re(\nu_{N+m})<0,\im(\nu_{N+m})<-(2m-1)\pi$ are accompanied by a vanishing factor in the limit $\epsilon\to 0$. Since the index $m$ runs over $m=1,2,\cdots,M$ and the consistency of the ${\rm OSp}(2N+1|2(N+M))$ theory requires $2M\le k$, only poles in the region $|\im(\nu_{N+m})|<k\pi$ are relevant. Therefore, we are allowed to use the formal expression \eqref{prodm} in the proof. \section{Exact functional relation and topological invariants}\label{functional} In the previous section, we have established the relation between the density matrix for the orthosymplectic OSp$(2N+1|2(N+M))$ (or OSp$(2(N+M)+1|2N)$) matrix model and that for the unitary U$(N|N+2M)$ matrix model with the projection to the odd chirality. Here we shall proceed to studying the simplest $M=0$ case \cite{Ho2}, the OSp$(2N+1|2N)$ grand potential, which is equivalent to the grand potential $J_{-,k}(\mu)$ constructed from the density matrix for the original ABJM U$(N|N)$ matrix model with the odd projection. Although the chiral projection of the density matrix was introduced early in \cite{HMO1} and its importance was already stressed in \cite{MePu,MS1}, there had not been a strong motivation to study it carefully\footnote{ Very recently, we were informed by Kazumi Okuyama that the grand potentials of general U$(N_1|N_2)$ theories with the chiral projections are studied in \cite{Ok2} in the expectation of their physical relevance. This section has some overlaps with that work. } until it was found that it appears directly in the orthosymplectic matrix model \cite{Ho2}. In this section, we shall study the non-perturbative effects of $J_{-,k}(\mu)$ carefully. We point out a functional relation between the grand potentials with the chiral projections $J_{\pm,k}(\mu)$, from which the membrane instantons due to the chiral projections are determined. Then, we further turn to the study of the worldsheet instantons in $J_{-,k}(\mu)$. We first define the grand potentials constructed from the density matrices with the chiral projections \begin{align} \sum_{n=-\infty}^\infty e^{J_{\pm,k}(\mu+2\pi in)}=\det(1+e^\mu\rho_\pm). \end{align} The perturbative part of each grand potential is given by a cubic polynomial \begin{align} J^\text{pert}_{\pm,k}(\mu) =\frac{C_{\pm,k}}{3}\mu^3+B_{\pm,k}\mu+A_{\pm,k}, \end{align} with the coefficients related to those of the ABJM theory by \begin{align} C_{\pm,k}=\frac{C^\text{ABJM}_{k}}{2},\quad B_{\pm,k}=\frac{B^\text{ABJM}_{k}\pm1/2}{2},\quad A_{\pm,k}=\frac{A^\text{ABJM}_{k}\mp\log{2}}{2}, \end{align} which results in the Airy function as in the full case \cite{FHM}. Our observation is that the non-perturbative part of the difference between the even and odd grand potentials $J_{\pm,k}(\mu)$ looks quite simple for integral $k$.
After extracting the perturbative terms via \begin{align} J_{+,k}(\mu)-J_{-,k}(\mu) =\frac{\mu}{2}-\log 2+\Delta_k(\mu), \end{align} we find from numerical fitting that the non-perturbative terms of the difference $\Delta_k(\mu)=J^\text{np}_{+,k}(\mu)-J^\text{np}_{-,k}(\mu)$ satisfy \begin{align} &\Delta_{k\equiv 1,7\,\text{mod}\,8}(\mu) =-\Delta_{k\equiv 3,5\,\text{mod}\,8}(\mu) =\frac{1}{4}\log\frac{1+2\sqrt{2}e^{-\mu}+4e^{-2\mu}} {1-2\sqrt{2}e^{-\mu}+4e^{-2\mu}},\nonumber\\ &\Delta_{k\equiv 0\,\text{mod}\,8}(\mu)=\frac{1}{2}\log(1+4e^{-\mu}), \quad\Delta_{k\equiv 4\,\text{mod}\,8}(\mu) =\frac{1}{2}\log(1-4e^{-\mu}),\nonumber\\ &\Delta_{k\equiv 2,6\,\text{mod}\,8}(\mu) =\frac{1}{4}\log(1+16e^{-2\mu}). \label{npdiff} \end{align} For the reader's convenience, we present in the appendix the exact values of the partition functions and the grand potentials found from the numerical fitting.\footnote{ These exact values are well known to several experts. For example, the values for $k=1$ appear in \cite{HMO1} and the values for $k=2,3,4,6$ are the basic ingredients used to compute the values without projections in \cite{HMO2}. The non-perturbative large $\mu$ expansion of the grand potential should also be known to experts. For example, some functional relations using them appear in \cite{GHM2}. The reason that we collect these results here is to justify our functional relation \eqref{npdiff} and to identify the diagonal Gopakumar-Vafa invariants in table \ref{GV}. } Note that the expression in \eqref{npdiff} is reminiscent of the odd-power terms in $e^{-\mu}$ in the orthosymplectic OSp$(2N|2N)$ matrix model. See (2.45) in \cite{MS1}. In the above, we have seen that the membrane instanton part is corrected for the orthosymplectic matrix model $J_{-,k}(\mu)$. It is natural to expect that the worldsheet instanton part should be corrected as well, if we believe that the total function should have a certain modular invariance connecting the membrane and worldsheet instanton parts. Since it seems that the membrane instantons do not contain new singularities, we expect that only the worldsheet instantons with genus greater than zero are corrected. To study the worldsheet instantons carefully, let us next turn to the sum of the two grand potentials $J_{\pm,k}(\mu)$, since the difference seems to encode only the membrane instantons. We first define the non-perturbative effects of the sum, $\Sigma_k(\mu_\text{eff})$, as \begin{align} J_{+,k}(\mu)+J_{-,k}(\mu) =\frac{C^\text{ABJM}_k}{3}\mu_\text{eff}^3 +B^\text{ABJM}_k\mu_\text{eff}+A^\text{ABJM}_k+\Sigma_k(\mu_\text{eff}), \end{align} where the right-hand side is expressed in terms of the effective chemical potential $\mu_\text{eff}$ given in \cite{HMO3}. Then, we can rewrite the results in appendix \ref{GP} as in table \ref{sum}.
\begin{table}[!ht] \begin{align*} &\Sigma_1(\mu)=\biggl[\frac{8\mu^2+4\mu+1}{4\pi^2} -\frac{3}{8}\biggr]e^{-4\mu} +\biggl[-\frac{9(32\mu^2+8\mu+1)}{32\pi^2}+\frac{67}{16}\biggr]e^{-8\mu} \\&\quad +\biggl[\frac{41(72\mu^2+12\mu+1)}{54\pi^2} -\frac{133}{4}\biggr]e^{-12\mu}+{\cal O}(e^{-16\mu}),\\ &\Sigma_2(\mu)=\biggl[\frac{2\mu^2+2\mu+1}{\pi^2} -\frac{1}{2}\biggr]e^{-2\mu} +\biggl[-\frac{9(8\mu^2+4\mu+1)}{8\pi^2}+\frac{17}{4}\biggr]e^{-4\mu} \\&\quad +\biggl[\frac{82(18\mu^2+6\mu+1)}{27\pi^2}-\frac{101}{3}\biggr]e^{-6\mu} +\biggl[-\frac{777(32\mu^2+8\mu+1)}{64\pi^2}+\frac{2273}{8}\biggr] e^{-8\mu} +{\cal O}(e^{-10\mu}),\\ &\Sigma_3(\mu)=\frac{4}{3}e^{-\frac{4}{3}\mu} +\biggl[\frac{8\mu^2+4\mu+1}{12\pi^2}-\frac{145}{72}\biggr]e^{-4\mu} -2e^{-\frac{16}{3}\mu} +{\cal O}(e^{-\frac{20}{3}\mu}),\\ &\Sigma_4(\mu)=e^{-\mu} +\biggl[-\frac{2\mu^2+2\mu+1}{2\pi^2}+\frac{5}{2}\biggr]e^{-2\mu} +\frac{10}{3}e^{-3\mu} +\biggl[-\frac{9(8\mu^2+4\mu+1)}{16\pi^2}+\frac{49}{4}\biggr]e^{-4\mu} \\&\quad +{\cal O}(e^{-5\mu}),\\ &\Sigma_5(\mu)=\frac{2(5-\sqrt{5})}{5}e^{-\frac{4}{5}\mu} -\frac{5-\sqrt{5}}{5}e^{-\frac{8}{5}\mu} +\frac{2(5+7\sqrt{5})}{15}e^{-\frac{12}{5}\mu} +\frac{15-13\sqrt{5}}{10}e^{-\frac{16}{5}\mu} +{\cal O}(e^{-4\mu}),\\ &\Sigma_6(\mu)=\frac{4}{3}e^{-\frac{2}{3}\mu} +\biggl[\frac{2\mu^2+2\mu+1}{3\pi^2}-\frac{43}{18}\biggr]e^{-2\mu} -2e^{-\frac{8}{3}\mu}+{\cal O}(e^{-\frac{10}{3}\mu}),\\ &\Sigma_8(\mu)=2e^{-\frac{1}{2}\mu}-\frac{1}{2}e^{-\mu} -\frac{4}{3}e^{-\frac{3}{2}\mu} +\biggl[-\frac{2\mu^2+2\mu+1}{4\pi^2}+\frac{23}{4}\biggr]e^{-2\mu} +{\cal O}(e^{-\frac{5}{2}\mu}),\\ &\Sigma_{12}(\mu)=4e^{-\frac{1}{3}\mu} -\frac{8}{3}e^{-\frac{2}{3}\mu} +\frac{1}{3}e^{-\mu} +6e^{-\frac{4}{3}\mu} +{\cal O}(e^{-\frac{5}{3}\mu}). \end{align*} \caption{Non-perturbative effects of the sum $\Sigma_{k}(\mu)$ of the grand potentials constructed from the two chirally projected density matrices.} \label{sum} \end{table} Using the expression of $\Sigma_k(\mu)$ in table \ref{sum}, we find that the coefficients $d_m(k)$ of the worldsheet instantons $e^{-\frac{4m\mu_\text{eff}}{k}}$ for $J^\text{np}_{-,k}(\mu)=(\Sigma_k(\mu_\text{eff})-\Delta_k(\mu))/2$ fit well with the Gopakumar-Vafa formula \begin{align} d_m(k)=\frac{(-1)^m}{m}\sum_{g=0}^\infty\sum_{d|m}n^g_d\, d\Bigl(2\sin\frac{2\pi m}{dk}\Bigr)^{2g-2}. \end{align} From the comparison, we can read off the diagonal Gopakumar-Vafa invariants $n^g_d$ directly, which are shown in table \ref{GV}. It is interesting to note that these invariants are all integers, which is not guaranteed a priori. Here we have listed the invariants for the ABJM theory as well for convenience. We find that, as expected, twice the invariants for $J_{-,k}(\mu)$ exactly match those for the ABJM theory at genus zero.
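To make the fit concrete, the multicovering sum can be evaluated directly once the invariants are fixed. The following Python sketch (ours) hard-codes the invariants of table \ref{GV} (left) for $d\le4$; it is valid only away from the poles where a sine factor vanishes:
\begin{verbatim}
import math

# n^g_d for J_{-,k}, read off from table GV (left), d = 1..4
n = {0: [-2, -2, -6, -24], 1: [0, 1, 8, 73], 2: [0, 0, -2, -76],
     3: [0, 0, 0, 39], 4: [0, 0, 0, -10], 5: [0, 0, 0, 1]}

def d_m(m, k):
    # Gopakumar-Vafa multicovering formula for the coefficient of
    # exp(-4 m mu_eff / k) in the worldsheet-instanton expansion
    total = 0.0
    for d in (d for d in range(1, m + 1) if m % d == 0 and d <= 4):
        s = 2 * math.sin(2 * math.pi * m / (d * k))
        for g, ngd in n.items():
            if ngd[d - 1]:
                total += ((-1) ** m / m) * ngd[d - 1] * d * s ** (2 * g - 2)
    return total

print(d_m(1, 6), d_m(2, 6))   # compare with the Sigma_6 entries of table sum
\end{verbatim}
For example, $d_1(6)=2/3$ reproduces the coefficient of $e^{-\frac{2}{3}\mu}$ in $J^\text{np}_{-,6}=(\Sigma_6-\Delta_6)/2$, since $\Delta_6$ contains no such term.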
\begin{table}[!ht] \begin{center} \begin{tabular}{|c||c|c|c|c|} \hline $d$&1&2&3&4\\ \hline\hline $n^0_d$&$-2$&$-2$&$-6$&$-24$\\ \hline $n^1_d$& $0$& $1$& $8$& $73$\\ \hline $n^2_d$& $0$& $0$&$-2$&$-76$\\ \hline $n^3_d$& $0$& $0$& $0$& $39$\\ \hline $n^4_d$& $0$& $0$& $0$&$-10$\\ \hline $n^5_d$& $0$& $0$& $0$& $1$\\ \hline $n^6_d$& $0$& $0$& $0$& $0$\\ \hline \end{tabular}\qquad \begin{tabular}{|c||c|c|c|c|} \hline $d$&1&2&3&4\\ \hline\hline $n^0_d$&$-4$&$-4$&$-12$& $-48$\\ \hline $n^1_d$& $0$& $0$& $0$& $9$\\ \hline $n^2_d$& $0$& $0$& $0$& $0$\\ \hline $n^3_d$& $0$& $0$& $0$& $0$\\ \hline $n^4_d$& $0$& $0$& $0$& $0$\\ \hline $n^5_d$& $0$& $0$& $0$& $0$\\ \hline $n^6_d$& $0$& $0$& $0$& $0$\\ \hline \end{tabular} \caption{The diagonal Gopakumar-Vafa invariants $n^g_d$ identified for the chirally projected model $J_{-,k}(\mu)$ (left) and the ABJM matrix model (right).} \label{GV} \end{center} \end{table} In principle, the diagonal Gopakumar-Vafa invariants follow from the trivial relation \begin{align} \sum_{n=-\infty}^\infty e^{J^\text{ABJM}_k(\mu+2\pi in)} =\Biggl[\sum_{n_+=-\infty}^\infty e^{J_{+,k}(\mu+2\pi in_+)}\Biggr] \Biggl[\sum_{n_-=-\infty}^\infty e^{J_{-,k}(\mu+2\pi in_-)}\Biggr], \label{sumGP} \end{align} between the two chirally projected grand potentials. It would be interesting to derive the invariants directly from \eqref{sumGP}. \section{Conclusion and discussion} In this paper we have shown that the claim of \cite{Ho2}, that the density matrix for the OSp$(2N+1|2N)$ matrix model matches that for the U$(N|N)$ model with the odd chiral projection, extends to \eqref{oddprojM} after the inclusion of the fractional brane. We have also proceeded to study the grand potentials constructed from the density matrices projected to the even and odd chiralities, where we find a functional relation which determines the new membrane instanton effects. We have further studied the worldsheet instanton effects and identified the first few diagonal Gopakumar-Vafa invariants. We have restricted ourselves to the study of the non-equal-rank deformation of the OSp$(2N+1|2N)$ density matrix. It would be interesting to study the same non-equal-rank deformation of the OSp$(2N|2N)$ density matrix \cite{MS1} and/or the BPS Wilson loop one-point function in these theories along the lines of \cite{HHMO,MM}. It is interesting to find that, as a general rule, the orientifold projection used to construct the orthosymplectic Chern-Simons theories from the unitary ones seems to be related to the chiral projection of the corresponding density matrix appearing in the Fermi gas formalism of the matrix model. We would like to understand the physical interpretation of this fact more directly. \section*{Acknowledgements} We are grateful to Masazumi Honda for explaining his result to us at the YITP workshop ``Strings and Fields'' prior to the publication \cite{Ho2}. We would also like to thank Yasuyuki Hatsuda, Shinji Hirano, Takuya Matsumoto, Tomoki Nosaka, Kazumi Okuyama, and Masaki Shigemori for valuable discussions. The work of S.M.\ is supported by JSPS Grant-in-Aid for Scientific Research (C) \# 26400245. S.M.\ would like to thank Yukawa Institute for Theoretical Physics at Kyoto University for hospitality.
\section*{ACKNOWLEDGMENT} This work was supported by ARC Laureate Fellowship FL130100102 to IR and the ARC Centre of Excellence for Robotic Vision CE140100016. \bibliographystyle{IEEEtran} \section{RELATED WORK}\label{sec:relatedwork} SLAM is a well-studied problem in mobile robotics and many different solutions have been proposed for solving it. The most recent of these is the graph-based approach that formulates SLAM as a nonlinear least squares problem \cite{grisetti2010tutorial}. SLAM with cameras has also seen advances in theory and strong implementations that have led to many real-time systems, from sparse (\cite{orbslam},\cite{dso}) to semi-dense (\cite{lsdslam}, \cite{svo}) to fully dense (\cite{dtam}, \cite{kinectfusion}, \cite{infinitam}). Recently, there has been a lot of interest in extending the capability of a point-based representation by either applying the same techniques to other geometric primitives or fusing points with lines or planes to get better accuracy. In that regard, \cite{kaess-plane} proposed a representation for modeling infinite planes, and \cite{yang2016pop} uses a Convolutional Neural Network (CNN) to generate plane hypotheses from monocular images, which are refined over time using both image planes and points. \cite{taguchi2013point} proposed a method to fuse points and planes from an RGB-D sensor. The latter works fuse information from planar entities to increase the accuracy of depth inference. A quadric-based representation was first proposed in \cite{cross1998quadric} and later used in a structure-from-motion setup \cite{sfmquadric}. \cite{dualquadNiko2017arXiv} reconstructs quadrics from bounding box detections; however, the quadrics are not explicitly constrained to remain bounded ellipsoids. While addressing this drawback, \cite{nicholson2019quadricslam} still relies on ground-truth data association in a non-real-time, quadric-only framework. \cite{sunderhauf2017meaningful} presented a semantic mapping system using object detection coupled with RGB-D SLAM; however, the object models do not inform localization. \cite{DBLP:conf/cvpr/Salas-MorenoNSKD13} presented an object-based SLAM system that uses pre-scanned object models as landmarks for SLAM but cannot be generalized to unseen objects. \cite{mccormac2017semanticfusion} presented a system that fuses multiple semantic predictions with a dense map reconstruction; SLAM is used as the backbone to establish multiple-view correspondences for the fusion of semantic labels, but the semantic labels do not inform localization. \section{INTRODUCTION}\label{sec:intro} Simultaneous Localization And Mapping (SLAM) is one of the fundamental problems in mobile robotics \cite{cadena2016past} that aims to reconstruct a previously unseen environment while localizing a mobile robot with respect to it. The representation of the map is an important design choice as it directly affects its usability and precision. A sparse and efficient representation for Visual SLAM is to consider the map as a collection of points in 3D, which carries information about the geometry but not about the semantics of the scene. Denser representations \cite{dso,lsdslam,dtam,infinitam,kinectfusion} remain equivalent to a collection of points in this regard. Man-made environments contain many objects that can be used as landmarks in a SLAM map, encapsulating a higher level of abstraction than a set of points.
Previous object-based SLAM efforts have mostly relied on a database of predefined objects -- which must be recognized and a precise 3D model fit to match the observation in the image to establish correspondence \cite{DBLP:conf/cvpr/Salas-MorenoNSKD13}. Other work \cite{Bao_CVPR2011_SSFM} has admitted more general objects (and constraints) but only in a slow, offline structure-from-motion context. In contrast, we are concerned with online (real-time) SLAM, but we seek to represent a wide variety of objects. Like \cite{Bao_CVPR2011_SSFM} we are not concerned with high-fidelity reconstruction of individual objects, but rather with representing the location, orientation and rough shape of objects, while incorporating fine point-cloud reconstructions on demand. A suitable representation is therefore a quadric \cite{sfmquadric}, which compactly captures rough extent and pose while allowing elegant data association. In addition to objects, much of the large-scale structure of a general scene (especially indoors) comprises dominant planar surfaces. Planes provide information complementary to points by representing significant portions of the environment with few parameters, leading to a representation that can be constructed and updated online \cite{kaess-plane}. In addition to constraining the points that lie on them, planes permit the introduction of useful affordance constraints between objects and their supporting surfaces, which leads to a better estimate of the camera pose. This work aims to construct a sparse semantic map representation consisting not only of points, but also of planes and objects as landmarks, all of which are used to localize the camera. We explicitly target real-time performance in a monocular setting, which would be impossible with uncritical choices of representation and constraints. To that end, we use the representation for dual quadrics proposed in our previous work \cite{mehdi-arxiv} to represent and update general objects. That work, however, had several limitations, \textbf{(1)} from the front-end perspective: \textbf{a)} reliance on the \textit{depth} channel for plane segmentation and parameter regression, \textbf{b)} pre-computation of Faster R-CNN \cite{fasterrcnn} based object detections to permit real-time performance, and \textbf{c)} ad-hoc object and plane matching/tracking; and \textbf{(2)} from the back-end perspective: \textbf{a)} conic observations are assumed to be axis-aligned, limiting the robustness of the quadric reconstruction, and \textbf{b)} all detected landmarks are maintained in a single global reference frame. This work, in addition to addressing the mentioned limitations, proposes new factors amenable to real-time inclusion of plane and object detections, while incorporating fine point-cloud reconstructions from a deep-learned CNN, wherever available, into the map and refining the quadric reconstruction according to this object model.
The main contributions of this paper are as follows: (1) integration of two different CNN-based modules to segment planes and regress their parameters, (2) integration of a real-time deep-learned object detector in a monocular SLAM framework to detect general objects as landmarks, along with a data-association strategy to track them, (3) a new observation factor for objects that avoids axis-aligned conics, (4) representation of landmarks relative to the camera where they are first observed instead of a global reference frame, and (5) wherever available, integration of the reconstructed point-cloud model of a detected object, obtained from a single image by a CNN, into the map, imposing an additional prior on the extent of the reconstructed quadric based on the reconstructed point cloud. \section{EXPERIMENTS}\label{sec:experiments} \input{figs/figs_experiments} The proposed system is built in C++ on top of the state-of-the-art ORB-SLAM2~\cite{orbslam} and utilizes its front-end for tracking ORB features, while the back-end of the proposed system is implemented in C++ using g2o~\cite{kummerle2011g}. Evaluation is performed on a commodity machine with an Intel Core i7-4790 processor and a single GTX980 GPU card, running at nearly 20 fps, and is carried out on the publicly available TUM \cite{tum-dataset}, NYUv2 \cite{nyuv2-dataset}, and KITTI \cite{kitti} datasets, which range from rich planar low-texture scenes to multi-object office and outdoor scenes. Qualitative and quantitative evaluations are carried out using different mixtures of landmarks, and comparisons are presented against point-based monocular ORB-SLAM2~\cite{orbslam}. \subsection{TUM and NYUv2}\label{subsec:tumnyu} Qualitative evaluation on TUM and NYUv2 for sequences \texttt{fr2/desk}, \texttt{nyu/office\_1b}, and \texttt{nyu/nyu\_office\_1} is illustrated in Fig.~\ref{fig:experiments} for different scenes and landmarks. Columns~(a)-(d) show the image frame with tracked features and possible detected objects, the detected and segmented planes, and the reconstructed map from two different viewpoints, respectively. For some low- or no-texture sequences in the TUM and NYUv2 datasets, point-based SLAM systems fail to track the camera; our system, however, exploits the rich planar structure present, along with the Manhattan constraints, to yield more accurate trajectories and semantically meaningful maps. The reconstructed maps are semantically rich and consistent with the ground-truth 3D scene; for instance in \texttt{fr2/desk}, with all landmarks and constraints present, the map consists of a planar monitor orthogonal to the desk, and the quadrics corresponding to objects are tangent to the supporting desk, congruous with the real scene. Red ellipses in Fig.~\ref{fig:experiments} column~(a) are the projections of the corresponding quadric objects in the map. Further evaluations can be found in the supplemental video. \input{figs/figs_plane_detector_experiment} One of the main reasons for the improved accuracy of the camera trajectory and the consistency of the global map is that we address the subtle but extremely important problem of scale drift. In a monocular setting, the estimated scale of the map can change gradually over time. In our system, the consistent metric scale of the planes (from the CNN) and the presence of point-plane constraints allow observation of the absolute scale, which can be further improved by adding priors on the extent of the objects represented as quadrics.
\input{figs/figs_kitti_experiment.tex} \begin{table}[b] \centering \caption{RMSE (\texttt{cm}) of ATE for our monocular SLAM against monocular ORB-SLAM2. The percentage of improvement over ORB-SLAM2 is given in [~]. See Sec.~\ref{subsec:tumnyu}.} \resizebox{\columnwidth}{!}{ \begin{tabular}{l |c|c|c|c|c|c} \hline Dataset & \# KF & ORB-SLAM2& PP & PP+M & PO & PPO+MS \\\hline \texttt{fr1/floor} & 125 & 1.7971 & 1.6923 & \textbf{1.6704} \scriptsize{[7.05\%]} & --- & --- \\\hline \texttt{fr1/xyz} & 30 & 1.0929 & 1.0291 & 0.9802 & 1.0081 & \textbf{0.9680} \scriptsize{[11.43\%]} \\\hline \texttt{fr1/desk} & 71 & 1.3940 & 1.2961 & 1.2181 & 1.2612 & \textbf{1.2126} \scriptsize{[13.01\%]} \\\hline \texttt{fr2/xyz} & 28 & 0.2414 & 0.2213 & 0.2189 & 0.2243 & \textbf{0.2179} \scriptsize{[9.72\%]} \\\hline \texttt{fr2/rpy} & 12 & 0.3728 & 0.3356 & 0.3354 & 0.3473 & \textbf{0.3288} \scriptsize{[11.79\%]} \\\hline \texttt{fr2/desk} & 111 & 0.8019 & 0.7317 & 0.7021 & 0.7098 & \textbf{0.6677} \scriptsize{[16.74\%]} \\\hline \texttt{fr3/long\_office} & 193 & 1.0697 & 0.9605 & 0.9276 & 0.9234 & \textbf{0.8721} \scriptsize{[18.47\%]} \\\hline \end{tabular} } \label{tab:errors} \end{table} One of the important factors affecting system performance is the quality of the estimated plane parameters. Reconstructed maps are shown in Fig.~\ref{fig:plane_detector_experiment} for two different monocular plane detectors incorporated in our system: \textbf{a)} PlaneNet~\cite{plane-net}, and \textbf{b)} our proposed plane detector (see Section \ref{sec:mono_plane}). A baseline comparison is made against a depth-based plane detector that uses connected component segmentation of the point cloud (\cite{trevor2013efficient, mehdi-arxiv}); the detected planes are then used in the monocular system for refinement. As seen in Fig.~\ref{fig:plane_detector_experiment}(a), PlaneNet only captures the planar table region successfully and fails for the other regions. The proposed detector captures the monitors on the table, as shown in column (b); however, it misses the monitor behind them and reconstructs the two same-height tables with a slight vertical offset. As shown in Fig.~\ref{fig:plane_detector_experiment}(c), the baseline plane detector captures the smaller planar regions more accurately and reconstructs the same-height tables as one plane, as expected given the additional \textit{depth} information. Table~\ref{tab:errors_plane_detectors} reports the comparison of these three approaches to plane detection on different sequences of the TUM dataset. It can be seen that the depth-based detector is the most accurate; however, the proposed method is better than PlaneNet in most cases. We perform an ablation study to demonstrate the efficacy of introducing various combinations of the proposed landmarks and constraints. The RMSE of the Absolute Trajectory Error (ATE) is reported in Table \ref{tab:errors}. Estimated trajectories and ground truth are aligned using a similarity transformation \cite{horn}. In the first case, points are augmented with planes (\texttt{PP}) and the constraint between points and their corresponding planes is included. This already improves the accuracy over the baseline, and imposing the additional Manhattan constraint in the second case (\texttt{PP+M}) improves the ATE even further. In these two cases the error is significantly reduced, first by exploiting the structure of the scene and second by reducing the scale drift, as discussed earlier, using metric information about planes.
For the sequences containing common COCO~\cite{coco} objects, the presence of objects represented by quadric landmarks along with points is explored in the third case (\texttt{PO}). This case demonstrates the effectiveness of integrating objects in the SLAM map. Finally, the performance of our full monocular system (\texttt{PPO+MS}) is detailed in the rightmost column of Table \ref{tab:errors}, with all landmarks present (points, planes, and objects) as well as the Manhattan and supporting/tangency constraints. This case shows an improvement over the baseline in all of the evaluated sequences; in particular, for \texttt{fr3/long\_office} we see a significant decline in ATE (18.47\%), owing to the presence of a large loop in this sequence, where our proposed multi-edges for observations of planes and quadric objects in key-frames have shown their effectiveness in the global loop closure. \subsection{KITTI benchmark}\label{subsec:kitti} To demonstrate the efficacy of our proposed object detection factor and object tracking, as well as of the shape prior factor induced by the incorporated point cloud (reconstructed by a CNN from a single view), we evaluate our system on the KITTI benchmark. For reliable frame-to-frame tracking, we use the stereo variant of ORB-SLAM2; however, object detection and plane estimation are still carried out in a monocular fashion. The reconstructed map with quadric objects and incorporated point clouds (see Section \ref{subsec:obj_pointcloud}) is illustrated for \texttt{\textbf{KITTI-7}} in Fig.~\ref{fig:kitti_experiment}. The instances of different cars are rendered in different colors. \begin{table}[t] \centering \caption{RMSE of ATE (\texttt{cm}) using different plane detection methods in our monocular SLAM. See Sec.~\ref{subsec:tumnyu}.} \begin{tabular}{l c|c|c} \hline Dataset & PlaneNet~\cite{plane-net} & Proposed Detector & Baseline \\\hline \texttt{fr1/xyz} & 0.9701 & \textbf{0.9680} & 0.8601 \\\hline \texttt{fr1/desk} & 1.2191 & \textbf{1.2126} & 1.0397 \\\hline \texttt{fr2/xyz} & 0.2186 & \textbf{0.2179} & 0.2061 \\\hline \texttt{fr1/floor} & \textbf{1.6562} & 1.6704 & 1.4074 \\\hline \end{tabular} \label{tab:errors_plane_detectors} \end{table} \section{CONCLUSIONS}\label{sec:conclusions} This work introduced a monocular SLAM system that can incorporate learned priors, in terms of plane and object models, in an online, real-time-capable system. We show that introducing these quantities in a SLAM framework allows for more accurate camera tracking and a richer map representation without a large computational cost. This work also makes a case for using deep learning to improve the performance of traditional SLAM techniques by introducing higher-level learned structural entities and priors in terms of planes and objects. \section{Overview of the Landmark Representations and Factors}\label{sec:overview} For the sake of completeness, this section presents an overview of the representations and factors proposed originally in our previous work \cite{mehdi-arxiv}. In the next sections, we propose new multi-edge observation and unary prior factors. The SLAM problem can be represented as a bipartite factor~graph $\mathcal{G}(\mathcal{V},\mathcal{F},\mathcal{E})$, where $\mathcal{V}$ represents the set of \textit{vertices} (variables) that need to be estimated and $\mathcal{F}$ represents the set of \textit{factors} (constraints) that are connected to their associated variables by the set of edges $\mathcal{E}$.
We propose our SLAM system in the context of factor~graphs. The solution of this problem is the optimal configuration of vertices (the MAP estimate), $\mathcal{V}^{*}$, that minimizes the overall error over the factors in the graph (the negative log-likelihood of the joint probability distribution). The pipeline of our SLAM system is illustrated in Fig.~\ref{fig:system}. \subsection{Quadric Representation}\label{subsec:overview_quadric} A quadric surface in 3D space can be represented by a homogeneous quadratic form defined on the 3D projective space $ \mathbb{P}^{3} $ that satisfies $ \mathbf{x^{\top}Qx=0} $, where $ \mathbf{x} \in \mathbb{R}^{4} $ is the homogeneous 3D point and $ \mathbf{Q} \in \mathbb{R}^{4\times4} $ is the symmetric matrix representing the quadric surface. However, the relationship between a point-quadric $ \mathbf{Q} $ and its projection into an image plane (a conic) is not straightforward \cite{Hartley:2003:MVG:861369}. A widely accepted alternative is to make use of the dual space (\cite{cross1998quadric,sfmquadric,dualquadNiko2017arXiv}), which represents a dual quadric $ \Q{} $ by the envelope of planes $ \Pl{} $ tangent to it, viz.\ $ \Pl{}^{\top}\Q{}\Pl{}=0 $; this simplifies the relationship between the quadric and its projection to a conic. A dual quadric $ \Q{} $ can be decomposed as $ \Q{} = \T{}{Q} \Q{c} \T{\top}{Q} $, where $ \T{}{Q} \in \mathbf{SE}(3)$ transforms an axis-aligned (canonical) quadric at the origin, $ \Q{c} $, to a desired $ \mathbf{SE}(3) $ pose. Quadric landmarks need to remain bounded, i.e.\ ellipsoids, which requires $\Q{c}$ to have 3 positive and 1 negative eigenvalues. In \cite{mehdi-arxiv} we proposed a decomposition and incremental update rule for dual quadrics that guarantees these conditions and provides a good approximation for incremental updates. More specifically, the dual ellipsoid $ \Q{} $ is represented as a tuple $ \mathbf{(T,L)} $, where $ \mathbf{T \in SE}(3) $ and $ \mathbf{L} $ lives in $ \textbf{D}(3) $, the space of real diagonal $ 3 \times 3 $ matrices, i.e.\ an axis-aligned ellipsoid accompanied by a rigid transformation. The proposed approximate update rule for $ \Q{} = \mathbf{(T,L)} $ is: \begin{equation} \resizebox{0.9\columnwidth}{!}{$ \mathbf{\Q{} \oplus \varDelta\Q{} = (T,L) \oplus (\varDelta T, \varDelta L) = (T \cdot \varDelta T, L + \varDelta L)} $} \end{equation} where $ \mathbf{\oplus:\mathbb{E}\times\ \mathbb{E} \longmapsto \mathbb{E}} $ is the mapping for updating ellipsoids, and $ \mathbf{\varDelta L} $ and $ \mathbf{\varDelta T} $ are the updates for $ \mathbf{L} $ and $ \mathbf{T} $, carried out in the corresponding Lie algebras $ \mathfrak{d}(3) $ (isomorphic to $ \mathbb{R}^3 $) and $ \mathfrak{se}(3) $, respectively.
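For concreteness, the following Python sketch (ours) shows one possible realization of this update; in particular, parameterizing the canonical dual ellipsoid as $\mathrm{diag}(e^{2l_1},e^{2l_2},e^{2l_3},-1)$ is our assumption for keeping the 3-positive/1-negative eigenvalue signature for any real diagonal $\mathbf{L}$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def hat(xi):
    # se(3) hat-map; xi = (wx, wy, wz, vx, vy, vz)
    wx, wy, wz, vx, vy, vz = xi
    return np.array([[0.0, -wz,  wy, vx],
                     [ wz, 0.0, -wx, vy],
                     [-wy,  wx, 0.0, vz],
                     [0.0, 0.0, 0.0, 0.0]])

def update(T, L, d_xi, d_l):
    # (T, L) oplus (DeltaT, DeltaL): multiplicative on SE(3) via expm,
    # additive on the diagonal part (its Lie algebra is just R^3)
    return T @ expm(hat(d_xi)), L + np.diag(d_l)

def dual_ellipsoid(T, L):
    # Assumed canonical dual ellipsoid diag(e^{2 l1}, e^{2 l2}, e^{2 l3}, -1)
    Qc = np.diag(np.append(np.exp(2.0 * np.diag(L)), -1.0))
    return T @ Qc @ T.T

T, L = np.eye(4), np.zeros((3, 3))
T, L = update(T, L, d_xi=[0, 0, 0.1, 0.5, 0, 0], d_l=[0.1, 0.0, -0.2])
print(dual_ellipsoid(T, L))
\end{verbatim}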
\subsection{Plane Representation} Following \cite{kaess-plane}, a plane $ \Pl{} $ as a structural entity in the map is represented minimally by its normalized homogeneous coordinates $ \Pl{} = (a,b,c,d)^\top $, where $ \mathbf{n} = (a,b,c)^\top $ is the normal vector and $d$ is the signed distance to the origin. \subsection{Constraints between Landmarks}\label{subsec:overview_constraints} In addition to the classic point-camera constraint formed by the observation of a 3D point as a 2D feature point in the camera, we model constraints between higher-level landmarks and their observations in the camera. These constraints also carry semantic information about the structure of the scene, such as the Manhattan assumption and affordances. We present a brief overview of these constraints here. In the next sections we present the newly introduced factors for plane and object observations and for object shape priors, induced by the single-view point-cloud reconstructions. \subsubsection{Point-Plane Constraint} For a point $ \mathbf{x} $ to lie on its associated plane $ \Pl{} $ with the unit normal vector $\mathbf{n}$, we introduce the following factor between them: \begin{equation} f_{d}(\mathbf{x}, \Pl{})={{\parallel \mathbf{n}^{\top}(\mathbf{x}-\x{o}) \parallel}_{\sigma_d}^{2}} \end{equation} which measures the orthogonal distance between the point and the plane, for an arbitrary point $ \x{o} $ on the plane. The notation $\|\mathbf{e}\|_{\boldsymbol{\Sigma}} $ denotes the Mahalanobis norm of $ \mathbf{e} $ and is defined as $ \mathbf{e}^\top \boldsymbol{\Sigma}^{-1}\mathbf{e}$, where $\boldsymbol{\Sigma}$ is the associated covariance matrix. \subsubsection{Plane-Plane Constraint (Manhattan assumption)} The Manhattan world assumption, in which planes are mostly mutually parallel or perpendicular, is modeled as: \begin{align} f_{\parallel}(\Pl{1}, \Pl{2}) & = {\parallel | \mathbf{n}_{1}^\top \mathbf{n}_{2} | - 1 \parallel}_{\sigma_{par}}^{2} \qquad \text{\footnotesize{\it for parallel planes}} \\ f_{\perp}(\Pl{1}, \Pl{2}) & = {\parallel \mathbf{n}_{1}^\top \mathbf{n}_{2} \parallel}_{\sigma_{per}}^{2} \qquad \text{\footnotesize{\it for perpendicular planes}} \end{align} where $ \Pl{1} $ and $ \Pl{2} $ have unit normal vectors $ \mathbf{n}_{1} $ and $ \mathbf{n}_{2} $. \subsubsection{Supporting/Tangency Constraint} In typical scenes, the planar structure affords stable support for common objects; for instance, floors and tables support indoor objects, and roads support outdoor objects like cars. To impose a supporting affordance relationship between planar entities of the scene and common objects, we introduce a factor between a dual quadric object $ \Q{} $ and a plane $ \Pl{} $ that models the tangency relationship as: \begin{equation} f_t(\Pl{}, \Q{})={\parallel \Pl{}^{\top}\Qh{}\Pl{} \parallel}_{\sigma_t}^{2} \end{equation} where $\Qh{}$ is the dual quadric normalized by its matrix Frobenius norm. Note that this tangency constraint is a direct consequence of choosing the dual space for the quadric representation; it is not straightforward to express in point space.
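The scalar residuals entering these factors (before the Mahalanobis weighting) are straightforward; the following Python sketch (ours) collects them:
\begin{verbatim}
import numpy as np

def point_plane(x, pi):
    # f_d: orthogonal distance of 3D point x to plane pi = (n, d), |n| = 1;
    # equals n.(x - x_o) for any point x_o on the plane
    return pi[:3] @ x + pi[3]

def manhattan_parallel(pi1, pi2):
    # f_par: |n1.n2| - 1, zero iff the normals are (anti)parallel
    return abs(pi1[:3] @ pi2[:3]) - 1.0

def manhattan_perpendicular(pi1, pi2):
    # f_perp: n1.n2, zero iff the normals are orthogonal
    return pi1[:3] @ pi2[:3]

def tangency(pi, Q_star):
    # f_t: pi^T Q* pi with Q* normalized by its Frobenius norm
    return pi @ (Q_star / np.linalg.norm(Q_star)) @ pi

table = np.array([0.0, 0.0, 1.0, -0.7])               # plane z = 0.7
print(point_plane(np.array([0.1, 0.2, 0.7]), table))  # 0: point on plane
\end{verbatim}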
\input{figs/figs_system_pipeline.tex} \section{MONOCULAR PLANE DETECTION}\label{sec:mono_plane} Man-made environments contain planar structures, such as tables, floors, walls, roads, etc. If modeled correctly, they can provide information about large feature-deprived regions, yielding more map coverage. In addition, these landmarks act as a regularizer for other landmarks when constraints are introduced between them. The dominant approach for plane detection is to extract planes from RGB-D input \cite{mehdi-arxiv}, which provides reliable detection and estimation of the plane parameters. In a monocular setting, planes need to be detected using a single RGB image and their parameters estimated, which is an ill-posed problem. However, recent breakthroughs enable us to detect and estimate planes. Recently, PlaneNet~\cite{plane-net} presented a deeply learned network to predict plane parameters and corresponding segmentation masks. While the planar segmentation masks are highly reliable, the regressed parameters are not accurate enough for small planar regions in indoor scenes (see Section \ref{sec:experiments}). To address this shortcoming, we use a network that predicts depth, surface normals, and semantic segmentation. Depth and surface normals contain complementary information about the orientation and distance of the planes, while semantic segmentation allows reasoning about the identity of the region, such as wall, floor, etc. \subsection{Planes from predicted depth, surface normals, and semantic segmentation}\label{subsec:plane_detection} We utilize the state-of-the-art joint network~\cite{vlad-arxiv} to estimate depth, normals, and segmentation for each RGB frame in real time. We exploit the redundancy in the three separate predictions to boost the robustness of the plane detection by generating plane hypotheses in two ways: \textbf{1)} for each planar region in the semantic segmentation (regions such as floor, wall, etc.) we fit 3D planes using the surface normals and depth for the orientation and distance of the plane, respectively, and \textbf{2)} the depth and surface normal predictions are utilized in the connected component segmentation of the reconstructed point cloud in a parallel thread (\cite{trevor2013efficient, mehdi-arxiv}). A plane detection $ \Pl{} = (a,b,c,d)^{\top} $ is considered valid if the cosine distance between the normal vectors $ \mathbf{n} = (a,b,c)^{\top} $ of the two estimates, as well as the difference between their $ d $ values, are within certain thresholds, as sketched below. The corresponding plane segmentation is taken to be the intersection of the plane masks of the two hypotheses. Note that the association between 3D point landmarks and planes, used by the factor described in \ref{subsec:overview_constraints}, is extracted from the resulting mask: a 3D point is considered an inlier if its corresponding 2D keypoint lies inside the mask and the point satisfies a geometric distance threshold to the plane.
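A minimal sketch of this validation step (ours; the thresholds are placeholders, not the values used in our system):
\begin{verbatim}
import numpy as np

def planes_agree(pi1, pi2, cos_th=0.95, d_th=0.1):
    # Accept a plane hypothesis when the two independent estimates
    # (semantic mask + normals vs. connected components) agree in
    # orientation (cosine of normals) and offset (d values).
    s1, s2 = np.linalg.norm(pi1[:3]), np.linalg.norm(pi2[:3])
    n1, d1 = pi1[:3] / s1, pi1[3] / s1
    n2, d2 = pi2[:3] / s2, pi2[3] / s2
    if n1 @ n2 < 0:                 # put both normals on the same side
        n2, d2 = -n2, -d2
    return (n1 @ n2 > cos_th) and (abs(d1 - d2) < d_th)

print(planes_agree(np.array([0, 0, 1.00, -0.70]),
                   np.array([0, 0, -1.02, 0.73])))   # True
\end{verbatim}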
\subsection{Plane Data Association}\label{subsec:plane_tracking} Once initialized and added to the map, the landmark planes need to be associated with the detected planes in the incoming frames. Matching planes is more robust than feature point matching due to the inherent geometrical nature of planes \cite{mehdi-arxiv}. To make data association more robust in cluttered scenes, when available, we additionally use the detected keypoints that lie inside the segmented plane in the image to match the observations. A plane in the map and a plane in the current frame are deemed a match if the number of common keypoints is higher than a threshold $ th_H $ and their unit normal vectors and distances are within certain thresholds. If the number of common keypoints is less than another threshold $ th_L $ (or zero, for feature-deprived regions), meaning that there is no corresponding map plane for the detected plane, the observed plane is added to the map as a new landmark. The map can thus contain two or more planar regions that belong to the same infinite plane, such as two tables of the same height in an office; however, additional constraints on parallel planes are also introduced according to the evidence (Section \ref{subsec:overview_constraints}). \subsection{Multi-Edge Factor for Plane Observation}\label{subsec:plane_factor} After successful data association, we can introduce the observation factor between the plane and the camera (keyframe). We use a relative key-frame formulation (instead of the global frame) for each plane landmark $ \Pl{r} $, which is expressed relative to the first key-frame ($ \T{w}{r} $) that observes it. For an observation $ \Pl{obs} $ from a camera pose $ \T{w}{c} $, the multi-edge factor (connected to more than two nodes) for measuring the plane observation is given by: \begin{equation} f_{\pi}(\Pl{r}, \T{w}{r}, \T{w}{c}) = {{\parallel d({\T{r}{c}}^{-\top}{\Pl{r}} , \Pl{obs}) \parallel}_{\boldsymbol{\Sigma}_\pi}^{2}} \end{equation} where $ {\T{r}{c}}^{-\top} \Pl{r} $ is the plane transformed from its reference frame to the camera coordinate frame, $ d $ is the geodesic distance on $\mathbf{SO}(3)$ \cite{kaess-plane}, and $\mathbf{T}_{c}^{w}$ is the pose of the camera, which takes a point in the current camera frame ($\x{c}$) to a point in the world frame: $\x{w} = {\mathbf{T}_{c}^{w}}{\x{c}}$. \section{Incorporating Object with Point-Cloud Reconstruction}\label{sec:objects} As noted earlier, incorporating general objects in the map as quadrics leads to a compact representation of the rough 3D extent and pose (location and orientation) of the object while facilitating elegant data association. A state-of-the-art object detector such as YOLOv3~\cite{yolov3} can provide object labels and bounding boxes in real time for general objects. The goal of introducing objects in SLAM is both to increase the accuracy of the localization and to yield a richer semantic map of the scene. While our SLAM proposes a sparse and coarse realization of the objects, wherever a fine model reconstruction of an object is available it can be seamlessly incorporated on top of the corresponding quadric and can even refine the quadric reconstruction, as discussed in \ref{subsec:obj_pointcloud}. \subsection{Object Detection and Matching}\label{subsec:obj_match} \input{figs/figs_point_cloud.tex} For real-time detection of objects, we use YOLOv3~\cite{yolov3} trained on the COCO dataset~\cite{coco}, which provides detections as axis-aligned bounding boxes for common objects. For reliability, we consider detections with 85\% or more confidence. \subsubsection*{Object Matching} Relying solely on the geometry of the reconstructed quadrics (by comparing re-projection errors) to track the object detections against the map is not robust enough, particularly for a high number of overlapping or partially occluded detections. Therefore, to find the optimal matches for all the detected objects in the current frame, we solve the classic optimal assignment problem with the Hungarian/Munkres~\cite{hungarian} algorithm. The challenge in using this classic algorithm is how to define an appropriate cost matrix. We establish the cost matrix based on the idea of maximizing the number of common, robustly matched keypoints (2D ORB features) inside the detected bounding boxes. Since we want to solve a minimization problem, the cost matrix is defined as: \begin{align} \mathbf{C} & = \left[c_{ij}\right]_{N \times M} \\ c_{ij} & = K - p(b_i,q_j) \end{align} where $ p(b_i,q_j) $ gives the number of projected keypoints associated with candidate quadric $ q_j $ inside the bounding box $ b_i $, and $ K = \max_{\substack{i,j}} p(b_i,q_j) $ is the maximum over all of these projected keypoint counts. $ N $ and $ M $ are the total numbers of bounding box detections in the current frame and of candidate quadrics of the map for matching, respectively. Candidate quadrics for matching are taken to be the quadrics of the map that are currently in front of the camera.
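A compact Python sketch (ours) of this matching step, using SciPy's Hungarian solver:
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_objects(p):
    # p[i, j] = p(b_i, q_j): projected keypoints of candidate quadric q_j
    # falling inside detected box b_i; cost c_ij = K - p(b_i, q_j)
    C = p.max() - p
    rows, cols = linear_sum_assignment(C)   # Hungarian/Munkres solver
    return list(zip(rows, cols))

p = np.array([[12, 1],      # toy case: 3 detections vs 2 candidate quadrics
              [ 0, 9],
              [ 2, 3]])
print(match_objects(p))     # [(0, 0), (1, 1)]
\end{verbatim}
The gating by $th_{high}$ and $th_{low}$ described next is then applied to the solved assignments.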
To further reduce the number of mismatches, after solving the assignment problem with the proposed cost matrix, the solved assignment of $ b_i^* $ to $ q_j^* $ is considered successful if the number of common keypoints satisfies a high threshold, $ p(b_i^*,q_j^*) \geq th_{high} $, while a new quadric is initialized in the map if $ p(b_i^*,q_j^*) \leq th_{low} $. Assignments with $ p(b_i^*,q_j^*) $ values between these thresholds are ignored. \subsection{Point-Cloud Reconstruction and Shape Priors}\label{subsec:obj_pointcloud} In this section, we present a method for estimating a fine geometric model of suitable objects, established on top of the quadrics, to enrich their inherently coarse representation. It is difficult to estimate the full 3D shape of objects from sparse views using purely classic geometric methods. To bypass this limitation, we train a CNN adapted from the Point Set Generation Net \cite{fan2017point} to predict (or hallucinate) the accurate 3D shape of objects as point clouds from single RGB images. The CNN is trained on the CAD model repository ShapeNet \cite{chang2015shapenet}. We render 2D images of CAD models from random viewpoints and, to simulate the background in real images, we overlay random scene backgrounds from the SUN dataset \cite{sundataset} on the rendered images. We demonstrate the efficacy of this approach for outdoor scenes, particularly for general car objects in the KITTI~\cite{kitti} benchmark, in Section \ref{subsec:kitti}. Running alongside the SLAM system, the CNN takes an amodal detected bounding box of an object as input and generates a point cloud representing the 3D shape of the object. However, to ease the training of the CNN, the reconstructed point cloud is in a normalized scale and canonical pose. To incorporate the point cloud into the SLAM system, we need to estimate seven parameters to scale, rotate and translate this point cloud. We first compute the minimum enclosing ellipsoid of the normalized point cloud, and then estimate the parameters by aligning it to the object ellipsoid from SLAM. \subsubsection*{Shape Prior on Quadrics} After registering the reconstructed point cloud to the quadric from SLAM, we impose a further constraint only on the shape (extent) of the quadric (Fig.~\ref{fig:pointcloud}), which is feasible due to the decomposed quadric representation. This prior affects the ratio of the major axes of the quadric $ \Q{} $ by computing the intersection over union of the registered enclosing normalized cuboid of the point cloud $ \mathcal{M} $ and the enclosing normalized cuboid of the quadric: \begin{equation} \resizebox{0.9\columnwidth}{!}{$ f_{prior}(\Q{}) = {\| 1 - IoU_{cu}(cuboid(\Q{}), cuboid(\mathcal{M})) \|_{\sigma_p}^2} $} \end{equation} where $ cuboid $ is a function that gives the normalized enclosing cuboid of an ellipsoid. As an expedient approach, we currently pick a single high-quality detected bounding box as the input to the CNN; however, it is straightforward to extend this to multiple bounding boxes by using a Recurrent Neural Network to fuse information from different bounding boxes, as done in 3D-R2N2 \cite{choy20163d}.
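Since, after registration, the two normalized cuboids are co-centred and axis-aligned, their IoU reduces to a product of per-axis overlaps. A small Python sketch (ours) of the prior residual:
\begin{verbatim}
import numpy as np

def cuboid_iou(a, b):
    # IoU of two co-centred, axis-aligned cuboids with half-extents a, b
    # (valid once the point cloud has been registered to the quadric)
    inter = np.prod(2 * np.minimum(a, b))
    union = np.prod(2 * a) + np.prod(2 * b) - inter
    return inter / union

def shape_prior(quadric_semi_axes, model_semi_axes):
    # f_prior = 1 - IoU of the enclosing normalized cuboids
    return 1.0 - cuboid_iou(np.asarray(quadric_semi_axes),
                            np.asarray(model_semi_axes))

print(shape_prior([1.0, 0.5, 0.25], [0.9, 0.6, 0.25]))
\end{verbatim}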
\subsection{Multi-Edge Factor for Non-Aligned Object Observation}\label{subsec:obj_factor} We propose an observation factor for the quadric without enforcing it to be observed as an axis-aligned inscribed conic (ellipse). Unlike \cite{dualquadNiko2017arXiv}, which uses the Mahalanobis distance between the detected and projected bounding boxes (a choice that is not robust and over-penalizes large errors and outliers), we use an error function based on the Intersection-over-Union (IoU) of these bounding boxes, weighted according to the \textit{confidence score} $s$ of the object detector. This factor provides an inherently capped error; however, it implicitly emphasizes the significance of a good initialization of the quadrics for a successful optimization. Similar to plane landmarks, we use the relative reference key-frame $ \T{w}{r} $ to represent the coordinates of the objects, and we introduce the multi-edge factor for the object observation error, between the dual quadric $ \Q{r} $ and the camera pose $ \mathbf{\T{w}{c}} $, as: \begin{equation} f_Q(\Q{r}, \T{w}{r}, \T{w}{c})= \parallel 1 - IoU_{bb}(B^{*} , B_{obs}) \parallel_{s^{-1}}^2 \end{equation} where $ B_{obs} $ is the detected bounding box and $ B^{*} $ is the enclosing bounding box of the projected conic $ \C{} \sim \mathbf{P} \Q{r} \mathbf{P}^\top $, with the projection matrix $ \mathbf{ P = K} \begin{bmatrix} \mathbf{I}_{3\times3} & \mathbf{0}_{3\times1} \end{bmatrix} \T{r}{c} $ of the camera with calibration matrix $ \mathbf{K} $ \cite{Hartley:2003:MVG:861369}, and $ \T{r}{c} = {\T{w}{c}}{(\T{w}{r})}^{-1} $ is the relative pose of the camera with respect to the reference key-frame of the quadric.
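A possible concrete realization of this factor is sketched below (ours). It uses the fact that a line $l$ tangent to the projected conic satisfies $l^\top (\mathbf{P}\Q{r}\mathbf{P}^\top) l = 0$, so the vertical and horizontal tangents give the enclosing box $B^*$ in closed form; the dual conic is assumed scaled such that the radicands are non-negative (true for a real ellipse):
\begin{verbatim}
import numpy as np

def conic_bbox(Cs):
    # Enclosing axis-aligned box of the ellipse with dual conic Cs.
    # Vertical tangents l = (1, 0, -x): x = (Cs02 +/- sqrt(...)) / Cs22,
    # and analogously for horizontal tangents.
    a, c, f = Cs[0, 0], Cs[1, 1], Cs[2, 2]
    d, e = Cs[0, 2], Cs[1, 2]
    rx, ry = np.sqrt(d * d - a * f), np.sqrt(e * e - c * f)
    x1, x2 = (d - rx) / f, (d + rx) / f
    y1, y2 = (e - ry) / f, (e + ry) / f
    return (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))

def bbox_iou(A, B):
    iw = max(0.0, min(A[2], B[2]) - max(A[0], B[0]))
    ih = max(0.0, min(A[3], B[3]) - max(A[1], B[1]))
    inter = iw * ih
    return inter / ((A[2] - A[0]) * (A[3] - A[1])
                    + (B[2] - B[0]) * (B[3] - B[1]) - inter)

def object_residual(P, Q_star, box_obs, score):
    # ||1 - IoU||^2 with information s corresponds to the scalar
    # residual (1 - IoU) * sqrt(score)
    Cs = P @ Q_star @ P.T
    return (1.0 - bbox_iou(conic_bbox(Cs), box_obs)) * np.sqrt(score)
\end{verbatim}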
\section{Introduction}~\label{sec:1} The latest LHCb measurement observed a more precise line shape of the $J/\psi p$ invariant mass distribution in the process $\Lambda_b^0\to J/\psi p K^-$~\cite{Aaij:2019vzc}. The experimental data suggest that the previously observed structure $P_c(4450)$ is resolved into two narrow states, $P_c(4440)$ and $P_c(4457)$, while the broad state $P_c(4380)$ has not been confirmed yet. In addition, a new structure, $P_c(4312)$, is discovered with $7.3\sigma$ significance. Their masses and widths are given in the following table. \begin{table}[htpb] \centering \scalebox{1}{ \begin{tabular}{*{3}{c}} \Xhline{0.8pt} States & Mass ($\mev$)& Width ($\mev$)\\ \Xhline{0.4pt} $P_c(4312)^+$ & $4311.9\pm 0.7^{+6.8}_{-0.6}$ & $9.8\pm2.7^{+3.7}_{-4.5}$ \\ \Xhline{0.4pt} $P_c(4440)^+$ & $4440.3\pm 1.3^{+4.1}_{-4.7}$ & $20.6\pm4.9^{+8.7}_{-10.1}$ \\ \Xhline{0.4pt} $P_c(4457)^+$ & $4457.3\pm 0.6^{+4.1}_{-1.7}$ & $6.4\pm2.0^{+5.7}_{-1.9}$ \\ \Xhline{0.8pt} \end{tabular} } \end{table} The reported masses of $P_c(4312)$ and $P_c(4457)$ lie approximately $10\ \mev$ and $5\ \mev$ below the $\bar{D}\Sigma_c$ and $\bar{D}^*\Sigma_c$ thresholds, respectively. This closeness to the thresholds, together with their narrow widths, makes a hadronic-molecule interpretation, in terms of the corresponding meson-baryon systems, natural for these pentaquark-like states. Moreover, the experimental properties of the earlier $P_c(4380)$ and $P_c(4450)$ can be described well in similar scenarios within a reasonable parameter range~\cite{Lin:2017mtz}. Actually, before the first observation of a pentaquark structure in the hidden-charm sector by LHCb in 2015~\cite{Aaij:2015tga}, the existence of such near-threshold bound states had been predicted systematically in several early theoretical works~\cite{Wu:2010jy,Wu:2010vk,Wang:2011rga,Yang:2011wz,Wu:2012md,Yuan:2012wz,Xiao:2013yca}. In particular, the predicted masses for these three observed $P_c$ states in Ref.~\cite{Wu:2012md} are consistent with the reported experimental measurements within uncertainties. From the theoretical analysis in that work, we note that $\bar{D}\Sigma_c$ and $\bar{D}^*\Sigma_c$ account for a large proportion of the components of the lower $P_c(4312)$ state and of the two higher $P_c$ states, respectively. After that experimental discovery, various other theoretical scenarios have also been proposed to understand the nature of the pentaquark-like states, including compact pentaquarks~\cite{Jaffe:2003sg,Yuan:2012wz,Ali:2016dkf,Maiani:2015vwa,Li:2015gta,Wang:2015epa,Weng:2019ynv,Stancu:2019qga,Giannuzzi:2019esi,Zhu:2019iwm,An:2019idk}, baryocharmonia~\cite{Kubarovsky:2015aaa,Eides:2019tgv} and rescattering-induced kinematical effects~\cite{Guo:2015umn,Liu:2015fea,Guo:2016bkl,Bayar:2016ftu}, as well as other possible binding mechanisms~\cite{Mironov:2015ica,Scoccola:2015nia}. A definite conclusion on the inner structure of the $P_c$ states, however, requires further experimental investigation, especially the determination of their spin and parity. Recently, starting from the near-threshold properties of the reported $P_c$ states, several theoretical works have suggested that molecular interpretations are favored for them~\cite{Chen:2019bip,Chen:2019asm,Guo:2019fdo,Liu:2019tjn,He:2019ify,Liu:2019zoy,Huang:2019jlf,Shimizu:2019ptd,Guo:2019kdc,Xiao:2019aya,Xiao:2019mst,Sakai:2019qph}. In addition, four similar hadronic molecules are expected from heavy quark spin symmetry~\cite{Liu:2019tjn,Sakai:2019qph}.
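These threshold distances are easy to reproduce; the following Python snippet (ours; the isospin-averaged masses are rough inputs, not values quoted in this paper) prints the binding energies in the molecular picture:
\begin{verbatim}
# isospin-averaged masses in MeV (approximate PDG-like values)
m_D, m_Dstar, m_Sigmac = 1867.2, 2008.6, 2453.5

thr = {"Dbar Sigma_c": m_D + m_Sigmac,
       "Dbar* Sigma_c": m_Dstar + m_Sigmac}
pc = [("Pc(4312)", 4311.9, "Dbar Sigma_c"),
      ("Pc(4440)", 4440.3, "Dbar* Sigma_c"),
      ("Pc(4457)", 4457.3, "Dbar* Sigma_c")]

for name, mass, ch in pc:
    print(f"{name}: {thr[ch] - mass:5.1f} MeV below the {ch} threshold")
\end{verbatim}
This gives roughly $9$, $22$ and $5\ \mev$ of binding, consistent with the statement above.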
For a systematic introduction to hadronic molecules, we refer to the reviews~\cite{Chen:2016qju,Guo:2017jvc}. In the present work, we investigate the decay properties of the newly observed $P_c$ states within $S$-wave hadronic molecular pictures. The strong interactions among the involved hadrons are described with the effective Lagrangian method. As a result, the full strong decay patterns are presented, with the free parameters fixed to reproduce the measured total decay widths. This will help to verify in the future whether $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ are $S$-wave hadronic molecular states or not. Besides these, another four possible molecules in the $\bar{D}^{(*)}\Sigma_c^*$ system, predicted in Refs.~\cite{Xiao:2013yca,Liu:2019tjn}, are also investigated. This work is organized as follows: In Sec.~\ref{sec:2}, we introduce the formalism and some details of the theoretical tools used to calculate the decay modes of exotic hadronic molecular states. In Sec.~\ref{sec:3}, the numerical results and a discussion are presented. The last section is devoted to a summary of the present work. \section{Formalism}~\label{sec:2} \subsection{Decay channels} Since up to now there is no definite experimental evidence identifying the quantum numbers of the observed $P_c$ states, we interpret them as $S$-wave hadronic molecules in the present work. This means that $P_c(4312)$ is treated as a $J^P=1/2^-$ $\bar D\Sigma_c$ bound state, while $P_c(4440)$ and $P_c(4457)$ are $\bar D^*\Sigma_c$ bound states with the two possible quantum numbers $1/2^-$ and $3/2^-$. With the effective Lagrangian approach, the partial decay widths of the $P_c$ molecules to all possible channels can be estimated consistently. Compared with the reported total widths of the $P_c$ states, only the effect of the finite width of $\Sigma_c^{*}$~($\sim15~\mev$) needs to be considered; all other constituent hadrons, namely $\bar D$, $\bar D^{*}$ and $\Sigma_c$, can be treated as stable particles. The natural three-body decays, proceeding through the decay of the constituent $\Sigma_c^*$, contribute to the widths of the $\bar D^{(*)}\Sigma_c^*$ molecules, as shown in Fig.~\ref{Fig:three-body}. The two-body decays of the hadronic molecules are described conventionally by the triangle diagram mechanism with one meson exchanged, as in Fig.~\ref{Fig:triangle}. \begin{figure}[htbp] \begin{center} \includegraphics[width=9cm]{three-body.eps} \caption{Three-body decays of the $\bar D^{(*)}\Sigma_c^*$ molecules.\label{Fig:three-body}} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics{twobody.eps} \caption{The triangle diagram for the two-body decays of the $P_c$ states in the $\bar D^{(*)}\Sigma_c^{(*)}$ molecule scenarios, where $C1$, $C2$ denote the constituent particles of the $\bar D^{(*)}\Sigma_c^{(*)}$ composite system, $F1$, $F2$ denote the final states, and $EP$ denotes the exchanged meson.\label{Fig:triangle}} \end{center} \end{figure} All the two-body decay channels considered in our calculation are collected in Table~\ref{Tab:modes}.
\begin{table}[htpb] \centering \caption{\label{Tab:modes}All possible decay channels for the $P_c$ states in the $\bar D^{(*)}\Sigma_c^{(*)}$ molecule scenario.} \scalebox{0.8}{ \begin{tabular}{c|*{2}{c}} \Xhline{1.0pt} \thead{Initial state} & \thead{Final states} & \thead{Exchanged particles} \\ \Xhline{0.8pt} \multirow{5}*{$P_c(4312)(\bar D \Sigma_c)$} & $J/\psi N$, $\omega p$, $\rho N$ & $D$, $D^*$ \\ \Xcline{2-3}{0.4pt} & $\bar D^*\Lambda_c$ & $\pi$, $\rho$ \\ \Xcline{2-3}{0.4pt} & $\bar D \Lambda_c$ & $\rho$ \\ \Xcline{2-3}{0.4pt} & $\eta_c N$ & $D^*$ \\ \Xcline{2-3}{0.4pt} & $\pi N$ & $D^*$, $\Lambda_c$, $\Sigma_c$ \\ \Xhline{0.8pt} \multirow{4}*{\thead{$P_c(4440)\& P_c(4457)$ \\ $(\bar D^* \Sigma_c)$} } & $\bar D^*\Lambda_c$, $\bar D\Lambda_c$, $\bar D \Sigma_c^*$, $\bar D\Sigma_c$ & $\pi$, $\rho$ \\ \Xcline{2-3}{0.4pt} & $J/\psi N$, $\omega p$, $\rho N$, $\eta N$ & $D^*$, $D$ \\ \Xcline{2-3}{0.4pt} & $\pi N$ & $D^*$, $D$, $\Lambda_c$, $\Sigma_c$ \\ \Xcline{2-3}{0.4pt} & $\chi_{c0} N$ & $D^*$ \\ \Xhline{0.8pt} \multirow{6}*{$P_c(4376)(\bar D \Sigma_c^*)$} & $\bar D^*\Lambda_c$ & $\pi$, $\rho$ \\ \Xcline{2-3}{0.4pt} & $\bar D\Lambda_c$, $\bar D\Sigma_c$ & $\rho$ \\ \Xcline{2-3}{0.4pt} & $J/\psi N$, $\omega p$, $\rho N$ & $D^*$, $D$ \\ \Xcline{2-3}{0.4pt} & $\eta_c N$ & $D^*$ \\ \Xcline{2-3}{0.4pt} & $\pi N$ & $D^*$, $\Lambda_c$, $\Sigma_c$ \\ \Xcline{2-3}{0.4pt} & $\chi_{c0} N$ & $D$ \\ \Xhline{0.8pt} \multirow{4}*{\thead{$P_c(4500)\& P_c(4511)$ \\ \&$P_c(4523)(\bar D^* \Sigma_c^*)$} } & $\bar D^*\Lambda_c$, $\bar D\Lambda_c$, $\bar D \Sigma_c^*$, $\bar D\Sigma_c$, $\bar D\Sigma_c^*$ & $\pi$, $\rho$ \\ \Xcline{2-3}{0.4pt} & $J/\psi N$, $\omega p$, $\rho N$, $\eta N$ & $D^*$, $D$ \\ \Xcline{2-3}{0.4pt} & $\pi N$ & $D^*$, $D$, $\Lambda_c$, $\Sigma_c$ \\ \Xcline{2-3}{0.4pt} & $\chi_{c0} N$ & $D^*$ \\ \Xhline{1.0pt} \end{tabular} } \end{table} \subsection{Effective Lagrangian} In the present work, we adopt the effective Lagrangian approach to compute the amplitudes of the above decay diagrams. For the first vertex, where the $P_c$ states couple to the hadronic baryon-meson pairs, the Lorentz covariant $L$-$S$ scheme proposed in Ref.~\cite{Zou:2002yy} is used. A remarkable feature of this construction is that the $L$-$S$ effective Lagrangian contains a definite angular-momentum contribution of the final two-body system in the decay process.
In our $S$-wave molecule scenario, the involved Lagrangians are the following, \begin{align} \Lag_{\bar{D} \Sigma_c P_c(1/2^-)} &= g_{\bar{D} \Sigma_c P_c}^{1/2^-} \bar{\Sigma}_c P_{c} \bar{D}, \\ \Lag_{\bar{D} \Sigma_c^* P_c(3/2^-)} &= g_{\bar{D} \Sigma_c^* P_c}^{3/2^-} \bar{\Sigma}_c^{* \mu}P_{c \mu} \bar{D}, \\ \Lag_{\bar{D}^* \Sigma_c P_c(1/2^-)} &= g_{\bar{D}^* \Sigma_c P_c}^{1/2^-} \bar{\Sigma}_c \gamma^5\tilde{\gamma}^{\mu}P_{c} \bar{D}^{*}_{\mu}, \\ \Lag_{\bar{D}^* \Sigma_c P_c(3/2^-)} &= g_{\bar{D}^* \Sigma_c P_c}^{3/2^-} \bar{\Sigma}_c P_{c \mu} \bar{D}^{* \mu}, \\ \Lag_{\bar{D}^* \Sigma_c^* P_c(1/2^-)} &= g_{\bar{D}^* \Sigma_c^* P_c}^{1/2^-} \bar{\Sigma}_c^{* \mu} P_{c} \bar{D}^{*}_{\mu}, \\ \Lag_{\bar{D}^* \Sigma_c^* P_c(3/2^-)} &= g_{\bar{D}^* \Sigma_c^* P_c}^{3/2^-} \bar{\Sigma}_c^{* \mu}\gamma^5\tilde{\gamma}^{\nu} P_{c \mu} \bar{D}^{*}_{\nu}, \\ \Lag_{\bar{D}^* \Sigma_c^* P_c(5/2^-)} &= g_{\bar{D}^* \Sigma_c^* P_c}^{5/2^-} \bar{\Sigma}_c^{* \mu} P_{c \mu\nu} \bar{D}^{* \nu}, \label{eq:vertex0} \end{align} with $\tilde{\gamma}^\mu$ defined as $(g^{\mu\nu}-p^\mu p^\nu/p^2)\gamma_\nu\equiv\tilde{g}^{\mu\nu}\gamma_\nu$, where $p$ denotes the momentum of the initial $P_c$ state. The effective couplings $g_{\bar{D}^{(*)}\Sigma_c^{(*)}P_c}$ can be estimated with the compositeness criterion, which relates the derivative of the self-energy operator of a hadron resonance to its compositeness~\cite{Weinberg:1962hj,Weinberg:1965zz}. Pure $\bar D^{(*)}\Sigma_c^{(*)}$ molecular structures are assumed for the $P_c$ states in this work, which means that their compositeness equals one, that is, $\chi\equiv1-Z=1$. Working in the non-relativistic limit and expanding in the small quantity $\sqrt{2\mu E_B}/\Lambda$, the simplest estimate for $g_{\bar{D}^{(*)}\Sigma_c^{(*)}P_c}$, denoted as $g_0$, can be obtained by keeping only the leading term. It is \begin{align} g_0&=\sqrt{\frac{8\sqrt{2}\sqrt{E_B}m_1 m_2 \pi}{(m_1 m_2/(m_1+m_2))^{3/2}}}\sqrt{\frac{1}{\mM_{N}F_T}}\label{eq:coupling}\\ F_T&=\begin{cases} 1& \text{for spin-$1/2$ molecules},\\ 3/2& \text{for spin-$3/2$ molecules},\\ 5/3& \text{for spin-$5/2$ molecules}, \end{cases}\notag\\ \mM_{N}&=\begin{cases} 2\, m_1& \text{for the spin-$1/2$ $\bar{D}\Sigma_c$ molecule},\\ 6\,m_1& \text{for the spin-$1/2$ $\bar{D}^*\Sigma_c$ molecule},\\ 4/3\,m_1& \text{for the spin-$3/2$ $\bar{D}\Sigma_c^*$ or $\bar{D}^*\Sigma_c$ molecule},\\ 4\,m_1& \text{for the spin-$1/2$ $\bar{D}^*\Sigma_c^*$ molecule},\\ 20/9\, m_1& \text{for the spin-$3/2$ $\bar{D}^*\Sigma_c^*$ molecule},\\ 6/5\, m_1& \text{for the spin-$5/2$ $\bar{D}^*\Sigma_c^*$ molecule}. \end{cases}\notag \end{align}
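As an illustration of \eqref{eq:coupling}, the following Python snippet (ours) evaluates $g_0$ for $P_c(4312)$ as a spin-$1/2$ $\bar D\Sigma_c$ molecule; identifying $m_1$ with the $\Sigma_c$ mass in $\mM_N$ is our reading of the notation, and the input masses and binding energy are rough isospin-averaged values:
\begin{verbatim}
import math

def g0(m1, m2, E_B, F_T, M_N):
    # leading-order coupling from Eq. (eq:coupling); all inputs in GeV
    mu = m1 * m2 / (m1 + m2)
    return math.sqrt(8 * math.sqrt(2) * math.sqrt(E_B) * m1 * m2 * math.pi
                     / mu ** 1.5 / (M_N * F_T))

m_Sc, m_D, E_B = 2.4535, 1.8672, 0.0088   # GeV; E_B ~ 9 MeV binding
print(g0(m_Sc, m_D, E_B, F_T=1.0, M_N=2 * m_Sc))
\end{verbatim}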
In our hidden charm cases, the coupling constants between charmonium and charmed mesons are related to the couplings $ g_1 $, $ g_2 $, respectively, using the heavy quark symmetry~\cite{Colangelo:2003sa,Guo:2010ak}, where $g_1 $ and $ g_2 $, which can be related to the decay constants of $ \chi_{c0} $ and $ J/\psi $ by using the vecrtor-meson-dominance(VMD) arguments\footnote{Note that there is a factor 2 difference for these values in Ref.~\cite{Colangelo:2003sa} and~\cite{Guo:2010ak} due to the difference in conventions.}, are the couplings of the $ P $- and $S$-wave charmonium fields to the charmed and anti-charmed mesons, respectively. In the present calculation, we take the same convention as Ref.~\cite{Guo:2010ak}, that is, $g_1 = -5.4 \ \mathrm{GeV}^{-1/2}$ and $g_2 = 2.1 \ \mathrm{GeV}^{-3/2}$. And the couplings between charmed mesons and light vector mesons can be estimated with the VMD approach~\cite{Lin:1999ad,Oh:2000qr}. Note that the coupling $g_{D^{(*)}D^{(*)}J/\psi}$ is included in both of these two determinations, $g_{DDJ/\psi}=g_{D^*D^*J/\psi}=7.44$, $g_{D^*DJ/\psi}=7.91\ \gev^{-1}$ in VMD, while with heavy quark symmetry, one obtain $g_{DDJ/\psi}=6.95$, $g_{D^*D^*J/\psi}=7.48$, and $g_{D^*DJ/\psi}=7.21\ \gev^{-1}$ (Note that the different values for $g_{DDJ/\psi}$ and $g_{D^*D^*J/\psi}$ is because the experimental masses of $D$ and $D^*$ are used). Since there is no significant difference between these two methods, we take the value of coupling $g_{D^{(*)}D^{(*)}J/\psi}$ in VMD determination. For the effective couplings which have charmed baryon($\Sigma_c^{(*)}, \Lambda_c$) involved, the heavy quark spin symmetry(HQSS) can be applied to reduce the number of undetermined couplings in this part~\cite{Yan:1992gz,Cheng:2004ru}. And the left unknown couplings are estimated by taking the simplest approximation, that is, we assume that the role of charm quark is the same as that of strange quark. In this way, we use the same value from the $SU(3)$ relations, for example, $g_{\rho\Sigma_c\Lambda_c}=g_{\rho\Sigma\Lambda}$. Finally, there is another set of couplings, which includes $g_{D^*D\pi}$, $\pi\Sigma_c\Lambda_c$ and $\pi\Sigma_c^*\Lambda_c$, is inferred from the experimental decay widths. All effective couplings we used are listed in Table~\ref{table:constants}. One should note that most of these values can only be regarded as rough estimations, which should suffice for an order-of-magnitude estimate of the decay rates under consideration. \begin{table*}[htpb] \centering \caption{\label{table:constants}Coupling constants used in the present work. The $P$, $V$, $B$ and $D$ denote the pseudoscalar, vector mesons, octet and decuplet baryons respectively. 
\begin{table*}[htpb] \centering \caption{\label{table:constants}Coupling constants used in the present work. The $P$, $V$, $B$ and $D$ denote the pseudoscalar mesons, vector mesons, octet baryons and decuplet baryons, respectively. Only the absolute values of the couplings are listed; their signs are ignored.} \scalebox{1}{ \begin{tabular}{*{11}{c}} \Xhline{1.0pt} $\alpha_{BBP}$ & $\alpha_{BBV}$ & $g_{BBP}$ & $g_{BBV}$ & $g_{VPP}$ & \thead{$g_{VVP}$ \\ ($\mathrm{GeV}^{-1}$)} & \thead{$g_{PBD}$ \\ ($\mathrm{GeV}^{-1}$)} & \thead{$g_{VBD}$ \\ ($\mathrm{GeV}^{-1}$)} & \thead{$g_{PDD}$ \\ ($\mathrm{GeV}^{-1}$)} & $g_{VDD}$ & $\kappa_{VDD}$ \\ \Xhline{0.4pt} 0.4 & 1.15 & 13.5 & 3.25 & 3.02 & 12.84 & 15.19 & 20.68 & 12.71 & 7.67 & 6.1 \\ \Xhline{0.8pt} \thead{$g_{\pi\Sigma_c\Sigma_c}$\\($g_{BBP}$)} & \thead{$g_{DN\Sigma_c}$\\($g_{BBP}$)} & \thead{$g_{DN\Lambda_c}$\\($g_{BBP}$)} & \thead{$g_{\rho\Sigma_c\Sigma_c}$\\($g_{BBV}$)} & \thead{$g_{\rho\Sigma_c\Lambda_c}$\\($g_{BBV}$)} & \thead{$g_{D^*N\Sigma_c}$\\($g_{BBV}$)}& \thead{$g_{D^*N\Lambda_c}$\\($g_{BBV}$)} & \thead{$g_{D^*N\Sigma_c^*}$\\($g_{VBD}$)}& \thead{$g_{DN\Sigma_c^*}$\\($g_{PBD}$)}&\thead{$g_{D^*D^*\eta_c}$ \\ ($\mathrm{GeV}^{-1}$)}&$g_{D^*D\eta_c}$ \\ \Xhline{0.4pt} $2\alpha_{BBP}$ & $1-2\alpha_{BBP}$ & $\frac{1+2\alpha_{BBP}}{\sqrt{3}}$ & $2\alpha_{BBV}$ & $\frac{2(1-\alpha_{BBV})}{\sqrt{3}}$ & $1-2\alpha_{BBV}$ & $\frac{1+2\alpha_{BBV}}{\sqrt{3}}$ & $\frac{1}{\sqrt{6}}$ & $\frac{1}{\sqrt{6}}$ & 3.52& 6.82 \\ \Xhline{0.8pt} $g_{\pi\Lambda_c\Sigma_c}$ & \thead{$g_{\pi\Lambda_c\Sigma_c^*}$ \\ ($\mathrm{GeV}^{-1}$)} & $g_{D^*D\pi}$ & \thead{$g_{D^*D^*\pi}$\footnote{$g_{D^*D^*\pi}$ is related to $g_{D^*D\pi}$ with HQSS, that is, $g_{D^*D^*\pi}=2g_{D^*D\pi}/\sqrt{m_{D^*}m_{D}}$. Note that compared with that in Ref.~\cite{Cheng:2004ru}, an additional factor 2 is included due to the different Lagrangian for the $D^*D\pi$ interaction used here. The value of $g_{D^*D\pi}$ is a factor of $\sqrt{2}$ smaller than that in Ref.~\cite{Lin:2017mtz} due to the difference in conventions.} \\ ($\mathrm{GeV}^{-1}$)}& \thead{$g_{D^*D\rho}$ \\ ($\mathrm{GeV}^{-1}$)}& $g_{D^*D^*\rho}$& $g_{DD\rho}$& \thead{$g_{D^*D\omega}$ \\ ($\mathrm{GeV}^{-1}$)}& $g_{D^*D^*\omega}$& $g_{DD\omega}$&\thead{$g_{D^*DJ/\psi}$ \\ ($\mathrm{GeV}^{-1}$)} \\ \Xhline{0.4pt} 19.31 & 7.46 & 6.0 & 6.2 & 2.51 & 2.52 & 2.52 & 2.83 & 2.84 & 2.84 &7.94 \\ \Xhline{0.8pt} $g_{D^*D^*J/\psi}$& $g_{DDJ/\psi}$& $g_{DD\chi_{c0}}$&\thead{$g_{D^*D^*\chi_{c0}}$ \\ ($\mathrm{GeV}^{-1}$)} \\ \Xhline{0.4pt} 7.44 & 7.44 & 32.24 & 11.57 \\ \Xhline{1.0pt} \end{tabular} } \end{table*} \subsection{Form factors} As discussed in our previous work, some of the triangle diagrams, corresponding to the exchange of a pseudoscalar meson in the $D$-wave decay modes~\cite{Albaladejo:2015dsa,Shen:2016tzq}, are ultraviolet (UV) finite, while the others diverge if we integrate over the whole momentum space; even the UV-finite loops receive short-distance contributions. We will therefore employ the following UV regulator, which suppresses short-distance contributions and thus renders all the amplitudes UV finite~\cite{Faessler:2007gv,Dong:2009yp,Dong:2009tg,Lu:2016nnt,Xiao:2019mst} \begin{equation} f_1(p^2_E /\Lambda_0^2) = {\rm{exp}}(-p^2_E /\Lambda_0^2), \label{eq:regulator4} \end{equation} where $p_E$, defined as ${m_{\bar{D}^{(*)}}}p_{\Sigma_c^{(*)}}/({m_{\bar{D}^{(*)}}+m_{\Sigma_c^{(*)}}})- {m_{\Sigma_c^{(*)}}}p_{\bar{D}^{(*)}}/({m_{\bar{D}^{(*)}}+m_{\Sigma_c^{(*)}}})$ for the $\bar{D}^{(*)}\Sigma_c^{(*)}$ molecules, is the Euclidean Jacobi momentum. The cutoff $\Lambda_0$ denotes a hard momentum scale which suppresses the contribution of the two constituents at short distances $\sim 1/\Lambda_0$.
There is no universal criterion for determining these cut-offs, or even for choosing the regulator functions, but as a general rule the value of $\Lambda_0$ should be much larger than the typical momentum in the bound state, given by $\sqrt{2\mu\epsilon}$ ($\sim 0.1\ \gev$ for the $P_c$ molecules). It should also not be too large, since we have neglected all degrees of freedom other than the two constituents, which would play a role at short distances. In the present work, we vary the value of $\Lambda_0$ from $0.6\ \mathrm{GeV}$ to $1.4\ \mathrm{GeV}$ for a rough estimate of the two-body partial widths. Note that there is another, three-momentum Gaussian form factor that is routinely used in a variety of non-relativistic phenomenological approaches~\cite{Nieves:2012tt,HidalgoDuque:2012pq,Guo:2017jvc}, \begin{equation} f_2(\bm{p}^2 /\Lambda_0^2) = {\rm{exp}}(-\bm{p}^2 /\Lambda_0^2), \label{eq:regulator3} \end{equation} where $\bm p$ is the spatial part of the momenta of $\bar{D}^{(*)}$ and $\Sigma_c^{(*)}$ in the rest frame of the $P_c$ states. The significant difference between these two Gaussian regulators is that $f_1$ includes an additional constraint on the energies of the molecular components: it demands that the center-of-mass energy be shared between the two constituents in proportion to their masses, as is usually the case for bound states in quantum mechanics. We will discuss the effect of this energy constraint when we present our numerical results. In addition, a multipolar form factor is introduced to suppress the off-shell contributions of the exchanged mesons in our triangle diagrams. It is chosen as \begin{equation} f_3(q^2) = \frac{\Lambda_1^4}{(m^2 - q^2)^2 + \Lambda_1^4}, \label{eq:multipolar} \end{equation} where $m$ and $q$ are the mass and momentum of the exchanged particle. The parameter $\Lambda_1$ is also varied in the range of $0.6$-$1.4 \ \mathrm{GeV}$. With the effective Lagrangian method, the partial decay widths of the $P_c$ states are computed in the perturbative language, \begin{equation} {\rm d}\Gamma = \frac{F_I}{32 \pi^2} \overline{|{\cal M}|^2} \frac{|\mathbf{p_1}|}{M^2} {\rm d}\Omega, \label{eq:widths} \end{equation} where ${\rm d}\Omega = {\rm d}\phi_1 {\rm d}(\cos{\theta_1})$ is the solid angle of the final state in the rest frame of the $P_c$, $M$ is the mass of the decaying $P_c$ state, the factor $F_I$ arises from isospin symmetry, and the polarization-averaged squared amplitude $\overline{|{\cal M}|^2}$ means $\frac1{2J+1} \sum_\text{spin} |{\cal M}|^2$\footnote{Since the relative phase between the amplitudes contributed by the different exchanged particles in a specific decay channel cannot be determined definitely, we compute the incoherent sum for the various decay processes, e.g., $|{\cal M}|^2=|{\cal M_{\pi}}|^2+|{\cal M_{\rho}}|^2$ for the $\bar D^*\Lambda_c$ final state.}, with $J$ the spin of the $P_c$. \section{Numerical Results and Discussions}~\label{sec:3} With the effective coupling constants collected, the partial decay widths of the observed $P_c$ states can be computed numerically by using the effective Lagrangian approach in the hadronic molecule scenarios. Note that there are still two undetermined parameters in our calculation, $\Lambda_0$ and $\Lambda_1$. The existence of such energy scale parameters is inevitable in phenomenological approaches to the strong interaction, whether they are introduced to eliminate loop divergences or to indicate the energy range in which the effective approach works.
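Before varying them, it is useful to make Eq.~\eqref{eq:widths} concrete. For a channel with an angle-independent, spin-averaged amplitude, the solid-angle integral is trivial; the minimal Python sketch below exposes the phase-space content of the formula. The $\bar D^{*0}\Lambda_c^+$ masses are PDG inputs, and $\overline{|{\cal M}|^2}=1$ is a placeholder, not a dynamical prediction.
\begin{verbatim}
import numpy as np

def kallen(a, b, c):
    return a*a + b*b + c*c - 2*(a*b + a*c + b*c)

def gamma_two_body(M, m1, m2, M2avg, FI=1.0):
    """Eq. (widths) with an isotropic spin-averaged |M|^2 (solid angle -> 4*pi)."""
    p1 = np.sqrt(kallen(M*M, m1*m1, m2*m2))/(2.0*M)   # final-state momentum
    return FI/(32.0*np.pi**2)*M2avg*p1/M**2*4.0*np.pi

# placeholder |M|^2 = 1: pure phase-space factor for P_c(4312) -> Dbar*0 Lambda_c+
print(gamma_two_body(4.3119, 2.00685, 2.28646, 1.0))
\end{verbatim}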
As discussed above, we vary these two cut-offs in the range of $0.6$-$1.4\ \gev$ to examine how the decay behavior changes with the cut-offs, and a specific set of values for $\Lambda_0$ and $\Lambda_1$, fixed by fitting to the measured total widths, is chosen to give the decay patterns of the $P_c$ molecules. Before discussing the partial decay widths, let us first examine the determination of the effective couplings between the $P_c$ states and their $\bar D^{(*)}\Sigma_c^{(*)}$ components, $g_{P_c\bar D^{(*)}\Sigma_c^{(*)}}$. As mentioned before, these couplings are estimated with the compositeness condition, which implies that, for a pure molecular state, the coupling is given by the inverse square root of the derivative of its self-energy operator, with the constituents as the intermediate loop. Since the mass of a hadronic molecule is usually close to the threshold of its constituents, a non-relativistic treatment can be adopted for the estimation of the couplings $g_{P_c\bar D^{(*)}\Sigma_c^{(*)}}$. In Fig.~\ref{Fig:coupling}, we show the differences among three strategies for the determination of $g_{P_c\bar D\Sigma_c}$, namely the relativistic calculation, denoted as $g_{RT}$, the non-relativistic calculation $g_{NR}$, and $g_0$, which is the approximation to $g_{NR}$ discussed above. Here the cutoff $\Lambda_0$ appears in the form factors $f_1$ and $f_2$, which remove the UV divergence of the self-energy operator; $f_1$ is used in the relativistic calculation while $f_2$ is used in the non-relativistic case. With only the leading order kept in Eq.~\eqref{eq:coupling}, $g_0$ is cut-off independent. The results show that $g_{RT}$ is always larger than $g_{NR}$, while $g_0$ is smaller than $g_{NR}$, and, as expected, the differences among them increase with increasing binding energy. In the zero-binding-energy limit, the same coupling constant would be obtained from all three determinations. Since $g_0$ is $\Lambda_0$-independent, the dependence of $g_{RT}$ and $g_{NR}$ on the cut-off can be read off from how the relative ratios change with $\Lambda_0$. As shown in Fig.~\ref{Fig:coupling}, the lower blue-diamond dot lies above the lower red-circle dot at the same binding energy, which reflects the fact that $g_{NR}$ decreases with increasing $\Lambda_0$. Moreover, the relative ratio between the upper and lower dots with the same $\Lambda_0$ and binding energy is smaller for the larger cut-off, which means that $g_{RT}$ also decreases as $\Lambda_0$ increases. The cases of the $\bar D^*\Sigma_c$ and $\bar D^{(*)}\Sigma_c^*$ molecular states are similar. Notice that in our molecular scenarios the binding energies are around $10$, $20$ and $5\ \mev$ for the observed $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ states, respectively, so it makes no significant difference which strategy one adopts for the determination of $g_{P_c\bar D^{(*)}\Sigma_c^{(*)}}$. In the present work, $g_{RT}$ is adopted for the $P_c$ molecules with binding energies larger than $10\ \mev$, while for $P_c(4312)$, $P_c(4457)$, $P_c(4376)$ and $P_c(4523)$, which have small binding energies, $g_{0}$ is used for simplicity.
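The qualitative behavior of $g_0/g_{NR}$ can be reproduced with a schematic single-channel toy model: take a non-relativistic two-body bubble regulated by $f_2$, obtain $g_{NR}^2$ from the inverse of the derivative of the loop at the pole, and note that all overall normalization factors cancel in the ratio to $g_0$. The Python sketch below implements this assumption-laden estimate; it ignores the full vertex structures, so only the trends, not the precise values of Fig.~\ref{Fig:coupling}, are meaningful.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

mu = 2.4529*1.8672/(2.4529 + 1.8672)     # Dbar-Sigma_c reduced mass (GeV)

def dG_reg(EB, Lam):
    # |dG/dE| at the pole for a Gaussian-regulated non-relativistic bubble
    f = lambda q: q*q*np.exp(-2*q*q/Lam**2)/(EB + q*q/(2*mu))**2/(2*np.pi**2)
    return quad(f, 0.0, np.inf)[0]

def ratio(EB, Lam):
    # g0/g_NR = sqrt(dG_reg/dG_unreg); normalizations cancel in the ratio
    dG_unreg = mu*mu/(2*np.pi*np.sqrt(2*mu*EB))   # regulator-free (leading) result
    return np.sqrt(dG_reg(EB, Lam)/dG_unreg)

for EB in (0.005, 0.010, 0.020):         # binding energies (GeV)
    print(EB, [round(ratio(EB, L), 3) for L in (0.6, 1.0, 1.4)])
\end{verbatim}
In this toy model the ratio stays below one, moves toward one as $\Lambda_0$ grows, and moves away from one as the binding energy grows, in line with the lower sets of dots in Fig.~\ref{Fig:coupling}.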
\begin{figure}[htbp] \begin{center} \includegraphics[width=9cm]{coupling.eps} \caption{The dependence of the relative ratios $g_{RT}/g_{NR}$ and $g_0/g_{NR}$ on the binding energy, where $g_{NR}$ and $g_{RT}$ denote the non-relativistic and relativistic estimates of the effective coupling between $P_c(4312)$ and the $\bar D\Sigma_c$ composite system, and $g_0$ is the approximation to $g_{NR}$ shown in Eq.~\eqref{eq:coupling}. The red-circle, orange-square and blue-diamond dots denote $\Lambda_0=0.6\ \gev$, $1.0\ \gev$ and $1.4\ \gev$, respectively. The upper dots are the values of $g_{RT}/g_{NR}$ while the lower dots are $g_0/g_{NR}$. \label{Fig:coupling}} \end{center} \end{figure} The partial decay widths of $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ in the $S$-wave hadronic molecule pictures with $\Lambda_0=1.0\ \gev$ and $\Lambda_1=0.6\ \gev$ are displayed in Table~\ref{table:widths1} for the form factor set ($f_1$, $f_3$) and in Table~\ref{table:widths2} for the form factor set ($f_2$, $f_3$). The cutoff dependence of the total widths and of the branching fractions of the $\bar D^*\Lambda_c$, $J/\psi p$ and $\bar D\Lambda_c$ channels is presented in Fig.~\ref{Fig:width-4312} for $P_c(4312)$, and in Fig.~\ref{Fig:width-VB-lambda0}, Fig.~\ref{Fig:width-4440} and Fig.~\ref{Fig:width-4457} for the $P_c(4440)$ and $P_c(4457)$ states. \begin{table}[htpb] \centering \caption{\label{table:widths1}Partial widths of $P_c(4312)$ as an $S$-wave $\bar D \Sigma_c$ molecule, and of $P_c(4440)$ and $P_c(4457)$ as $S$-wave $\bar D^*\Sigma_c$ molecules with the two possible quantum numbers, to various possible final states with $\Lambda_0=1.0\ \mathrm{GeV}$ and $\Lambda_1=0.6\ \mathrm{GeV}$. The form factor set ($f_1$, $f_3$) is chosen. All of the decay widths are in units of $\mathrm{MeV}$, and the short bars denote that the decay channel is closed or that the corresponding contribution is negligible.} \begin{tabular}{l|*{5}{c}} \Xhline{1pt} \multirow{3}*{Mode} & \multicolumn{5}{c}{Widths ($\mathrm{MeV}$) with ($f_1$, $f_3$)} \\ \Xcline{2-6}{0.4pt} & \multicolumn{1}{c}{$\bar D \Sigma_c$} & \multicolumn{4}{c}{$\bar D^*\Sigma_c$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-4}{0.4pt}\Xcline{5-6}{0.4pt} & \multicolumn{1}{c}{$P_c(4312)$}& \multicolumn{2}{c}{$P_c(4440)$} & \multicolumn{2}{c}{$P_c(4457)$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-4}{0.4pt}\Xcline{5-6}{0.4pt} & \multicolumn{1}{c}{${\frac12}^-$}& \multicolumn{1}{c}{${\frac12}^-$} & \multicolumn{1}{c}{${\frac32}^-$} & \multicolumn{1}{c}{${\frac12}^-$} & \multicolumn{1}{c}{${\frac32}^-$} \\ \Xhline{0.8pt} $\bar D^*\Lambda_c$ &3.8 &13.9 &6.2 &12.5 &6.1 \\ $J/\psi p$ &0.001 &0.03 &0.02 &0.02 &0.01 \\ $\bar D\Lambda_c$ &0.06 &5.6 &1.7 &3.8 &1.5 \\ $\pi N$ &0.004 &0.002 &$2\times10^{-4}$ &0.001 &$1\times10^{-4}$ \\ $\chi_{c0}p$ &- &$8\times10^{-4}$ &$4\times10^{-5}$ &$9\times10^{-4}$ &$3\times10^{-5}$ \\ $\eta_c p$ &0.01 &$3\times10^{-4}$ &$8\times10^{-5}$ &$2\times10^{-4}$ &$6\times10^{-5}$ \\ $\rho N$ &$3\times10^{-5}$ &$3\times10^{-4}$ &$4\times10^{-5}$ &$2\times10^{-4}$ &$2\times10^{-5}$ \\ $\omega p$ &$1\times10^{-4}$ &0.001 &$2\times10^{-4}$ &$6\times10^{-4}$ &$9\times10^{-5}$ \\ $\bar D\Sigma_c$ &- &3.4 &0.5 &2.6 &1.0 \\ $\bar D\Sigma^*_c$ &- &0.8 &5.4 &1.9 &6.2 \\ \Xhline{0.8pt} Total &3.9 &23.7 &13.9 &20.7 &14.7 \\ \Xhline{1pt} \end{tabular} \end{table} \begin{table}[htpb] \centering \caption{\label{table:widths2}The numerical results for the form factor set ($f_2$, $f_3$).
The notation is the same as in Table~\ref{table:widths1}.} \begin{tabular}{l|*{5}{c}} \Xhline{1pt} \multirow{3}*{Mode} & \multicolumn{5}{c}{Widths ($\mathrm{MeV}$) with ($f_2$, $f_3$)} \\ \Xcline{2-6}{0.4pt} & \multicolumn{1}{c}{$\bar D \Sigma_c$} & \multicolumn{4}{c}{$\bar D^*\Sigma_c$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-4}{0.4pt}\Xcline{5-6}{0.4pt} & \multicolumn{1}{c}{$P_c(4312)$}& \multicolumn{2}{c}{$P_c(4440)$} & \multicolumn{2}{c}{$P_c(4457)$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-4}{0.4pt}\Xcline{5-6}{0.4pt} & \multicolumn{1}{c}{${\frac12}^-$}& \multicolumn{1}{c}{${\frac12}^-$} & \multicolumn{1}{c}{${\frac32}^-$} & \multicolumn{1}{c}{${\frac12}^-$} & \multicolumn{1}{c}{${\frac32}^-$} \\ \Xhline{0.8pt} $\bar D^*\Lambda_c$ &10.7 &12.5 &6.8 &10.8 &6.9 \\ $J/\psi p$ &0.1 &0.6 &1.8 &0.2 &0.6 \\ $\bar D\Lambda_c$ &0.3 &2.7 &1.2 &2.0 &1.2 \\ $\pi N$ &1.7 &0.2 &1.9 &0.07 &0.6 \\ $\chi_{c0}p$ &- &0.1 &0.009 &0.05 &0.003 \\ $\eta_c p$ &0.4 &0.07 &0.008 &0.02 &0.003 \\ $\rho N$ &0.0008 &0.4 &0.3 &0.1 &0.1 \\ $\omega p$ &0.003 &1.5 &1.2 &0.5 &0.4 \\ $\bar D\Sigma_c$ &- &3.4 &0.6 &2.8 &0.9 \\ $\bar D\Sigma^*_c$ &- &0.9 &7.3 &2.3 &7.2 \\ \Xhline{0.8pt} Total &13.2 &22.4 &21.0 &18.8 &17.9 \\ \Xhline{1pt} \end{tabular} \end{table} At first glance, one can conclude that $\bar D^*\Lambda_c$ is the dominant decay channel for both the $\bar D\Sigma_c$ and $\bar D^*\Sigma_c$ molecules, similar to the results for the $\bar D\Sigma_c^*$ and $\bar D^*\Sigma_c$ molecules in our previous work~\cite{Lin:2017mtz}. One can also notice that the $\bar D\Lambda_c$ and $\bar D\Sigma_c^{(*)}$ channels account for a large portion of the widths of the $\bar D^*\Sigma_c$ molecules. In fact, the large partial widths of these channels come from the $\pi$-exchange contribution, because the exchanged $\pi$ can be nearly on the mass shell in these decay processes. A strong coupling of the $P_c(4312)$ to the $\bar D^*\Lambda_c$ channel is also claimed in Ref.~\cite{Weng:2019ynv} with the extended chromomagnetic model. The small $J/\psi p$ decay widths of all the $S$-wave molecules in our calculation are consistent with the latest LHCb observation, which shows that the upper limits of the branching fractions $\mathcal{B}(P_c^+\to J/\psi p)$ are $4.6\%$, $2.3\%$ and $3.8\%$ for $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$, respectively, at $90\%$ confidence level, assuming $J^P=3/2^-$ for all of the $P_c$ states~\cite{Ali:2019lzf}. As shown in Refs.~\cite{Weng:2019ynv,Voloshin:2019aut,Sakai:2019qph}, the partial width of the $\eta_c p$ channel is almost three times larger than that of $J/\psi p$ for the lowest $P_c(4312)$ state, and the decay width of $P_c(4312)$ to $\bar D\Lambda_c$ is smaller than that to the $\bar D^*\Lambda_c$ channel by a factor of about $0.02$~\cite{Weng:2019ynv}. These relative ratios are consistent with our calculation, as can be seen from Table~\ref{table:widths2}. Besides that, our results show that the partial width of the $\eta_c p$ channel is around one order of magnitude smaller than that of $J/\psi p$ for the $P_c(4440)$ state, which agrees with the heavy-quark-symmetry argument of Ref.~\cite{Voloshin:2019aut}. Comparing Table~\ref{table:widths1} with Table~\ref{table:widths2}, one notices a remarkable difference between the form factors $f_1$ and $f_2$: a much larger $D^{(*)}$-meson-exchange contribution is obtained with $f_2$ when we take the same value of the cutoff.
According to the definitions of $f_1$ and $f_2$, $f_1$ provides an additional constraint on the energies of the particles inside the $P_c$ molecules. In that case, the exchanged $D$ or $D^*$ mesons must be highly off the mass shell, and this off-shell contribution is suppressed by our second form factor, $f_3$. Since the majority of the total widths of the $P_c$ molecules is contributed by the $\pi$-exchange processes, which are similar for these two form factors, the total decay widths of the $\bar D\Sigma_c$ and $\bar D^*\Sigma_c$ molecules obtained with $f_1$ and $f_2$ are compatible with each other. As shown in Fig.~\ref{Fig:width-4312}, Fig.~\ref{Fig:width-4440} and Fig.~\ref{Fig:width-4457}, the cut-off dependence of the total widths and of the branching fractions of the $\bar D^*\Lambda_c$, $J/\psi p$ and $\bar D\Lambda_c$ channels is almost the same for the two form factors. The total widths increase as $\Lambda_0$ or $\Lambda_1$ increases, while the branching fractions are almost stable over the whole range of $\Lambda_1$. It should be noted that $\Lambda_0=1.0\ \gev$ and $\Lambda_1=0.6\ \gev$ are fixed to give a description compatible with the measured widths for all three observed $P_c$ states. The numerical decay patterns with these cutoffs in Table~\ref{table:widths1} suggest that the spin parities of $P_c(4440)$ and $P_c(4457)$ are more likely to be $1/2^-$ and $3/2^-$, respectively. Looking further ahead, the relative ratios between the $\bar D\Sigma_c$ and $\bar D\Sigma_c^*$ channels and between the $\eta_c p$ and $J/\psi p$ channels are quite different for the two possible quantum numbers of the $\bar D^*\Sigma_c$ molecules: $\Gamma_{\bar D\Sigma_c}/\Gamma_{\bar D\Sigma_c^*}$ is around 4 for the $1/2^-$-$P_c(4440)$ while it is 0.1 for the $3/2^-$-$P_c(4440)$, and $\Gamma_{J/\psi p}/\Gamma_{\eta_c p}$ is around 10 for the $1/2^-$-$P_c(4440)$ while it is around 200 for the $3/2^-$-$P_c(4440)$. These distinctive branching-fraction patterns also exist for the $P_c(4457)$ and will help to determine the quantum numbers of the $P_c(4440)$ and $P_c(4457)$ states experimentally in the future. \begin{figure*}[htbp] \begin{center} \includegraphics[width=18cm]{cutoff-4312.eps} \caption{$\Lambda_1$-dependence of the total decay width and the branching fractions of the $\bar D^*\Lambda_c$, $J/\psi p$ and $\bar D\Lambda_c$ channels for the $P_c(4312)$ in the $J^P=1/2^-$ $\bar D \Sigma_c$ molecule scenario. The form factor set is chosen as $(f_1, f_3)$ (denoted as RT) for the left panel and as $(f_2, f_3)$ (denoted as NR) for the right panel. The black solid line denotes the $\Lambda_1$-dependence of the total width while the dashed line is the $\Lambda_0$-dependence. The blue-solid, orange-dashed and red-dotted lines represent the $\Lambda_1$-dependence of the partial widths for the $\bar D^*\Lambda_c$, $J/\psi p$ and $\bar D\Lambda_c$ channels, respectively.
The green bands in the upper half panels represent the measured widths with uncertainties and the green-solid line denotes the central value.\label{Fig:width-4312}} \end{center} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=18cm]{cutoff0-VB.eps} \caption{$\Lambda_0$-dependence of the total decay widths for the $P_c(4440)$ in the left panel and the $P_c(4457)$ in the right panel, where the blue-solid, blue-dashed, red-solid and red-dashed lines denote the $1/2^-$-$\bar D^*\Sigma_c$ molecule with the form factor set $(f_2, f_3)$, the $3/2^-$-$\bar D^*\Sigma_c$ with $(f_2, f_3)$, the $1/2^-$-$\bar D^*\Sigma_c$ with $(f_1, f_3)$ and the $3/2^-$-$\bar D^*\Sigma_c$ with $(f_1, f_3)$, respectively. The green bands in the upper half panels represent the measured widths with uncertainties and the green-solid line denotes the central value. \label{Fig:width-VB-lambda0}} \end{center} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=18cm]{cutoff1-4440.eps} \caption{$\Lambda_1$-dependence of the total decay width and the branching fractions of the $\bar D^*\Lambda_c$, $J/\psi p$ and $\bar D\Lambda_c$ channels for the $P_c(4440)$ in the $\bar D^* \Sigma_c$ molecule scenario with $J^P=1/2^-$ or $3/2^-$. The form factor set is chosen as $(f_1, f_3)$ (denoted as RT) for the left panel and as $(f_2, f_3)$ (denoted as NR) for the right panel. The black solid line denotes the $\Lambda_1$-dependence of the total width. The blue-solid, orange-dashed and red-dotted lines represent the $\Lambda_1$-dependence of the partial widths for the $\bar D^*\Lambda_c$, $J/\psi p$ and $\bar D\Lambda_c$ channels, respectively. The green bands in the upper half panels represent the measured widths with uncertainties and the green-solid line denotes the central value.\label{Fig:width-4440}} \end{center} \end{figure*} \begin{figure*}[htbp] \begin{center} \includegraphics[width=18cm]{cutoff1-4457.eps} \caption{$\Lambda_1$-dependence of the total decay width and the branching fractions of the $\bar D^*\Lambda_c$, $J/\psi p$ and $\bar D\Lambda_c$ channels for the $P_c(4457)$ in the $\bar D^* \Sigma_c$ molecule scenario with $J^P=1/2^-$ or $3/2^-$. The notations are the same as in Fig.~\ref{Fig:width-4440}.\label{Fig:width-4457}} \end{center} \end{figure*} The quantum numbers of these two $P_c$ states are also discussed within the molecular scenarios in Refs.~\cite{Yamaguchi:2019seo,Valderrama:2019chc,Liu:2019zvb,Pan:2019skd}. Following heavy quark spin symmetry, Ref.~\cite{Liu:2019tjn} studied all possible heavy quark multiplets in the $\bar D\Sigma_c$, $\bar D\Sigma_c^*$, $\bar D^*\Sigma_c$ and $\bar D^*\Sigma_c^*$ systems with two sets of quantum numbers for $P_c(4440)$ and $P_c(4457)$ as inputs: ($1/2^-$, $3/2^-$), which they call set A, and the opposite identification, set B. Since the mass of the $1/2^-$-$\bar D\Sigma_c$ molecule produced with set A is more compatible with the LHCb observation, four heavy quark multiplets predicted from set A are considered in our work, that is, the $3/2^-$-$P_c(4376)$, $1/2^-$-$P_c(4500)$, $3/2^-$-$P_c(4511)$ and $5/2^-$-$P_c(4523)$. With the same cutoffs, the partial decay widths of the $S$-wave $\bar D^{(*)}\Sigma_c^*$ molecules are presented in Table~\ref{table:widths3} for the form factor set ($f_1$, $f_3$) and in Table~\ref{table:widths4} for the set ($f_2$, $f_3$). The cut-off dependence of the total widths is also presented in Fig.~\ref{Fig:width-VD}.
The decay pattern of the $3/2^-$-$\bar D\Sigma_c^*$ molecule is quite similar to that of the $1/2^-$-$\bar D\Sigma_c$, except for the additional three-body $\bar D\Lambda_c\pi$ decay of the $\bar D\Sigma_c^*$ molecule; $\bar D^*\Lambda_c$ is still the largest decay channel of the $S$-wave $\bar D\Sigma_c^*$ molecule. The difference between the decay patterns obtained with the two form factors $f_1$ and $f_2$ in the $\bar D \Sigma_c^*$ and $\bar D^*\Sigma_c^*$ sectors is similar to that for the $\bar D \Sigma_c$ and $\bar D^*\Sigma_c$ molecules: the non-relativistic form factor $f_2$ yields larger $D$- and $D^*$-meson-exchange partial widths. In particular, for the $1/2^-$-$\bar D^*\Sigma_c^*$ molecule, a huge enhancement of the $D^*$-exchange processes in the $J/\psi p$, $\rho N$, $\omega p$ and $\chi_{c0} p$ channels and of the $D$-exchange processes in the $\pi N$ and $\eta_c p$ channels is generated by $f_2$. There are also some intriguing results for the three $\bar D^*\Sigma_c^*$ molecules. Among them, the $1/2^-$-$\bar D^*\Sigma_c^*$ molecule has the strongest couplings to the $\bar D\Lambda_c$, $\bar D\Sigma_c$ and $\bar D^*\Sigma_c$ channels, with relative ratios of around $1:1:1$, while the $3/2^-$-$\bar D^*\Sigma_c^*$ couples strongly to the $\bar D\Sigma_c^*$ channel. In addition, the relative ratio between the $\bar D^*\Lambda_c$ and $\bar D\Lambda_c$ channels differs among the three $S$-wave $\bar D^*\Sigma_c^*$ molecules: $\Gamma_{\bar D^*\Lambda_c}/\Gamma_{\bar D\Lambda_c}$ is a bit less than 1 for the $1/2^-$ state, equals 3 for the $5/2^-$, and is around 50 for the $3/2^-$ molecule. The results obtained here can expand our understanding of the nature of the pentaquark states in the hadronic molecule scenarios and can serve as theoretical references for testing the molecular interpretations in future experiments. \begin{figure*}[htbp] \begin{center} \includegraphics[width=18cm]{cutoff-VD.eps} \caption{$\Lambda$-dependence of the total decay widths for the four spin partners predicted by Ref.~\cite{Liu:2019tjn} in the $\bar D^{(*)}\Sigma_c^*$ molecule pictures, where the blue-solid, blue-dashed, red-solid and red-dashed lines denote the dependence on $\Lambda_0$ with the form factor set $(f_2, f_3)$, the dependence on $\Lambda_1$ with $(f_2, f_3)$, the dependence on $\Lambda_0$ with $(f_1, f_3)$ and the dependence on $\Lambda_1$ with $(f_1, f_3)$, respectively.\label{Fig:width-VD}} \end{center} \end{figure*} \begin{table}[htpb] \centering \caption{\label{table:widths3}The partial decay widths of $P_c(4376)$ as an $S$-wave $\bar{D}\Sigma_c^*$ molecule, and of $P_c(4500)$, $P_c(4511)$ and $P_c(4523)$ as $\bar{D}^*\Sigma_c^*$ molecules with different spin parities, which are the four spin partners of the observed $P_c$ molecules within the HQSS framework. The form factor set $(f_1,f_3)$ is used, with $\Lambda_0=1.0\ \mathrm{GeV}$ and $\Lambda_1=0.6\ \mathrm{GeV}$.
The notation is the same as in Table~\ref{table:widths1}.} \begin{tabular}{l|*{4}{c}} \Xhline{1pt} \multirow{3}*{Mode} & \multicolumn{4}{c}{Widths ($\mathrm{MeV}$) with $(f_1,f_3)$} \\ \Xcline{2-5}{0.4pt} & \multicolumn{1}{c}{$\bar D \Sigma_c^*$} & \multicolumn{3}{c}{$\bar D^*\Sigma_c^*$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-5}{0.4pt} & \multicolumn{1}{c}{$P_c(4376)$}& \multicolumn{1}{c}{$P_c(4500)$} & \multicolumn{1}{c}{$P_c(4511)$} & \multicolumn{1}{c}{$P_c(4523)$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-3}{0.4pt}\Xcline{4-4}{0.4pt}\Xcline{5-5}{0.4pt} & \multicolumn{1}{c}{${\frac32}^-$}& \multicolumn{1}{c}{${\frac12}^-$} & \multicolumn{1}{c}{${\frac32}^-$} & \multicolumn{1}{c}{${\frac52}^-$} \\ \Xhline{0.8pt} $\bar D^*\Lambda_c$ &12.4 &7.1 &17.0 &4.5 \\ $J/\psi p$ &0.01 &0.006 &0.02 &0.006 \\ $\bar D\Lambda_c$ &$9\times10^{-5}$ &10.0 &0.3 &1.5 \\ $\pi N$ &$2\times10^{-4}$ &0.003 &$1\times10^{-4}$ &$3\times10^{-4}$ \\ $\chi_{c0}p$ &0.003 &0.01 &0.002 &$6\times10^{-7}$ \\ $\eta_c p$ &0.001 &0.01 &$6\times10^{-4}$ &$8\times10^{-4}$ \\ $\rho N$ &$5\times10^{-4}$ &0.001 &0.01 &$8\times10^{-5}$ \\ $\omega p$ &0.002 &0.004 &0.005 &$3\times10^{-4}$ \\ $\bar D\Sigma_c$ &$5\times10^{-4}$ &10.6 &0.2 &1.3 \\ $\bar D\Sigma^*_c$ &- &1.0 &33.8 &6.2 \\ $\bar D^*\Sigma_c$ &- &10.6 &0.07 &1.2 \\ $\bar D\Lambda_c \pi$ &5.0 &- &- &- \\ $\bar D^*\Lambda_c \pi$ &- &4.0 &7.7 &7.8 \\ \Xhline{0.8pt} Total &17.5 &43.3 &59.1 &22.5 \\ \Xhline{1pt} \end{tabular} \end{table} \begin{table}[htpb] \centering \caption{\label{table:widths4}The numerical results for the form factor set $(f_2,f_3)$. The notation is the same as in Table~\ref{table:widths3}.} \begin{tabular}{l|*{4}{c}} \Xhline{1pt} \multirow{3}*{Mode} & \multicolumn{4}{c}{Widths ($\mathrm{MeV}$) with $(f_2,f_3)$} \\ \Xcline{2-5}{0.4pt} & \multicolumn{1}{c}{$\bar D \Sigma_c^*$} & \multicolumn{3}{c}{$\bar D^*\Sigma_c^*$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-5}{0.4pt} & \multicolumn{1}{c}{$P_c(4376)$}& \multicolumn{1}{c}{$P_c(4500)$} & \multicolumn{1}{c}{$P_c(4511)$} & \multicolumn{1}{c}{$P_c(4523)$} \\ \Xcline{2-2}{0.4pt}\Xcline{3-3}{0.4pt}\Xcline{4-4}{0.4pt}\Xcline{5-5}{0.4pt} & \multicolumn{1}{c}{${\frac32}^-$}& \multicolumn{1}{c}{${\frac12}^-$} & \multicolumn{1}{c}{${\frac32}^-$} & \multicolumn{1}{c}{${\frac52}^-$} \\ \Xhline{0.8pt} $\bar D^*\Lambda_c$ &21.6 &6.4 &16.7 &3.1 \\ $J/\psi p$ &0.7 &36.7 &4.4 &0.2 \\ $\bar D\Lambda_c$ &$3\times10^{-5}$ &2.0 &0.09 &0.7 \\ $\pi N$ &0.6 &49.9 &6.0 &0.5 \\ $\chi_{c0}p$ &0.1 &4.7 &0.5 &$8\times10^{-6}$ \\ $\eta_c p$ &$9\times10^{-4}$ &13.5 &0.1 &0.04 \\ $\rho N$ &0.2 &11.6 &0.6 &0.1 \\ $\omega p$ &0.8 &44.0 &2.3 &0.4 \\ $\bar D\Sigma_c$ &$2\times10^{-4}$ &6.7 &0.2 &1.0 \\ $\bar D\Sigma^*_c$ &- &1.2 &35.0 &4.1 \\ $\bar D^*\Sigma_c$ &- &13.6 &0.08 &0.7 \\ $\bar D\Lambda_c \pi$ &5.0 &- &- &- \\ $\bar D^*\Lambda_c \pi$ &- &4.0 &7.7 &7.8 \\ \Xhline{0.8pt} Total &29.0 &194.5 &73.7 &18.7 \\ \Xhline{1pt} \end{tabular} \end{table} \section{Summary}\label{sec:summary} A more precise spectrum of pentaquark-like states in the process $\Lambda_b\to J/\psi p K$ was reported recently by the LHCb collaboration. As with the previously discovered $P_c(4380)$ and $P_c(4450)$, the newly observed $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ have sparked a heated discussion. Inspired by the closeness of the $P_c(4312)$ to the $\bar D\Sigma_c$ threshold and of the $P_c(4440)$ and $P_c(4457)$ states to the $\bar D^*\Sigma_c$ threshold, a natural hadronic molecular interpretation of these states has been suggested in many theoretical works.
In analogy to our previous work on the $P_c(4380)$ and $P_c(4450)$, we investigate the strong decays of these newly observed $P_c$ states in the molecular scenarios. With the effective Lagrangian approach, the partial decay widths of the $P_c$ states to all allowed channels are presented. It is found that the measured widths of $P_c(4312)$, $P_c(4440)$ and $P_c(4457)$ can be reproduced well in the $1/2^-$-$\bar D\Sigma_c$, $1/2^-$-$\bar D^*\Sigma_c$ and $3/2^-$-$\bar D^*\Sigma_c$ molecule pictures, respectively, although the $3/2^-$-$\bar D^*\Sigma_c$ and $1/2^-$-$\bar D^*\Sigma_c$ molecular assignments for the $P_c(4440)$ and $P_c(4457)$ cannot be ruled out at present. The marked differences in the decay patterns between the $1/2^-$ and $3/2^-$ $\bar D^*\Sigma_c$ molecules, for both the $P_c(4440)$ and $P_c(4457)$, such as $\Gamma_{\bar D\Sigma_c}/\Gamma_{\bar D\Sigma_c^*}$ and $\Gamma_{J/\psi p}/\Gamma_{\eta_c p}$, can be used to distinguish the quantum numbers in future experiments. In addition, four possible heavy quark multiplets are also considered in our calculations, and their partial decay widths are presented with the same cutoffs. Albeit with large uncertainties, the findings discussed here can be considered direct consequences of the hadronic molecular assignments and can be tested by further experimental investigations in the future; this will improve our understanding of the inner structure of these pentaquark-like states. \bigskip \noindent \begin{center} {\bf ACKNOWLEDGEMENTS}\\ \end{center} We thank Cheng-Jian Xiao, Yin Huang, Feng-Kun Guo and Jia-Jun Wu for helpful discussions. This project is supported by NSFC under Grant No.~11621131001 (CRC110 cofunded by DFG and NSFC) and Grant No.~11747601, and the Chinese Academy of Sciences (CAS) under Grant No.~XDPB09.
\section{Introduction} \label{sec:intro} Electron beams find applications in a variety of devices that include microwave and sub-millimeter wave generators and amplifiers, accelerators and microscopes, as well as in lithography, welding, furnaces, and medical and space applications \cite{booske2008,whaley2009,booske2011,whaley2014,polk2008,graves2012}. Common mechanisms for producing an electron beam are thermionic, field and photo emission. A topic of current research centres around large area arrays of pointed field emitters \cite{spindt76,spindt91,forbes2012,zhang2014,harris2015,jap2016} that offer high-brightness, high-current-density beams with a small energy spread at low operational temperatures. Field emission of electrons is commonly studied using a Fowler-Nordheim (FN) type model that involves a planar metallic surface subjected to a uniform external electrostatic field $E_0$ and the attendant image force between the electron and its image due to the grounded metallic plane \cite{FN,Nordheim,murphy,jensen2003,forbes_deane}. Since electron emission is predicted to be weak in the planar case, the focus has been on sharp protrusions from such a surface, where field enhancement is known to occur and can lead to a significant jump in electron emission. An improper surface finish can, for example, lead to undesirable dark currents in accelerators, while properly grown nanotube arrays on a planar substrate can be the basis of a high-performance cold cathode. In both cases, the protrusions are sharp and only their tips act as electron emitters. Field emission in such cases is handled by a quasi-planar extension in which the local electric field continues to be uniform across the tunneling region but its magnitude is enhanced by the field enhancement factor $\gamma$. However, when the protrusions are sharp and the apex radius of curvature is only a few nanometers, the local electric field decreases significantly even within the tunneling regime. This change in the local field away from the surface of curved emitters should thus be incorporated as a correction in order to predict the emitted current density accurately. The nonlinear nature of the external field near the surface of curved emitters is well known \cite{cutler93a,cutler93b,fursey,forbes2013}. For exactly solvable problems such as the hyperboloid, the deviation from the planar result has been demonstrated in the form of nonlinear FN plots, and the current densities were found to differ by orders of magnitude \cite{cutler93a,cutler93b}. In general, however, a first approximation in dealing with curved emitters is to treat the surface locally as a sphere having the same local radius of curvature. Thus, the external potential may be expressed locally as \cite{fursey,forbes2013} \begin{equation} V_{ext} \simeq E_l \Delta s \frac{1}{1 + (\Delta s/R)} \label{eq:sph} \end{equation} \noindent where $R$ is the local radius of curvature and $\Delta s$ is the perpendicular distance from the surface. For axially symmetric emitters, the form of the nonlinear external potential has recently been studied using a different approach\cite{KX}. It has been shown that along the symmetry axis of the emitter, for $\Delta s < R_a$, the external potential energy $V_{ext}$ takes the form \begin{equation} V_{ext}^{(a)} (\Delta s) \simeq E_l \Delta s( 1 - \Delta s/R_a) \label{eq:KX} \end{equation} \noindent where $R_a$ is the apex radius of curvature and the normal distance $\Delta s$ is measured from the apex.
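To gauge the size of these curvature corrections, the planar result, Eq.~\ref{eq:sph} and Eq.~\ref{eq:KX} can be compared numerically. The minimal Python sketch below uses the illustrative values $R = R_a = 5$~nm and $E_l = 5$~V/nm, assumptions chosen to match the sharp-emitter regime discussed later in the paper.
\begin{verbatim}
# Planar vs spherical (Eq. sph) vs axial (Eq. KX) external potential energy
# for illustrative values R = R_a = 5 nm and E_l = 5 V/nm.
R, El = 5.0, 5.0
for ds in (0.25, 0.5, 1.0):                # normal distance in nm
    planar = El*ds
    sph    = El*ds/(1.0 + ds/R)            # Eq. (sph)
    axial  = El*ds*(1.0 - ds/R)            # Eq. (KX)
    print(ds, planar, sph, axial)          # in eV for an electron
\end{verbatim}
Already at $\Delta s = 1$~nm, a typical tunneling distance, the two curved-emitter forms lie $15$--$20\%$ below the planar value.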
The validity of the local spherical approximation can be scrutinized using exact results for curved emitters such as the hyperboloid or the hemi-ellipsoid. A similar approach for the image potential shows that for the hyperboloid, where exact results for the image charge potential due to a ring of charges are known \cite{jensen_image}, the spherical approximation is found to hold \cite{BR2017a} near the tip of sharp hyperboloids when the local radius of curvature considerably exceeds the tunneling distance. The approach that we adopt here makes use of the exact results for the hemi-ellipsoid and hyperboloid emitters to derive a correction to Eq.~\ref{eq:KX} and determine the conditions under which it is identical for the two emitters. We then show that an identical result exists for the sphere provided corrections to Eq.~\ref{eq:sph} are incorporated. Our derivation also brings out the role of the principal radii of curvature ($R_1,R_2$), with the added clarification that the spherical approximation, where applicable, must be used with $R_2$, except at the apex where $R_1 = R_2$. Finally, the applicability of the result is tested numerically for a conical emitter with a quadratic tip and found to be in good agreement. \section{Potential variation normal to the surface} We shall first deal with the potential variation along field lines close to the surface of a hemiellipsoid and a hyperboloid \cite{kos,pogorelov}. In both cases, the structure is assumed to be vertically aligned ($\hat{z}$) in the presence of an external field. It is convenient to work in the {\it prolate spheroidal coordinate} system ($\eta,\xi,\phi$). These coordinates are related to the Cartesian coordinates by the following relations: \begin{eqnarray} \nonumber &&x= c_2 \sqrt{({\eta^2}-1)(1-{\xi^2})}\cos{\phi}\\ \nonumber &&y= c_2 \sqrt{({\eta^2}-1)(1-{\xi^2})}\sin{\phi}\\ &&z= c_2 \xi \eta. \label{eq:cart_pro_sph} \end{eqnarray} \noindent Note that a surface obtained by fixing $\eta=\eta_0$ in this coordinate system is an ellipsoid, while $\xi = \xi_0$ defines a hyperboloid. For a hemiellipsoid in an external field $-E_0 \hat{z}$, the field lines close to the surface are $\xi = constant$ curves. For a hyperboloid diode with both the cathode and anode as hyperboloid surfaces, the field lines are always $\eta = constant$ curves. Further, since we are concerned with the potential variation over a distance of around $1~$nm at moderate fields of $5$~V/nm, we shall assume that the curved field lines are approximately straight over this distance. The validity of this assumption is tested in the appendix for a hemiellipsoid, where it is shown, using a Taylor expansion, that close to the apex, from where field emission predominantly occurs, the straightness assumption is largely valid. \subsection{Hemiellipsoid} Consider a hemiellipsoidal emitter, $\eta = \eta_0$, on a grounded conducting plane, placed in an external electrostatic field $-E_0 \hat{z}$. The solution of the Laplace equation may be written as\cite{kos,pogorelov,jap2016} \begin{equation} V(\eta,\xi) = c_2 E_0 \eta \xi \Bigg( 1 - \frac{\log\big[ \frac{\eta+1}{\eta-1} \big] - \frac{2}{\eta}}{\log\big[ \frac{\eta_0+1}{\eta_0-1} \big] - \frac{2}{\eta_0}} \Bigg) \label{analytic_V} \end{equation} \noindent where $c_2 = \sqrt{h(h - R_a)}$, $h$ is the height and $R_a$ is the apex radius of curvature. The point $(\eta,\xi)$ may lie on the hemiellipsoid surface or outside it. We wish to determine the variation in potential close to the surface along the field line $\xi = \xi_0$ at the point $(\eta_0,\xi_0)$.
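As a quick sanity check of Eq.~\ref{analytic_V}: the potential must vanish on the grounded surface $\eta=\eta_0$ and tend to the applied potential $E_0 z$ far away. A minimal Python sketch verifies both limits; the height and base radius are borrowed, as an assumption, from the geometry used in the numerical examples below.
\begin{verbatim}
import numpy as np

h, b = 1.5e6, 2.0e3                        # height and base radius in nm (assumed)
E0   = 6.0e-5                              # applied field in V/nm
c2   = np.sqrt(h*h - b*b)
eta0 = h/c2

def V(eta, xi):                            # Eq. (analytic_V)
    return c2*E0*eta*xi*(1 - (np.log((eta + 1)/(eta - 1)) - 2/eta)
                           /(np.log((eta0 + 1)/(eta0 - 1)) - 2/eta0))

print(V(eta0, 0.5))                        # = 0 on the grounded emitter surface
print(V(50.0, 0.5), c2*E0*50.0*0.5)        # tends to E0*z far from the emitter
\end{verbatim}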
Using Eq.~\ref{analytic_V}, the electrostatic potential $V$ at the local point $(\eta_0+\Delta \eta,\xi_0)$ outside the surface can be calculated as \begin{equation} \begin{aligned} V( & \eta_0 + \Delta \eta,\xi_0) = {} {\cal U} \bigg[1 - \frac{\log(\frac{\eta_0 + \Delta\eta + 1}{\eta_0 + \Delta\eta - 1})-\frac{2}{\eta_0 + \Delta\eta}}{\log( \frac{\eta_0+1}{\eta_0-1}) - \frac{2}{\eta_0}}\bigg] \\ &= - \tilde{{\cal U}} \bigg[\frac{2}{\eta_0} + \log\bigg(\frac{1+\frac{\Delta\eta}{\eta_0+1}}{1+\frac{\Delta\eta}{\eta_0 -1}}\bigg)-\frac{2}{\eta_0(1+\frac{\Delta \eta}{\eta_0})}\bigg] \\ & = -\tilde{{\cal U}} \bigg[\frac{2}{\eta_0}\Big[\Big(\frac{\Delta \eta}{\eta_0}\Big) - \Big(\frac{\Delta \eta}{\eta_0}\Big)^2 + \Big(\frac{\Delta \eta}{\eta_0}\Big)^3 + \ldots\Big] \\ &~~~~~ + \log\Big(1+\frac{\Delta\eta}{\eta_0+1}\Big) - \log{\Big(1+\frac{\Delta\eta}{\eta_0 -1}}\Big)\bigg] \end{aligned} \end{equation} \noindent where \begin{eqnarray} {\cal U} & = & c_2 E_0 \xi_0 \eta_0 + c_2 E_0 \xi_0 \Delta\eta = {\cal U}_0 + \Delta{\cal U} \\ \tilde{{\cal U}} & = & \frac{{\cal U}_0 + \Delta{\cal U}}{\log( \frac{\eta_0+1}{\eta_0-1}) - \frac{2}{\eta_0}} = \tilde{{\cal U}_0} + \tilde{\Delta{\cal U}}. \end{eqnarray} \noindent Using the expansion $\log(1+x) = x - x^2/2 + x^3/3 + \ldots$, the above expression for the potential can be approximated as \begin{equation} \begin{aligned} V(&\eta_0 + \Delta \eta,\xi_0) \simeq 2\tilde{{\cal U}} \bigg[ \frac{1}{\eta_0^2(\eta_0^2 - 1)}\Delta\eta ~+ \\ &\Big(\frac{1}{\eta_0^3}-\frac{\eta_0}{(\eta_0^2 -1)^2}\Big)(\Delta\eta)^2 - \Big( \frac{1}{\eta_0^4} - \frac{3\eta_0^2 + 1}{3(\eta_0^2-1)^3}\Big)(\Delta \eta)^3 \bigg] \end{aligned} \end{equation} \noindent which, on simplifying and keeping terms up to $(\Delta\eta)^3$, takes the form \begin{equation} \begin{aligned} V(\eta_0+\Delta \eta,\xi_0) \simeq {} & \frac{2\tilde{{\cal U}_0}\Delta\eta}{(\eta_0^2-1)\eta_0^2}~ \times \\ & \bigg[1-\frac{\eta_0 }{\eta_0^2-1}\Delta\eta + \frac{4\eta_0^2}{3(\eta_0^2-1)^2} (\Delta \eta)^2\bigg]. \label{eq:Vdeta} \end{aligned} \end{equation} \noindent Rewriting in terms of the magnitude of the local field $E_{l}$, \begin{equation} \begin{aligned} E_{l}(\eta_0,\xi_0) =\frac{2E_0 \xi_0}{\eta_0 \sqrt{\eta_0^2 - \xi_0^2}\sqrt{\eta_0^2 - 1} (\log\big[ \frac{\eta_0+1}{\eta_0-1} \big] - \frac{2}{\eta_0})} \end{aligned} \end{equation} \noindent and the normal distance $\Delta s$ from the point $(\eta_0,\xi_0)$, \begin{equation} \Delta \eta = \frac{\Delta s}{h_\eta} + \mathcal{O} ((\Delta s)^2) \label{eq:s} \end{equation} \noindent where \begin{equation} h_\eta = c_2\sqrt{\frac{\eta_0^2-\xi_0^2}{\eta_0^2-1}}, \label{eq:heta} \end{equation} \noindent the potential $V(\eta_0+\Delta \eta,\xi_0)$ can be expressed as \begin{equation} \begin{aligned} V(\Delta s) & \simeq {} E_{l}(\eta_0,\xi_0) \Delta s \bigg[1 - \frac{\Delta s}{R_1} \frac{(\eta_0^2-\xi_0^2)}{(\eta_0^2-1)}~+ \\ & \frac{4}{3}\Big(\frac{\Delta s}{R_1}\Big)^2 \Big(\frac{\eta_0^2-\xi_0^2}{\eta_0^2-1}\Big)^2 \bigg].
\label{eq:ellip_pre} \end{aligned} \end{equation} \noindent For an ellipsoid $\eta = \eta_0$, the principal local radii of curvature at the point ($\eta_0,\xi_0$) are \begin{eqnarray} R_1 & = & R_a \frac{\big(\eta_0^2 - \xi_0^2\big)^{3/2}}{\big(\eta_0^2 - 1\big)^{3/2}} \\ R_2 & = & R_a \frac{\big(\eta_0^2 - \xi_0^2\big)^{1/2}}{\big(\eta_0^2 - 1\big)^{1/2}} \end{eqnarray} \noindent while the Gaussian radius of curvature is \begin{equation} R_g = (R_1 R_2)^{1/2} = R_a \frac{\eta_0^2 - \xi_0^2}{\eta_0^2 - 1}. \end{equation} \noindent Thus, Eq.~\ref{eq:ellip_pre} can be further simplified as \begin{equation} \begin{aligned} V(\Delta s) \simeq E_{l}(\eta_0,\xi_0) \Delta s\Bigg [ 1-\bigg(\frac{\Delta s}{R_2}\bigg) + \frac{4}{3}\bigg(\frac{\Delta s}{R_2}\bigg)^2\Bigg] \label{eq:ellip_final} \end{aligned} \end{equation} \noindent which forms the central result of this paper. It can be used to estimate the tunneling transmission coefficient, and hence the current density, at a point close to the emitter apex. Note that Eq.~\ref{eq:ellip_final} only approximately represents the potential variation along the normal at a point on the surface of the hemiellipsoid, since it is derived along the field line $\xi = \xi_0$. In the apex neighbourhood of a sharp emitter, however, Eq.~\ref{eq:ellip_final} does represent the normal potential variation close to the surface quite accurately (see appendix) and can thus be used to determine emission currents. \subsection{Hyperboloid} The hyperboloid emitter surface is defined by $\xi = \xi_0 = \sqrt{D/(D + R_a)}$, while a flat anode $\xi = 0$ is placed a distance $D$ below the tip. In the transformation equations of Eq.~\ref{eq:cart_pro_sph}, $c_2 = \sqrt{D(D + R_a)}$, where $R_a$ is the apex radius of curvature \cite{hyperbola}. The derivation of the potential variation follows a similar line. If the potential difference between the anode and cathode is $V_0$, the potential at any point can be expressed as \begin{equation} V(\eta,\xi) = V_0~\Bigg( 1 - \frac{\ln\Big[\frac{1~ -~ \xi}{1~ +~ \xi}\Big]}{\ln\Big[\frac{1~ -~ \xi_0}{1~ + ~\xi_0}\Big]} \Bigg) \end{equation} \noindent Thus, for small excursions along the field line $\eta = \eta_0$ starting from the point ($\eta_0,\xi_0$) on the hyperboloid surface, the potential is \begin{equation} \begin{aligned} V(\eta_0,\xi_0 - \Delta\xi) = {} -\frac{V_0}{\ln\big(\frac{1 - \xi_0}{1 + \xi_0}\big)}\Big[ & \ln\big(1 + \frac{\Delta\xi}{1 - \xi_0}\big) - \\ & \ln\big(1 - \frac{\Delta\xi}{1 + \xi_0}\big) \Big] \end{aligned} \end{equation} \noindent Keeping terms up to $(\Delta\xi)^3$, we have \begin{equation} \begin{aligned} V(\eta_0, \xi_0 - \Delta\xi) \simeq & - \frac{2V_0}{\ln\big(\frac{1 - \xi_0}{1 + \xi_0}\big)} \frac{\Delta\xi}{1 - \xi_0^2} \Big[ 1 - \frac{\xi_0}{1 - \xi_0^2}\Delta\xi \\ & + \frac{1 + 3\xi_0^2}{3(1 - \xi_0^2)^2} (\Delta\xi)^2 \Big].
\end{aligned} \end{equation} \noindent In terms of the normal distance \begin{equation} \Delta s = c_2 \Delta\xi \sqrt{\frac{\eta_0^2 - \xi_0^2}{1 - \xi_0^2}} \end{equation} \noindent and the local electric field \begin{equation} E_{l} = - \frac{V_0}{c_2} \frac{1}{(1 - \xi_0^2)} \frac{2}{\ln\big[\frac{1 - \xi_0}{1 + \xi_0}\big]} \end{equation} \noindent the potential variation $V(\eta_0, \xi_0 - \Delta\xi)$ can be expressed as a function of $\Delta s = h_\xi \Delta \xi$ as \begin{equation} V(\Delta s) \simeq E_{l} \Delta s \Big[ 1 - \frac{\Delta s}{R_2} + \frac{1 + 3\xi_0^2}{3 \xi_0^2} \bigg(\frac{\Delta s}{R_2}\bigg)^2 \Big] \end{equation} \noindent where $R_2 = R_aR_{1}/R_g$ is a principal radius of curvature for the hyperboloid $\xi = \xi_0$ evaluated at the point ($\eta_0,\xi_0$). The respective radii of curvature can be expressed as \begin{eqnarray} R_1 & = & R_a \frac{\big(\eta_0^2 - \xi_0^2\big)^{3/2}}{\big(1 - \xi_0^2\big)^{3/2}} \\ R_2 & = & R_a \frac{\big(\eta_0^2 - \xi_0^2\big)^{1/2}}{\big(1 - \xi_0^2\big)^{1/2}} \\ R_g & = & (R_1 R_2)^{1/2} = R_a \frac{\eta_0^2 - \xi_0^2}{1 - \xi_0^2}. \end{eqnarray} \noindent For a reasonably sharp emitter tip, $\xi_0$ is close to unity. As an illustration, for $D = 5000$nm and $R_a = 5$nm, $\xi_0 = 0.99950$, while for $D = 1500$nm and $R_a = 5$nm, $\xi_0 = 0.99834$. Thus, setting $\xi_0$ to 1, \begin{equation} V(\Delta s) \simeq E_{l} \Delta s \Big[ 1 - \frac{\Delta s}{R_2} + \frac{4}{3} \bigg(\frac{\Delta s}{R_2}\bigg)^2 \Big] \label{eq:finalpot} \end{equation} \noindent as in the case of the hemiellipsoid. As before, Eq.~\ref{eq:finalpot} is more accurately the potential variation along field lines of constant $\eta = \eta_0$, and only approximately so along the normal distance. Close to the tip of a sharp hyperboloid, however, it is expected that Eq.~\ref{eq:finalpot} is a good approximation for the potential variation normal to the emitter near the apex. \subsection{The Sphere} For a grounded conducting sphere of radius $R$ in an electric field $-E_0 \hat{z}$, the potential outside the sphere is \begin{equation} V_{ext} = E_0 r \cos\theta \big[1 - \frac{R^3}{r^3}\big]. \end{equation} \noindent Writing $r = R + \Delta s$, \begin{equation} V_{ext} = E_0 \cos\theta \Big[ \frac{3R\Delta s}{R + \Delta s} + \frac{(\Delta s)^3}{(R + \Delta s)^2} \Big]. \end{equation} \noindent Writing $E_l = 3E_0 \cos\theta$ and neglecting the second term leads us to Eq.~\ref{eq:sph}. However, since we are interested in a correction term of the order of $(\Delta s)^3$, the second term must be retained. Now, assuming $\Delta s/R \ll 1$, \begin{eqnarray} V_{ext} & \simeq & E_l \Delta s \Big [\big\{1 - \frac{\Delta s}{R} + \big(\frac{\Delta s}{R}\big)^2 \big\} + \frac{1}{3}\big(\frac{\Delta s}{R}\big)^2 \big] \\ & = & E_l \Delta s \Big [1 - \frac{\Delta s}{R} + \frac{4}{3} \big(\frac{\Delta s}{R}\big)^2 \Big] \end{eqnarray} \noindent which is identical to the result obtained above for the hemi-ellipsoid and the hyperboloid. \subsection{Generic emitter tips} A derivation of a corrected formula for the external potential variation applicable to generic emitters is not readily available. However, we shall investigate the applicability of Eq.~\ref{eq:finalpot} for generic emitters with parabolic tips.
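Before doing so, we note that the $4/3$ coefficient common to the three exactly solvable cases can be checked symbolically. A minimal Python/\texttt{sympy} sketch for the sphere, expanding the exact expression above in $\Delta s$:
\begin{verbatim}
import sympy as sp

ds, R = sp.symbols('ds R', positive=True)
# Exact sphere potential along the radial direction, divided by E_l*ds:
expr = (3*R*ds/(R + ds) + ds**3/(R + ds)**2)/(3*ds)
print(sp.series(expr, ds, 0, 3))
# expected: 1 - ds/R + 4*ds**2/(3*R**2) + O(ds**3)
\end{verbatim}
The printed series reproduces the bracket of Eq.~\ref{eq:finalpot}.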
Note that cylindrically symmetric emitter tips that are vertically aligned can be approximated as \begin{eqnarray} z & = & h + \frac{1}{2} \Big(\frac{d^2 z}{d\rho^2}\Big)_{\rho = 0} \rho^2 + \ldots \\ & \simeq & h\Big[1 - \frac{1}{2} \frac{\rho}{R_a}\frac{\rho}{h} \Big] \label{eq:quadratic} \end{eqnarray} \noindent where $R_a$ is the magnitude of the apex radius of curvature, $\rho = (x^2 + y^2)^{1/2}$, $h$ is the height of the emitter, and we have assumed that the tip is not flat ($(d^2 z/d\rho^2)_{\rho = 0} \neq 0$). Also, since field emission occurs close to the tip, higher order terms in $\rho$ can be ignored in the expansion of $z$. Eq.~\ref{eq:quadratic} can be used to find the local principal and Gaussian curvatures in terms of the apex radius of curvature. Moreover, recent results \cite{B2017b} show that the local surface electric field around the tip can be expressed in terms of the local electric field at the apex ($E_a$) and a generalized $\cos\tilde{\theta}$ factor: \begin{equation} E_{l}(z) = E_a \cos\tilde{\theta} = E_0 \frac{\gamma_a (z/h)}{\sqrt{(z/h)^2 + (\rho/R_a)^2}} \label{eq:localE} \end{equation} \noindent where $\gamma_a$ is the field enhancement factor at the apex and $z$ is the height on the emitter surface measured from the conducting plane. For a surface parameterized as ($\rho\cos\varphi,\rho\sin\varphi,h - a\rho^2$), where $a = 1/(2R_a)$ and $\rho = (x^2 + y^2)^{1/2}$, the local principal and Gaussian radii of curvature are respectively \begin{eqnarray} R_1 & = & -R_a \bigg[1 + \big(\frac{\rho}{R_a}\big)^2\bigg]^{3/2} \label{eq:r1} \\ R_2 & = & -R_a \bigg[1 + \big(\frac{\rho}{R_a}\big)^2\bigg]^{1/2} \label{eq:r2} \\ R_g & = & R_a \bigg[1 + \big(\frac{\rho}{R_a}\big)^2\bigg] \label{eq:rg} \end{eqnarray} \noindent Thus, for quadratic emitters, $E_{l}$ and $R_2$ in Eq.~\ref{eq:finalpot} are given by Eq.~\ref{eq:localE} and Eq.~\ref{eq:r2} respectively, if the apex radius of curvature and the field enhancement factor are known. Alternatively, they can be computed at each point on the emitter surface if the exact numerical solution for the potential is available. As in the case of the hyperboloid, Eq.~\ref{eq:finalpot} is expected to hold for general quadratic emitters that are sharp. \section{Numerical Results} We shall first make a crude estimate of the domain of validity of Eq.~\ref{eq:finalpot} for a typical local electric field $E_{l} \simeq 5 \times 10^9$ V/m. For an emitter with a work function of $4.5$~eV, the tunneling distance at this local field is about $1$nm. At the apex, with $R_a = 5$nm, the quadratic term is about $20\%$ of the linear one, while the cubic term is about $5\%$ of the linear one. Thus, along the symmetry axis, the neglect of terms higher than cubic appears justified when $R_a > 5$nm. Away from the emitter apex, the principal radius of curvature $R_2$ increases (albeit slowly compared to $R_1$) for typical quadratic tips. At the same time, the local electric field decreases for a given external electric field. Thus, while the tunneling distance increases marginally, the domain of validity of Eq.~\ref{eq:finalpot} also increases. In the following, we shall explore the difference between the exact current density and the one obtained using the approximate potential of Eq.~\ref{eq:finalpot}, for various emitter shapes and positions.
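As a direct numerical cross-check of Eq.~\ref{eq:finalpot} against the exact hemiellipsoid potential of Eq.~\ref{analytic_V}, the Python sketch below evaluates both along the symmetry axis at the apex, using the geometry of Fig.~\ref{fig:pot_ellip_b2} below ($h=1500~\mu$m and $b=2~\mu$m, so that $R_a = b^2/h \simeq 2.67$~nm and $R_2 = R_a$ at the apex):
\begin{verbatim}
import numpy as np

h, b = 1.5e6, 2.0e3                  # height and base radius (nm)
E0   = 6.0e-5                        # applied field (V/nm)
c2   = np.sqrt(h*h - b*b)
eta0 = h/c2
Ra   = b*b/h                         # apex radius of curvature; R2 = Ra at the apex

def V_exact(ds):                     # Eq. (analytic_V) on the axis (xi = 1)
    eta = (h + ds)/c2
    return c2*E0*eta*(1 - (np.log((eta + 1)/(eta - 1)) - 2/eta)
                        /(np.log((eta0 + 1)/(eta0 - 1)) - 2/eta0))

El = 2*E0/(eta0*(eta0**2 - 1)*(np.log((eta0 + 1)/(eta0 - 1)) - 2/eta0))

def V_approx(ds):                    # Eq. (finalpot) with R2 = Ra
    return El*ds*(1 - ds/Ra + (4.0/3.0)*(ds/Ra)**2)

for ds in (0.25, 0.5, 1.0):          # normal distance from the apex (nm)
    print(ds, V_exact(ds), V_approx(ds))
\end{verbatim}
The two stay close through the tunneling region, consistent with Fig.~\ref{fig:pot_ellip_b2}.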
\begin{figure}[htb] \vskip -2.1 cm \hspace*{-1.0cm}\includegraphics[width=0.6\textwidth]{pot_b2.pdf} \vskip -0.6 cm \caption{The potential energy due to the external field along the normal at three different points on a hemiellipsoidal emitter surface, located (i) at the tip ($z = h$), (ii) at $z = h - R_a$, and (iii) at $z = h - 2R_a$. The external field strength is $E_0 = 6\times 10^4$~V/m. The height of the hemiellipsoid is $h = 1500~\mu$m while the base radius is $b = 2~\mu$m. The filled triangles are the quasi-planar result ($-E_{l} \Delta s$), the filled squares are obtained using Eq.~\ref{eq:finalpot}, while the solid curve is the exact result. } \label{fig:pot_ellip_b2} \end{figure} \begin{figure}[htb] \vskip -2.1 cm \hspace*{-1.0cm}\includegraphics[width=0.6\textwidth]{pot_b3.pdf} \vskip -0.6 cm \caption{As in Fig.~\ref{fig:pot_ellip_b2} for $R_a = 6$nm, $E_0 = 7.5\times 10^4$~V/m and base radius $b = 3~\mu$m. } \label{fig:pot_ellip_b3} \end{figure} First, we consider a hemiellipsoidal emitter on a grounded conducting plane placed in a uniform electric field. Fig.~\ref{fig:pot_ellip_b2} shows the potential energy due to the external field at three locations on the emitter surface: (i) at the tip, (ii) at $z = h - R_a$, and (iii) at $z = h - 2R_a$. At the tip, where $R_2 = 2.67$nm, the exact potential and Eq.~\ref{eq:finalpot} match quite well to about $1$nm, while at locations (ii) and (iii) the agreement gets better since $R_2$ increases. Fig.~\ref{fig:pot_ellip_b3} shows a similar plot for $R_a = 6$nm and $E_0 = 7.5 \times 10^4$~V/m. The agreement at all three locations now gets better. For an even larger apex radius, $R_a = 16.67$nm, the agreement extends beyond $4$nm at all three locations. We next turn our attention to the tunneling current densities generated using these potentials. Assuming a free electron model, the current density is evaluated at zero temperature as \begin{equation} J = \frac{2me}{(2\pi)^2 \hbar^3} \int_0^{E_F} T({\cal E})(E_F - {\cal E}) d{\cal E} \end{equation} \noindent where $T({\cal E})$ is the transmission coefficient at energy ${\cal E}$, $m$ is the mass of the electron, $e$ is the magnitude of the electron charge and $E_F$ is the Fermi level. Instead of using the WKB expression for the transmission coefficient, we shall determine $T({\cal E})$ numerically using suitable boundary conditions for the 1-dimensional Schr\"{o}dinger equation and a modified transfer matrix method \cite{DBVishal}. In the results presented here, curvature corrections to the image potential have been neglected in order to bring out the role of corrections to the external potential. \begin{figure}[htb] \vskip -2.1 cm \hspace*{-1.0cm}\includegraphics[width=0.6\textwidth]{Ra2_67_h.pdf} \vskip -0.6 cm \caption{A Fowler-Nordheim plot of the current density for a hemiellipsoid with base radius $b = 2\mu$m at the three different locations mentioned in Fig.~\ref{fig:pot_ellip_b2}. The solid line is the exact result while the filled squares are obtained using Eq.~\ref{eq:finalpot} for the external potential. The filled triangles are obtained using the quasi-planar approximation for the external potential. Here, ${\rm 1/E}_l$ is expressed in units of $[{\rm V/nm}]^{-1}$. } \label{fig:J_ellip_b2} \end{figure} The corresponding current densities for $R_a = 2.67$nm are shown in Fig.~\ref{fig:J_ellip_b2} at the locations mentioned earlier. Clearly, the two correction terms in the potential (see Eq.~\ref{eq:finalpot}) are adequate to reproduce the exact results.
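The text uses a transfer-matrix evaluation of $T({\cal E})$; purely as an illustration of the pipeline, the sketch below instead computes $J$ with a simple WKB transmission coefficient for the barrier $E_F+\phi-eV(\Delta s)$ built from Eq.~\ref{eq:finalpot}, with the image potential neglected as in the text. The work function $\phi=4.5$~eV matches the text, while the Fermi energy $E_F$ is an assumed free-electron value; the absolute numbers are therefore indicative only.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.constants import m_e, e, hbar

phi, EF = 4.5, 8.5                   # work function, Fermi energy (eV); EF assumed
R2 = 2.67                            # second principal radius at the apex (nm)
kap = np.sqrt(2*m_e*e)/hbar*1e-9     # converts sqrt(eV) to 1/nm in the WKB integral

def barrier(x, El):                  # barrier (eV) from Eq. (finalpot), x in nm
    return EF + phi - El*x*(1 - x/R2 + (4.0/3.0)*(x/R2)**2)

def T(En, El):                       # WKB transmission coefficient at energy En (eV)
    xs = np.linspace(0.0, 3.0, 3001)
    mask = barrier(xs, El) > En
    if not mask.any():
        return 1.0
    x1, x2 = xs[mask][0], xs[mask][-1]        # classical turning points
    s = quad(lambda x: np.sqrt(max(barrier(x, El) - En, 0.0)), x1, x2)[0]
    return np.exp(-2.0*kap*s)

def J(El):                           # current density in A/m^2
    pref = m_e*e**3/(2*np.pi**2*hbar**3)      # includes the eV^2 -> J^2 conversion
    return pref*quad(lambda En: T(En, El)*(EF - En), 0.0, EF)[0]

for El in (4.0, 5.0, 6.0):           # local fields (V/nm)
    print(El, J(El))
\end{verbatim}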
For $b = 3\mu$m ($R_a = 6$~nm), the current density is shown in Fig.~\ref{fig:J_ellip_b3}. The agreement with the exact result remains excellent using Eq.~\ref{eq:finalpot}, while the agreement between the exact and quasi-planar cases improves considerably, as expected. \begin{figure}[tb] \vskip -2.1 cm \hspace*{-1.0cm}\includegraphics[width=0.6\textwidth]{Ra6_h.pdf} \vskip -0.6 cm \caption{As in Fig.~\ref{fig:J_ellip_b2} for $b = 3\mu$m ($R_a = 6$nm) at the three different locations. The solid line is the exact result while the filled squares are obtained using Eq.~\ref{eq:finalpot} for the external potential. The filled triangles are obtained using the quasi-planar approximation for the external potential. } \label{fig:J_ellip_b3} \end{figure} We next turn our attention to a case where the analytical solution for the potential is not known. Using a suitable nonlinear line charge of height $L$ placed on a grounded conducting plane in the presence of a uniform electric field, a conical zero-potential surface of height $300~\mu$m and base radius $16~\mu$m is obtained, having a rounded top with an apex radius of curvature $R_a = 4.56$nm. The emitter tip is modeled very well\cite{B2017b} by the quadratic $z = h - \rho^2/(2R_a)$. \begin{figure}[htb] \vskip -2.1 cm \hspace*{-1.0cm}\includegraphics[width=0.6\textwidth]{cone.pdf} \vskip -0.6 cm \caption{The potential energy due to the external field along the normal at points on a rounded conical surface located (i) at the tip ($\rho \simeq 0$nm), (ii) at $\rho \simeq 1.8$nm and (iii) at $\rho \simeq 4.5$~nm. The external field is $E_0 = 5\times 10^5$~V/m. The values of the field enhancement factor at these points are 11555, 10730 and 8160 respectively. The filled triangles are the quasi-planar result (infinite radius of curvature) while the filled squares are obtained using Eq.~\ref{eq:finalpot}. The solid curve is the exact result. } \label{fig:pot_cone} \end{figure} Fig.~\ref{fig:pot_cone} is a plot of the potential energy variation along the normal to the emitter surface. The points (from left to right) are located at (i) $\rho = 0$ (the emitter tip), (ii) $\rho \simeq 1.8$ nm and (iii) $\rho \simeq 4.5$~nm. The exact potential is calculated using the line charge distribution. Clearly, Eq.~\ref{eq:finalpot} provides a fair approximation to the exact potential in the tunneling regime. It gets marginally better away from the apex due to the increase in $R_2$, but thereafter minor deviations in the tunneling region occur, perhaps due to the uncertainty in the $4/3$ multiplying factor. The corresponding current densities are shown as a Fowler-Nordheim plot in Fig.~\ref{fig:J_cone}. In the first two cases, the current densities using Eq.~\ref{eq:finalpot} for the external potential are in good agreement with the exact result (solid line) obtained using the nonlinear line charge distribution. In the third case (plot to the right), Eq.~\ref{eq:finalpot} underestimates the current density marginally. The difference with the quasi-planar case is again substantial, especially at smaller values of the local field $E_l$, in all three cases. \begin{figure}[htb] \vskip -2.1 cm \hspace*{-1.0cm}\includegraphics[width=0.6\textwidth]{cone_J.pdf} \vskip -0.6 cm \caption{The current density as a function of the local electric field $E_l$ at three points on the rounded conical tip as mentioned in Fig.~\ref{fig:pot_cone}. Here, ${\rm 1/E}_l$ is expressed in units of $[{\rm V/nm}]^{-1}$.
} \label{fig:J_cone}
\end{figure}

In order to determine the effectiveness of Eq.~\ref{eq:finalpot} in determining the total electron current from a single emitter, we have computed the emitter current at two values of the external field, $E_{0}$. At $E_0 = 5 \times 10^5$~V/m (corresponding to a local apex field $\simeq 5.77 \times 10^9$~V/m), the currents obtained from the quasi-planar approximation, Eq.~\ref{eq:finalpot} and the exact potential are $0.302~\mu$A, $0.0455~\mu$A and $0.0445~\mu$A respectively. While the last two values are close, the quasi-planar current is nearly $7$ times larger. At the higher external field $E_0 = 10^6$~V/m, the currents obtained from the quasi-planar approximation, Eq.~\ref{eq:finalpot} and the exact potential are 0.36~mA, 0.205~mA and 0.220~mA respectively. The last two values are still close, while the quasi-planar approximation improves considerably.

\section{Discussion and Conclusions}

The study of two analytically solvable models, the hemiellipsoid on a conducting plane and the hyperboloid diode, led us to Eq.~\ref{eq:finalpot}. In both cases, when the emitter is sharp, Eq.~\ref{eq:finalpot} is accurate near the apex for short excursions in the normal direction. An identical result was derived for a sphere, thereby establishing that the spherical approximation for curved emitters (where applicable) must be used with the second principal radius of curvature $R_2$ as the radius of the sphere. Finally, we have also numerically explored the validity of Eq.~\ref{eq:finalpot} for an analytically unsolvable case, the cone with a quadratic tip.

Our numerical studies show that in all the examples, the current densities obtained using Eq.~\ref{eq:finalpot} agree well with the exact result near the emitter tip and show a considerable improvement compared to the quasi-planar case in predicting the emitter current. At low external field strengths, where the difference with the quasi-planar case is almost an order of magnitude, Eq.~\ref{eq:finalpot} predicts the current with less than $2\%$ error. At higher field strengths, the quasi-planar result improves but is still poor compared to the prediction of Eq.~\ref{eq:finalpot}. The results presented here have also been tested for a cylindrical emitter with a quadratic tip.

While the preceding discussion has centred around a single sharp emitter, it is clear that the form of the external potential remains the same even if the emitter is part of a regular array or a randomly distributed bunch of emitters, so long as the emitter tip is smooth and parabolic. For a bunch of emitters with identical height and apex radius of curvature, the only quantity in Eq.~\ref{eq:finalpot} that depends on the neighbourhood is the local external electric field, $E_{l}$. This depends on the extent of shielding, which must be evaluated separately before calculating the field emission current.

In conclusion, the quasi-planar approximation to the potential due to the external field leads to large errors in the emitted current when the apex radius of curvature $R_a \lessapprox 20$~nm and the applied external field is small. For $R_a \gtrapprox 2$~nm, Eq.~\ref{eq:finalpot} seems to provide a very good approximation to the external potential and accurately reproduces the emitter current. Finally, in addition to curvature effects in the external potential, corrections to the image potential are also important and must be included in determining the emitter current.
\section{Acknowledgement}
The authors acknowledge several useful discussions with Dr. Raghwendra Kumar.

\section{Appendix}

For orthogonal co-ordinate systems in which the Laplace and Schr\"odinger equations are separable, tunneling transmission coefficients along field lines can be calculated using the standard 1-dimensional formalisms. In the general case however, curved field lines would necessitate the use of the multi-dimensional tunneling formalism. In view of a possible general applicability of Eq.~\ref{eq:ellip_final} to non-separable systems, we have instead chosen to express the external potential in terms of the normal distance $\Delta s$ so that standard 1-dimensional tunneling results can be used. This also leaves open the possibility of incorporating the results of this paper in a modified Fowler-Nordheim equation.

The assumption so far in using the normal distance $\Delta s$ has been that field lines are more or less straight over the tunneling distance of about $1$~nm at moderate local field strengths. In the following, we shall test this assumption for a hemiellipsoid by Taylor expanding the potential along the normal direction and comparing with Eq.~\ref{eq:ellip_final}.

Consider a point ($\eta_0,\xi_0$) on the hemiellipsoid $\eta = \eta_0$. A point outside, at a distance $\Delta s$ along the normal to the hemiellipsoid at ($\eta_0,\xi_0$), can be written as
\begin{eqnarray}
z_1 & = & z_0 + \Delta s \sin\theta \nonumber \\
\rho_1 & = & \rho_0 + \Delta s \cos\theta
\end{eqnarray}
\noindent where $z_0 = c_2\xi_0\eta_0$, $\rho_0 = c_2 \sqrt{(\eta_0^2 - 1)(1 - \xi_0^2)}$ and $\tan\theta = z_0 (\eta_0^2 - 1)/(\eta_0^2 \rho_0)$. The point ($\rho_1,z_1$) can be assumed to lie on another hemiellipsoid $\eta_1 = \eta_0 + \Delta \eta$ and is equivalently defined by the co-ordinates ($\eta_1,\xi_1$) = ($\eta_0 + \Delta \eta, \xi_0 + \Delta \xi$), where $\Delta \eta$ and $\Delta \xi$ can be computed by demanding that the point outside satisfies the ellipsoid/hyperboloid equation. Thus, $\Delta \eta$ is determined using
\begin{equation}
\frac{z_1^2}{c_2^2 \eta_1^2} + \frac{\rho_1^2}{c_2^2(\eta_1^2 - 1)} = 1 \label{eq:defining1}
\end{equation}
\noindent while $\Delta \xi$ can be evaluated either using
\begin{equation}
\frac{z_1^2}{c_2^2 \xi_1^2} - \frac{\rho_1^2}{c_2^2(1 - \xi_1^2)} = 1 \label{eq:defining2}
\end{equation}
\noindent or using $\xi_1 = (z_0 + \Delta s \sin\theta)/(c_2 \eta_1)$ and Eq.~\ref{eq:defining1}. The solutions, to the accuracy required, can be expressed respectively as
\begin{eqnarray}
\Delta \eta(\Delta s) & = & a_1 \Delta s + a_2 (\Delta s)^2 + a_3 (\Delta s)^3 + \mathcal{O}((\Delta s)^4) \label{eq:deta} \\
\Delta \xi(\Delta s) & = & b_1 \Delta s + b_2 (\Delta s)^2 + \mathcal{O}((\Delta s)^3) \label{eq:dxi}
\end{eqnarray}
\noindent where
\begin{eqnarray}
a_1 & = & \frac{1}{h_\eta} \\
a_2 & = & \frac{1}{2h_\xi^2} \frac{\eta_0}{\eta_0^2 - \xi_0^2} \\
a_3 & = & - \frac{1}{2 h_\eta h_\xi^2} \frac{\eta_0^2 + \xi_0^2}{(\eta_0^2 - \xi_0^2)^2}
\end{eqnarray}
\noindent while
\begin{eqnarray}
b_1 & = & 0 \\
b_2 & = & \frac{1}{2h_\xi^2} \frac{\xi_0}{\eta_0^2 - \xi_0^2}
\end{eqnarray}
\noindent with
\begin{eqnarray}
h_\eta & = & c_2 \sqrt{\frac{\eta_0^2 - \xi_0^2}{\eta_0^2 - 1}} \\
h_\xi & = & c_2 \sqrt{\frac{\eta_0^2 - \xi_0^2}{1 - \xi_0^2}}.
\end{eqnarray}

A Taylor expansion of the potential at the point ($\eta_0,\xi_0$) along the normal can be expressed as
\begin{equation}
\begin{aligned}
V(\Delta s) = V_0 + V_\eta \Delta \eta + V_\xi \Delta \xi + \frac{1}{2} V_{\eta\eta} (\Delta \eta)^2 + \\
\frac{1}{2} V_{\xi\xi} (\Delta \xi)^2 + V_{\xi\eta} \Delta \eta \Delta \xi + \frac{1}{6} V_{\eta\eta\eta} (\Delta \eta)^3 + \ldots
\end{aligned}
\end{equation}
\noindent where $V_0 = V(\eta_0,\xi_0)$ and $\Delta \eta$ and $\Delta \xi$ are given by Eqns.~(\ref{eq:deta}) and (\ref{eq:dxi}) respectively. Clearly, this expansion suffices to expand the potential up to $\mathcal{O}((\Delta s)^3)$ since $b_1$ is zero. Also, since the $V_{\xi\xi}$ term contributes at $\mathcal{O}((\Delta s)^4)$, it will be ignored henceforth. The relevant partial derivatives can be evaluated as follows:
\begin{eqnarray}
& V_\eta & = -E_l h_\eta \\
& V_\xi & = 0 \\
& V_{\eta\eta}& = E_l h_\eta~ \frac{ 2 \eta_0}{\eta_0^2 - 1} \\
& V_{\xi\eta} & = -E_l h_\eta~ \frac{1}{\xi_0} \\
& V_{\eta\eta\eta} & = -E_lh_\eta ~\frac{8\eta_0^2}{(\eta_0^2 - 1)^2}
\end{eqnarray}
\noindent Collecting together the terms of $\mathcal{O}((\Delta s)^k)$, the potential for the hemiellipsoid is expressed as
\begin{equation}
V(\Delta s) = V_0 + d_1 \Delta s + d_2 (\Delta s)^2 + d_3 (\Delta s)^3 + \mathcal{O}((\Delta s)^4)
\end{equation}
\noindent where
\begin{eqnarray}
d_1 & = & V_\eta a_1 \\
d_2 & = & V_\eta a_2 + \frac{1}{2} V_{\eta\eta} a_1^2 \\
d_3 & = & V_\eta a_3 + V_{\eta\eta} a_1 a_2 + V_{\xi\eta} a_1 b_2 + \frac{1}{6} V_{\eta\eta\eta} a_1^3
\end{eqnarray}

Consider now a sharp hemiellipsoidal emitter $\eta = \eta_0$ for which $R_a/h \ll 1$ and a point ($\eta_0,\xi_0$) on its surface at a height $z_0 = h - R_a/n$. At the apex, $n \rightarrow \infty$, while the apex neighbourhood from where emission predominantly takes place corresponds generally to $n \gg 10$. The following approximations can then be made:
\begin{eqnarray}
\eta_0 & = & \frac{h}{c_2} \simeq 1 + \frac{1}{2} \frac{R_a}{h} \\
\xi_0 & = & \frac{z_0}{h} \simeq 1 - \frac{1}{n} \frac{R_a}{h} \\
\eta_0^2 - 1& \simeq & \frac{R_a}{h} \\
1 - \xi_0^2 & \simeq & \frac{2}{n} \frac{R_a}{h} \\
\eta_0^2 - \xi_0^2 & \simeq & \frac{R_a}{h} (1 + \frac{2}{n}) \\
R_2 & = & R_a (1 + \frac{2}{n})^{1/2}
\end{eqnarray}
\noindent A comparison of the terms in $d_2$ at $E_l = 1$ yields
\begin{eqnarray}
& V_\eta a_2 & = \mathcal{O}(\frac{1}{n}\frac{1}{R_a}) \\
& V_{\eta\eta} a_1^2 & = \mathcal{O}(\frac{1}{R_a})
\end{eqnarray}
\noindent while the terms in $d_3$ at $E_l = 1$ are
\begin{eqnarray}
& V_\eta a_3 & = \mathcal{O}(\frac{1}{n}\frac{1}{R_a^2}) \\
& V_{\xi\eta} a_1 b_2 & = \mathcal{O}(\frac{1}{n}\frac{1}{ h R_a}) \\
& V_{\eta\eta} a_1 a_2 & = \mathcal{O}(\frac{1}{n}\frac{1}{R_a^2})\\
& V_{\eta\eta\eta} a_1^3 & = \mathcal{O}(\frac{1}{R_a^2})
\end{eqnarray}
\noindent Thus, for a sharp emitter with $R_a/h \ll 1$, in the region close to the apex ($n \gg 10$) from where electron emission predominantly occurs at moderate fields, $d_2 \simeq \frac{1}{2} V_{\eta\eta} a_1^2$ while $d_3 \simeq \frac{1}{6} V_{\eta\eta\eta} a_1^3$. This leads to Eq.~\ref{eq:ellip_final}. In part, therefore, the results obtained using Eq.~\ref{eq:ellip_final} are in good agreement because the apex neighbourhood contributes substantially to the current. Our results show that at a local field of $5$~V/nm, the region $n \geq 10$ contributes as much as 70\% of the total current. There is also a cancellation of effects.
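Before quantifying that cancellation, the expansion coefficients derived above can be checked independently. The following Python sketch (the surface point and step size are hypothetical values chosen only for illustration) solves Eq.~(\ref{eq:defining1}) numerically for the exact $\Delta \eta$ and compares it with the series of Eq.~(\ref{eq:deta}):

\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

c2, eta0, xi0 = 1.0, 1.05, 0.9      # hypothetical surface point
z0 = c2 * xi0 * eta0
rho0 = c2 * np.sqrt((eta0**2 - 1) * (1 - xi0**2))
theta = np.arctan2(z0 * (eta0**2 - 1), eta0**2 * rho0)

h_eta = c2 * np.sqrt((eta0**2 - xi0**2) / (eta0**2 - 1))
h_xi = c2 * np.sqrt((eta0**2 - xi0**2) / (1 - xi0**2))
a1 = 1.0 / h_eta
a2 = eta0 / (2.0 * h_xi**2 * (eta0**2 - xi0**2))
a3 = -(eta0**2 + xi0**2) / (2.0 * h_eta * h_xi**2 * (eta0**2 - xi0**2)**2)

ds = 1.0e-3                          # normal excursion
z1, rho1 = z0 + ds * np.sin(theta), rho0 + ds * np.cos(theta)

# exact Delta-eta from the defining ellipsoid equation (eq:defining1)
f = lambda eta: z1**2 / (c2 * eta)**2 + rho1**2 / (c2**2 * (eta**2 - 1)) - 1.0
d_eta_exact = brentq(f, eta0, eta0 + 1.0) - eta0
d_eta_series = a1 * ds + a2 * ds**2 + a3 * ds**3
# the two should agree to O((ds)^4) given the quoted coefficients
print(d_eta_exact, d_eta_series)
\end{verbatim}

A similar check applies to $\Delta \xi$ via Eq.~(\ref{eq:defining2}).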
The correction to the coefficient of the $(\Delta s/R_2)^2$ term leads to an increase in current, while the correction to the coefficient of the $(\Delta s/R_2)^3$ term leads to a decrease. Our studies for various field strengths and apex radii of curvature show that Eq.~\ref{eq:ellip_final} provides an optimum description of the external potential.

\section{References}
\section{Introduction}
It was shown in \cite{larsen} that one can compute the fractional chromatic number of $M(G)$ in terms of that of $G$, where $M(G)$ stands for the Mycielskian of $G$. There are similar interesting results for the circular chromatic number. Hence, it is of interest to find a map or a functor ${\cal F}$ from the category of graphs to itself such that, for any graph $G$, it is possible to determine the exact value of the circular chromatic number of ${\cal F}(G)$ in terms of that of $G$. In this paper, we show that graph powers can be considered as such functors (graph powers preserve graph homomorphisms). In Section~$1$, we set up notation and terminology. Section~$2$ establishes the tight relation between the circular chromatic number and graph powers. In fact, we show that it is possible to determine the circular chromatic number of $G^{2r+1\over 2s+1}$ in terms of that of $G$ provided that ${2r+1\over 2s+1}$ is sufficiently small. In Section~$3$, we investigate the fractional chromatic number and the $n$th multichromatic number of subdivision graphs.

Throughout this paper, we consider finite simple graphs, i.e., graphs with no loops or multiple edges. For a given graph $G$, the notation ${\rm og}(G)$ stands for the odd girth of $G$. We denote by $[m]$ the set $\{1,2,\ldots,m\}$. Let $G$ and $H$ be two graphs. A homomorphism from $G$ to $H$ is a mapping $f:V(G)\longrightarrow V(H)$ such that $f(u)f(v)\in E(H)$ whenever $uv\in E(G)$. We write $G\longrightarrow H$ if there exists a homomorphism from $G$ to $H$. Two graphs $G$ and $H$ are homomorphically equivalent if $G\longrightarrow H$ and $H\longrightarrow G$; this is indicated by the symbol $G\longleftrightarrow H$.

Let $d$ and $n$ be positive integers, where $n\geq 2d$. The circular complete graph $K_{n\over d}$ has the vertex set $\{0,1,\ldots,n-1\}$ in which $ij$ is an edge if and only if $d\leq |i-j|\leq n-d$. An $(n,d)$-coloring of a graph $G$ is a homomorphism from $G$ to the circular complete graph $K_{n\over d}$. The circular chromatic number $\chi_c(G)$ of $G$ is defined as
$$\chi_c(G)=\inf\{{n\over d}| G\,\, {\rm admits\,\, an}\,\, (n,d)-{\rm coloring}\}.$$

Two kinds of graph powers were introduced in \cite{hajiabolhassan, MR2587748}. In particular, it was shown that there is a tight relationship between graph powers and the circular chromatic number. Also, the connection between graph homomorphisms and graph powers has been studied in \cite{hajiabolhassan, MR2587748, MR2171371}. For a graph $G$, let $G^{^{k}}$ be the $k$th power of $G$, which is obtained on the vertex set $V(G)$ by connecting any two vertices $u$ and $v$ for which there exists a walk of length $k$ between $u$ and $v$ in $G$. Also, assume that $G^{^{1\over s}}$ is the graph obtained by replacing each edge of $G$ with the path $P_{s+1}$. Set $G^{^{r\over s}}= (G^{^{1\over s}})^r$. This power, called the fractional power, is a functor, i.e., it preserves graph homomorphisms. In this terminology, we have the following lemma.

\begin{alphlem}\label{hada}{\rm\cite{MR2587748}}
Let $r$ and $s$ be positive integers and let $G$ and $H$ be graphs. Then
$$ \begin{array}{ccc} G\longrightarrow H& \Longrightarrow & G^{^{r\over s}}\longrightarrow H^{^{{r\over s}}}. \\ \end{array} $$
\end{alphlem}

\begin{alphlem}\label{haal}{\rm\cite{MR2587748}}
Let $r$, $s$, $p$, and $q$ be non-negative integers and $G$ be a graph.
Then
$$(G^{^{2r+1\over 2s+1}})^{2p+1\over 2q+1}\longrightarrow G^{^{(2r+1)(2p+1)\over (2s+1)(2q+1)}}.$$
\end{alphlem}

For a given graph $G$ with $v\in V(G)$, set
$$N_i(v)= \{u|\ {\rm there\ is\ a\ walk\ of\ length}\ i\ {\rm joining}\ u\ {\rm and}\ v\}.$$
For two subsets $A$ and $B$ of the vertex set of a graph $G$, we write $A \bowtie B$ if every vertex of $A$ is joined to every vertex of $B$. Also, for any non-negative integer $s$, define the graph $G^{^{\stackrel{1}{\widetilde{2s+1}}}}$ as follows.
$$V(G^{^{\stackrel{1}{\widetilde{2s+1}}}})= \{(A_1,\ldots,A_{s+1})|\ A_i\subseteq V(G),\ |A_1|=1,\ \varnothing\not =A_i\subseteq N_{i-1}(A_1),\ i\leq s+1\}.$$
Two vertices $(A_1,\ldots,A_{s+1})$ and $(B_1,\ldots,B_{s+1})$ are adjacent in $G^{^{\stackrel{1}{\widetilde{2s+1}}}}$ if for any $1\leq i\leq s$ and $1\leq j\leq s+1$, $A_i\subseteq B_{i+1}$, $B_i\subseteq A_{i+1}$, and $A_{j}\bowtie B_{j}$. The dual power, also a functor, is defined as follows. Let $r$ and $s$ be non-negative integers. For any graph $G$, define the graph $G^{^{\stackrel{2r+1}{\widetilde{2s+1}}}}$ as
$$G^{^{\stackrel{2r+1}{\widetilde{2s+1}}}}= \left(G^{^{\stackrel{1}{\widetilde{2s+1}}}}\right)^{2r+1}.$$
These powers, in the sense of graph homomorphism, inherit several properties of powers of numbers.

\begin{alphlem}\label{equi}{\rm\cite{MR2587748}}
Let $r$, $p$, and $q$ be non-negative integers. For any graph $G$ we have
\begin{description}
\item[a)] $G^{^{(2r+1)(2p+1)\over (2r+1)(2q+1)}}\longleftrightarrow G^{^{2p+1\over 2q+1}}$.
\item[b)] $G^{^{\stackrel{(2r+1)(2p+1)}{\widetilde{(2r+1)(2q+1)}}}}\longleftrightarrow G^{^{\stackrel{2p+1}{\widetilde{2q+1}}}}$.
\end{description}
\end{alphlem}

It was proved in \cite{MR2587748} that these two powers are dual to each other in the following sense.

\begin{alphthm}\label{dual}{\rm\cite{MR2587748}}
Let $G$ and $H$ be two graphs. Also, assume that ${2r+1\over 2s+1} < {\rm og}(G)$ and $2s + 1 <{\rm og}(H^{^{\stackrel{1}{\widetilde{2r+1}}}})$. We have
$$G^{^{2r+1\over2s+1}}\longrightarrow H \Longleftrightarrow G\longrightarrow H^{^{\stackrel{2s+1}{\widetilde{2r+1}}}}.$$
\end{alphthm}

Now, we consider the parameter $\theta_i(G)$, which in some sense measures the homomorphism capabilities of $G$.

\begin{defin}{
Assume that $G$ is a non-bipartite graph. Also, let $i \geq -\chi(G)+3$ be an integer. The {\it $i$th power thickness} of $G$ is defined as follows.
$$\theta_i(G) = \sup\{{2r+1 \over 2s+1}| \chi(G^{{2r+1 \over 2s+1}})\leq \chi(G)+i, {2r+1 \over 2s+1}< {\rm og}(G) \}.$$
For simplicity, when $i = 0$, the parameter is called the power thickness of $G$ and is denoted by $\theta(G)$. Also, when $i=3-\chi(G)$, we set $\theta_{3-\chi(G)}(G)=\mu(G)$.
}
\end{defin}

\begin{alphlem}{\rm\cite{MR2587748}}\label{NOHOM}
Let $G$ and $H$ be two non-bipartite graphs with $\chi(G)=\chi(H)-j,\ j\geq 0$. If $G \longrightarrow H$ and $i+j \geq -\chi(G)+3$, then
$$\theta_{i+j}(G) \geq \theta_i(H).$$
\end{alphlem}

Interestingly, $\mu(G)$ can be computed in terms of the circular chromatic number. Hence, the parameters $\theta_i(G)$ can be considered as a generalization of the circular chromatic number.

\begin{alphthm}\label{ptcir}{\rm\cite{MR2587748}}
Let $G$ be a non-bipartite graph. Then
$$\mu(G)= {\chi_c(G)\over 3(\chi_c(G)-2)}.$$
\end{alphthm}

\section{Circular Chromatic Number of Graph Powers}
Some properties of graph powers and their close relationship to the circular chromatic number of non-bipartite graphs have been studied in \cite{MR2587748}.
In particular, an equivalent definition of the circular chromatic number in terms of graph powers was introduced as follows.

\begin{alphthm}{\rm\cite{MR2587748}}{\label{haal1}}
Let $G$ be a non-bipartite graph with chromatic number $\chi(G)$.
\begin{description}
\item[a)] If $0 < {{2r+1}\over {2s+1}} \leq {\frac{\chi(G)}{3(\chi(G)-2)}}$, then $\chi(G^{{2r+1}\over {2s+1}})=3$. Furthermore, $\chi(G)\neq\chi_c(G)$ if and only if there exists a rational number ${{2r+1}\over {2s+1}}>{\frac{\chi(G)}{3(\chi(G)-2)}}$ for which $\chi(G^{{2r+1}\over {2s+1}})= 3$.
\item[b)] $\chi_c(G)=\inf \{{2n+1\over n-t}| \chi(G^{2n+1\over 3(2t+1)})=3, n> t >0\}.$
\end{description}
\end{alphthm}

Here, we show that if $2r+1 < {\rm og}(K_{n\over d})$, then $K_{n\over d}^{2r+1}$ is isomorphic to a circular complete graph.

\begin{lem}\label{cirpow}
Let $n$ and $d$ be positive integers, where $n> 2d$.
\begin{description}
\item [a)] If $r$ is a non-negative integer and ${n\over d}<{2r+1\over r}$, then $K_{n\over d}^{2r+1}\cong K_{n\over (2r+1)d-rn}$.
\item[b)] If $s$ is a non-negative integer, then $K_{n\over d}\longleftrightarrow K_{(2s+1)n\over sn+d}^{^{2s+1}}.$
\end{description}
\end{lem}
\begin{proof}{Let $t\leq r$ be a non-negative integer. If $i$ is an arbitrary vertex of $K_{n\over d}$, it is not hard to check that $N_{2t+1}(i)=\{i+(2t+1)d-tn,\ i+(2t+1)d-tn+1,\ldots,\ i-(2t+1)d+tn\}$, where the summation is modulo $n$. Therefore, $K_{n\over d}^{2r+1}$ is isomorphic to the circular complete graph $K_{n\over (2r+1)d-rn}$. The next part is an immediate consequence of part (a).
}
\end{proof}

Now, we introduce an upper bound for the circular chromatic number of graph powers.

\begin{thm}
Let $r$ and $s$ be non-negative integers and $G$ be a non-bipartite graph with circular chromatic number $\chi_c(G)$. If ${2r+1\over 2s+1}<{\chi_c(G)\over \chi_c(G)-2}$, then
$$\chi_c(G^{^{2r+1\over 2s+1}})\leq{(2s+1)\chi_c(G)\over (s-r)\chi_c(G)+2r+1}.$$
\end{thm}
\begin{proof}{
Let $\chi_c(G)= {n\over d }$. It is easy to see that if ${2r+1\over 2s+1}<{{n\over d }\over {n\over d }-2}$, then ${(2s+1)n\over sn+d}<{2r+1\over r}$.
$$ \begin{array}{ccll} \vspace{.3cm}G\longrightarrow K_{n\over d}& \Longrightarrow &G^{^{2r+1\over 2s+1}}\longrightarrow (K_{n\over d})^{2r+1\over 2s+1} &({\rm By~Lemma~\ref{hada}}) \\ ~& \Longrightarrow &G^{^{2r+1\over 2s+1}} \longrightarrow (K_{(2s+1)n\over sn+d}^{^{2s+1}})^{2r+1\over 2s+1}&({\rm By~Lemma~\ref{cirpow}(b)}) \\ \end{array} $$
$$ \begin{array}{ccll} \vspace{.3cm} ~& \Longrightarrow &G^{^{2r+1\over 2s+1}} \longrightarrow K_{(2s+1)n\over sn+d}^{^{2r+1}}&({\rm By~Lemmas~\ref{haal}~and~\ref{equi}})\\ \vspace{.3cm} ~& \Longrightarrow &G^{^{2r+1\over 2s+1}} \longrightarrow K_{(2s+1)n\over (2r+1)(sn+d)-r(2s+1)n}&({\rm By~Lemma~\ref{cirpow}(a)})\\ \vspace{.3cm} ~& \Longrightarrow &{\chi_c(G^{^{2r+1\over 2s+1}})}\leq {(2s+1)n\over (s-r)n+(2r+1)d} & \\ ~& \Longrightarrow &{\chi_c(G^{^{2r+1\over 2s+1}})}\leq {(2s+1)\chi_c(G)\over (s-r)\chi_c(G)+(2r+1)}. & \\ \end{array}$$
}
\end{proof}

Tardif {\rm\cite{MR2171371}} has shown that the cube root, in the sense of the dual power, of any circular complete graph with circular chromatic number less than $4$ is homomorphically equivalent to a circular complete graph.

\begin{alphlem} \label{tardif}{\rm\cite{MR2171371}}
Let $n$ and $d$ be positive integers, where $n > 2d$. If ${n\over d}<4$, then $K_{n\over d}^{^{\stackrel{1}{\widetilde{~3~}}}}\longleftrightarrow K_{3n\over n+d}$.
\end{alphlem}

Here is a generalization of Lemma \ref{tardif}.
\begin{lem}\label{tar1}
Let $n$ and $d$ be positive integers, where $n > 2d$. If ${n\over d}<4$, then
$$K_{n\over d}^{^{\stackrel{1}{\widetilde{2r+1}}}}\longleftrightarrow K_{(2r+1)n\over rn+d}.$$
\end{lem}
\begin{proof}{
Theorem~\ref{dual} implies that $K_{(2r+1)n\over rn+d}\longrightarrow K_{n\over d}^{^{\stackrel{1}{\widetilde{2r+1}}}}$ if and only if $K_{(2r+1)n\over rn+d}^{^{2r+1}}\longrightarrow K_{n\over d}$. On the other hand, Lemma~\ref{cirpow}(b) shows that the circular complete graphs $K_{(2r+1)n\over rn+d}^{^{2r+1}}$ and $K_{n\over d}$ are homomorphically equivalent. Conversely, it is sufficient to prove that
$$\chi_c((K_{n\over d})^{^{\stackrel{1}{\widetilde{2r+1}}}})\leq {(2r+1)n\over rn+d}.$$
Take a rational number ${2k+1\over 3^i}$ such that $ {1\over 2r+1}\leq {2k+1\over 3^i}<{1\over 2}$. It is easy to see that
$$K_{n\over d}^{^{\stackrel{1}{\widetilde{2r+1}}}}\longrightarrow (K_{n\over d})^{\stackrel{2k+1}{\widetilde{~{~3^i~}~}}}.$$
If $G$ is a non-bipartite graph, Theorem~\ref{dual} and Lemma~\ref{equi} yield that $G^{^{\stackrel{1}{\widetilde{~{~3^i~}~}}}}\longrightarrow (G^{^{\stackrel{1}{\widetilde{~{~3~}~}}}})^{^{\stackrel{1}{\widetilde{~{~3^{i-1}~}~}}}}$. Since ${n\over d}<4$, by induction on $i$ and Lemma~\ref{tardif} we have $K_{n\over d}^{\stackrel{2k+1}{\widetilde{~{~3^i~}~}}}\longleftrightarrow K_{3^in\over{{3^i-1\over 2}n+d}}^{^{2k+1}} $. Therefore, there is a homomorphism from $K_{n\over d}^{^{\stackrel{1}{\widetilde{2r+1}}}}$ to $K_{3^in\over{{3^i-1\over 2}n+d}}^{^{2k+1}} $. By Lemma~\ref{cirpow}(a), the two graphs $K_{3^in\over{{3^i-1\over 2}n+d}}^{^{2k+1}}$ and $K_{3^in\over{(2k+1)({{3^i-1\over 2}n+d})-k3^in}}$ are homomorphically equivalent. Hence,
$$\chi_c(K_{n\over d}^{^{\stackrel{1}{\widetilde{2r+1}}}})\leq {3^in\over{(2k+1)d+({3^i-1\over 2}-k)n}}.$$
Since the set of parameters $\{{2k+1\over 3^i}\mid k\geq 1, i\geq 1 \}$ is dense in the interval $(0,+\infty)$,
$$\chi_c(K_{n\over d}^{^{\stackrel{1}{\widetilde{2r+1}}}})\leq \inf\left\{{3^in\over{(2k+1)d+({3^i-1\over 2}-k)n}} {\bigg |}{{1\over 2r+1}\leq {2k+1\over 3^i}}<{1\over 2}\right\}.$$
This infimum is equal to ${(2r+1)n\over rn+d}$, as desired.
}
\end{proof}

Here, we determine the circular chromatic number of some graph powers.

\begin{thm}\label{cirsub}
Let $G$ be a non-bipartite graph with circular chromatic number $\chi_c(G)$. Also, assume that $r$ and $s$ are non-negative integers. Then we have $\chi_c(G^{^{1\over 2s+1}})=\frac{(2s+1)\chi_c(G)}{s\chi_c(G)+1}.$ Moreover, if $\chi_c(G^{^{2r+1\over 2s+1}})<4$, then
$$\chi_c(G^{^{2r+1\over 2s+1}})={(2s+1)\chi_c(G)\over (s-r)\chi_c(G)+2r+1}.$$
\end{thm}
\begin{proof}{
Note that, in view of Theorem~\ref{dual}, $G^{^{1\over 2s+1}}\longrightarrow K_{(2s+1)\chi(G) \over s\chi(G)+1}$ if and only if $G\longrightarrow K_{(2s+1)\chi(G) \over s\chi(G)+1}^{2s+1}$. On the other hand, by using Lemma~\ref{cirpow}(a), the two graphs $K_{(2s+1)\chi(G) \over s\chi(G)+1}^{2s+1}$ and $K_{(2s+1)\chi(G) \over (2s+1)}$ are homomorphically equivalent. Consequently, $\chi_c(G^{^{1\over 2s+1}})<{2s+1\over s}$. Let ${n\over d}<{2s+1\over s}$.
$$ \begin{array}{ccll} \vspace{.3cm}\chi_c(G^{^{1\over 2s+1}})\leq {n\over d} & \Longleftrightarrow & G^{^{1\over 2s+1}}\longrightarrow K_{n\over d}& \\ \vspace{.3cm} ~& \Longleftrightarrow & G\longrightarrow K_{n\over d}^{^{2s+1}}&({\rm By~Theorem~\ref{dual}}) \\ \vspace{.3cm} ~& \Longleftrightarrow & G\longrightarrow K_{n\over (2s+1)d-sn}&({\rm By~Lemma~\ref{cirpow}}({\rm a})) \\ \vspace{.3cm} ~& \Longleftrightarrow & \chi_c(G)\leq \chi_c(K_{n\over (2s+1)d-sn})& \\ ~& \Longleftrightarrow &{(2s+1)\chi_c(G)\over s\chi_c(G)+1}\leq {n\over d} & \\ \end{array} $$
To prove the next part, it suffices to show that for any $2\leq{n\over d}<4$, $\chi_c(G^{^{2r+1\over 2s+1}})\leq {n\over d}$ is equivalent to ${(2s+1)\chi_c(G)\over (s-r)\chi_c(G)+2r+1}\leq {n\over d}$. Assume that $\chi_c(G^{^{2r+1\over 2s+1}})\leq {n\over d}<4$.
$$ \begin{array}{ccll} \vspace{.3cm}\chi_c(G^{^{2r+1\over 2s+1}})\leq {n\over d} & \Longleftrightarrow & G^{^{2r+1\over 2s+1}}\longrightarrow K_{n\over d}& \\ \vspace{.3cm} ~& \Longleftrightarrow & G^{^{1\over 2s+1}}\longrightarrow K_{n\over d}^{^{\stackrel{1}{\widetilde{2r+1}}}}&({\rm By~Theorem~\ref{dual}}) \\ \vspace{.3cm} ~& \Longleftrightarrow & G^{^{1\over 2s+1}}\longrightarrow K_{(2r+1)n\over rn+d}&({\rm By~Lemma~\ref{tar1}}) \\ \vspace{.3cm} ~& \Longleftrightarrow & \chi_c(G^{^{1\over 2s+1}})\leq {(2r+1)n\over rn+d}& \\ \vspace{.3cm} ~& \Longleftrightarrow & {(2s+1) \chi_c(G)\over s\chi_c(G)+1}\leq{{(2r+1){n\over d}}\over r{n\over d}+1}& \\ ~& \Longleftrightarrow &{(2s+1)\chi_c(G)\over (s-r)\chi_c(G)+2r+1}\leq {n\over d}. & \\ \end{array} $$
}
\end{proof}

\begin{cor}
Let $r$ and $s$ be non-negative integers and $G$ be a non-bipartite graph. If ${2r+1 \over 2s+1} \leq {\chi_c(G) \over 3(\chi_c(G)-2)}$, then $\chi_c(G^{^{\frac{2r+1}{2s+1}}})=\frac{(2s+1)\chi_c(G)}{(s-r)\chi_c(G)+2r+1}$.
\end{cor}
\begin{proof}{
Since ${2r+1 \over 2s+1} \leq {\chi_c(G) \over 3(\chi_c(G)-2)}$, Theorem \ref{haal1} implies that $\chi_c(G^{^{\frac{2r+1}{2s+1}}})\leq 3$. Now, by the previous theorem, we have $\chi_c(G^{^{\frac{2r+1}{2s+1}}})=\frac{(2s+1)\chi_c(G)}{(s-r)\chi_c(G)+2r+1}$.
}
\end{proof}

\begin{cor}
Let $r$ and $s$ be non-negative integers and $G$ be a non-bipartite graph such that $\chi_c(G^{^{2r+1\over 2s+1}})<4$. Then we have
$$\mu(G^{^{2r+1\over 2s+1}})={2s+1\over 2r+1}\mu(G)={2s+1\over 3(2r+1)}{\chi_c(G)\over (\chi_c(G)-2)}.$$
\end{cor}

Given a rational number ${n \over d}$, a rational number ${n' \over d'}$ is unavoidable by ${n \over d}$ if every graph $G$ with $\chi_c(G) = {n \over d}$ contains a subgraph $H$ with $\chi_c(H) = {n' \over d'}$. It is known \cite{MR2003514} that if $m$ is an integer and $m < {n \over d}$, then $m$ is unavoidable by ${n\over d}$. Suppose $(n, d)=1$, i.e., $n$ and $d$ are coprime. Let $n'$ and $d'$ be the unique integers such that $0 < n' < n$ and $nd'-n'd=1$. We call ${n' \over d'}$ the lower parent of ${n \over d}$, and denote it by $F({n \over d})$. The following question was posed in {\rm \cite{MR1815614,MR2249284}}.

\begin{alphqu}\label{zhuqu}{\rm \cite{MR1815614,MR2249284}}
Is it true that for every rational ${n \over d}>2$, $F({n \over d})$ is unavoidable by ${n \over d}${\rm ?}
\end{alphqu}

Here, we give a negative answer to the aforementioned question.

\begin{cor}
Let $k$ be a positive integer. Then there exists a graph $G$ with $\chi_c(G)={9k+3\over 3k+2}$ such that $G$ does not contain any subgraph with circular chromatic number equal to ${6k+1\over 2k+1}$.
\end{cor}
\begin{proof}{
Let $n=9k+3$, $d=3k+2$, $n'=6k+1$, and $d'=2k+1$.
Obviously, $nd'-n'd=1$. By Theorem~\ref{cirsub}, we have $\chi_c(K_{3k+1}^{^{1\over3}})={9k+3\over 3k+2}$. Suppose that $e\in E(K_{3k+1}^{^{1\over3}})$. It is readily seen that there exists a homomorphism from $K_{3k+1}^{^{1\over3}}\setminus e$ to $K_{3k}^{^{1\over3}}$. Hence, if $H$ is a proper subgraph of $K_{3k+1}^{^{1\over3}}$, then $\chi_c(H)\leq \chi_c(K_{3k}^{^{1\over3}})={9k\over 3k+1}< {6k+1\over 2k+1}$. Therefore, the graph $G=K_{3k+1}^{^{1\over3}}$ contains no subgraph with circular chromatic number ${n'\over d'}$.
}
\end{proof}

It should be noted that one can introduce more rational numbers whose lower parents are not unavoidable. For instance, we show that ${15n+7 \over 5n+4}$ is not unavoidable by ${18n+9 \over 6n+5}$. To see this, for $d\geq 2$ and $n\geq 3$, define the graph $H_d(K_n)$ as follows. Let $G_1,\ldots ,G_d$ be $d$ graphs, each of which is isomorphic to the complete graph $K_n$. Assume that $v_iw_i \in E(G_i)$ for any $1\leq i \leq d$. The graph $H_d(K_n)$ is obtained from the disjoint union $G_1\cup \cdots \cup G_d$ by identifying the vertex $w_i$ with $v_{i+1}$ for any $1\leq i \leq d-1$, deleting the edges $v_iw_i$ for any $1\leq i \leq d$, and adding the edge $v_1w_d$. In fact, it is a simple matter to check that $H_d(K_n)$ is the result of applying the Haj\'{o}s construction to the complete graphs $G_1,\ldots ,G_d$. Hence, $\chi(H_d(K_n))=n$ and the graph $H_d(K_n)$ is a critical graph, i.e., $\chi(H_d(K_n)\setminus e)=n-1$ for any $e\in E(H_d(K_n))$.

Now, we show that $\chi_c(H_d(K_n))={d(n-1)+1 \over d}$. To see this, assume that $V(G_i)=\{v_i,u_{i2}, \ldots, u_{i(n-1)}, w_i\}$. Define a coloring $c:V(H_d(K_n)) \rightarrow \{1,2,\ldots,dn-d+1\}$ as follows. For any $1\leq i\leq d$ and $2\leq j\leq n-1$, set $c(u_{ij})=(j-1)d+i$, $c(w_i)=i$, and $c(v_1)=d(n-1)+1$. It is easy to check that $c$ is a $(d(n-1)+1,d)$-coloring of $H_d(K_n)$. On the other hand, it is straightforward to check that the independence number of $H_d(K_n)$ is equal to $d$. Consequently, $\chi_c(H_d(K_n))={dn-d+1 \over d}$.

The graph $H_2(K_{3n+2})^{^{1\over3}}$ has circular chromatic number $\frac{18n+9}{6n+5}$. It is readily seen that there is a homomorphism from $H_2(K_{3n+2})^{^{1\over3}}\setminus e$ to $K_{3n+1}^{^{1\over3}}$. Hence, if $H$ is a proper subgraph of $H_2(K_{3n+2})^{^{1\over3}}$, then $\chi_c(H)\leq \chi_c(K_{3n+1}^{^{1\over3}})={9n+3\over 3n+2}< {15n+7\over 5n+4}$. Therefore, $H_2(K_{3n+2})^{^{1\over3}}$ contains no subgraph with circular chromatic number ${15n+7\over 5n+4}$.

Let $\zeta(G)$ be the minimum number of vertices of $G$ that must be deleted in order to reduce the chromatic number of the graph.

\begin{alphqu}{\rm\cite{MR2340388}}{\label{zeta}}
Let $\chi_{_{c}}(G)=\frac{n}{d}$, where $(n,d)=1$ and $n=(\chi(G)-1)d+r$. Is it true that $\zeta(G) \geq r$ {\rm ?}
\end{alphqu}

When $G$ is a critical graph, we have $\zeta(G)=1$. If the answer to the aforementioned question is affirmative, then for every critical graph $G$ with $\chi(G)=n$, its circular chromatic number is equal to ${dn-d+1 \over d}$ for an appropriate $d$. It is worth noting that $H_d(K_n)$ is a critical graph with $\chi_c(H_d(K_n))={dn-d+1 \over d}$.

\section{Fractional and Multichromatic Number}
As usual, we denote by $[m]$ the set $\{1, 2, \ldots, m\}$, and denote by ${[m] \choose n}$ the collection of all $n$-subsets of $[m]$. The {\em Kneser graph} ${\rm KG}(m,n)$ ({\em resp.
the generalized Kneser graph} ${\rm KG}(m,n,s)$) is the graph on the vertex set ${[m] \choose n}$, in which two distinct vertices $A$ and $B$ are adjacent if and only if $A \cap B = \varnothing$ (resp. $|A\cap B|\leq s$). It was conjectured by Kneser \cite{Kneser} in 1955, and proved by Lov\'{a}sz \cite{MR514625} in 1978, that $\chi({\rm KG}(m,n))=m-2n+2$. The fractional chromatic number is defined as a generalization of the chromatic number as follows:
$$\chi_f(G)=\inf\{{m\over n}| G\rightarrow {\rm KG}(m,n)\}.$$
An $n$-tuple coloring of a graph $G$ with $m$ colors assigns to each vertex of $G$ an $n$-subset of $[m]$ so that adjacent vertices receive disjoint sets. Equivalently, $G$ has an $n$-tuple coloring with $m$ colors if there exists a homomorphism from $G$ to ${\rm KG}(m,n)$. The $n$th multichromatic number of $G$, denoted by $\chi_n(G)$, is the smallest $m$ such that $G$ has an $n$-tuple coloring with $m$ colors. These colorings were first studied in the early 1970s and the reader is referred to \cite{MR1475894,MR1481157,MR1614286} for more information.

\begin{alphthm}{\rm\cite{MR1883597}}\label{pir}
Suppose that $m$ and $n$ are positive integers with $m>2n$. Then the following two conditions on non-negative integers $k$ and $l$ are equivalent.
\begin{itemize}
\item For any two {\rm(}not necessarily distinct{\rm)} vertices $A$ and $B$ of ${\rm KG}(m,n)$ with $|A\cap B|=k$, there is a walk of length exactly $l$ in ${\rm KG}(m,n)$ beginning at $A$ and ending at $B$.
\item $l$ is even and $k\geq n-{l\over 2}(m-2n)$, or $l$ is odd and $k\leq {l-1\over 2}(m-2n)$.
\end{itemize}
\end{alphthm}

In view of Theorem~\ref{cirsub}, we have $\chi_c(G^{^{1\over 2s+1}})={{(2s+1)\chi_c(G)}\over{s\chi_c(G)+1}}$. Here, we present a tight upper bound for the fractional chromatic number of subdivision graphs.

\begin{thm}\label{fracb}
Let $G$ be a non-bipartite graph and $s$ be a non-negative integer. Then
$$\chi_f(G^{^{1\over 2s+1}})\leq{{(2s+1)\chi_f(G)}\over{s\chi_f(G)+1}}.$$
\end{thm}
\begin{proof}{
Let $f$ be a homomorphism from $G$ to ${\rm KG}(m,n)$. We claim that there is a homomorphism from $G$ to the generalized Kneser graph ${\rm KG}((2s+1)m,sm+n,(m-2n)s)$. To see this, for every vertex $v\in V(G)$, define $g(v)$ as follows:
$$ \displaystyle \bigcup_{i \not\in f(v)}\{(i-1)(2s+1)+1,\ldots,(i-1)(2s+1)+s\} \bigcup_{i \in f(v)}\{(i-1)(2s+1)+s+1,\ldots,i(2s+1)\}. $$
It is easy to see that, for any vertex $v\in V(G)$, $|g(v)|=sm+n$. Also, if $u$ and $v$ are two adjacent vertices in $G$, then $|g(u)\cap g(v)|=(m-2n)s$. Now, in view of Theorem~\ref{pir}, we have
$${\rm KG}((2s+1)m,sm+n,(m-2n)s)\longleftrightarrow {\rm KG}((2s+1)m,sm+n)^{^{2s+1}}.$$
Let $G\longrightarrow {\rm KG}(m,n)$ and $\chi_f(G)={m\over n}$. By the previous discussion, there is a homomorphism from $G$ to ${\rm KG}((2s+1)m,sm+n,(m-2n)s)$. Now, Theorem~\ref{dual} implies that $G^{^{1\over2s+1}} \longrightarrow{\rm KG}((2s+1)m,sm+n).$ Hence, $\chi_f(G^{^{1\over2s+1}} )\leq {{(2s+1)\chi_f(G)}\over{s\chi_f(G)+1}}.$
}
\end{proof}

Equality does not always hold in Theorem~\ref{fracb}. For instance, consider the graph $K_{10}^{1\over 3}$. We know that the third power $P^3$ of the Petersen graph $P$ is isomorphic to $K_{10}$. Hence, in view of Lemma~\ref{haal}, there exists a homomorphism from $K_{10}^{1\over 3}$ to the Petersen graph. Consequently, $\chi_f(K_{10}^{1\over 3})\leq {5 \over 2}$, which is less than the bound ${30 \over 11}$ given by Theorem~\ref{fracb}.

It is simple to see that there exists a homomorphism from $G^{^{1\over 2n+1}}$ to $C_{2n+1}$.
On the other hand, the odd cycle $C_{2n+1}$ is an induced subgraph of the Kneser graph ${\rm KG}(2n+1,n)$. Therefore, if $G$ is a non-bipartite graph and $s\geq n$, then $\chi_n(G^{^{1\over 2s+1}})=2n+1$.

\begin{thm}\label{multi}
Let $G$ be a non-bipartite graph. If $i$, $n$ and $s$ are positive integers such that $is=n-1$, then
$$\chi_n(G^{^{1\over 2s+1}})\leq 2n+i\Longleftrightarrow \chi(G)\leq {2n+i\choose n}.$$
\end{thm}
\begin{proof}{
$$ \begin{array}{ccll} \vspace{.3cm} \chi_n(G^{^{1\over 2s+1}})\leq 2n+i & \Longleftrightarrow & G^{^{1\over 2s+1}}\longrightarrow {\rm KG}(2n+i,n) & \\ \vspace{.3cm} ~& \Longleftrightarrow & G\longrightarrow {\rm KG}(2n+i,n)^{2s+1}&({\rm By~Theorem~\ref{dual}}) \\ \vspace{.3cm} ~& \Longleftrightarrow & G\longrightarrow {\rm KG}(2n+i,n,is)&({\rm By~Theorem~\ref{pir}}) \\ \vspace{.3cm} ~& \Longleftrightarrow & G\longrightarrow {\rm KG}(2n+i,n,n-1)& \\ \vspace{.3cm} ~& \Longleftrightarrow & G\longrightarrow K_{2n+i\choose n}& \\ \vspace{.3cm} ~& \Longleftrightarrow & \chi(G)\leq {2n+i\choose n}. \\ \end{array} $$
}
\end{proof}

We know that $\chi_2(G^{^{1\over 2s+1}})=5$ whenever $s\geq 2$. The following corollary, which is an immediate consequence of the aforementioned theorem, determines the other cases.

\begin{cor}
Let $G$ be a non-bipartite graph. If $\chi(G)\leq 10$, then $\chi_2(G^{^{1\over 3}})=5$. Otherwise, $\chi_2(G^{^{1\over 3}})=6$.
\end{cor}
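The objects used throughout this paper are small enough to experiment with directly. The following Python sketch (the instance $n=7$, $d=3$, $r=1$ is a hypothetical choice for illustration) constructs the circular complete graph $K_{n\over d}$ and verifies Lemma~\ref{cirpow}(a) on that instance:

\begin{verbatim}
import numpy as np

def circular_complete(n, d):
    # adjacency matrix of K_{n/d}: vertices 0..n-1,
    # ij is an edge iff d <= |i - j| <= n - d
    diff = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    return ((diff >= d) & (diff <= n - d)).astype(int)

def walk_power(adj, k):
    # k-th graph power: join u, v (u != v) if some walk
    # of length k connects them
    ak = np.linalg.matrix_power(adj, k)
    np.fill_diagonal(ak, 0)          # discard loops
    return (ak > 0).astype(int)

# Lemma cirpow(a): if n/d < (2r+1)/r, then
# K_{n/d}^{2r+1} is isomorphic to K_{n/((2r+1)d - r n)}
n, d, r = 7, 3, 1                    # 7/3 < 3 = (2r+1)/r
lhs = walk_power(circular_complete(n, d), 2 * r + 1)
rhs = circular_complete(n, (2 * r + 1) * d - r * n)   # K_{7/2}
assert (lhs == rhs).all()
\end{verbatim}

The same script can be used to experiment with part (b) of Lemma~\ref{cirpow} on other small instances.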
\section{Introduction}\label{sec: 1}
\vspace{-0.2cm}
With the popularity of data-driven methods in the deep learning community, both the dataset scale and the model size have exploded. There is a tendency to explore large models and then adopt these pre-trained models in downstream tasks to achieve better performance and faster convergence, which has gradually become common practice. However, the current procedure depends heavily on full fine-tuning, where all the parameters of the model are updated. This inevitably causes the model to over-fit the small target dataset, so the fine-tuned model cannot be used for other tasks. As a result, the device needs to save a dedicated set of model parameters for each task, which consumes a huge amount of storage space, especially for today's large models (\emph{e.g.}, ViT-G/14 \cite{dosovitskiy2020image} 1.8G, CoAtNet \cite{dai2021coatnet} 2.4G).

A simple solution to the above problem is linear probing \cite{he2020momentum}, where only the last head layer is fine-tuned. However, this practice usually yields inferior performance compared to the full fine-tuning proxy. Motivated by the success of the parameter-efficient fine-tuning strategy with prompts in the field of natural language processing (NLP) \cite{houlsby2019parameter,li2021prefix,hu2021lora,he2022parameter}, a recent work implements a similar proxy on vision tasks \cite{jia2022visual}, termed Visual Prompt Tuning (VPT). Specifically, VPT \cite{jia2022visual} proposes to insert learnable prompts as inputs and append them to the original image tokens. These prompts interact with the image tokens by performing self-attention and are updated during the fine-tuning process. In this manner, a significant performance improvement can be achieved in downstream tasks compared to a linear probing proxy.

Nevertheless, compared to full fine-tuning and linear probing, it additionally raises two issues: i) VPT tunes the number of prompts for different tasks, which introduces a task-dependent learnable parameter space. The fine-tuning performance is sensitive to the number of prompts for each task, which therefore needs to be carefully chosen. Too few or too many prompts might either degrade the fine-tuning accuracy or increase the redundancy of computation (\emph{e.g.}, 200 prompts on Clevr/count \emph{vs.} 1 prompt on Flowers102); ii) VPT \cite{jia2022visual}, as well as other Adapter-based methods \cite{houlsby2019parameter, mahabadi2021compacter}, introduces additional parameters and computational cost in the inference phase compared to the original pre-trained model. For instance, VPT introduces additional inputs for self-attention with the image tokens, while Adapter-based methods insert additional modules into the pre-trained model. These methods change the specific network architecture or the input of the network, which might result in frequent structural modifications and a heavy workload, especially for models that are already deployed on edge devices (\emph{e.g.}, mobile phones).

\begin{figure*}[!t]
\centering
\vspace{-0.2cm}
\begin{minipage}{0.45\linewidth}
\scalebox{0.56}{\begin{tabular}{c|c|c|c|c}
\toprule
Method & Acc.
& \makecell[c]{~Params.~ \\ (M)} & \makecell[c]{Unified \\ parameter space} & \makecell[c]{No extra \\ inference params.} \tabularnewline
\midrule
Full fine-tuning & \underline{93.82} & 85.88 & $\checkmark$ & $\checkmark$ \tabularnewline
Linear probing & 88.70 & 0.08 & $\checkmark$ & $\checkmark$ \tabularnewline
\midrule
Adapter \cite{houlsby2019parameter} & 93.34 & 0.31 & $\checkmark$ & $\times$ \tabularnewline
VPT \cite{jia2022visual} & 93.17 & 0.54 &$\times$ & $\times$ \tabularnewline
\midrule
SSF (ours) & \textbf{93.99} & 0.28 & $\checkmark$ & $\checkmark$ \tabularnewline
\bottomrule
\end{tabular} }
\def\@captype{table}\caption{Characteristics of different fine-tuning methods. Acc. means the Top-1 accuracy (\%) on CIFAR-100 with a pre-trained ViT-B/16 for tuning. Params. means the learnable parameters at fine-tuning. Our SSF has a unified learnable parameter space and does not require extra inference parameters while obtaining superior performance.}
\label{table: intro_comparison}
\end{minipage}\hspace{5mm}
\begin{minipage}{0.505\linewidth}
\vspace{-0.1cm}
\includegraphics[width=1\linewidth]{figures/intro_acc.pdf}
\caption{Performance comparisons of seven fine-tuning methods with a pre-trained ViT-B/16 model on the FGVC dataset and VTAB-1k benchmark. Our SSF (red dots) achieves state-of-the-art performance with only about 0.3M average learnable parameters.}
\label{fig: intro_acc}
\end{minipage}
\vspace{-0.8cm}
\end{figure*}

To cope with the above issues, we attempt to find a general proxy for parameter-efficient fine-tuning, where the learnable parameter space is unified (task-independent) and no additional inference parameters are introduced. Inspired by feature modulation methods \cite{wu2018group, huang2017arbitrary, perez2018film}, we propose a new parameter-efficient fine-tuning method named SSF, where you only need to \underline{S}cale and \underline{S}hift your deep \underline{F}eatures extracted by a pre-trained model for fine-tuning. The intuition behind our approach comes from the fact that the upstream and downstream datasets have different data distributions \cite{sun2016return}. Therefore, it is difficult to apply the model weights trained on the upstream dataset directly to the downstream dataset. For instance, a naive linear probing strategy, which keeps the backbone weights frozen, causes performance degradation.

To alleviate this problem, SSF introduces scale parameters and shift parameters, which could be considered as a variance and a mean, to modulate the features of the downstream dataset extracted with the model pre-trained on the upstream dataset, such that the modulated features fall in a discriminative space. These scale and shift parameters do not depend on any input and have a unified learnable parameter space for different tasks. Another advantage of SSF is that it only introduces linear transformations, because we scale and shift the extracted features. These linear transformations can be further merged into the original pre-trained weights via model re-parameterization \cite{ding2021repvgg} in the inference phase, thus avoiding extra parameters and FLOPs for downstream tasks. For a model already deployed on edge devices, only the updated weights after fine-tuning need to be uploaded, without changing the network architecture. Table \ref{table: intro_comparison} shows the specific characteristic comparisons between SSF and other fine-tuning methods. SSF is simple, effective, and efficient, in keeping with Occam's Razor.
Therefore, we explore this new baseline and find that it surprisingly outperforms all other parameter-efficient fine-tuning methods. We evaluate our method on 26 classification datasets in total and 3 robustness~\&~out-of-distribution datasets. SSF obtains state-of-the-art performance compared to other parameter-efficient fine-tuning methods in terms of the trade-off between trainable parameters and accuracy (Table \ref{table: intro_comparison} and Figure \ref{fig: intro_acc}). Compared to full fine-tuning, our method obtains 2.46\% (90.72\% \emph{vs.} 88.54\%) and 11.48\% (73.10\% \emph{vs.} 65.57\%) performance improvements on FGVC and VTAB-1k in terms of Top-1 accuracy, with only about 0.3M trainable parameters. Furthermore, our SSF does not require additional parameters during the inference phase. It is plug-and-play and very easy to extend to various model families (CNNs, Transformers, and MLPs). Our SSF establishes a new baseline, and we hope that it brings more insight into the field of efficient model tuning.

\section{Related Work}
\vspace{-0.2cm}
\subsection{Model Families}
\vspace{-0.2cm}
Convolution has long been used as the main module to extract image features in computer vision tasks, and CNN-based architectures have been studied extensively \cite{simonyan2014very,he2016deep,Xie_2017_CVPR,liu2022convnet,XingyiECCV22,SonghuaECCV22,JingwenECCV22}, with extensions to graph-based data~\cite{YidingCVPR20,YidingNIPS20,HuihuiAAAI21}. Recently, another architecture family, the Transformer, has gained widespread attention owing to its great success in NLP~\cite{vaswani2017attention,devlin2018bert,hu2021lora}. Following this direction, Dosovitskiy \emph{et al.}~\cite{dosovitskiy2020image} first employed a transformer in the domain of computer vision and introduced a new architecture paradigm, ViT, which achieves promising results~\cite{MetaformerCVPR22,SuchengCVPR22}. Subsequently, various transformer-based models, such as DeiT \cite{touvron2020deit} and Swin Transformer~\cite{liu2021swin}, were introduced and shown to be effective on a variety of tasks such as object detection, semantic segmentation, action recognition \cite{liu2021video}, \emph{etc}. In another line, Tolstikhin \emph{et al.} \cite{tolstikhin2021mlp} propose a pure MLP-based architecture, and subsequent papers \cite{hou2021vision,Lian_2021_ASMLP} have interestingly demonstrated that MLP-based architectures can catch up to transformers. However, in addition to well-designed modules, their excellent performance is also attributed to the deployment of large-scale models. Given a large-scale model pre-trained on a large dataset, how to perform parameter-efficient fine-tuning in downstream tasks is essential but currently under-explored. In this paper, we propose SSF as a new baseline and show its promising performance with comprehensive validation on a wide variety of tasks.

\vspace{-0.2cm}
\subsection{Pre-training and Fine-tuning}
\vspace{-0.2cm}
Early models \cite{he2016deep, huang2017densely, howard2017mobilenets, Xie_2017_CVPR, tan2019efficientnet} are usually pre-trained on the ImageNet-1K dataset and then fine-tuned on downstream tasks to achieve faster convergence \cite{he2019rethinking} or better performance. Such a procedure is called pre-training and fine-tuning, or transfer learning.
Recent works tend to employ larger models (\emph{e.g.}, ViT \cite{dosovitskiy2020image} and Swin Transformer V2 \cite{liu2021swin}) and train them on larger datasets (\emph{e.g.}, ImageNet-21K and JFT-300M) in pursuit of better performance. Both in the domains of NLP and computer vision, these large models \cite{devlin2018bert, liu2021swin, radford2021learning, he2022masked, zhou2021deepvit, zhou2022understanding} achieve enormous performance improvements compared to small-scale models and provide pre-trained weights for downstream tasks. Some other works explore how to efficiently fine-tune the pre-trained models \cite{guo2019spottune,zhou2021learning} on target tasks. For instance, given a target task, SpotTune \cite{guo2019spottune} investigates which layers need to be fine-tuned. Touvron \emph{et al.} \cite{touvron2022three} find that fine-tuning the weights of the attention layers and freezing the weights of the other parts is sufficient to adapt vision transformers to other downstream tasks. Some works also propose to insert adapters into the network to fine-tune in a parameter-efficient way. These adapters can be a small non-linear network~\cite{houlsby2019parameter}, a hyper-network that generates model weights~\cite{mahabadi2021parameter}, or a compacter \cite{mahabadi2021compacter}, which performs a low-rank decomposition to reduce the number of parameters. Some works have also tried to only update the bias terms \cite{NEURIPS2020_81f7acab, zaken2021bitfit}. More recently, VPT \cite{jia2022visual} proposes to insert a small number of learnable parameters (prompts) and optimize them while freezing the backbone, which achieves significant performance improvement compared to full fine-tuning. Concurrently with this work, some methods \cite{chen2022adaptformer, zhang2022neural} have also been proposed for parameter-efficient fine-tuning, \emph{e.g.}, inserting an adapter module or performing neural prompt search. Different from all the above works, we propose to scale and shift the deep features extracted by a pre-trained model, which is simple but effective and outperforms other parameter-efficient fine-tuning methods.

\vspace{-0.2cm}
\subsection{Feature Modulation}
\vspace{-0.2cm}
Many works have attempted to modulate features to obtain better performance. The most relevant ones to our work are the various normalization methods \cite{ioffe2015batch, ba2016layer, wu2018group}. BN, LN, and GN usually normalize the features and then transform them linearly with scale and shift factors to modulate the feature distribution, which has been verified to be effective in a wide range of tasks. STN \cite{jaderberg2015spatial} introduces a learnable module to spatially transform feature maps. In the field of image generation, AdaIN \cite{huang2017arbitrary} generates scale and shift factors to characterize specific image styles. Self-modulation \cite{chen2018self} shows that GANs benefit from self-modulation layers in the generator. In vision-language tasks, Conditional BN \cite{de2017modulating} and FiLM \cite{perez2018film} are often utilized to modulate the features of the two modalities. Unlike algorithms such as BN, our SSF is not limited to modulating the normalization layers, and it has a different motivation: to alleviate the distribution mismatch between upstream and downstream tasks for parameter-efficient fine-tuning. As a comparison, we conduct experiments in Sec. \ref{sec: 4.3} and show that our SSF is more effective than only tuning the normalization layers.
Compared to STN, AdaIN, FiLM, and so on, our method is input-independent: the scale and shift parameters model the distribution of the whole dataset, so they can be absorbed into the original pre-trained model weights in the inference phase.

\vspace{-0.2cm}
\subsection{Model Re-parameterization}
\vspace{-0.2cm}
Model re-parameterization has been a common practice to improve inference efficiency. One of the representative techniques is batch normalization folding, used in model compression algorithms \cite{jacob2018quantization}. The parameters introduced by the batch normalization layers \cite{ioffe2015batch} are merged into the convolutional layers usually stacked before them. This technique has been further utilized to merge different branches of networks into a single branch \cite{zagoruyko2017diracnets, ding2021repvgg, ding2021repmlp}. Similarly, our SSF fully adopts linear transformations, which allows the scale and shift parameters used in the training phase to be merged into the original pre-trained model weights, thus avoiding the introduction of extra parameters and computational cost during the inference phase.

\vspace{-0.2cm}
\section{Approach}\label{sec: 3}
\vspace{-0.2cm}
\subsection{Preliminaries}\label{sec: 3.1}
\vspace{-0.2cm}
\textbf{Transformers.} In a vision transformer (ViT) \cite{dosovitskiy2020image}, an RGB image $I \in \mathbb{R}^{3 \times H \times W}$ is divided into $N \times N$ non-overlapping patches. These image patches, appended with a class token, are fed into an embedding layer followed by $L$ vision transformer blocks with self-attention as the core operation. The input $x \in \mathbb{R}^{(N^2 + 1) \times d}$, where $d$ is the embedding dimension, is first transformed into keys $K \in \mathbb{R}^{(N^2 + 1) \times d}$, values $V \in \mathbb{R}^{(N^2 + 1) \times d}$, and queries $Q \in \mathbb{R}^{(N^2 + 1) \times d}$. After that, we can calculate the global self-attention by
\begin{equation}\label{eq: attention}
{\rm Attention}(Q, K, V) = {\rm Softmax} (\frac{QK^T}{\sqrt{d}})V.
\end{equation}
The output of the attention layer is fed to a two-layer MLP to extract information in the channel dimension.

\textbf{Adapter.} The adapter \cite{houlsby2019parameter} is inserted into the transformer layers for efficient fine-tuning. It is a bottleneck module with a few trainable parameters, which contains a down-projection to reduce the feature dimension, a non-linear activation function, and an up-projection to project back to the original dimension. Therefore, given the input $x \in \mathbb{R}^{(N^2 + 1) \times d}$, the output is calculated by
\begin{equation}\label{eq: adapter}
{\rm out} = [W^{\rm up} \phi (W^{\rm down} x^T)]^T,
\end{equation}
where $W^{\rm down} \in \mathbb{R}^{d' \times d}$ (with $d' \ll d$), $\phi$, and $W^{\rm up} \in \mathbb{R}^{d \times d'}$ represent the down-projection matrix, non-linear function, and up-projection matrix, respectively.

\textbf{VPT.} VPT \cite{jia2022visual} inserts some learnable parameters (\emph{i.e.}, prompts) into the input space after the embedding layer. These prompts interact with the original image tokens by performing self-attention. During fine-tuning, the weights of the backbone network are kept frozen and only the parameters of the prompts are updated. VPT-Shallow inserts prompts in the first layer, while VPT-Deep inserts prompts in all the layers of the transformer.
Assuming that the input is $x \in \mathbb{R}^{(N^2 + 1) \times d}$ and denoting the inserted prompts as $p \in \mathbb{R}^{n \times d}$, where $n$ is the number of prompts, the combined tokens $x'$ are given by
\begin{equation}\label{eq: vpt}
x' = [x; p],
\end{equation}
where $x' \in \mathbb{R}^{(N^2 + n + 1) \times d}$ will be fed into the transformer block for self-attention (Eq. (\ref{eq: attention})).

\vspace{-0.2cm}
\subsection{Scaling and Shifting Your Features for Fine-tuning}\label{sec: 3.2}
\vspace{-0.2cm}
Different from the above methods, we introduce both scale and shift factors to modulate, via a linear transformation, the deep features extracted by a pre-trained model so as to match the distribution of a target dataset, as mentioned in Sec.~\ref{sec: 1}. Our method has five main properties: i) SSF achieves performance on par with the full fine-tuning strategy; ii) every downstream task can be input to the model independently, without relying on any other task; iii) the model only needs to fine-tune very few parameters; iv) unlike VPT \cite{jia2022visual}, which adjusts the number of prompts for each task, the set of parameters fine-tuned by SSF does not change as the task changes, making it feasible to further fine-tune the parameters later by adding more tasks for multi-task learning or continual learning\footnote{It provides more flexibility, which is not a contradiction to ii).}; v) thanks to the linear transformation, SSF avoids introducing extra parameters and computational cost during the inference phase, making our method zero-overhead.

\begin{wrapfigure}[19]{r}{0.5\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{figures/method_v3.pdf}
\caption{The overall pipeline of SSF. (a) Training pipeline via SSF, where an OP means an operation, \emph{e.g.}, MSA, MLP or LN. (b) A pre-trained model or inference pipeline. (c) Our SSF-ADA.}
\label{fig: pipeline}
\vspace{-0.2cm}
\end{wrapfigure}

\noindent{\bf The design of SSF.} SSF performs a linear transformation to modulate the features for parameter-efficient fine-tuning, as shown in Figure~\ref{fig: pipeline}. In Figure~\ref{fig: pipeline}(a), given a model pre-trained on the upstream task, we insert SSF-ADA\footnote{Here, we refer to our proposed method as SSF and the specific module as SSF-ADA.} after each operation (OP) of the network to modulate the features. There are $K$ OPs in total, and these operations may include multi-head self-attention (MSA), MLP, and layer normalization (LN), \emph{etc}. During fine-tuning, the pre-trained weights of these operations are kept frozen while the SSF-ADA parameters are updated. The specific SSF-ADA structure is shown in Figure~\ref{fig: pipeline}(c), where the features output by the previous operation are multiplied by a scale factor and then summed with a shift factor, both of which are input-independent. Formally, given the input $x \in \mathbb{R}^{(N^2 + 1) \times d}$, the output $y \in \mathbb{R}^{(N^2 + 1) \times d}$ (which is also the input of the next operation) is calculated by
\begin{equation}\label{eq: SSF-ADA}
y = \gamma \odot x + \beta,
\end{equation}
where $\gamma \in \mathbb{R}^d$ and $\beta \in \mathbb{R}^d$ are the scale and shift factors, respectively, and $\odot$ denotes the element-wise (Hadamard) product, broadcast along the token dimension.
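To make Eq.~(\ref{eq: SSF-ADA}) and the weight-merging step described next concrete, here is a minimal NumPy sketch with toy dimensions and random values (purely illustrative; not the released implementation):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d_in, d, n_tok = 6, 8, 5     # toy dimensions (assumed, not from the paper)

# frozen pre-trained linear OP: x = t @ w.T + b
w = rng.standard_normal((d, d_in))
b = rng.standard_normal(d)

# learnable SSF-ADA factors of Eq. (4): identity init, then "fine-tuned"
gamma = 1.0 + 0.1 * rng.standard_normal(d)
beta = 0.1 * rng.standard_normal(d)

t = rng.standard_normal((n_tok, d_in))  # input of the previous linear layer
x = t @ w.T + b                         # frozen OP output
y = gamma * x + beta                    # Eq. (4): y = gamma (.) x + beta

# re-parameterization of Eq. (5): fold gamma, beta into w, b for inference
w_merged = gamma[:, None] * w
b_merged = gamma * b + beta
assert np.allclose(y, t @ w_merged.T + b_merged)   # zero-overhead inference
\end{verbatim}

The assertion at the end is exactly the zero-overhead property: after folding $\gamma$ and $\beta$ into the preceding weights, inference uses the original architecture unchanged.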
\noindent{\bf Re-parameterization.} Since SSF-ADA is a completely linear transformation, we can re-parameterize it by absorbing the scale and shift terms into the previous linear layer as follows:
\begin{equation}\label{eq: re-parameterization}
y = \gamma \odot x + \beta = \gamma \odot (w * t + b) + \beta = (\gamma \odot w) * t + \gamma \odot b + \beta,
\end{equation}
where $w$ and $b$ are the weight and bias terms, respectively, $*$ represents the `convolution' operation in a convolutional layer or the `multiplication' operation in an MLP layer, and $t$ is the input of the previous linear layer. Since $w$ and $b$ are frozen while $\gamma$ and $\beta$ are updated during fine-tuning, $\gamma$ and $\beta$ can be merged into the original parameter space ($w$ and $b$) in the inference stage through the above formulation. From this perspective, our SSF-ADA makes it possible to perform downstream tasks without adding any extra parameters and computational cost, as shown in Figure~\ref{fig: pipeline}(b).

\noindent{\bf Discussion.} The first question is why we want $\gamma$ and $\beta$ to be input-independent. As FiLM \cite{perez2018film} and AdaIN \cite{huang2017arbitrary} show, we could obtain $\gamma$ and $\beta$ by conditioning on an image sample; however, this has two shortcomings. One is that we want $\gamma$ and $\beta$ to represent the distribution of the whole downstream dataset, which input-conditioned factors cannot do, so that we can modify the previous weight distribution to fit the downstream dataset by modulating the features. Secondly, the conditional input requires the introduction of additional networks (\emph{e.g.}, MLPs) to generate $\gamma$ and $\beta$, which introduces more trainable parameters. More importantly, to better generate $\gamma$ and $\beta$, a non-linear activation function might be required, which would make the re-parameterization intractable. Therefore, we directly perform a fully linear transformation and merge the $\gamma$ and $\beta$ factors into the original pre-trained weights, so that the weights can be easily uploaded to edge devices without any modification of the architecture.

The second question is which operations should be followed by SSF-ADA. Our experience is that SSF-ADA can be inserted after each operation with a linear coefficient in ViT. Although one could search for the optimal layers or operations with Neural Architecture Search (NAS) \cite{pham2018efficient, liu2018darts, guo2020single, lian2020iclr} to reduce the number of trainable parameters, we believe that our method produces better results (or at least results no worse than NAS) without introducing too many trainable parameters, all of which can be merged for inference, as will be shown in Sec. \ref{sec: 4.3}.

\vspace{-0.2cm}
\subsection{Complexity Analysis}\label{sec: 3.3}
\vspace{-0.2cm}
We also compare the complexity of Adapter, VPT, and our SSF. Taking a ViT as an example, the dimension and number of tokens are $d$ and $N^2$, respectively. Assume that Adapter projects features from $d$-dim to $d'$-dim (where $d' \ll d$), so that the extra trainable parameters are $2dd'$ in each layer,
\begin{wraptable}{r}{8.3cm}
\centering
\small
\setlength{\tabcolsep}{6pt}
\scalebox{0.65}{ \begin{tabular}{c|c|c|c|c}
\toprule
\diagbox{}{Method} & Adapter & VPT-Shallow & VPT-Deep & SSF (ours) \\
\midrule
\# Extra Params.
& $2Ldd'$ (1) & $nd$ (1) & $nLd$ (1) & $mLd$ (0) \\ \midrule \# Extra FLOPs & $2N^2Ldd'$ (1) & $2n(2N^2 + n)d $ (1) & $2n(2N^2 + n)Ld $ (1) & $mN^2Ld$ (0) \\ \bottomrule \end{tabular}} \caption{The complexity comparisons of Adapter \cite{houlsby2019parameter}, VPT \cite{jia2022visual} and our SSF. `(1)': the same parameters and FLOPs for training and inference; `(0)': no additional parameters and FLOPs are required for inference.} \label{table: complexity} \vspace{-0.2cm} \end{wraptable} that VPT inserts $n$ prompts, obtaining $nd$ extra parameters in each layer; and that SSF inserts SSF-ADA after each operation with linear coefficients, obtaining $md$ extra parameters in each layer. With $L$ layers in total, the complexities of Adapter, VPT and SSF are as shown in Table \ref{table: complexity}. The specific numbers of additional parameters used by Adapter, VPT and SSF depend on the values of $d'$, $n$ and $m$. However, in practice, SSF outperforms Adapter and VPT-Deep even with slightly fewer parameters in the training stage, as we will see in Sec. \ref{sec: 4}. Further, in the inference stage, thanks to the model re-parameterization strategy, the extra parameters and FLOPs of SSF are zero, whereas the complexity of Adapter and VPT remains the same as during training; this establishes the strength of our approach. \vspace{-0.2cm} \section{Experiments}\label{sec: 4} \vspace{-0.2cm} \subsection{Experimental Settings}\label{sec: 4.1} \vspace{-0.2cm} \textbf{Datasets}. We mainly conduct our experiments on a series of datasets that can be categorized into three types, as detailed below: \textit{FGVC}. Following VPT \cite{jia2022visual}, we employ five Fine-Grained Visual Classification (FGVC) datasets to evaluate the effectiveness of our proposed SSF, consisting of CUB-200-2011 \cite{wah2011caltech}, NABirds \cite{van2015building}, Oxford Flowers \cite{nilsback2008automated}, Stanford Dogs \cite{khosla2011novel} and Stanford Cars \cite{gebru2017fine}. \textit{VTAB-1k}. The VTAB-1k benchmark, introduced in \cite{zhai2019large}, contains 19 tasks from diverse domains: i) Natural images captured by standard cameras; ii) Specialized images captured by non-standard cameras, \emph{e.g.}, remote sensing and medical cameras; iii) Structured images synthesized from simulated environments. This benchmark covers a variety of tasks (\emph{e.g.}, object counting, depth estimation) from different image domains, and each task contains only 1,000 training samples, which makes it extremely challenging. \textit{General Image Classification Datasets}. We also validate the effectiveness of SSF on general image classification tasks. We choose the CIFAR-100 \cite{krizhevsky2009learning} and ImageNet-1K \cite{deng2009imagenet} datasets for evaluation, where CIFAR-100 contains 60,000 images with 100 categories, and ImageNet-1K contains 1.28M training images and 50K validation images with 1,000 categories; both are large-scale datasets for object recognition. \textbf{Models}. For a fair comparison, we follow VPT \cite{jia2022visual} and mainly select the ViT-B/16 \cite{dosovitskiy2020image} model pre-trained on ImageNet-21K as the initialization for fine-tuning. In addition, we also generalize our method to backbones from different model families, including the recent Swin Transformer \cite{liu2021swin} (Swin-B), ConvNeXt-B \cite{liu2022convnet} and AS-MLP-B \cite{Lian_2021_ASMLP}.
The former is a hierarchical transformer-based architecture, while the latter two are CNN-based and MLP-based architectures, respectively. \textbf{Baselines}. We first compare our method with the two basic fine-tuning methods: i) full fine-tuning, where all parameters of the model are updated during fine-tuning; ii) linear probing, where only the parameters of the classification head (an MLP layer) are updated. We also compare our method with recent parameter-efficient fine-tuning methods: iii) Adapter \cite{houlsby2019parameter}, where a new adapter structure with down-projection, non-linear function, and up-projection is inserted into the transformer and only the parameters of this new module are updated; iv) Bias \cite{zaken2021bitfit}, where all the bias terms are updated; v) VPT \cite{jia2022visual}, where prompts are inserted into the transformer as input tokens and updated during fine-tuning. \textbf{Implementation Details.} For the FGVC datasets, we process the images with a random resized crop to $224 \times 224$ and a random horizontal flip for data augmentation. For VTAB-1k, we directly resize the images to $224 \times 224$, following the default settings in VTAB \cite{zhai2019large}. For CIFAR-100 and ImageNet-1K, we follow the fine-tuning setting of ViT-B/16 in \cite{dosovitskiy2020image}, where stronger data augmentation strategies are adopted. We employ the AdamW \cite{loshchilov2017decoupled} optimizer to fine-tune models for 100 epochs on CIFAR-100 and 30 epochs on ImageNet-1K. The cosine decay strategy is adopted for the learning rate schedule, and linear warm-up is used for the first 10 epochs on CIFAR-100 and 5 epochs on ImageNet-1K. \begin{table}[t] \centering \vspace{-0.3cm} \setlength{\tabcolsep}{4pt} \scalebox{0.98}{\begin{tabular}{c|c|c|c|c|c|c|c} \toprule \diagbox{Method}{Dataset}&\makecell[c]{~CUB-200~ \\ -2011} & ~NABirds~ & \makecell[c]{~Oxford~ \\ Flowers} & \makecell[c]{~Stanford~ \\ Dogs} & \makecell[c]{~Stanford~ \\ Cars} & ~~Mean~~ & \makecell[c]{~Params.
\\ (M)} \tabularnewline \midrule Full fine-tuning & 87.3 & 82.7 & 98.8 & 89.4 & \underline{84.5} & 88.54 & 85.98 \tabularnewline Linear probing & 85.3 & 75.9 & 97.9 & 86.2 & 51.3 & 79.32 & 0.18 \tabularnewline \midrule Adapter \cite{houlsby2019parameter} & 87.1 & \underline{84.3} & 98.5 & 89.8 & 68.6 & 85.67 & 0.41 \tabularnewline Bias \cite{zaken2021bitfit} & 88.4 & 84.2 & 98.8 & \textbf{91.2} & 79.4 & 88.41 & 0.28 \tabularnewline VPT-Shallow \cite{jia2022visual} & 86.7 & 78.8 & 98.4 & \underline{90.7} & 68.7 & 84.62 & 0.25 \tabularnewline VPT-Deep \cite{jia2022visual} & \underline{88.5} & 84.2 & \underline{99.0} & 90.2 & 83.6 & \underline{89.11} & 0.85 \tabularnewline \midrule SSF \textbf{(ours)} & \textbf{89.5} & \textbf{85.7}& \textbf{99.6} & 89.6 & \textbf{89.2} & \textbf{90.72} & 0.39 \tabularnewline \bottomrule \end{tabular} } \caption{Performance comparisons on five FGVC datasets with ViT-B/16 models pre-trained on ImageNet-21K.} \label{table: fgvc} \vspace{-0.5cm} \end{table} \begin{table}[t] \setlength\tabcolsep{2.8pt} \centering \scalebox{0.68}{\begin{tabular}{c|ccccccc|cccc|cccccccc|cc} \toprule & & \multicolumn{6}{c|}{\textbf{Natural}} & \multicolumn{4}{c|}{\textbf{Specialized}} & \multicolumn{8}{c|}{\textbf{Structured}} \\ \midrule \diagbox{Method}{Dataset} & \rotatebox{90}{CIFAR-100} & \rotatebox{90}{Caltech101} & \rotatebox{90}{DTD} & \rotatebox{90}{Flowers102} & \rotatebox{90}{Pets} & \rotatebox{90}{SVHN} & \rotatebox{90}{Sun397} & \rotatebox{90}{Patch Camelyon~} & \rotatebox{90}{EuroSAT} & \rotatebox{90}{Resisc45} & \rotatebox{90}{Retinopathy} & \rotatebox{90}{Clevr/count} & \rotatebox{90}{Clevr/distance} & \rotatebox{90}{DMLab} & \rotatebox{90}{KITTI/distance~} & \rotatebox{90}{dSprites/loc} & \rotatebox{90}{dSprites/ori} & \rotatebox{90}{SmallNORB/azi~} & \rotatebox{90}{SmallNORB/ele~} & \rotatebox{90}{Mean} & \rotatebox{90}{Params. 
(M)} \\ \midrule Full fine-tuning \cite{jia2022visual} & 68.9 & 87.7 & 64.3 & 97.2 & 86.9 & \underline{87.4} & 38.8 & 79.7 & \underline{95.7} & \underline{84.2} & 73.9 & 56.3 & 58.6 & 41.7 & 65.5 & 57.5 & 46.7 & 25.7 & 29.1 & 65.57 & 85.84 \\ Linear probing \cite{jia2022visual} & 63.4 & 85.0 & 63.2 & 97.0 & 86.3 & 36.6 & 51.0 & 78.5 & 87.5 & 68.6 & \underline{74.0} & 34.3 & 30.6 & 33.2 & 55.4 & 12.5 & 20.0 & 9.6 & 19.2 & 52.94 & 0.04 \\ \midrule Adapter \cite{houlsby2019parameter} & 74.1 & 86.1 & 63.2 & 97.7 & 87.0 & 34.6 & 50.8 & 76.3 & 88.0 & 73.1 & 70.5 & 45.7 & 37.4 & 31.2 & 53.2 & 30.3 & 25.4 & 13.8 & 22.1 & 55.82 & 0.27 \\ Bias \cite{zaken2021bitfit} & 72.8 & 87.0 & 59.2 & 97.5 & 85.3 & 59.9 & \underline{51.4} & 78.7& 91.6& 72.9& 69.8& 61.5 & 55.6&32.4 & 55.9& 66.6& 40.0& 15.7& 25.1 & 62.05 & 0.14 \\ VPT-Shallow \cite{jia2022visual} & \underline{77.7} & 86.9 & 62.6& 97.5& 87.3& 74.5& 51.2& 78.2& 92.0& 75.6& 72.9& 50.5& 58.6& 40.5& 67.1 & 68.7& 36.1& 20.2& 34.1 & 64.85 & 0.11 \\ VPT-Deep \cite{jia2022visual} & \textbf{78.8} & \underline{90.8} & \underline{65.8} & \underline{98.0} & \underline{88.3} & 78.1& 49.6& \underline{81.8}& \textbf{96.1}& 83.4& 68.4& \underline{68.5} & \underline{60.0} & \underline{46.5} & \underline{72.8} & \underline{73.6} & \underline{47.9} & \textbf{32.9} & \underline{37.8} & \underline{69.43} & 0.60 \\ \midrule SSF \textbf{(ours)} & 69.0 & \textbf{92.6} & \textbf{75.1} & \textbf{99.4} & \textbf{91.8} & \textbf{90.2} & \textbf{52.9} & \textbf{87.4} & \underline{95.9} & \textbf{87.4} & \textbf{75.5} & \textbf{75.9} &\textbf{62.3} & \textbf{53.3} & \textbf{80.6} & \textbf{77.3} & \textbf{54.9} & \underline{29.5} & \textbf{37.9} & \textbf{73.10} & 0.24\\ \bottomrule \end{tabular} } \caption{Performance comparisons on the VTAB-1k benchmark with ViT-B/16 models pre-trained on ImageNet-21K.} \label{table: vtab} \vspace{-0.8cm} \end{table} \subsection{Performance Comparisons on Image Classification}\label{sec: 4.2} \vspace{-0.2cm} We compare the performance of our SSF and other baseline methods on 26 image classification tasks. The results on FGVC and VTAB-1k are shown in Table \ref{table: fgvc} and Table \ref{table: vtab} (also see Figure \ref{fig: intro_acc}), respectively, and the results on CIFAR-100 and ImageNet-1K are shown in Table \ref{table: imagenet}; all results are evaluated in Top-1 accuracy (\%). In these three tables, bold indicates the best accuracy among all methods and underline the second best. We make the following observations: i) In Table \ref{table: fgvc} and Table \ref{table: vtab}, where the last column is the average number of fine-tuned parameters for each method on the corresponding datasets, our SSF outperforms VPT \cite{jia2022visual} and other parameter-efficient fine-tuning methods, and even achieves better performance than full fine-tuning, mainly owing to the linear transformation applied to the features. Specifically, SSF obtains 1.81\% (90.72\% \emph{vs}. 89.11\%) and 2.46\% (90.72\% \emph{vs}. 88.54\%) accuracy improvements on the five FGVC datasets, and 5.29\% (73.10\% \emph{vs}. 69.43\%) and 11.48\% (73.10\% \emph{vs}. 65.57\%) improvements on the VTAB-1k benchmark, compared to VPT and full fine-tuning, respectively. Meanwhile, SSF also uses fewer trainable parameters than VPT-Deep on both benchmarks (0.39M \emph{vs}. 0.85M, 0.24M \emph{vs}. 0.60M).
Moreover, SSF maintains a unified learnable parameter space for different tasks with few parameters, while VPT \cite{jia2022visual} needs to choose a different number of prompts for each task, which also shows the conciseness of our approach; ii) In Table \ref{table: imagenet}, \emph{i.e.}, on CIFAR-100 and ImageNet-1K, SSF and other parameter-efficient fine-tuning methods have difficulty achieving performance similar to full fine-tuning, probably because these datasets have sufficient data to prevent over-fitting of the model, especially ImageNet-1K. In contrast, in the VTAB-1k benchmark the amount of data is small (\emph{e.g.}, only 1,000 training samples per task), which might cause over-fitting under full fine-tuning. Nevertheless, on CIFAR-100 and ImageNet-1K our SSF still outperforms previous parameter-efficient fine-tuning methods (Adapter, Bias, and VPT), which shows the effectiveness of our method; iii) In Table \ref{table: imagenet}, the results of our SSF with Swin Transformer, ConvNeXt, and AS-MLP models consistently outperform those of other parameter-efficient fine-tuning methods, which also verifies the effectiveness of SSF on a wide variety of models. \textbf{Computational cost.} To validate the efficiency of our method, we show the computational cost of SSF in Figure \ref{fig: computational cost}. We employ a batch size of 16 for both the training and inference stages, and use mixed-precision training. All running results in Figure \ref{fig: computational cost} are measured on a single GeForce RTX 2080Ti GPU. We can see that SSF has training time and training memory similar to VPT, but lower inference time and inference memory. Here, we show the computational cost of VPT with 200/50 prompts (the numbers of prompts used to obtain the performance in Table \ref{table: imagenet}) for VPT-Shallow and VPT-Deep, respectively. As the number of prompts increases, the time and memory costs grow, whereas our SSF achieves zero-overhead inference, which is more advantageous. \begin{table}[t] \centering \setlength{\tabcolsep}{2.8pt} \scalebox{0.81}{\begin{tabular}{c|cc|cc|cc|cc|cc|cc|cc} \toprule Model & \multicolumn{4}{c|}{ViT-B/16 \cite{dosovitskiy2020image}} & \multicolumn{4}{c|}{Swin-B \cite{liu2021swin}} & \multicolumn{4}{c|}{ConvNeXt-B \cite{liu2022convnet}} & \multicolumn{2}{c}{AS-MLP-B \cite{Lian_2021_ASMLP}} \\ \midrule \diagbox{Method}{Dataset} & \rotatebox{90}{CIFAR-100~} & \rotatebox{90}{Params. (M)~} & \rotatebox{90}{ImageNet-1K~} & \rotatebox{90}{Params. (M)~} & \rotatebox{90}{CIFAR-100~} & \rotatebox{90}{Params. (M)~} & \rotatebox{90}{ImageNet-1K~} & \rotatebox{90}{Params. (M)~} & \rotatebox{90}{CIFAR-100~} & \rotatebox{90}{Params. (M)~} & \rotatebox{90}{ImageNet-1K~} & \rotatebox{90}{Params. (M)~} & \rotatebox{90}{CIFAR-100~} & \rotatebox{90}{Params.
(M)~} \\ \midrule Full fine-tuning & \underline{93.82} & 85.88 & \textbf{83.58} & 86.57 & \textbf{93.85} & 86.85 & \textbf{85.20} & 88.03 & \textbf{94.14} & 87.67 & \textbf{85.80} & 88.85 & \textbf{89.96} & 86.83 \\ Linear probing & 88.70 & 0.08 & 82.04 & 0.77 & 89.27 & 0.10 & 83.25 & 1.03 & 89.20 & 0.10 & 84.05 & 1.03 & 79.04 & 0.10 \\ \midrule Adapter \cite{houlsby2019parameter} & 93.34 & 0.31 & 82.72 & 1.00 & 92.49 & 0.33 & 83.82 & 1.26 & 92.86 & 0.45 & 84.49 & 1.37 & 88.01 &0.33 \\ Bias \cite{zaken2021bitfit} & 93.39 & 0.18 & 82.74 & 0.87 & 92.19 & 0.24 & 83.92 & 1.16 & 92.80 & 0.23 & 84.63 & 1.16 & 87.46 & 0.26 \\ VPT-Shallow \cite{jia2022visual} & 90.38 & 0.23 & 82.08 & 0.92 & 90.02 & 0.13 & 83.29 & 1.05 & - & - & - & - & - & - \\ VPT-Deep \cite{jia2022visual} & 93.17 & 0.54 & 82.45 & 1.23 & 92.62 & 0.70 & 83.44 & 1.63 & - & - & - & - & -& -\\ \midrule SSF \textbf{(ours)} &\textbf{93.99} & 0.28 & \underline{83.10} & 0.97 & \underline{93.06} & 0.37 & \underline{84.40} & 1.29 & \underline{93.45} & 0.36 & \underline{84.85} & 1.28& \underline{88.28} & 0.37\\ \bottomrule \end{tabular} } \caption{Performance comparisons on CIFAR-100 and ImageNet-1K with various model families, where ViT-B/16, Swin-B, and ConvNeXt-B are pre-trained on ImageNet-21K, and AS-MLP-B is pre-trained on ImageNet-1K.} \vspace{-0.2cm} \label{table: imagenet} \vspace{-0.5cm} \end{table} \begin{figure*}[t] \centering \begin{minipage}[b]{1\textwidth} \includegraphics[width=1\linewidth]{figures/time_memory.pdf} \end{minipage} \caption{Computational cost of different tuning methods. From left to right: training time, training memory, test time, and test memory.} \label{fig: computational cost} \vspace{-0.5cm} \end{figure*} \vspace{-0.2cm} \subsection{The Impacts of Different Designs}\label{sec: 4.3} \vspace{-0.2cm} Since SSF-ADA is the core operation of SSF, we thoroughly evaluate how it affects the results, \emph{e.g.}, its insertion locations, its initialization, and its components. We conduct experiments to analyze the impacts of these different designs on fine-tuning. All experiments are implemented with pre-trained ViT-B/16 models on CIFAR-100, and the results are shown in Table \ref{table: ablation}. \textbf{The impact of the number of layers.} We insert SSF-ADA into different numbers of layers to evaluate the effect of the inserted layers, and the results are shown in Table \ref{table: layer}. The values in the \#layers column indicate the number of layers equipped with SSF-ADA, where \#layers-0 represents linear probing. From the first and second rows, we find that the accuracy improves from 88.70\% to 92.69\% at the cost of only a small increase in trainable parameters (0.08M \emph{vs.} 0.11M) when inserting SSF-ADA into only the first two layers. Continuing to add SSF-ADA in the subsequent layers improves the results further, and the growth is almost linear in the number of layers with inserted SSF-ADA. Therefore, we choose to insert SSF-ADA into all (12) layers of the vision transformer, which brings the best result (93.99\%) with 0.28M trainable parameters. \textbf{The impact of the different insertion locations.} Based on the different operations of ViT, we evaluate the impact of the insertion locations of SSF-ADA. We separately remove SSF-ADA after each of these operations, and the results are shown in Table \ref{table: location}. We find that removing the SSF-ADA modules in the MLP operation yields worse results than removing those in the attention operation (93.46\% \emph{vs}. 93.69\%) with comparable trainable parameters (0.19M \emph{vs.} 0.21M), which suggests that feature modulation for the MLP operation might be more important. Although one can use NAS to search for the importance of different operations and thereby insert SSF-ADA in specific locations, the results might not be better than inserting SSF-ADA after all operations. Therefore, in order to obtain excellent performance, we do not perform NAS but directly insert SSF-ADA after all operations. \textbf{The impact of initialization.} We also investigate how different ways of initializing the scale and shift factors affect performance in Table \ref{table: init}. In our experiments, we first randomly initialize both scale and shift parameters with a mean value of zero, but find that the performance is inferior (90.11\%) and that training fails to converge in some experiments. After that, we randomly initialize the scale factor with a mean value of one and find better performance, which implies that the weights of a pre-trained model should not be completely disrupted during fine-tuning; instead, we should start from the pre-trained model when optimizing. Experiments show that using the normal initialization achieves the best performance, where the mean values of the scale and shift factors are one and zero, respectively. \textbf{The impact of different components.} We also evaluate the impacts of the different components in SSF-ADA, and the results are shown in Table \ref{table: case}. We find that removing the scale term yields worse performance than removing the shift term with the same number of trainable parameters, which shows that the scale term might be more important than the shift term. Also, note that the difference between `w/o. scale' and the `Bias' method in Table \ref{table: imagenet} is that we fine-tune the model with an additional shift term in `w/o. scale', while `Bias' fine-tunes the model based on the original biases, suggesting that fine-tuning the model in a residual-like manner can obtain slightly better performance (93.49\% \emph{vs}. 93.39\%). We also try fine-tuning only the scale and shift factors in the normalization layers (LN), or fine-tuning the model with SSF but with the scale term set to a scalar. These variants yield performance inferior to SSF (93.26\% \emph{vs}. 93.99\%, 93.59\% \emph{vs}. 93.99\%), but could be considered as alternatives since they use only about half of the trainable parameters of SSF. \begin{table}[t] \centering \subfloat[\label{table: layer}]{ \setlength{\tabcolsep}{2pt} \scalebox{0.865}{ \begin{tabular}{c|cc} \toprule \#layers & Acc. & Params. \\ \midrule 0 & 88.70 & 0.08 \\ 2 & 92.69 & 0.11\\ 4 & 93.30 & 0.15\\ 8 & 93.60 & 0.22\\ 12 (ours) & \textbf{93.99} & 0.28\\ \bottomrule \end{tabular}} } \subfloat[\label{table: location}]{ \setlength{\tabcolsep}{2pt} \scalebox{0.865}{ \begin{tabular}{c|cc} \toprule location & Acc. & Params. \\ \midrule w/o. mlp & 93.46 & 0.19 \\ w/o. attn & 93.69 & 0.21 \\ w/o. embed & 93.91 &0.28 \\ w/o. norm & 93.80 & 0.25 \\ ours & \textbf{93.99} & 0.28\\ \bottomrule \end{tabular}} } \subfloat[\label{table: init}]{ \setlength{\tabcolsep}{2pt} \scalebox{0.865}{ \begin{tabular}{c|c} \toprule initialization & Acc. \\ \midrule random & 90.11 \\ constant & 93.91 \\ uniform & 93.87 \\ trunc\_normal & 93.93 \\ normal (ours) & \textbf{93.99} \\ \bottomrule \end{tabular}} } \subfloat[\label{table: case}]{ \setlength{\tabcolsep}{2pt} \scalebox{0.865}{ \begin{tabular}{c|cc} \toprule case & Acc. & Params. \\ \midrule w/o.
scale & 93.49 & 0.18 \\ w/o. shift & 93.74 & 0.18 \\ only norm & 93.26 & 0.11 \\ scalar scale & 93.59 & 0.18 \\ ours & \textbf{93.99} & 0.28 \\ \bottomrule \end{tabular}} } \caption{The impacts of different designs. (a) The impact of the number of layers with SSF-ADA. (b) The impacts of the different insertion locations of SSF-ADA. (c) The impacts of initialization. (d) The impacts of different components. Acc.: Top-1 accuracy (\%); Params.: parameters (M).} \label{table: ablation} \vspace{-0.9cm} \end{table} \begin{wraptable}{r}{7.8cm} \vspace{-0.3cm} \centering \small \setlength{\tabcolsep}{7pt} \scalebox{0.78}{\begin{tabular}{c|c|c|c|c} \toprule \diagbox{Method}{Dataset}& IN-1K ($\uparrow$) & IN-A ($\uparrow$) & IN-R ($\uparrow$) & IN-C ($\downarrow$) \tabularnewline \midrule Full fine-tuning & \textbf{83.58} & 34.49 & 51.29 & 46.47 \tabularnewline Linear probing & 82.04 & 33.91 & 52.87 & 46.91 \tabularnewline \midrule Adapter \cite{houlsby2019parameter} & 82.72 & \underline{42.21} & 54.13 & 42.65 \tabularnewline Bias \cite{zaken2021bitfit} & 82.74 & 42.12 & \underline{55.94} & \underline{41.90} \tabularnewline VPT-Shallow \cite{jia2022visual} & 82.08 & 30.93 & 53.72 & 46.88 \tabularnewline VPT-Deep \cite{jia2022visual} & 82.45 & 39.10 & 53.54 & 43.10 \tabularnewline \midrule SSF \textbf{(ours)} & \underline{83.10} & \textbf{45.88} & \textbf{56.77} & \textbf{41.47} \tabularnewline \bottomrule \end{tabular} } \caption{Performance comparisons on robustness and out-of-distribution datasets. `IN' means ImageNet. The performance on IN-1K, IN-A and IN-R is evaluated in Top-1 accuracy (\%). The performance on IN-C is evaluated in mCE (mean corruption error). The lower ($\downarrow$), the better.} \label{table: robustness} \vspace{-0.5cm} \end{wraptable} \vspace{-0.2cm} \subsection{Performance Comparisons on Robustness and OOD Datasets}\label{sec: 4.4} \vspace{-0.2cm} We also conduct experiments to analyze the robustness and out-of-distribution (OOD) generalization of our SSF method on the following datasets: ImageNet-A, ImageNet-R and ImageNet-C. Please refer to Appendix \ref{appendix: robustness} for their details. We perform the robustness and OOD evaluation on these three datasets with the models fine-tuned on ImageNet-1K. All experimental results are listed in Table \ref{table: robustness}. From this table, we can see that our SSF obtains better performance than VPT and other parameter-efficient fine-tuning methods on all three datasets, which shows that our fine-tuning method has stronger robustness and out-of-distribution generalization. Furthermore, although SSF has lower accuracy than full fine-tuning on ImageNet-1K, its performance on ImageNet-A, ImageNet-R and ImageNet-C is better, which also shows that performance on ImageNet-1K and on ImageNet-A/R/C is not perfectly positively correlated. Such improvements on robustness and OOD datasets might come from the fact that SSF freezes most of the pre-trained parameters, which maximally preserves the knowledge learned from the large-scale dataset and thus maintains better generalization ability. \begin{figure*}[t] \centering \subfloat[Pre-trained model \emph{vs}.
Full fine-tuned model.]{ \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=1\linewidth]{figures/weight_distribution_full.pdf} \end{minipage} \label{fig: weight_distribution_full} } \caption{Comparisons of parameter distribution between the original pre-trained model and different fine-tuning methods. The first row shows weight distributions and the second row bias distributions. The blue histograms show the original pre-trained model, and the orange ones show the fine-tuned model via SSF in (a) and the fully fine-tuned model in (b).} \label{fig: visualization} \vspace{-0.5cm} \end{figure*} \vspace{-0.2cm} \subsection{Visualization and Analysis} \vspace{-0.2cm} Although our goal is to modulate the features extracted by a pre-trained model, the scale and shift parameters are in fact input-independent. Therefore, these parameters can also be regarded as encoding information about the whole downstream dataset. After re-parameterization, these scale and shift parameters are absorbed into the original model weights. To better understand the information learned by SSF, we visualize the distributions of weights and biases before and after fine-tuning via SSF in Figure \ref{fig: weight_distribution_ours}. We can see that the scale and shift parameters adjust the original weights and biases and change their distribution to fit the downstream task. \begin{wrapfigure}[19]{r}{0.35\textwidth} \vspace{-0.2cm} \centering \includegraphics[width=0.35\textwidth]{figures/similarity.pdf} \caption{The visualization of the feature similarities between full fine-tuning and each of linear probing, VPT-Deep, and SSF, in different layers of ViT-B/16. } \label{fig: similarity} \vspace{-0.5cm} \end{wrapfigure} As a comparison, we also visualize the original weight distribution and the weight distribution after full fine-tuning in Figure \ref{fig: weight_distribution_full}, from which we find an interesting phenomenon: full fine-tuning does not change the distribution of weights and biases much; probably only a small portion of the values is changed. It is worth noting that although SSF does not match the weight distribution of full fine-tuning, it achieves better performance (93.99\% \emph{vs}. 93.82\% in Table \ref{table: imagenet}) on CIFAR-100. To further investigate why SSF can achieve superior performance, beyond the weight distribution we also visualize the feature similarities between full fine-tuning and each of linear probing, VPT-Deep, and SSF, as shown in Figure \ref{fig: similarity}. In the last layer, SSF has the features most similar to those of full fine-tuning, and its accuracy is also the closest. This shows that even if the weight distribution learned by SSF differs from that of full fine-tuning, SSF is still able to extract the features of the downstream-task images very well, which validates the effectiveness of our method. \vspace{-0.2cm} \section{Conclusion} \vspace{-0.2cm} In this paper, we focus on parameter-efficient fine-tuning and propose the SSF method to scale and shift the features extracted by a pre-trained model. The intuition behind our method comes from alleviating the distribution mismatch between upstream and downstream tasks by modulating deep features. SSF surprisingly outperforms other parameter-efficient fine-tuning approaches with a small number of learnable parameters.
Besides, the scale and shift parameters introduced during fine-tuning can be merged into the original pre-trained weights via re-parameterization in the inference phase, thereby avoiding extra parameters and FLOPs. With the proposed SSF method, our model obtains 2.46\% (90.72\% \emph{vs.} 88.54\%) and 11.48\% (73.10\% \emph{vs}. 65.57\%) performance improvements on FGVC and VTAB-1k, respectively, in terms of Top-1 accuracy compared to full fine-tuning, while fine-tuning only about 0.3M parameters. Experiments on 26 image classification datasets in total and 3 robustness~\&~out-of-distribution datasets with various model families (CNNs, Transformers, and MLPs) show the effectiveness of SSF, which establishes a new baseline. \vspace{-0.3cm} \section*{Acknowledgement} \vspace{-0.3cm} The authors acknowledge the support from the Singapore National Research Foundation (“CogniVision – Energy-autonomous always-on cognitive and attentive cameras for distributed real-time vision with milliwatt power consumption” grant NRF-CRP20-2017-0003) – \url{www.green-ic.org/CogniVision}. Xinchao Wang is the corresponding author. \clearpage { \bibliographystyle{ieee_fullname}
\section{Introduction} Let $\mathscr{P}$ be the set of primes represented by the quadratic polynomial $x^2+y^2+1$. We consider the Goldbach problem for the set $\mathscr{P}$, our main result being the following. \begin{theorem}\label{theo_goldbach} Almost all even positive integers $n\not \equiv 5,8\hspace{-0.1cm} \pmod{9}$ can be represented as $n=p+q$ with $p,q \in \mathscr{P}$. \end{theorem} By ``almost all'' we mean that the number of exceptional $n\leq N$ is $o(N)$. The local condition $n\not\equiv 5,8\hspace{-0.1cm} \pmod{9}$ is necessary (unless $p$ or $q$ equals $3$, in which case we can only represent $o(N)$ integers), as is easily seen by considering primes of the form $x^2+y^2+1$ modulo $9$: since squares are $\equiv 0,1,4,7\hspace{-0.1cm} \pmod 9$, every $p\in \mathscr{P}\setminus\{3\}$ satisfies $p\equiv 1,2,5,8\hspace{-0.1cm} \pmod 9$, and no two of these residues sum to $5$ or $8$ modulo $9$. An earlier result of Matom\"aki \cite{matomaki-goldbach}, using a somewhat different method, showed that one of the primes $p$ and $q$ can be taken to be from $\mathscr{P}$, the other one being a generic prime. A few years later, Tolev \cite{tolev_binarygoldbach} gave an asymptotic formula for a weighted count of the representations $n=p+q$ with $p\in \mathscr{P}$ and $q$ a generic prime for almost all even $n$. Naturally, there is a close connection between the almost-all version of the binary Goldbach problem and the ternary Goldbach problem, so we can also solve the ternary problem for the primes $x^2+y^2+1$. \begin{theorem} \label{theo_ternary} All large enough odd positive integers $n$ can be represented as $n=p+q+r$ with $p,q, r \in \mathscr{P}$. \end{theorem} We remark that Tolev \cite{tolev_ternarygoldbach} established an asymptotic formula for the weighted count of the representations of $n$ as $n=p+q+r$ with $p,q\in \mathscr{P}$ but $r$ a generic prime. The proof of Theorem \ref{theo_ternary} is very similar to that of Theorem \ref{theo_goldbach}, and is remarked on in Section \ref{Sec: transference}.\\ As a byproduct of the method for proving Theorem \ref{theo_goldbach}, we will obtain an analog of Roth's theorem for the set of primes of the form $x^2+y^2+1$, so that in particular the set $\mathscr{P}$ contains infinitely many three-term arithmetic progressions. \begin{theorem} \label{theo_roth} Any subset of $\mathscr{P}^{*}=\{x^2+y^2+1:\,\, x,y\,\, \textnormal{coprime}\}\cap \mathbb{P}$ having a positive upper density with respect to $\mathscr{P}^{*}$ contains infinitely many non-trivial three-term arithmetic progressions. \end{theorem} We will also conclude from the proof of Theorem \ref{theo_goldbach} that for any irrational $\xi$, there is some uniformity in the distribution of the fractional parts of the numbers $\xi p$ with $p\in \mathscr{P}$. \begin{theorem}\label{theo_alphap} Let $\xi$ be irrational and $\kappa\in \mathbb{R}$. Then there are infinitely many primes $p\in \mathscr{P}$ such that $\|\xi p+\kappa\|\leq p^{-\theta}$, where $\theta=\frac{1}{80}-\varepsilon=0.0125-\varepsilon$ and $\varepsilon>0$ is arbitrary. Here $\|\cdot\|$ stands for the distance to the nearest integer. \end{theorem} Theorems \ref{theo_roth} and \ref{theo_alphap} are proved in Sections \ref{Sec: restriction} and \ref{Sec: fractional parts}, respectively. In Theorem \ref{theo_alphap}, we have not pursued maximizing the value of $\theta$, and the main message is that $\theta$ can be taken to be positive.\\ It should be remarked that the distribution of $\xi p \hspace{-0.1cm}\pmod 1$ has been studied also for some other subsets of the primes, such as for Chen primes \cite{matomaki-bombieri}, \cite{shi} and very recently for Gaussian primes \cite{baier} and Piatetski-Shapiro primes \cite{guo}.
In the case of Chen primes the analog of Theorem \ref{theo_alphap} with $\theta>0$ was obtained in \cite{matomaki-bombieri} (and improved in \cite{shi} to $\theta=\frac{3}{200}=0.015$).\\ The proof of Theorem \ref{theo_goldbach} is based on a recent paper of Matom\"aki and Shao \cite{matomaki-shao}, where a transference type theorem for additive problems of Goldbach type was established, allowing one to deduce from certain desirable properties of a set $A$ the conclusion that $A+A+A$ contains all large enough integers. One should mention that a closely related transference principle for translation invariant additive problems was famously introduced by Green \cite{green-annals} and Green-Tao \cite{green-restriction}, \cite{green-tao} to find arithmetic progressions in the primes, their principle stating that a set $A$ with certain desirable properties contains infinitely many $3$-term arithmetic progressions (or $k$-term arithmetic progressions if one assumes stronger conditions). The hypotheses of the transference type result for Goldbach type equations \cite[Theorem 2.3]{matomaki-shao} resemble those of the transference principle for translation invariant equations \cite[Proposition 5.1]{green-restriction}, but include an additional assumption. An additional assumption is evidently needed, since for example the primes $p$ satisfying $\|\sqrt{2}p\|<\frac{1}{100}$ contain a lot of arithmetic progressions, but most odd integers are not the sum of three such primes.\\ The first property required from a set $A$ in the transference type result of \cite{matomaki-shao} is ``well-distribution'' in \textit{Bohr sets}, meaning that for $\xi,\kappa\in \mathbb{R}$ and $\eta>0$ the sets $\{n:\, \|\xi n+\kappa\|\leq \eta\}$ and their intersections contain a fair proportion of the elements of $A$. The second property, which is present in \cite{green-restriction} as well, is that $A$ is ``Fourier bounded'', in the sense that the Fourier transform $\widehat{1_A}$ is small in $\ell^r$ norm for $r>2$. The last and simplest condition to check is that there should be a lower bound of the correct order of magnitude for the number of elements in $A$ up to $N$. In \cite{matomaki-shao}, the transference type result was applied to solve the ternary Goldbach problem with three Chen primes or with three primes $p$ such that $[p,p+C]$ contains at least two primes for some large constant $C$.\\ We employ a variant of the transference type result of \cite{matomaki-shao} in this paper, the conditions for the principle being nearly identical, but with the conclusion that $A+A$ contains almost all positive integers (in the sense that there are $o(N)$ integers $n\leq N$ not representable in this form). This modification is easy to implement, so the main part of our proof is devoted to verifying the conditions involved in the transference type result in the context of the set $\mathscr{P}$. The lower bound condition follows essentially from earlier work, so we are mostly concerned with establishing the other two requirements.\\ The Fourier boundedness requirement follows from the restriction theory of the primes, in the form developed by Green and Tao in \cite{green-restriction}. However, the ``enveloping sieve'' $\beta(n)$ (which is a pseudorandom majorant of a subset of the primes and enjoys certain pleasant Fourier properties) has to be modified.
It turns out that the necessary modification is available in a paper of Ramaré and Ruzsa \cite{ramare}, where the enveloping sieve was developed for purposes related to additive bases, and actually the results in that paper imply that $\mathscr{P}$ is an additive basis of finite (but large and unspecified) order.\\ Proving the well-distribution of the set $\mathscr{P}$ in Bohr sets requires more work and occupies the majority of this paper. We use a strategy similar to the one that was used in \cite{matomaki-shao} to deal with Chen primes or with primes $p$ with $[p,p+C]$ containing two primes for some large constant $C$, but we must use a different sieve to detect primes of the form $x^2+y^2+1$. The sieve suitable for this purpose is a combination of the linear sieve and the semilinear sieve (also called the half-dimensional sieve), developed by Iwaniec in \cite{iwaniec-semilinear} and used by him in \cite{iwaniec-quadraticform} to prove that the number of primes in $\mathscr{P}$ up to $N$ is $\gg N(\log N)^{-\frac{3}{2}}$ (the infinitude of the primes in $\mathscr{P}$ was established earlier by Linnik \cite{linnik} in 1960, using his dispersion method). An upper bound for $|\mathscr{P}\cap [1,N]|$ of the same order of magnitude follows from the Selberg sieve, so $\mathscr{P}$ is a sparse set of primes.\\ When it comes to the sieve-theoretic part of the argument, we proceed along the lines of \cite{matomaki-m2+n2+1} and \cite{wu}, which consider the problem of finding primes from $\mathscr{P}$ in short intervals. However, unlike in these works, one cannot apply the Bombieri-Vinogradov theorem for the prime counting function, but one has to resort to a Bombieri-Vinogradov type result for exponential sums $\sum_{n\leq N}\Lambda(n)e(\alpha n)$ over primes. Such average results for exponential sums appeared for instance in \cite{tolev_bombieri}, \cite{mikawa-bombieri}, \cite{matomaki-bombieri}, but the level of distribution achieved in these works when the weight sequence is not well-factorable (in the sense defined in \cite[Chapter 12]{friedlander}) is $\frac{1}{3}-\varepsilon$, which is not good enough for our purposes. We derive a combinatorial factorization for the semilinear sieve weights and apply \cite[Lemma 8.4]{matomaki-shao} (closely related to the estimates in \cite{mikawa-bombieri}) on Bombieri-Vinogradov type averages for $\sum_{n\leq N}\Lambda(n)e(\alpha n)$ to increase the level of distribution sufficiently and hence obtain Theorem \ref{theo_goldbach}. In particular, the results of Sections \ref{Sec: Bombieri}, \ref{Sec: sieveweight} and \ref{Sec: hypotheses} imply the following Bombieri-Vinogradov type bound. \begin{theorem}\label{theo_sievebombieri} Let $N\geq 1$ be large and $\varepsilon>0$, $C\geq 10$ fixed, and let $\lambda_d^{+,\textnormal{SEM}}$ and $\lambda_d^{-,\textnormal{SEM}}$ be the upper and lower bound semilinear sieve weights defined by restricting the M\"obius function $\mu(d)$ to the sets \begin{align*} \mathcal{D}^{+,\textnormal{SEM}}&=\{p_1\cdots p_r\leq N^{\rho_{+}}:\,\, z_{+}\geq p_1> \ldots > p_r,\,\,p_1\cdots p_{2k-2}p_{2k-1}^2\leq N^{\rho_{+}}\,\, \textnormal{for all}\,\, k\geq 1\},\\ \mathcal{D}^{-,\textnormal{SEM}}&=\{p_1\cdots p_r\leq N^{\rho_{-}}:\,\, z_{-}\geq p_1> \ldots > p_r,\,\,p_1\cdots p_{2k-1}p_{2k}^2\leq N^{\rho_{-}}\,\, \textnormal{for all}\,\, k\geq 1\}, \end{align*} with the choices $\rho_{+}=\frac{2}{5}-\varepsilon$, $\rho_{-}=\frac{3}{7}-\varepsilon$, $z_{+}\leq N^{\frac{1}{2}}$ and $z_{-}\leq N^{\frac{1}{3}-\varepsilon}$.
Let $\alpha$ be a real number with $|\alpha-\frac{a}{q}|\leq \frac{1}{q^2}$ for some coprime integers $a$ and $q$ with $q\in [(\log N)^{1000C},N(\log N)^{-1000C}]$. Then for any integer $b\neq 0$ we have (choosing either $+$ or $-$ sign throughout) \begin{align*} \sum_{\substack{d\leq N^{\rho_{\pm}}\\(d,b)=1}}\bigg|\lambda_{d}^{\pm,\textnormal{SEM}}\sum_{\substack{n\sim N\\n\equiv b \pmod{d}}}\Lambda(n)e(\alpha n)\bigg|\ll \frac{N}{(\log N)^{C}}. \end{align*} \end{theorem} We remark that the arguments of this paper would easily generalize to primes of the form $x^2+y^2+a$, where $a\neq 0$ is any integer. We also note that since for all the primes of the form $x^2+y^2+1$ appearing in the rest of the paper the only possible common prime factors of $x$ and $y$ are $2$ and $3$, Theorem \ref{theo_goldbach} could be stated in the form that almost all even $n\not \equiv 5,8\pmod 9$ are representable as $n=p+q$ with $p$ and $q$ primes and neither $p-1$ nor $q-1$ having any prime factors greater than $3$ that are $\equiv -1\pmod{4}$. One should also mention that we did not get an asymptotic formula for the number of representations of $n$ as sums of two or three primes from $\mathscr{P}$ (unlike in the work of Tolev \cite{tolev_binarygoldbach}, \cite{tolev_ternarygoldbach} on related problems), nor did we show that the number of exceptional $n$ in Theorem \ref{theo_goldbach} is $\ll \frac{N}{(\log N)^{A}}$ instead of merely $o(N)$. We can nevertheless get a lower bound of $c n(\log n)^{-3}$ for the number of representations in Theorem \ref{theo_goldbach} for almost all $n$ for some small $c>0$, and this is the correct order of magnitude. \subsection{Structure of the proofs} We give a brief outline of the dependencies between different theorems and propositions. The proof of Theorem \ref{theo_goldbach} is deduced from the transference type theorem (Proposition \ref{prop_transference}) in Section \ref{Sec: 3}, provided that the two key conditions in the transference type theorem are satisfied. One condition is the well-distribution of the set $\mathscr{P}$ in Bohr sets and the other one is a Fourier uniformity result for $\mathscr{P}$ (Propositions \ref{prop_bohr} and \ref{prop_restriction}, respectively). The proof of Proposition \ref{prop_restriction} is presented in Section \ref{Sec: restriction}, and in Section \ref{Sec: 3} it is shown that Propositions \ref{prop_bohr} and \ref{prop_restriction} immediately imply Theorem \ref{theo_roth}.\\ The largest part of the paper is then devoted to proving Proposition \ref{prop_bohr} using sieve theory. The purpose of Section \ref{Sec: reductions} is to show that Proposition \ref{prop_bohr} follows from Proposition \ref{prop2}, which involves more notation but is easier to approach. In Section \ref{Sec: weighted}, a weighted sieve for finding primes of the form $x^2+y^2+1$ is presented, in the form of Theorem \ref{t2}. Section \ref{Sec: decomposition} constructs the weighted sequence $(\omega_n)$ to which Theorem \ref{t2} is applied, as well as sets up the circle method. Section \ref{Sec: hypotheses} is then devoted to proving Hypothesis \ref{h1} for $(\omega_n)$, since this hypothesis is the requirement for applying Theorem \ref{t2}. Section \ref{Sec: hypotheses}, which finishes the proofs of Theorems \ref{theo_goldbach} and \ref{theo_sievebombieri}, involves bounding Bombieri-Vinogradov sums related to either semilinear or linear sieve coefficients and weighted by additive characters that lie either on minor or major arcs. 
The type I and II input required in Section \ref{Sec: hypotheses} comes from Section \ref{Sec: Bombieri}, while the required combinatorial input comes from Section \ref{Sec: sieveweight}. As Remark \ref{rmk1} notes, the only difference in the proofs of Theorems \ref{theo_ternary} and \ref{theo_goldbach} is the form of the transference type result being used. Finally, when it comes to proving Theorem \ref{theo_alphap}, one needs the sections from Section \ref{Sec: weighted} onwards, the last of which, Section \ref{Sec: fractional parts}, is required only for this purpose. We also remark that Sections \ref{Sec: transference}, \ref{Sec: restriction}, \ref{Sec: reductions}, \ref{Sec: weighted}, \ref{Sec: Bombieri} and \ref{Sec: sieveweight} do not depend on one another. \subsection{Notation} The symbols $j,k,\ell,m, n$ and $q$ always denote integers, and $p$ is a prime number. We denote by $e(\alpha)=e^{2\pi i \alpha}$ the complex exponential, by $\text{Li}(x)=\int_{2}^{x}\frac{dt}{\log t}$ the logarithmic integral, and by $\pi(x;q,a)$ the number of primes up to $x$ in the residue class $a\hspace{-0.1cm} \pmod q$. We denote by $\|\cdot\|$ the distance to the nearest integer function, by $(\cdot, \cdot)$ the greatest common divisor and by $[\cdot,\cdot]$ the least common multiple. We denote by $\mathbb{Z}_q$ the set of integers $\pmod{q}$, sometimes interpreting functions defined on this set as $q$-periodic functions on $\mathbb{Z}$ and vice versa. The expression $m^{-1} \pmod{q}$ stands for the inverse of $m$ in $\mathbb{Z}_{q}$.\\ Starting from Section \ref{Sec: 3}, there are various symbols that are reserved for a specific meaning. The integer $\mathcal{C}$ is given by \eqref{eq13}, the function $s(n)$ by \eqref{eq60}, the set $\mathcal{S}$ by \eqref{eq47}, the integer $b$ by Definition \ref{def1}, the numbers $U, J$ and $W$ by \eqref{eq30}, the set $\mathcal{Q}$ by \eqref{eq21}, the product $\mathfrak{S}(L)$ by Definition \ref{def3}, the function $g(\ell)$ by Definition \ref{def2}, and lastly the parameter $Q$ by Lemma \ref{le11}. When it comes to sieve-theoretic notation, $\lambda_d$ are sieve weights and, for a set $\mathcal{A}$ of integers and $\mathcal{P}$ of primes, $S(\mathcal{A},\mathcal{P},z)$ counts the elements of $\mathcal{A}$ that are coprime to all the primes in $\mathcal{P}\cap [2,z)$, with each integer $n$ weighted by $\omega_n\geq 0$, where $(\omega_n)$ will be clear from context. The arithmetic functions $\Lambda(n)$, $\mu(n)$ and $\varphi(n)$ are the von Mangoldt, M\"obius and Euler functions, as usual, and the functions $\tau(n)$ and $\nu(n)$ count the number of divisors and distinct prime factors of $n$, respectively.\\ The parameters $\varepsilon,\eta>0$ are always assumed to be small enough, but fixed. The variables $N$ and $x$ tend to infinity, and in Sections \ref{Sec: decomposition} and \ref{Sec: hypotheses}, $A,B$ and $C$ are large enough constants (say greater than $10^{10}$). The numbers $\mathcal{C}$, $W$ and $J$ are $\ll 1$, but may be large. The expression $1_{S}$ is the indicator function of a set $S$, so that $1_{S}(n)=1$ when $n\in S$ and $1_{S}(n)=0$ otherwise. We use the usual Landau and Vinogradov asymptotic notations $o(\cdot), O(\cdot), \ll, \gg$. When we write $n\sim X$ in a summation, we mean $X\leq n<2X$. By $n\asymp X$, in turn, we mean $X\ll n\ll X$. \subsection{Acknowledgments} The author is grateful to his supervisor Kaisa Matom\"aki for various useful comments and discussions.
The author thanks the referee for careful reading of the paper and for useful comments. While working on this project, the author was funded by UTUGS Graduate School and project number 293876 of the Academy of Finland. \section{A transference type result}\label{Sec: transference} We need a transference type result for binary Goldbach type problems for proving Theorem \ref{theo_goldbach}. We begin with some definitions.\\ Let $\Omega\subset \mathbb{Z}_N$ and $\eta\in (0,\frac{1}{2})$, and write \begin{align*} B(\Omega,\eta)=\left\{n\in \mathbb{Z}_N:\quad \left\|\frac{\xi n}{N}\right\|\leq \eta\quad \text{for all}\quad \xi \in \Omega\right\} \end{align*} for the \textit{Bohr set} associated to these parameters. We will need a function $\chi=\chi_{\Omega,\eta}:\mathbb{Z}\to \mathbb{R}_{\geq 0}$ that is a smoothed version of the characteristic function of the Bohr set $B(\Omega,\eta)$. The exact construction of $\chi$ is not necessary, and we just list the properties of $\chi$ we use, found in \cite[Lemma 3.1]{matomaki-shao}. We have \begin{equation}\label{eq29}\begin{split} &0\leq \chi(n)\ll_{|\Omega|}1,\hspace{2.4cm} \chi(n)=\chi(-n)\,\, \text{and}\,\, \chi(n+N)=\chi(n),\\ &\chi(n)\geq 1\,\, \text{for}\,\,\ n\in B(\Omega,\eta),\quad \quad \chi(n)\leq \left(\frac{\eta^2}{8}\right)^{|\Omega|},\,\, \text{for}\,\, n \not \in B(\Omega,2\eta)\\ &\frac{1}{N}\sum_{n\in \mathbb{Z}_N}\chi(n):=\|\chi\|_1\geq \left(\frac{\eta}{2}\right)^{|\Omega|}. \end{split} \end{equation} Also from \cite{matomaki-shao}, we know that $\chi$ has \textit{Fourier complexity} $\mathcal{C}\ll_{|\Omega|,\eta} 1$, where the Fourier complexity is defined as the smallest integer $\mathcal{C}$ for which we have a Fourier representation \begin{align}\label{eq13} \chi(n)=\sum_{k=1}^{\mathcal{C}}c_k e(\alpha_k n),\,\, |c_k|\leq \mathcal{C}\,\, \text{and}\,\, \alpha_k\in \mathbb{R}/\mathbb{Z}. \end{align} The formulation of the transference type result requires harmonic analysis, so we should state which normalization of the Fourier transform we use. For functions $f,g:\mathbb{Z}_N\to \mathbb{C}$ we define the Fourier transform and the convolution as \begin{align*} \hat{f}(\xi)&=\frac{1}{N}\sum_{n\in \mathbb{Z}_N}f(n)e\left(-\frac{\xi n}{N}\right)\quad \text{and}\quad f*g(n)=\frac{1}{N}\sum_{k\in \mathbb{Z}_N}f(k)g(n-k), \end{align*} so that Parseval's identity and the convolution formula of the Fourier transform take the forms \begin{align*} \sum_{n\in \mathbb{Z}_N}|f(n)|^2&=N\sum_{\xi \in \mathbb{Z}_N}|\hat{f}(\xi)|^2 \quad \text{and}\quad \widehat{f*g}(\xi)=\hat{f}(\xi)\hat{g}(\xi). \end{align*} \begin{proposition}\label{prop_transference} Let functions $f_1, f_2:\mathbb{Z}_N\to \mathbb{R}_{\geq 0}$ and parameters $K_0\geq 1$, $\delta>0$, $\varepsilon>0$ be given. Then there exist $\eta=\eta(K_0,\delta,\varepsilon)>0$ and $\Omega\subset\mathbb{Z}_N$, $|\Omega|\ll_{K_0,\delta,\varepsilon} 1$ with $1\in \Omega$ such that the following holds. Assume that, for a function $\chi=\chi_{\Omega,\eta}:\mathbb{Z}\to \mathbb{R}_{\geq 0}$ obeying \eqref{eq29}, we have\\ (i) $f_2*\chi(t)\geq \delta \|\chi\|_1$ for all $t\in (\frac{N}{3},\frac{2N}{3})$,\\ (ii) $\displaystyle \sum_{\frac{N}{3}<n<\frac{N}{2}}f_1(n)\geq \delta N$,\\ (iii) $\displaystyle \sum_{\xi \in \mathbb{Z}_N}|\widehat{f_j}(\xi)|^r\leq K_0$ for $j\in \{1,2\}$ and $r\in \{3,4\}$.\\ Then\\ (iv) $f_1*f_2(n)\geq \frac{\delta^2}{3}$ for all but $\leq \varepsilon N$ values of $n\in [0.9N,N]$. 
\end{proposition} \textbf{Proof.} This is inspired by and similar to \cite[Theorem 2.3]{matomaki-shao} of Matom\"aki and Shao. See also \cite[Proposition 5.1]{green-restriction}, where similar ideas were applied for Roth type problems. Take $\Omega=\{\xi \in \mathbb{Z}_N:\,\, |\widehat{f_1}(\xi)|\geq \varepsilon_0\}\cup\{1\}$, where $\varepsilon_0$ will be chosen small enough in terms of $\delta$, $\varepsilon$ and $K_0$. Condition (iii) tells that $|\Omega|\leq K_0\varepsilon_0^{-3}+1$. Let $\chi=\chi_{\Omega,\eta}:\mathbb{Z}\to \mathbb{R}_{\geq 0}$ be as in the proposition (so that $\chi$ fulfills \eqref{eq29}). We will later choose $\eta$ to be small enough in terms of $\delta$, $\varepsilon$ and $K_0$. Introduce the functions \begin{align*} g_2=\frac{1}{\|\chi\|_1}f_2*\chi\quad \text{and}\quad h_2=f_2-g_2. \end{align*} We have \begin{align*} \widehat{g_2}= \frac{1}{\|\chi\|_1}\widehat{f_2}\widehat{\chi}\quad \text{and}\quad \widehat{h_2}=\widehat{f_2}\left(1-\frac{\widehat{\chi}}{\|\chi\|_1}\right), \end{align*} so that in particular $|\widehat{h_2}(\xi)|\leq 2|\widehat{f_2}(\xi)|$.\\ Next we estimate from above and below the average $\frac{1}{N}\sum_{n\in \mathbb{Z}_N}|f_1*h_2(n)|^2$, starting with the lower bound. Owing to conditions (i) and (ii), for $n\in [0.9N,N]$ we have \begin{align}\label{eq33} f_1*g_2(n)=\frac{1}{\|\chi\|_1}f_2*\chi*f_1(n)\geq \frac{\delta }{N}\sum_{\substack{n-\frac{2N}{3}<k<n-\frac{N}{3}\\ k\in \mathbb{Z}_N}}f_1(k)\geq \delta^2 \end{align} since $(\frac{N}{3},\frac{N}{2})\subset (n-\frac{2N}{3},n-\frac{N}{3})$ for $n\in [0.9N,N]$. Denoting $T=\{n\in [0.9N,N]:\,\, f_1*f_2(n)<\frac{\delta^2}{3}\}$ and using the simple inequality $|a-b|^2\geq \frac{a^2}{2}-b^2$ and \eqref{eq33}, we infer that \begin{align}\begin{split}\label{eq31} \frac{1}{N}\sum_{n\in \mathbb{Z}_N}|f_1*h_2(n)|^2&\geq \frac{1}{N}\sum_{n\in T}\left(\frac{1}{2}|f_1*g_2(n)|^2-|f_1*f_2(n)|^2\right)\\ &\geq \left(\frac{\delta^4}{2}-\left(\frac{\delta^2}{3}\right)^2\right)\frac{|T|}{N}\geq \frac{\delta^4 }{10}\frac{|T|}{N}. \end{split} \end{align} When it comes to an upper bound, Parseval's identity gives \begin{align*} \frac{1}{N}\sum_{n\in \mathbb{Z}_N}|f_1*h_2(n)|^2&=\sum_{\xi\in \mathbb{Z}_N}|\widehat{f_1*h_2}(\xi)|^2\\ &=\sum_{\xi\in \mathbb{Z}_N}|\widehat{f_1}(\xi) \widehat{h_2}(\xi)|^2\\ &\leq \varepsilon_0^{\frac{1}{2}}\sum_{\xi\not \in \Omega}|\widehat{f_1}(\xi)|^{\frac{3}{2}} |\widehat{h_2}(\xi)|^2+\sum_{\xi\in \Omega}|\widehat{f_1}(\xi)|^2 |\widehat{h_2}(\xi)|^2. \end{align*} Here the first sum can be bounded with the Cauchy-Schwarz inequality and (iii), implying \begin{align*} \varepsilon_0^{\frac{1}{2}}\sum_{\xi\not \in \Omega}|\widehat{f_1}(\xi)|^{\frac{3}{2}} |\widehat{h_2}(\xi)|^2 \leq \varepsilon_0^{\frac{1}{2}}\left(\sum_{\xi \in \mathbb{Z}_N}|\widehat{f_1}(\xi)|^3\right)^{\frac{1}{2}}\left(\sum_{\xi \in \mathbb{Z}_N}|\widehat{h_2}(\xi)|^4\right)^{\frac{1}{2}}\leq 8\varepsilon_0^{\frac{1}{2}}K_0. \end{align*} The sum over $\xi \in\Omega$ in turn can be bounded by using the fact that \begin{align*} \left|1-\frac{\widehat{\chi}(\xi)}{\|\chi\|_1}\right|\leq 30\eta \quad \text{for every}\quad \xi \in \Omega, \end{align*} the proof of which is contained in the proof of Theorem 2.3 in \cite[Section 4]{matomaki-shao}. After this, we may again use the Cauchy-Schwarz inequality and (iii) to get \begin{align*} \sum_{\xi\in \Omega}|\widehat{f_1}(\xi)|^2 |\widehat{h_2}(\xi)|^2&\leq (30\eta)^{2}\sum_{\xi\in \Omega}|\widehat{f_1}(\xi)|^2 |\widehat{f_2}(\xi)|^{2}\\ &\leq 1000\eta^{2}K_0. 
\end{align*} At this stage, we fix the choices $\varepsilon_0=\eta=\frac{\delta^8\varepsilon^2}{10^4K_0^{2}}$, so that \begin{align}\label{eq32} \frac{1}{N}\sum_{n\in \mathbb{Z}_N}|f_1*h_2(n)|^2\leq 8\varepsilon_0^{\frac{1}{2}}K_0+1000\eta^{2}K_0\leq \frac{1}{10}\delta^4 \varepsilon. \end{align} Combining \eqref{eq31} and \eqref{eq32} above, we discover that $|T|\leq 10\delta^{-4}\cdot \frac{1}{10}\delta^4\varepsilon N=\varepsilon N$, which concludes the proof.\qedd \section{Deducing Theorem \ref{theo_goldbach} from the transference type result}\label{Sec: 3} We will apply the transference type result (Proposition \ref{prop_transference}) to prove Theorem \ref{theo_goldbach}. This deduction is done in this section assuming the conditions (i)-(iii) of the transference type result, and the rest of the paper is focused on verifying these conditions. Naturally, the functions $f_1$ and $f_2$ in the transference type result are taken to be the characteristic functions of the primes of the form $x^2+y^2+1$ (restricted to a residue class), normalized in such a way that they have mean comparable to $1$. First, we introduce some notation.\\ Define the function \begin{align}\label{eq60} s(n)=\prod_{\substack{p\mid n\\ p\equiv -1 \hspace{-0.1cm} \pmod 4\\p\neq 3}}p, \end{align} which excludes from the prime factorization of $n$ the primes $2$, $3$ and those primes that are $\equiv 1 \hspace{-0.1cm} \pmod 4$. Denote \begin{align}\label{eq47} \mathcal{S}=\{a^2+b^2:\quad a,b\in \mathbb{Z},\quad (a,b)\mid 6^{\infty}\}. \end{align} We also define a property that we require from the linear functions we work with in what follows. \begin{definition}\label{def1} We say that a linear polynomial $L$ with integer coefficients is \textit{amenable} if $L(n)=Kn+b$ for some integers $K\geq 1$ and $b$, and\\ (i) $6^3\mid K$,\\ (ii) $(b,K)=(b-1,s(K))=1$,\\ (iii) $b-1=2^j 3^{2t}(4h+1)$ for some $h\in \mathbb{Z}$, $3\nmid 4h+1$ and $j,t\geq 0$ with $2^{j+2}3^{2t+1}\mid K$. \end{definition} What these conditions imply is that there are no local obstructions (modulo divisors of $K$) to $L(n)$ being prime and $L(n)-1$ belonging to $\mathcal{S}$ (in particular, $L(n)-1$ crucially has an even number of prime factors $p\equiv -1\hspace{-0.1cm} \pmod 4$ with multiplicities by (iii)). We note that it is essential that $b-1$ is allowed to be divisible by a power of $3$. Indeed, if $L_i(n)=Kn+b_i$ are two amenable linear functions with $3\mid K$ and $3\nmid b_1-1$, $3\nmid b_2-1$, then $L_1(m)+L_2(n)$ can only represent numbers that are $\equiv 1 \hspace{-0.1cm}\mod 3$. We also note that in our application we must allow $K$ to be divisible by arbitrarily high powers of $2$. This is due to the fact that if $L_i(n)=2^sn+b_i$ are amenable, then $L_i(n)-1\equiv 2^{a_i}\hspace{-0.1cm} \pmod{2^{a_i+2}}$ for some integers $0\leq a_i\leq s-2$, which implies that $L_1(m)+L_2(n)$ is never $\equiv 2 \pmod{2^s}$.\\ The majority of this paper is devoted to proving for functions $f_i$ related to the characteristic function of $\mathscr{P}$ the following versions of the conditions (i) and (iii) of the transference type result. Throughout the rest of the paper, we denote \begin{align}\label{eq30}\begin{split} U&=2^{J}\cdot 3^3 \quad \text{with} \quad 5\leq J\ll 1,\\ W&=U\cdot \prod_{5\leq p\leq w} p\quad \text{with}\quad 10^{{10}^{10}}\leq w\ll 1. \end{split} \end{align} \begin{proposition}\label{prop_bohr} Let $\chi:\mathbb{Z}\to \mathbb{R}_{\geq 0}$ have Fourier complexity $\mathcal{C}\ll 1$. 
Let $W$ be as in \eqref{eq30} with $w\geq \mathcal{ C}^{20}$, and suppose that the linear function $Wn+b$ is amenable. For an integer $N\geq 1$, set \begin{align}\label{eq76} f(n)=(\log N)^{\frac{3}{2}}\left(\frac{\varphi(W)}{W}\right)^{\frac{3}{2}}1_{Wn+b\in \mathbb{P},\,\,Wn+b-1\in \mathcal{S}} \quad \textnormal{for}\quad n\in \left(\frac{N}{3},\frac{2N}{3}\right), \end{align} and $f(n)=0$ for other values of $n\in [0,N)$. Then for $N\geq N_0(w,\mathcal{C})$ we have \begin{align*} \sum_{n\sim \frac{N}{3}}f(n)\chi(t-n)&\geq \delta_0 \bigg(\sum_{n\sim \frac{N}{3}}\chi(t-n)-\frac{CN}{w^{\frac{1}{3}}}\bigg) \end{align*} for $t\in (\frac{N}{3},\frac{2N}{3})$ and some absolute constants $\delta_0>0, C>0$. \end{proposition} \begin{proposition}\label{prop_restriction} Suppose that the linear function $Wn+b$ is amenable with $W$ as in \eqref{eq30}. Let $N\geq 1$ be an integer and $g:\mathbb{Z}_N\to \mathbb{R}_{\geq 0}$ with $0\leq g(n)\leq f(n)$ for $n\in [0,N)$ and $f$ as in \eqref{eq76}. Then for all $r>2$, \begin{align*} \sum_{\xi \in \mathbb{Z}_N}|\widehat{g}(\xi)|^r\leq K_r \end{align*} for some positive constant $K_r$ depending only on $r$. \end{proposition} In this section, we show that Propositions \ref{prop_bohr} and \ref{prop_restriction} indeed imply Theorem \ref{theo_goldbach}. First we prove some lemmas about local representations of integers modulo powers of $2$ and $3$. \begin{lemma} \label{le12} Let $J\geq 5$ and $n\not \equiv 0\hspace{-0.1cm} \pmod{2^{J-1}}$ be integers. Then we may write $n=a+b$ for some integers $a$ and $b$ with $a\equiv 2^{i} \hspace{-0.1cm} \pmod{2^{i+2}}$ and $b\equiv 2^j \hspace{-0.1cm} \pmod{2^{j+2}}$ for some integers $0\leq i,j\leq J-3$. \end{lemma} \textbf{Proof.} Since $2^{J-1}\nmid n$, we may write $n=2^{g} s$ where $0\leq g\leq J-5$ and $s\not \equiv 0\pmod{16}$. It is easy to check that every such $s$ may be written as $s=a'+b'$ with $a'\equiv 2^{i} \hspace{-0.1cm} \pmod{2^{i+2}}$, $b'\equiv 2^j \hspace{-0.1cm} \pmod{2^{j+2}}$ for some $0\leq i,j\leq 3$. Then $n=a+b$ with $a=2^{g}a'$, $b=2^g b'$ is a representation of the desired form. \qedd \begin{lemma}\label{le13} Let $m'$ be any integer such that $m'\not \equiv 3,6 \hspace{-0.1cm} \pmod 9$. Then there exist integers $x_1$, $x_2$, $x_3$ and $x_4$ such that \begin{align*} &m'\equiv x_1^2+x_2^2+x_3^2+x_4^2 \hspace{-0.1cm} \pmod{3^{3}}\\ &x_1^2+x_2^2,\quad x_3^2+x_4^2\not \equiv 1\hspace{-0.1cm} \pmod 3\\ &x_1^2+x_2^2,\quad x_3^2+x_4^2\not \equiv 0\hspace{-0.1cm} \pmod{3^3} \end{align*} \end{lemma} \textbf{Proof.} One easily sees that $x^2 +y^2 \hspace{-0.1cm} \pmod{27}$ attains all residue classes except those that are $\equiv 3 \hspace{-0.1cm} \pmod 9$ or $\equiv 6 \hspace{-0.1cm} \pmod 9$ as $x$ and $y$ vary. Now the lemma only states that every $m'\not \equiv 3,6 \hspace{-0.1cm} \pmod 9$ is the sum of two numbers, each of which is $0,2,5$ or $8 \hspace{-0.1cm} \pmod 9$ and neither of which is $0\hspace{-0.1cm} \pmod{27}$. This can quickly be verified by hand. \qedd\\ \textbf{Proof of Theorem \ref{theo_goldbach} assuming Propositions \ref{prop_bohr} and \ref{prop_restriction}.} Given any small $\varepsilon>0$, we must show that once $N$ is large enough, the interval $[0.9N,N]$ contains at most $\varepsilon N$ integers $m\equiv 0\hspace{-0.1cm} \pmod 2$, $m\not \equiv 5,8 \hspace{-0.1cm} \pmod{9}$ that cannot be written as $m=p+q$ with $p$ and $q$ primes of the form $x^2+y^2+1$.\\ Let $U$ and $W$ be given by \eqref{eq30} with $J=\lfloor \frac{10}{\varepsilon}\rfloor$ and $w \ll 1$ large enough. 
We start by showing that for any $m\in [0.9N,N]$, $m\equiv 0\hspace{-0.1cm} \pmod 2$, $m\not \equiv 5,8\hspace{-0.1cm} \pmod 9$, $m\not \equiv 2\hspace{-0.1cm} \pmod{2^{J}}$, we may find integers $0\leq B_1, B_2\leq W-1$ such that $m= B_1+B_2$ and the linear functions $Wn+B_1$ and $Wn+B_2$ are amenable. The integers $m\equiv 2\hspace{-0.1cm} \pmod{2^{J}}$ can be disposed of since there are $\leq \frac{\varepsilon^2}{10} N$ such integers up to $N$.\\ To see that $B_1$ and $B_2$ exist, write $m=2m'+2$, so that $m'\not \equiv 3,6 \hspace{-0.1cm} \pmod 9$. Then $2^{J-1}\nmid m'$, so using Lemma \ref{le12} we may write $m'\equiv a_1+a_2\hspace{-0.1cm} \pmod{2^{J}}$ with $a_1\equiv 2^{i}\hspace{-0.1cm} \pmod{2^{i+2}}$, $a_2\equiv 2^{j}\hspace{-0.1cm} \pmod{2^{j+2}}$ for some $0\leq i,j\leq J-3$. Moreover, using Lemma \ref{le13}, we may write $m'\equiv a_1'+a_2' \hspace{-0.1cm}\pmod{3^3}$ with $a_1'$ and $a_2'$ numbers such that $3^3\nmid a_1'$, $3^3\nmid a_2'$, $2a_1'+1,2a_2+1'\not \equiv 0 \pmod 3$, and the largest powers of $3$ dividing $a_1'$ and $a_2'$ have even exponents (take $a_1'=x_1^2+x_2^2$ and $a_2'=x_3^2+x_4^2$ in that lemma and notice that the largest power of $3$ dividing $x^2+y^2$ has an even exponent).\\ Now pick numbers $b_p$ for $5\leq p\leq w$ such that $b_p\not \equiv 0,1,m,m-1\hspace{-0.1cm} \pmod{p}$. By the Chinese remainder theorem, we can find an integer $B$ such that $B\equiv 2a_1+1\hspace{-0.1cm} \pmod{2^{J}}$, $B\equiv 2a_1'+1\hspace{-0.1cm} \pmod{3^3}$, and $B\equiv b_p\hspace{-0.1cm} \pmod{p}$ for all $5\leq p<w$. Therefore, we have found some integers $B_1:=B$ and $B_2:=m-B$ such that $m=B_1+B_2$ $p\nmid B_i$, $p\nmid B_i-1$ for $5\leq p<w$, and $B_1-1$ and $B_2-1$ satisfy condition (iii) in the definition of amenability.\\ Therefore, we have a representation of any $m$ of the form above as \begin{align*} m\equiv B_1(m)+B_2(m)\hspace{-0.1cm} \pmod W \end{align*} with $Wn+B_1(m)$, $Wn+B_2(m)$ amenable linear functions and $0\leq B_i(m)\leq W-1$ (we use the notation $B_i(m)$ to emphasize that the $B_i$ depend on $m \hspace{-0.1cm}\pmod W$). For each $0\leq a\leq W-1$ we denote \begin{align*} \mathcal{B}_{a}=\{m\in [0.9N,N]:\,\, m\equiv a \pmod W\}. \end{align*} We will show that each $\mathcal{B}_{a}$ with $a\equiv 0\hspace{-0.1 cm} \pmod{2}$, $a\not \equiv 5,8\hspace{-0.1cm} \pmod 9$, $a\not \equiv 2\pmod{2^{J}}$ contains at most $\varepsilon \frac{N}{2W}$ values of $m\in [0.9N,N]$, that are not of the form $p+q$ with $p$ and $q$ primes of the form $x^2+y^2+1$, and afterwards we sum this result over $a$.\\ If $a$ satisfies the congruence conditions above, the polynomials $Wn+B_1(a)$ and $Wn+B_2(a)$ are amenable linear polynomials. 
Set $M'=\lfloor\frac{N}{W}\rfloor$, and for $\ell\in \{1,2\}$ set \begin{align*} f_{\ell}(n)=(\log N)^{\frac{3}{2}}\left(\frac{\varphi(W)}{W}\right)^{\frac{3}{2}}1_{Wn+B_{\ell}(a)\in \mathbb{P},\,\,Wn+B_{\ell}(a)-1\in \mathcal{S}}\quad \text{for}\quad n\in \left(\frac{M'}{3},\frac{2M'}{3}\right), \end{align*} with $\mathcal{S}$ as in \eqref{eq47} and let $f_{\ell}(n)=0$ for $n\in [0,M')\setminus (\frac{M'}{3},\frac{2M'}{3})$.\\ Concerning condition (ii) of the transference type result, applying Proposition \ref{prop_bohr} to the function $\chi\equiv 1$, we see that \begin{align*} \sum_{\frac{M'}{3}<n<\frac{2M'}{3}}f_1(n)\geq \frac{\delta_0}{10} M', \end{align*} but we evidently get the same outcome with summation over $\frac{M'}{3}<n<\frac{M'}{2}$ (since one could clearly replace $n\sim \frac{N}{3}$ with $\frac{N}{3}<n<\frac{N}{2}$ in Proposition \ref{prop_bohr}). This takes care of condition (ii).\\ Next, by Proposition \ref{prop_restriction}, \begin{align*} \sum_{\xi \in \mathbb{Z}_{M'}}|\widehat{f_{\ell}}(\xi)|^{r}\leq K_0 \end{align*} for some absolute constant $K_0$ when $r\in \{3,4\}$, so also condition (iii) holds. Let then $\chi=\chi_{\Omega,\eta}:\mathbb{Z}_{M'}\to \mathbb{R}_{\geq 0}$ be as in Proposition \ref{prop_transference} (with $\chi$ depending on $K_0$ and $\delta_0$ that appeared above), where $\Omega\subset \mathbb{Z}_{M'}$ satisfies $1\in \Omega$, $|\Omega|\ll_{\varepsilon} 1, and 1\ll_{\varepsilon} \eta\leq 0.05$. According to \eqref{eq29}, $\chi$ is symmetric around the origin and \begin{align*} \sum_{\substack{n\in [-\frac{M'}{2},\frac{M'}{2}]\\|n|\geq 0.1M'}}\chi(n)\leq \left(\frac{\eta^2}{8}\right)^{|\Omega|}M'\leq \eta\left(\frac{\eta}{2}\right)^{|\Omega|}M'\leq 0.05\|\chi_1\|M'. \end{align*} Keeping this in mind and using Proposition \ref{prop_bohr}, for $t\in (\frac{M'}{3},\frac{2M'}{3})$ we obtain \begin{align*} \sum_{n\sim \frac{M'}{3}}f_2(n)\chi(t-n)&\geq \delta_0 \bigg(\sum_{n\sim \frac{M'}{3}}\chi(t-n)-\frac{CM'}{w^{\frac{1}{3}}}\bigg)\\ &\geq \frac{\delta_0}{10}\bigg(\sum_{n\in \mathbb{Z}_{M'}}\chi(t-n)-\frac{CM'}{w^{\frac{1}{3}}}\bigg)\\ &\geq \frac{\delta_0}{20} M'\|\chi\|_1 \end{align*} for $w$ large enough, the final step coming from \eqref{eq29}, since \begin{align*} \|\chi\|_{1}\geq \left(\frac{\eta}{2}\right)^{|\Omega|}\geq \frac{1}{w^{0.1}} \end{align*} for $w$ large enough. This means that condition (i) of the transference type result holds with $\delta=\frac{\delta_0}{20}$.\\ From the transference type result (Proposition \ref{prop_transference}), we conclude that $f_1*f_2(n)>0$ for all $n\in [0.9M',M']$, $n\not \in T_{a}$ where $T_{a}$ is some set of integers with $|T_{a}|\leq \frac{\varepsilon}{2} M'=\varepsilon\frac{N}{2W}$. This leads to $n\equiv n_1+n_2\hspace{-0.1cm} \pmod{M'}$ with \begin{align}\label{eq46}\begin{split} &Wn_i+B_{i}(a)\in \mathbb{P}, \quad Wn_i+B_{i}(a)-1\in \mathcal{S} \end{split} \end{align} for $n\in [0.9M',M']$, $n\not \in T_{a}$. Since $n_1,n_2\in (\frac{M'}{3},\frac{2M'}{3})$, we can actually say that $n=n_1+n_2$. What we showed at the beginning of the proof is that any $m\in \mathcal{B}_{a}$, $m\in [0.9N+2W,N]$ with $m\equiv 0\hspace{-0.1cm} \pmod 2$, $m\not\equiv 5,8\hspace{-0.1cm} \pmod 9$ and $m\not \equiv 2 \hspace{-0.1cm} \pmod{2^{J}}$ can be written as $m=Wn+B_1(a)+B_2(a)$ with $n\in[0.9M',M']$ and $Wn+B_1(a)$ and $Wn+B_2(a)$ amenable (the interval $[0.9N,0.9N+2W]$ contains $\leq \frac{\varepsilon^2}{10}N$ numbers and can hence be ignored). 
Then \begin{align*} m=(Wn_1+B_1(a))+(Wn_2+B_2(a)) \end{align*} for some $n_1$ and $n_2$ satisfying \eqref{eq46} whenever $m\in \mathcal{B}_{a}\setminus T'_{a}$, $m\in [0.9N+2W,N]$, $m\equiv 0\hspace{-0.1cm} \pmod 2$, $m\not \equiv 5,8\hspace{-0.1cm} \pmod 9$ and $m\not \equiv 2\hspace{-0.1cm} \pmod{2^{J}}$, where $T'_{a}=\{a+W\tau:\,\, \tau\in T_a\}$ satisfies $|T'_{a}|\leq \varepsilon \frac{N}{2W}$. Since \begin{align*} \sum_{\substack{0\leq a\leq W-1\\a\equiv 0\hspace{-0.1cm}\pmod 2\\a\not \equiv 5, 8 \hspace{-0.1cm}\pmod 9\\a\not \equiv 2 \hspace{-0.1cm}\pmod{2^{J}}}}|T_{a}|\leq W\cdot \varepsilon\frac{N}{2W}=\frac{\varepsilon}{2} N, \end{align*} we conclude that all but $\leq (\frac{\varepsilon}{2}+\varepsilon^2) N\leq \varepsilon N$ even integers $m\in [0.9N,N]$ satisfying $m\not\equiv 5,8\hspace{-0.1cm} \pmod{9}$ can be written as $m=p+q$ with $p,q$ primes of the form $x^2+y^2+1$.\qedd \begin{remark}\label{rmk1} The proof of the ternary result, Theorem \ref{theo_ternary}, goes along very similar lines. One would replace Proposition \ref{prop_transference} with the analogous ternary transference type result, namely \cite[Theorem 2.3]{matomaki-shao}. The premises in both transference type results are essentially the same (except that \cite[Theorem 2.3]{matomaki-shao} has one additional function $f_3$), and therefore the differences in the proofs can only arise when showing that the transference type theorem implies the additive result. In fact, these proofs are also very similar, and one would simply replace Lemma \ref{le12} with a version where we want to represent an arbitrary integer $n$ as a sum of three numbers of the form $2^{i}\hspace{-0.1cm} \pmod{2^{i+2}}$, and one would replace Lemma \ref{le13} with a version where there is no restriction on $m'$ and there are six variables $x_i$ (and one would define $f_3$ analogously to $f_1$ and $f_2$). \end{remark} \section[Restriction theory for primes]{Restriction theory for primes of the form $x^2+y^2+1$}\label{Sec: restriction} The objective of the current section is proving Proposition \ref{prop_restriction}, after which proving Theorem \ref{theo_goldbach} has been reduced to demonstrating Proposition \ref{prop_bohr}. As a byproduct of the arguments, we will obtain Theorem \ref{theo_roth}. The proof of Proposition \ref{prop_restriction} is based on the Green-Tao approach \cite{green-restriction} that offers a way to estimate the Fourier norms of prime-related functions and therefore to detect translation invariant constellations within the primes. The Green-Tao approach is based on proving a restriction theorem for the Fourier transform from $\ell^r(\mathbb{Z}_N)$ to $\ell^2(\mathbb{Z}_N)$ weighted by a certain "enveloping sieve" that acts as a pseudorandom majorant for the characteristic function of the primes of the desired form. Therefore, we start by asserting that there is a suitable enveloping sieve $\beta(\cdot)$ for the primes of the form $x^2+y^2+1$. \begin{proposition}\label{prop3} Let $W$ and $w$ be as in \eqref{eq30}, and suppose that $B$ is an integer for which $Wn+B$ is an amenable linear function. 
Then, for any large $N$, there exists a function $\beta:\mathbb{N}\to \mathbb{R}_{\geq 0}$ with the following properties (for some absolute constants $\kappa_1,\kappa_2>0$):\\ (i) $\beta(n)\geq\kappa_1 (\log N)^{\frac{3}{2}}(\log w)^{-\frac{3}{2}}$ for $n\sim \frac{N}{3}$ when $Wn+B\in \mathbb{P}\cap (\mathcal{S}+1)$,\\ (ii) $\sum_{n\leq N}\beta(n)\leq\kappa_2 N$,\\ (iii) For every fixed $\varepsilon>0$, we have $\beta(n)\ll N^{\varepsilon}$,\\ (iv) We may write, for $z=N^{0.1}$, \begin{align}\label{eq81} \beta(n)=\sum_{q\leq z^2}\sum_{a\in \mathbb{Z}_q^{\times}} v\left(\frac{a}{q}\right)e\left(-\frac{an}{q}\right), \end{align} where $v\left(\frac{a}{q}\right)\ll q^{\varepsilon-1}$ (and $\mathbb{Z}_q^{\times}$ is the set of primitive residue classes $\hspace{-0.1cm} \pmod q$),\\ (v) We have $v(1)=1$ and $v\left(\frac{a}{q}\right)=0$ in \eqref{eq81} whenever $q$ is not square-free or $q\mid W,q\neq 1$. \end{proposition} The message of the previous proposition, which we will soon prove, is that $\beta(\cdot)$ is an upper bound for the normalized characteristic function of the primes $x^2+y^2+1$ in a residue class, $\beta(\cdot)$ has average comparable to $1$, and $\beta(\cdot)$ has a Fourier expansion with small coefficients. The above result implies the following restriction theorem, which is identical to \cite[Proposition 4.2]{green-restriction}, except that $\beta(\cdot)$ has a different definition. \begin{proposition}\label{prop_extension} Let $\beta:\mathbb{N}\to \mathbb{R}_{\geq 0}$ be as in Proposition \ref{prop3}. Let $N\geq 1$ be large, and let $(a_n)_{n\leq N}$ be any sequence of complex numbers. Given a real number $r>2$, for some $C_r>0$ we have \begin{align*} \left(\sum_{\xi\in \mathbb{Z}_N}\left|\frac{1}{N}\sum_{n\leq N}a_n\beta(n)e\left(\frac{-\xi n}{N}\right)\right|^r\right)^{\frac{1}{r}}\leq C_r\left(\frac{1}{N}\sum_{n\leq N}|a_n|^2\beta(n)\right)^{\frac{1}{2}}. \end{align*} \end{proposition} \textbf{Proof of Proposition \ref{prop_extension} assuming Proposition \ref{prop3}}: Our function $\beta(\cdot)$ fulfills the same axioms as in the paper of Green-Tao (except the pointwise lower bound, which is not used for the proof of \cite[Proposition 4.2]{green-restriction}). Therefore, the proof of \cite[Proposition 4.2]{green-restriction} goes through in this setting.\qedd\\ At this point, we show that Proposition \ref{prop_extension} easily implies Proposition \ref{prop_restriction}, which corresponds to condition (iii) in the transference type result.\\ \textbf{Proof of Proposition \ref{prop_restriction} assuming Proposition \ref{prop3}}: We already know that if Proposition \ref{prop3} is true, so is Proposition \ref{prop_extension}. We choose $a_n=\frac{g(n)}{\beta(n)}$ whenever $\beta(n)\neq 0$, and $a_n=0$ otherwise. Since $0\leq g(n)\leq f(n)\leq \kappa_1^{-1} \beta(n)$ in the notation of Proposition \ref{prop_restriction}, from Proposition \ref{prop_extension} we immediately derive \begin{align*} \left(\sum_{\xi \in \mathbb{Z}_N}|\widehat{g}(\xi)|^r\right)^{\frac{1}{r}}&\leq C_r \left(\frac{1}{N}\sum_{\substack{n\leq N\\ \beta(n)\neq 0}}\frac{g(n)^2}{\beta(n)}\right)^{\frac{1}{2}}\leq C_r \left(\frac{\kappa_1^{-2}}{N}\sum_{n\leq N}\beta(n)\right)^{\frac{1}{2}}\leq C_r\kappa_1^{-1}\kappa_2^{\frac{1}{2}} \end{align*} by part (ii) of Proposition \ref{prop3}.\qedd\\ What remains to be shown is that the enveloping sieve promised by Proposition \ref{prop3} exists. 
This is based on an argument of Ramaré and Ruzsa \cite{ramare} (which incidentally developed the enveloping sieve for purposes unrelated to restriction theory). The enveloping sieve $\beta(n)$ turns out to be a normalized Selberg sieve corresponding to sifting primes of the form $p=x^2+y^2+1$, $p\equiv B\pmod W$.\\ \textbf{Proof of Proposition \ref{prop3}}: We first introduce some notation. For a prime $p$, let $\mathcal{A}_p\subset \mathbb{Z}_p$ denote the residue classes $\hspace{-0.1cm} \pmod p$ that are sifted away when looking for primes of the form $x^2+y^2+1\equiv B\hspace{-0.1cm} \pmod W$. In other words, \begin{align*} \mathcal{A}_p=\begin{cases}\emptyset\,\, \text{for}\,\, p\leq w,\\ \{0\}\,\, \text{for}\,\,p\equiv 1 \hspace{-0.1cm} \pmod 4, \,\,p>w\\ \{0,1\}\,\, \text{for}\,\,p\equiv -1 \hspace{-0.1cm} \pmod 4,\,\,p>w. \end{cases} \end{align*} Further, for square-free $d$ let \begin{align*} \mathcal{A}_d=\bigcap_{p\mid d}\mathcal{A}_p, \end{align*} where $\mathcal{A}_d$ is interpreted as a subset of $\mathbb{Z}_d$. Set also $\mathcal{A}_1=\mathbb{Z}_1$ and $\mathcal{A}_d=\emptyset$ when $d$ is not square-free. For $d\geq 2$, we have $|\mathcal{A}_d|=\omega(d)$, where $\omega(\cdot)$ is a multiplicative function supported on the square-free integers and having the values \begin{align*} \omega(p)=\begin{cases}0\,\, \text{for}\,\, p\leq w,\\ 1\,\, \text{for}\,\,p\equiv 1 \hspace{-0.1cm} \pmod 4,\,\,p>w,\\ 2 \,\, \text{for}\,\, p\equiv -1\hspace{-0.1cm} \pmod 4,\,\,p>w. \end{cases} \end{align*} For later use, we also define \begin{align}\label{eq80} \mathcal{ K}_1=\mathbb{Z}_1,\quad \mathcal{K}_p=\mathbb{Z}_p\setminus \mathcal{A}_p,\quad \mathcal{K}_d=\bigcap_{p\mid d}\mathcal{K}_p\quad \text{for}\quad \mu(d)^2=1 \end{align} and let $\mathcal{K}_d=\mathbb{Z}_d$ for $\mu(d)=0$.\\ Let the Selberg sieve coefficients $\rho_d$ (not the same as sieve weights) be given by \begin{align*} &\rho_d=\mu(d)\frac{G_d(z)}{G_1(z)},\quad \text{where}\quad z=N^{0.1},\quad G_d(z)=\sum_{\substack{\delta\leq z\\ [d,\delta]\leq z}}h(\delta),\\ &h(\delta)=\prod_{p\mid \delta}h(p)\quad \text{and}\quad h(p)=\frac{\omega(p)}{p-\omega(p)}. \end{align*} The above notations are otherwise the same as in \cite[Section 4]{ramare}, except that $\lambda_d$ there has been replaced with $\rho_d$ and $\mathcal{L}_d$ with $\mathcal{A}_d$. We define \begin{align}\label{eq78} \beta(n)=G_1(z)\bigg(\sum_{\substack{d\mid P(z)\\ Wn+B\in \mathcal{A}_d}}\rho_d\bigg)^2, \end{align} where \begin{align*} P(z)=\prod_{w<p<z}p. \end{align*} In \cite{ramare} the factor $G_1(z)$ does not appear in their definition of $\beta(n)$, but this is just a normalization constant. In \eqref{eq78} the condition $m\in \mathcal{A}_d$ means $m\hspace{-0.1cm} \pmod d \in \mathcal{A}_d$. Now we can check parts (i)-(v) of Proposition \ref{prop3}.\\ For part (i), first observe that if $Wn+B=x^2+y^2+1\in \mathbb{P}\cap (\mathcal{S}+1)$ with $n\sim \frac{N}{3}$, then $x^2+y^2+1\not \equiv 0\hspace{-0.1cm} \pmod p$ for $w< p<z=N^{0.1}$ and $x^2+y^2\not \equiv 0 \pmod p$ for $p\equiv -1\hspace{-0.1cm} \pmod 4$, $w< p<z$, since $(x,y)\mid 6^{J}$. This means that if $Wn+B=x^2+y^2+1\in \mathbb{P}\cap (\mathcal{S}+1)$ with $n\sim \frac{N}{3}$, then $\beta(n)=G_1(z)$. Now the assertion follows from \begin{align*} G_1(z)\geq 10^{-10}\prod_{w<p<z}\left(1-\frac{\omega(p)}{p}\right)^{-1}\geq 10^{-20}(\log N)^{\frac{3}{2}}(\log w)^{-\frac{3}{2}}. 
\end{align*} Part (ii) in turn follows by applying the Selberg sieve \cite[Chapter 7]{iwaniec-kowalski} to estimate \begin{align*} &G_1(z)\sum_{n\leq N}\bigg(\sum_{\substack{d\mid P(z)\\Wn+B\in \mathcal{A}_d}}\rho_d\bigg)^2\\ &\leq 10^{10}(\log N)^{\frac{3}{2}}(\log w)^{-\frac{3}{2}}\cdot \left(N\prod_{w<p<z}\left(1-\frac{\omega(p)}{p}\right)+z^3\right)\\ &\leq 10^{20} (\log N)^{\frac{3}{2}}(\log w)^{-\frac{3}{2}}\cdot N\left(\frac{\log w}{\log z}\right)^{\frac{3}{2}}\leq 10^{30} N. \end{align*} Part (iii) is verified as follows. From the definition of $\rho_d$ it is clear that $|\rho_d|\leq 1$, so that \begin{align}\label{eq79} \beta(n)\leq G_1(z)\bigg(\sum_{\substack{d\mid P(z)\\Wn+B\in \mathcal{A}_d}}1\bigg)^2. \end{align} Note that if $Wn+B\in \mathcal{A}_p$ for some $w<p\leq z$, then $p\mid Wn+B$ or $p\mid Wn+B-1$, so that $p$ can be chosen in at most $\nu(Wn+B)+\nu(Wn+B-1)$ ways, where $\nu(\cdot)$ is the number of distinct prime factors. Since $d$ is square-free and a product of such primes $p$, $d$ can be chosen in at most $2^{\nu(Wn+B)+\nu(Wn+B-1)}\ll N^{\frac{\varepsilon}{3}}$ ways in \eqref{eq79}. Therefore, \eqref{eq79} is $\ll (\log N)^{\frac{3}{2}}N^{\frac{2}{3}\varepsilon}\ll N^{\varepsilon}$.\\ Part (iv), which is the most crucial part concerning pseudorandomness, was verified in \cite{ramare}. Namely, our set of primes of the form $Wn+B=x^2+y^2+1$ is "sufficiently sifted" in the sense of the definition given on pages 1 and 2 of \cite{ramare} (to see that, take in that paper $A$ to be the set of primes of the form under consideration up to $N$ and $\kappa=\frac{3}{2}$). This property is all that is needed to obtain (iv) with the bound $v\left(\frac{a}{q}\right)\ll q^{-\frac{1}{2}}$, by formula (4.1.19) of \cite{ramare}. It is clear that this can be replaced with the stronger bound $v\left(\frac{a}{q}\right)\ll q^{\varepsilon-1}$, since we have defined the sets $\mathcal{K}_d$ in \eqref{eq80} so that formula (4.1.18) of \cite{ramare} holds for $\xi=\frac{\varepsilon}{2}$, instead of just some $0<\xi<\frac{1}{2}$.\\ We are then left with part (v). Equations (4.1.13) and (4.1.21) of \cite{ramare} reveal that \eqref{eq81} holds when $v(\frac{a}{q})$ is defined for $(a,q)=1$ by \begin{align*} &v\left(\frac{a}{q}\right)=G_1(z)\sum_{q\mid [d_1,d_2]}\frac{\rho_{d_1}^{*}\rho_{d_2}^{*}}{[d_1,d_2]}|\mathcal{K}_{[d_1,d_2]}|\cdot \frac{\sum_{b\in \mathcal{K}_q}e\left(\frac{ab}{q}\right)}{|\mathcal{K}_q|}\,\,\text{with}\,\,\rho_{\ell}^{*}=\sum_{d\equiv 0\hspace{-0.1cm} \pmod \ell}\mu\left(\frac{d}{\ell}\right)\mu(d)\rho_d, \end{align*} where the set $\mathcal{K}_d$ is given by \eqref{eq80}. As in formula (4.1.17) of \cite{ramare}, we have \begin{align*} \left|\sum_{b\in \mathcal{K}_q}e\left(\frac{ab}{q}\right)\right|=\left|\sum_{b\in \mathbb{Z}_q\setminus\mathcal{K}_q}e\left(\frac{ab}{q}\right)\right|\leq |\mathbb{Z}_q\setminus \mathcal{K}_q|\leq \prod_{p^{\alpha}\mid \mid q}(p^{\alpha}-|\mathcal{K}_{p^{\alpha}}|), \end{align*} which immediately gives $v(\frac{a}{q})=0$ unless $q$ is square-free and $(q,W)=1$. In addition, by formula (4.1.13) of the same paper (with the right-hand side multiplied by $G_1(z)$), we have \begin{align}\label{eq82} v\left(\frac{a}{q}\right)=G_1(z)w_{q}^{\#} \cdot \frac{\sum_{b\in \mathcal{K}_q}e\left(\frac{ab}{q}\right)}{|\mathcal{K}_q|}, \end{align} where by (4.1.14) we have \begin{align*} w_{q}^{\#}=\frac{1}{G_1(z)}\sum_{\delta\leq z}h(\delta)\rho_z(q,\delta), \end{align*} and $\rho_z(q,\delta)$ satisfies (4.1.15). 
Putting $q=1$ into (4.1.15), we clearly get $w_1^{\#}=\frac{1}{G_1(z)}$, so that $v(1)=1$ by \eqref{eq82}.\qedd\\ We have now proved Proposition \ref{prop_restriction}, which will be needed in the proof of Theorem \ref{theo_goldbach}. As a consequence of the above considerations, we can now establish Theorem \ref{theo_roth}, that is, Roth's theorem for the subset $\mathscr{P}$ of primes.\\ \textbf{Proof of Theorem \ref{theo_roth}:} This is very similar to the proof of \cite[Theorem 1.2]{green-restriction}. Let $\mathcal{A}\subset \mathscr{P}^{*}$ have positive upper density in $\mathscr{P}^{*}$. Then there is $\delta>0$ (which may be assumed small) such that $|\mathcal{A}\cap (\frac{N}{3},\frac{2N}{3})|\geq \delta |\mathscr{P}^{*}\cap (\frac{N}{3},\frac{2N}{3})|$ for $N\in \mathcal{N}$, where $\mathcal{N}$ is some infinite set of positive integers. Let $W$, $w$ and $J$ be as in \eqref{eq30} with $J=\lfloor \frac{10}{\delta}\rfloor$.\\ Let $S_B=S\cap \{Wn+B:\,\, n\geq 1\}$ for any set $S$ and integer $B$. Note that if $n=x^2+y^2+1\in (\frac{N}{3},\frac{2N}{3})$ is a prime with $(x,y)=1$ and $N\geq 10W$, then $(n,W)=(n-1,s(W))=1$ and $(n-1,3)=1$, $4\nmid n-1$. Therefore, \begin{align*} \sum_{\substack{1\leq B\leq W\\Wn+B \,\, \text{amenable}}} \left|\mathcal{A}_B\cap (\frac{N}{3},\frac{2N}{3})\right|&=\left|\mathcal{A}\cap (\frac{N}{3},\frac{2N}{3})\right|\geq \delta \left|\mathscr{P}^{*}\cap (\frac{N}{3},\frac{2N}{3})\right|, \end{align*} for $N\geq 10W$ and $N\in \mathcal{N}$, so using the pigeonhole principle and the lower bound for $|\mathscr{P}^{*}\cap (\frac{N}{3},\frac{2N}{3})|$ coming from Proposition \ref{prop_bohr} with $\chi\equiv 1$, we can find a value of $B\in [1,W]$ such that the polynomial $Wn+B$ is amenable and \begin{align}\label{eq102} \left|\mathcal{A}_B\cap (\frac{N}{3},\frac{2N}{3})\right|\geq \delta_1\cdot \delta(\log w)^{\frac{3}{2}}\frac{N}{W(\log N)^{\frac{3}{2}}} \end{align} for $N\in \mathcal{N}'$ with $\mathcal{N}'$ an infinite set of positive integers and for some small absolute constant $\delta_1>0$, since the Chinese remainder theorem shows that there are $\leq 10^{10} W(\log w)^{-\frac{3}{2}}$ amenable functions $Wn+B$ with $1\leq B\leq W$.\\ Next, set \begin{align*} g(n)=\delta_2(\log N)^{\frac{3}{2}}(\log w)^{-\frac{3}{2}}1_{\mathcal{A}_B\cap (\frac{N}{3},\frac{2N}{3})}(n)\quad \text{for}\quad N\in \mathcal{N}'\quad \text{and}\quad 1\leq n\leq N \end{align*} with $\delta_2>0$ small, and extend $g$ periodically to $\mathbb{Z}_N$. The assertion of the theorem will follow from the Green-Tao transference principle \cite[Proposition 5.1]{green-restriction} as soon as we check formulas (5.3)-(5.6) of that paper for the functions $g(n)$ and $\nu(n)=\beta(n)1_{[1,N]}(n)$ (extended periodically to $\mathbb{Z}_N$) with $\beta(\cdot)$ given by Proposition \ref{prop3}. We know (5.3) from Proposition \ref{prop3} and (5.6) from Proposition \ref{prop_restriction}. Formula (5.5) follows from the properties (i)-(v) of $\beta(n)$ just as in \cite[Chapter 6]{green-restriction}. We are left with (5.4), which follows (for a different value of $\delta$) for $N\in \mathcal{N}'$ from \eqref{eq102}. Now, as mentioned, \cite[Proposition 5.1]{green-restriction} yields the result, since any triple of the form $(a,a+d+j_1N,a+2d+j_2N)$ is an arithmetic progression in $\mathbb{Z}$ if $a,a+2d+j_1N, a+2d+j_2N\in (\frac{N}{3},\frac{2N}{3})$. 
\qedd \section{Reductions for finding primes in Bohr sets}\label{Sec: reductions} The proof of Proposition \ref{prop_bohr} goes through an intermediate result (namely Proposition \ref{prop2} below) that resembles it and is slightly more technical, but at the same time easier to approach. The proof of Proposition \ref{prop2} uses among other things the circle method, Bombieri-Vinogradov type estimates, and ideas similar to Iwaniec's proof \cite{iwaniec-quadraticform} of the infinitude of primes $x^2+y^2+1$, and will occupy Sections \ref{Sec: weighted} to \ref{Sec: hypotheses}. \begin{proposition}\label{prop2} Let $\chi:\mathbb{Z}\to \mathbb{R}_{\geq 0}$ have Fourier complexity $\mathcal{C}\ll 1$. Let $N\geq 1$ be an integer and $W$ be as in \eqref{eq30} with $w\geq \mathcal{C}^{20}$, and suppose that $Wn+b$ is an amenable linear function. There exists an integer $Q\leq (\log N)^{B}$, depending only on $\chi$, with $B\ll_{\mathcal{C}} 1$, such that the following holds. For $N\geq N_0(w,\mathcal{C})$, $|t|\leq 5N$ and $c_0\in \mathcal{Q}$ we have \begin{align*} \sum_{\substack{n\sim N\\n\equiv c_0\hspace{-0.1cm} \pmod Q\\Wn+b\in \mathbb{P}\\Wn+b-1\in \mathcal{S}}}\chi(t-n)\geq \frac{\delta_1}{(\log N)^{\frac{3}{2}}}\left(\frac{W}{\varphi(W)}\right)^{\frac{3}{2}}\frac{Q}{|\mathcal{Q}|}\bigg(\sum_{\substack{n\sim N\\ n\equiv c_0\hspace{-0.1cm} \pmod Q}}\chi(t-n)+o\left(\frac{N}{Q}\right)\bigg), \end{align*} where $\delta_1>0$ is an absolute constant and \begin{align}\label{eq21} \mathcal{Q}=\{c_0\hspace{-0.2cm}\hspace{-0.1cm} \pmod Q:\,\, (Wc_0+b,Q)=(Wc_0+b-1,s(Q))=1\}. \end{align} \end{proposition} We remark that, by the Chinese remainder theorem, \begin{align}\label{eq62} |\mathcal{Q}|=Q\prod_{\substack{p\mid Q\\p\nmid W\\p\equiv 1\hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p}\right)\prod_{\substack{p\mid Q\\p\nmid W\\p\equiv -1\hspace{-0.1cm}\pmod 4}}\left(1-\frac{2}{p}\right) , \end{align} considering that $(b,W)=(b-1,s(W))=1$ by the definition of amenability.\\ In this section, we will show that Proposition \ref{prop2} implies Proposition \ref{prop_bohr}, by appealing to the following lemma. \begin{lemma}\label{lemma1} Let $\chi:\mathbb{Z}\to \mathbb{R}_{\geq 0}$ have Fourier complexity at most $\mathcal{C}$. Let $N,Q\geq 1$ be such that $N\geq 2Q^2$. Let $\mathcal{Q}$ be a collection of residue classes $\hspace{-0.1cm} \pmod Q$ such that for all $q\mid Q, q\neq 1$ and for all $(a,q)=1$ we have \begin{align*} \bigg|\sum_{c_0\in \mathcal{Q}}e\left(\frac{a}{q}c_0\right)\bigg|\leq \eta_0 |\mathcal{Q}| \end{align*} for some $\eta_0>0$. Then, with the same notations as in Proposition \ref{prop2}, for some absolute constant $C'>0$ and for all integers $t$ we have \begin{align*} \frac{Q}{|\mathcal{Q}|}\sum_{\substack{c_0\in \mathcal{Q}}}\sum_{\substack{n\sim N\\ n\equiv c_0\hspace{-0.1cm} \pmod Q}}\chi(t-n)\geq \sum_{n\sim N}\chi(t-n)-C'(\eta_0 \mathcal{C}^2N+Q\mathcal{C}^2N^{\frac{1}{2}}). \end{align*} \end{lemma} \textbf{Proof.} This is \cite[Lemma 7.4]{matomaki-shao}.\qedd\\ Note that the conclusion of Proposition \ref{prop_bohr} (with $\frac{N}{3}$ replaced with $N$) can be rewritten as \begin{align}\label{eq77} \sum_{\substack{n\sim N\\Wn+b\in \mathbb{P}\\Wn+b-1\in \mathcal{S}}}\chi(t-n)\geq \frac{\delta_0}{(\log N)^{\frac{3}{2}}}\left(\frac{W}{\varphi(W)}\right)^{\frac{3}{2}}\bigg(\sum_{n\sim N}\chi(t-n)-\frac{CN}{w^{\frac{1}{3}}}\bigg), \end{align} for $N\geq N_0(w,\mathcal{C})$ and $t\in \left(N,3N\right)$, with $\delta_0>0$ and $C>0$ absolute constants. 
In view of the previous lemma, Proposition \ref{prop_bohr} follows immediately from Proposition \ref{prop2} by splitting in \eqref{eq77} the sum over $n$ on the left-hand side to a sum over $n$ in different residue classes $\hspace{-0.1cm} \pmod Q$, provided that the premise of Lemma \ref{lemma1} is true for $\eta_0=w^{-\frac{1}{2}}$. This is what we will prove in the remainder of this section. \begin{lemma}\label{lemma2} Let $Q\geq 1$, and let $\mathcal{Q}$ be defined by \eqref{eq21} (and $W$ and $w$ in the definition of $\mathcal{Q}$ given by \eqref{eq30}). Let $a$ and $q\mid Q$ be positive integers with $(a,q)=1$, $q\neq 1$. We have \begin{align}\label{eqq5} \bigg|\sum_{c_0\in \mathcal{Q}}e\left(\frac{a}{q}c_0\right)\bigg|\leq w^{-\frac{1}{2}}|\mathcal{Q}|. \end{align} \end{lemma} Before proving this, we present another lemma, which will be used to prove Lemma \ref{lemma2}. \begin{lemma}\label{lemma3} Let $a$ and $q$ be positive integers, $q \neq 1$, $(a,q)=1$, and let $Wn+b$ be an amenable linear polynomial with $W$ and $w$ as in \eqref{eq30}. Let $V\geq 1$ be an integer with $(q,V)=1$. Then \begin{align}\label{eqq1} \bigg|\sum_{\substack{n\hspace{-0.1cm} \pmod{q}\\(WVn+b,q)=1\\(WVn+b-1,s(q))=1}}e\left(\frac{a}{q}n\right)\bigg|\leq \tau(q)\cdot 1_{(q,W)=1}. \end{align} \end{lemma} \textbf{Proof.} Using M\"obius inversion, the sum in question (without absolute values) becomes \begin{align}\label{eqq4} \sum_{d\mid q}\mu(d)\sum_{k\mid s(q)}\mu(k)\sum_{\substack{n\hspace{-0.1cm} \pmod{q}\\WVn\equiv -b\hspace{-0.1cm} \pmod d\\WVn\equiv -(b-1) \hspace{-0.1cm} \pmod k}}e\left(\frac{a}{q}n\right). \end{align} Now consider the sum \begin{align}\label{eqq2} \sum_{\substack{n\hspace{-0.1cm} \pmod{q}\\WVn\equiv -b\hspace{-0.1cm} \pmod d\\WVn\equiv -(b-1) \hspace{-0.1cm} \pmod k}}e\left(\frac{a}{q}n\right). \end{align} Note that the sum is nonempty only if $(d,k)=1$. Let $x_{1},\ldots, x_{R(d,k)} \hspace{-0.1cm} \pmod{dk}$ be the pairwise incongruent solutions to the system $WVx\equiv -b\hspace{-0.1cm} \pmod d$, $WVx\equiv -(b-1)\hspace{-0.1cm} \pmod k$ (if there are none, the sum \eqref{eqq2} is empty). Since $dk=[d,k]\mid q$, after writing $n=x_j+dk t$ for some $1\leq j\leq R(d,k)$ and $1\leq t\leq \frac{q}{dk}$, \eqref{eqq2} transforms into \begin{align}\label{eqq3} \sum_{j=1}^{R(d,k)}\sum_{\substack{n\hspace{-0.1cm} \pmod{q}\\ n\equiv x_j \hspace{-0.1cm} \pmod{dk}}}e\left(\frac{an}{q}\right)&=\sum_{j=1}^{R(d,k)}e\left(\frac{ax_j}{q}\right)\sum_{t \hspace{-0.1cm} \pmod{\frac{q}{dk}}}e\left(\frac{at}{\frac{q}{dk}}\right). \end{align} The inner sum is nonzero only when $dk=q$, in which case it is $1$. Taking these considerations into account, \eqref{eqq4} has absolute value at most \begin{align}\label{eq34} \sum_{\substack{d\mid q\\ k\mid s(q)\\ dk=q}}R(d,k)|\mu(d)||\mu(k)|. \end{align} We estimate this differently depending on whether $(q,W)>1$ or $(q,W)=1$. In the former case, there is some prime $p$ such that $p\mid q$, $p\mid W$, so $ dk=q$ tells that $p$ divides either $d$ or $k$. If $p\mid d$, then supposing that $R(d,k)\neq 0$, the congruence $WVx\equiv -b\hspace{-0.1cm} \pmod p$ must be solvable. It however is not solvable, since $p\nmid b$ for $p\mid W$ by the amenability of $Wn+b$. If $p\mid k$, then $k\mid s(q)$ implies that $p\equiv -1\hspace{-0.1cm} \pmod 4$, $p\neq 3$. If $R(d,k)\neq 0$, the congruence $WVx\equiv -(b-1) \hspace{-0.1cm} \pmod p$ has a solution, but $p\nmid b-1$ by amenability, so we have a contradiction. 
We deduce that all the summands in \eqref{eq34} vanish for $(q,W)>1$.\\ Then let $(q,W)=1$. As $d,k\mid q$ in \eqref{eq34}, we also have $(d,W)=(k,W)=1$ and $(d,V)=(k,V)=1$. Now clearly both of the congruences $WVx\equiv -b\hspace{-0.1cm} \pmod d$, $WVx\equiv -(b-1)\hspace{-0.1cm} \pmod k$ have a unique solution, so if the two congruences are thought of as a simultaneous equation, it has at most one solution $\hspace{-0.1cm} \pmod{dk}$. Therefore $R(d,k)\leq 1$, which leads to \eqref{eq34} being at most \begin{align*} \sum_{dk=q}1\leq \tau(q), \end{align*} as asserted.\qedd\\ \textbf{Proof of Lemma \ref{lemma2}.} This is similar to the argument on page 21 of \cite{matomaki-shao}. We can find unique $q'$ and $Q'$ such that $Q=qq'Q'$ and $(q,Q')=1$ and all the prime divisors of $q'$ divide $q$. Writing $c_0=c_1q+c_2Q'$, $c_0$ runs through each residue class $\hspace{-0.1cm} \pmod{Q}$ exactly once as $c_1$ runs through residue classes $\hspace{-0.1cm} \pmod{q'Q'}$ and $c_2$ runs independently through residue classes $\hspace{-0.1cm} \pmod{q}$. Now the left-hand side of \eqref{eqq5} (without absolute values) becomes \begin{align}\label{eqq6} \Sigma:=\sum_{\substack{c_1\hspace{-0.1cm} \pmod{q'Q'}\\(Wqc_1+b,Q')=1\\(Wqc_1+b-1,s(Q'))=1}}\sum_{\substack{c_2\hspace{-0.1cm} \pmod{q}\\ (WQ'c_2+b,q)=1\\(WQ'c_2+b-1,s(q))=1}}e\left(\frac{aQ'}{q}c_2\right). \end{align} Since $(aQ',q)=1$, the inner sum is exactly of the form appearing in Lemma \ref{lemma3}. Therefore, \begin{align*} |\Sigma|\leq \sum_{\substack{c_1\hspace{-0.1cm} \pmod{q'Q'}\\(Wqc_1+b,Q')=1\\(Wqc_1+b-1,s(Q'))=1}}\tau(q)\cdot 1_{q>w}. \end{align*} Since $w\geq 10^{{10}^{10}}$, estimating the divisor function crudely yields \begin{align*} |\Sigma|\leq 1_{q> w}\cdot q^{0.1}\sum_{\substack{c_1\hspace{-0.1cm} \pmod{q'Q'}\\(Wqc_1+b,Q')=1\\(Wqc_1+b-1,s(Q'))=1}}1&=1_{q>w}\cdot q'q^{0.1}\sum_{\substack{c_1\hspace{-0.1cm} \pmod{Q'}\\(Wqc_1+b,Q')=1\\(Wqc_1+b-1,s(Q'))=1}}1\\ &= 1_{q>w}\cdot q'q^{0.1}Q'\prod_{\substack{p\mid Q'\\ p>w}}\left(1-\frac{\omega(p)}{p}\right) \end{align*} where $\omega(p)\in \{1,2\}$ and $\omega(p)=2$ precisely when $p\equiv -1\hspace{-0.1cm} \pmod 4$. The previous expression is, for $q>w\geq 10^{{10}^{10}}$, \begin{align*} &\leq q'q^{0.2}\prod_{\substack{p\mid q\\p>w}}\left(1-\frac{\omega(p)}{p}\right)\cdot Q'\prod_{\substack{p\mid Q'\\p>w}}\left(1-\frac{\omega(p)}{p}\right)\\ &=\frac{Q}{q^{0.8}}\prod_{\substack{p\mid Q\\p>w}}\left(1-\frac{\omega(p)}{p}\right)\leq \frac{|\mathcal{Q}|}{w^{\frac{1}{2}}}, \end{align*} where the last step comes from \eqref{eq62}.\qedd\\ From Lemma \ref{lemma2}, we conclude that proving Proposition \ref{prop2} is enough for establishing Proposition \ref{prop_bohr} (and hence Theorem \ref{theo_goldbach}). \section[Weighted sieve for primes]{Weighted sieve for primes of the form $p=x^2+y^2+1$} \label{Sec: weighted} Next we investigate primes of the form $x^2+y^2+1$ in Bohr sets and prove Proposition \ref{prop2} concerning these, from which Theorem \ref{theo_goldbach} will follow. We will prove in this section Theorem \ref{t2} about weighted counting of primes in the shifted set $\mathcal{S}+1=\{s+1:s\in \mathcal{S}\}$. The proof resembles Iwaniec' s proof \cite{iwaniec-quadraticform} of the infinitude of primes of the form $x^2+y^2+1$, as well as the later works \cite{wu}, \cite{matomaki-m2+n2+1} on the same problem in short intervals, but the theorem involves a weighted version of the sieve procedure and hence requires a hypothesis about the weights. 
We will later verify the conditions of this hypothesis for a weight function related to the function $\chi(n)$ in Proposition \ref{prop2}, and this will imply Proposition \ref{prop2} and consequently Theorem \ref{theo_goldbach}. To formulate Theorem \ref{t2}, we first introduce the hypothesis regarding our weight coefficients. To this end, we need a couple of definitions. \begin{definition}\label{def3} Given a linear function $L$, let $\mathfrak{S}(L)$ be the \textit{singular product} \begin{align*} \mathfrak{S}(L)&=\prod_{\substack{p\equiv -1\hspace{-0.1cm} \pmod 4\\p\neq 3}}\left(1-\frac{|\{n\in \mathbb{Z}_p:\,\, L(n)\equiv 0\,\, \textnormal{or}\,\, 1 \hspace{-0.1cm} \pmod p\}|}{p}\right)\left(1-\frac{2}{p}\right)^{-1}\\ &\cdot\prod_{p\not \equiv -1\hspace{-0.1cm} \pmod 4}\left(1-\frac{|\{n\in \mathbb{Z}_p:\,\, L(n)\equiv 0\hspace{-0.1cm} \pmod p\}|}{p}\right)\left(1-\frac{1}{p}\right)^{-1}. \end{align*} \end{definition} \begin{definition}\label{def2} We say that a sequence $(g({\ell}))_{\ell\geq 1}$ of complex numbers is of \textit{convolution type} (for a given large integer $N$ and constant $\sigma\in (3,4)$) if \begin{align*} g(\ell)=\sum_{\substack{\ell=km\\N^{\frac{1}{\sigma}}\leq k\leq N^{1-\frac{1}{\sigma}}}}\alpha_k\beta_m \end{align*} for some complex numbers $|\alpha_k|, |\beta_k|\leq \tau(k)^2\log k$. \end{definition} \begin{definition}\label{def4} For $\frac{1}{3}<\rho_2<\rho_1<\frac{1}{2}$ and $\sigma\in (3,4)$, let $\textnormal{H}(\rho_1,\rho_2,\sigma)$ be the proposition \begin{align}\label{eq96} \frac{1}{2\sqrt{\rho_2}}\int_{1}^{\rho_2 \sigma}\frac{dt}{\sqrt{t(t-1)}}>\frac{1}{2\rho_1}\int_{2}^{\sigma}\frac{\log(t-1)}{t(1-\frac{t}{\sigma})^{\frac{1}{2}}}dt+10^{-10}. \end{align} \end{definition} In the proof of Theorem \ref{theo_goldbach}, we will use the fact that \begin{align*} \text{H}\left(\frac{1}{2}-\varepsilon,\frac{3}{7}-\varepsilon,3+\varepsilon\right)\quad \text{is true for small enough}\quad \varepsilon>0. \end{align*} This holds for $\varepsilon=0$ by a numerical computation and by continuity in a small neighborhood of $0$. Indeed, the difference between the integrals in \eqref{eq96} is then $>10^{-3}$. We are ready to state our Bombieri-Vinogradov type hypothesis, whose validity depends on the weight sequence $(\omega_n)$, as well as on the parameters $\rho_1,\rho_2$ and $\sigma$. \begin{hypothesis}\label{h1} Let $L(n)=Kn+b$ be an amenable linear function with $K\ll (\log N)^{O(1)}$. Let $(\omega_n)_{n\sim N}$ be a nonnegative sequence of real numbers, and let $\delta=(b-1,K)$. Let $\varepsilon>0$ be any small number. Let $\frac{1}{3}<\rho_2<\rho_1<\frac{1}{2}-\varepsilon$, $\sigma\in (3,4)$. 
Then for any sequence $(g(\ell))_{\ell\leq N^{0.9}}$ of convolution type (with parameter $\sigma$) \begin{align*} \sum_{\substack{d\leq N^{\rho_1}\\(d,K)=1}}\lambda_d^{+,\textnormal{LIN}}\sum_{\substack{\ell\leq N^{0.9}\\(\ell,K)=\delta\\(\ell,d)=1}}g(\ell)\bigg(\sum_{\substack{n\sim N\\L(n)=\ell p+1\\L(n)\equiv 0\hspace{-0.1cm} \pmod d}}\omega_n-\frac{1}{\varphi(d)}\frac{K}{\varphi(\frac{K}{\delta})}\sum_{n\sim N}\frac{\omega_n}{\ell \log \frac{Kn}{\ell}}\bigg)&\ll \frac{\sum_{n\sim N}\omega_n}{(\log N)^{100}},\\ \sum_{\substack{d\leq N^{\rho_2}\\ (d,K)=1}}\lambda_d^{-,\textnormal{SEM}}\bigg(\sum_{\substack{n\sim N\\L(n)\in \mathbb{P}\\L(n)\equiv 1\hspace{-0.1cm} \pmod d}}\omega_n-\frac{1}{\varphi(d)}\frac{ K}{\varphi(K)}\sum_{n\sim N}\frac{\omega_n}{\log(Kn)}\bigg)&\ll \frac{\sum_{n\sim N}\omega_n}{(\log N)^{100}}, \end{align*} where $\lambda_{d}^{+,\textnormal{LIN}}$ are the upper bound linear sieve weights with sifting parameter $z_1= N^{\frac{1}{5}}$ and $\lambda_{d}^{-,\textnormal{SEM}}$ are the lower bound semilinear sieve weights with sifting parameter $z_2=N^{\frac{1}{\sigma}}$ (the weights $\lambda_d^{\pm,\textnormal{SEM}}$ were defined in Theorem \ref{theo_sievebombieri}, and the weights $\lambda_d^{\pm,\textnormal{LIN}}$ are defined analogously by replacing $\beta=1$ by $\beta=2$ in that definition). \end{hypothesis} \begin{theorem}\label{t2} Assume Hypothesis \ref{h1} for a linear form $L(n)$, sequence $(\omega_n)_{n\sim N}$, and parameters $\rho_1,\rho_2,\sigma$ satisfying $\textnormal{H}(\rho_1,\rho_2,\sigma)$. Then \begin{align*} \sum_{\substack{n\sim N\\L(n)\in \mathbb{P}\\L(n)-1\in \mathcal{S}}}\omega_n\geq \frac{\delta_0\cdot \mathfrak{S}(L)}{(\log N)^{\frac{3}{2}}}\sum_{n\sim N}\omega_n+O(N^{\frac{1}{2}}), \end{align*} where $\delta_0>0$ is an absolute constant. \end{theorem} \begin{remark} We will be able to prove Hypothesis \ref{h1} in Section \ref{Sec: hypotheses} for $\rho_1=\frac{1}{2}-\varepsilon$, $\rho_2=\frac{3}{7}-\varepsilon$ and $\sigma=3+\varepsilon$ when $L(n)$ is suitable and $\omega_n$ is of bounded Fourier complexity. It would suffice to prove the same with $\rho_2=0.385$ instead of $\rho_2=\frac{3}{7}-\varepsilon=0.428\ldots$ (since then $\text{H}(\rho_1,\rho_2,\sigma)$ is true). On the other hand, existing Bombieri-Vinogradov estimates such as \cite[Lemma 12]{tolev_bombieri} would only give us $\rho_2=\frac{1}{3}-\varepsilon=0.333\ldots$, which falls short of what we need. \end{remark} \textbf{Proof.} Put \begin{align*} \mathcal{A}&=\{L(n)-1:\, n\sim N, L(n)\in \mathbb{P}\}\\ \mathcal{P}_{4,-1}&=\{p\in \mathbb{P}:\, p\equiv -1\hspace{-0.1cm} \pmod 4,\, p\neq 3\},\\ P(z)&=\prod_{\substack{p< z\\p\in \mathcal{P}_{4,-1}}}p,\\ \mathcal{P}_{4,1}^{*}&=\{n\geq 1:\, p\mid n\Rightarrow p\equiv 1 \hspace{-0.1cm} \pmod 4\}. \end{align*} If we weigh the elements of $\mathcal{A}$ by $\nu_n=\omega_{(L^{-1}(n+1))}$, where $L^{-1}$ is the inverse function of $L$, the sifting function is \begin{align*} S(\mathcal{A},\mathcal{P}_{4,-1},z)=\sum_{\substack{n\sim N\\L(n)\in \mathbb{P}\\(L(n)-1,P(z))=1}}\omega_n. \end{align*} Note that $L(n)-1\equiv 2^{\beta} \hspace{-0.1cm} \pmod{2^{\beta+2}}$ for some $\beta\geq 1$ by the definition of amenability, so that $L(n)-1$ has an even number of prime factors that are $\equiv -1\hspace{-0.1cm} \pmod 4$ (counted with multiplicity). 
We have \begin{align}\label{eq85} \sum_{\substack{n\sim N\\L(n)\in \mathbb{P}\\L(n)-1\in \mathcal{S}}}\omega_n&=S(\mathcal{A},\mathcal{P}_{4,-1},(3KN)^{\frac{1}{2}}), \end{align} since the right-hand side counts with weight $\omega_n$ the numbers $L(n)-1=2^{\alpha_1}3^{\alpha_2}k\in \mathcal{A}$ with $k\in \mathcal{P}_{4,1}^{*}$, and we claim that these numbers are precisely the numbers in $\mathcal{S}\cap \mathcal{A}$. We have $2^{\alpha_1}3^{\alpha_2}k=L(n)-1$, so by amenability $\alpha_2\equiv 0\hspace{-0.1cm} \pmod 2$. It is a fact in elementary number theory that for $k\in \mathcal{P}_{4,1}^{*}$, both $k$ and $2k$ can be expressed in the form $a^2+b^2$ with $(a,b)=1$, and additionally no number of the form $2^{\alpha_1}3^{\alpha_2}k$ with $(k,6)=1$ and $\alpha_2$ odd or $k\not \in \mathcal{P}_{4,1}^{*}$ is of the form $x^2+y^2$ with $(x,y)\mid 6^{\infty}$. Hence both sides of \eqref{eq85} indeed count the same integers.\\ Buchstab's identity reveals that \begin{align*} S(\mathcal{A},\mathcal{P}_{4,-1},(3KN)^{\frac{1}{2}})&=S(\mathcal{A},\mathcal{P}_{4,-1},N^{\frac{1}{\sigma}})-\sum_{\substack{n\sim N\\L(n)\in \mathbb{P}}}\sum_{\substack{p_2\mid L(n)-1\\N^{\frac{1}{\sigma}}\leq p_2<(3KN)^{\frac{1}{2}}\\(L(n)-1,P(p_2))=1\\p_2\in \mathcal{P}_{4,-1}}}\omega_n. \end{align*} The condition $p_2\mid L(n)-1\equiv 2^{\beta} \hspace{-0.1cm} \pmod{2^{\beta+2}}$ implies that $L(n)-1$ has either exactly $2$ prime divisors from $\mathcal{P}_{4,-1}$ or at least $4$ such prime divisors (with multiplicities). The second case is impossible, since all the prime divisors of $L(n)-1$ that are from $\mathcal{P}_{4,-1}$ are $\geq p_2$ and $p_2^{4}\geq N^{\frac{4}{\sigma}}>L(2N)-1.$ This means that we may write $L(n)-1=p_1p_2m',$ $p_1\geq p_2,$ $p_1\in \mathcal{P}_{4,-1},$ with $m'$ having no prime divisors from $\mathcal{P}_{4,-1}$. Now $\delta\mid L(n)-1=Kn+b-1$ with $\delta=(b-1,K)$, and since $p_1\geq p_2\geq N^{\frac{1}{\sigma}}>K$, we have $\delta\mid m'$. Hence we may write $m'=\delta m$, where $m\in \mathcal{P}_{4,1}^{*}$ (we have $3\nmid m$, since $K$ is divisible by a larger power of $3$ than $b-1$ is, by the definition of amenability. Similarly $2\nmid m$). We claim that $(m,\frac{K}{\delta})=1$. Indeed, if $p\mid m$ and $p\mid \frac{K}{\delta}$, we must have $p\mid \frac{b-1}{\delta}$, a contradiction to $(K,b-1)=\delta$. Now we have \begin{align}\label{eq39} S(\mathcal{A},\mathcal{P}_{4,-1},(3KN)^{\frac{1}{2}})= S-T. \end{align} Here \begin{align*} S=S(\mathcal{A},\mathcal{P}_{4,-1},N^{\frac{1}{\sigma}}),\quad T=\sum_{\substack{n\sim N\\L(n)\in \mathbb{P}}}\sum_{\substack{L(n)-1=\delta p_1p_2m\\p_1,p_2\in \mathcal{P}_{4,-1}\\N^{\frac{1}{\sigma}}\leq p_2\leq p_1\\m\in \mathcal{P}_{4,1}^{*}}}\omega_n\leq \sum_{\ell \in \mathcal{L}}S(\mathcal{M}(\ell),\mathcal{P}(\ell),N^{\frac{1}{6}}), \end{align*} with \begin{align*} \mathcal{L}&=\{\delta p_2m:\,\, N^{\frac{1}{\sigma}}\leq p_2\leq (3KNm^{-1})^{\frac{1}{2}},\,\, p_2\in \mathcal{P}_{4,-1},\,\, m\in \mathcal{P}_{4,1}^{*},\,\, (m,\frac{K}{\delta})=1\},\\ \mathcal{M}(\ell)&=\{L(n):\, L(n)=\ell p+1: n\sim N, p\in \mathbb{P}\},\\ \mathcal{P}(\ell)&=\{p\in \mathbb{P}:(p,2\ell)=1\},\quad Q(z)=\prod_{\substack{p< z\\ p\in \mathcal{P}(\ell)}}p, \end{align*} and $M(\ell)$ has been assigned the weights $\nu_n=\omega_{L^{-1}(n)}$, so that \begin{align*} S(\mathcal{M}(\ell),\mathcal{P}(\ell),z)=\sum_{\substack{n\sim N\\L(n)=\ell p+1\\(L(n),Q(z))=1}}\omega_n. 
\end{align*} We carry out bounding $S$ from below and bounding $T$ from above separately.\\ \textbf{Bounding $S$.} For $d\mid P(z)$, $(d,K)=1$, let \begin{align*} r(\mathcal{A},d)&=\sum_{\substack{n\sim N\\L(n)\in \mathbb{P}\\L(n)-1\equiv 0\hspace{-0.1cm} \pmod d}}\omega_n-\frac{1}{\varphi(d)}\frac{K}{\varphi(K)}\sum_{n\sim N}\frac{\omega_n}{\log(Kn)}, \end{align*} and for $(d,K)>1$ we let $r(\mathcal{A},d)=0$ (since if $p\mid d$, $p\mid K$ and $p\in \mathcal{P}_{4,-1}$, then $p$ does not divide any element of $\mathcal{A}$ by the amenability of $L(n)$). Let $\sigma\in (3,4)$ be as in Hypothesis \ref{h1}. The semilinear sieve \cite[Theorem 11.13]{friedlander}, with $\beta=1$, sifting parameter $z=N^{\frac{1}{\sigma}}$, and level $D=z^s$, $1\leq s\leq 2$, gives \begin{align}\label{eq35}\begin{split} &S(\mathcal{A},\mathcal{P}_{4,-1},N^{\frac{1}{\sigma}}) \\ &\geq \frac{K}{\varphi(K)}\sum_{n\sim N}\frac{\omega_n}{\log(Kn)} V_K^{\textnormal{SEM}}(N^{\frac{1}{\sigma}}) \left(f(s)+O((\log N)^{-0.1})\right)+\sum_{d\leq N^{\frac{s}{\sigma}}}\lambda_d^{-,\textnormal{SEM}}r(\mathcal{A},d), \end{split} \end{align} where $\lambda_d^{-,\textnormal{SEM}}$ are the lower bound semilinear weights with sifting parameter $z=N^{\frac{1}{\sigma}}$ and we have introduced the quantities \begin{align*} f(s)=\sqrt{\frac{e^{\gamma}}{\pi s}}\int_{1}^s \frac{dt}{\sqrt{t(t-1)}}\quad \text{and}\quad V_K^{\textnormal{SEM}}(z)=\prod_{\substack{p<z\\p\equiv -1 \hspace{-0.1cm} \pmod 4\\p\nmid K}}\left(1-\frac{1}{\varphi(p)}\right) \end{align*} We take $s=\rho_2 \sigma \in [1,2]$, where $\rho_2$ is as in Hypothesis \ref{h1}. Now Hypothesis \ref{h1} permits replacing the last sum in \eqref{eq35} with an error of $\ll \frac{\sum_{n\sim N}\omega_n}{(\log N)^{100}}$ (since the terms of that sum in \eqref{eq35} vanish unless $(d,K)=1$). Moreover, the term $V_K^{\textnormal{SEM}}(N^{\frac{1}{\sigma}})$ can be computed asymptotically using \cite[Proposition 1]{wu}, which implies that \begin{align*} V_K^{\textnormal{SEM}}(z)=(1+o(1))\prod_{\substack{p\mid K\\p\equiv -1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p-1}\right)^{-1}\cdot 2AC_{4,-1}\cdot \left(\frac{\pi e^{-\gamma}}{\log z}\right)^{\frac{1}{2}}, \end{align*} where \begin{align*} A=\frac{1}{2\sqrt{2}}\prod_{p\equiv -1 \hspace{-0.1cm} \pmod 4}\left(1-\frac{1}{p^2}\right)^{\frac{1}{2}}\quad \text{and}\quad C_{4,i}=\prod_{p\equiv i \hspace{-0.1cm} \pmod 4}\left(1-\frac{1}{(p-1)^2}\right) \end{align*} for $i\in \{-1,1\}$. Therefore, we end up with the bound \begin{align}\label{eq44} S&\geq \frac{4AC_{4,-1}+o(1)}{(\log N)^{\frac{1}{2}}} \cdot I_1(\rho_2,\sigma)\frac{K}{\varphi(K)}\prod_{\substack{p\mid K\\ p\equiv -1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p-1}\right)^{-1}\cdot \sum_{n\sim N}\frac{\omega_n}{\log(Kn)} \nonumber \\ &=\frac{4AC_{4,-1}+o(1)}{(\log N)^{\frac{3}{2}}} \cdot I_1(\rho_2,\sigma)\frac{K}{\varphi(K)}\prod_{\substack{p\mid K\\ p\equiv -1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p-1}\right)^{-1}\cdot \sum_{n\sim N}\omega_n, \end{align} where \begin{align*} I_1(\rho_2,\sigma)&=\frac{1}{2\sqrt{\rho_2}}\int_{1}^{\rho_2 \sigma}\frac{dt}{\sqrt{t(t-1)}}. \end{align*} \textbf{Bounding $T$.} Write, for $d\mid Q(z)$, $(d,K)=1$, $(\ell,d)=1$ and $(\ell,K)=\delta$, \begin{align*} r(\mathcal{M}(\ell),d)=\sum_{\substack{n\sim N\\L(n)-1= \ell p\\L(n)\equiv 0\hspace{-0.1cm} \pmod d}}\omega_n-\frac{1}{\varphi(d)}\frac{K}{\varphi(\frac{K}{\delta})}\sum_{n\sim N}\frac{\omega_n}{\ell \log \frac{Kn}{\ell}}. 
\end{align*} For all other $d$ such that $d\mid Q(z)$, let $r(\mathcal{M}(\ell),d)=0$ (since if $(d,K)>1$, then $L(n)-1=\ell p,$ $L(n)\equiv 0\hspace{-0.1cm} \pmod d$ is impossible). With these notations, for $1\leq s \leq 3$ the linear sieve \cite[Theorem 11.13]{friedlander} with $\beta=2$ provides the bound \begin{align}\label{eq1} S(\mathcal{M}(\ell),\mathcal{P}(\ell),N^{\frac{1}{6}})&\leq \frac{(1+o(1))K}{\varphi(\frac{K}{\delta})}\sum_{n\sim N}\frac{\omega_n}{\ell \log \frac{Kn}{\ell}} V_K^{\textnormal{LIN}}(N^{\frac{1}{5}},\ell)F(s)+\sum_{d\leq N^{\frac{s}{5}}}\lambda_d^{+,\textnormal{LIN}}r(\mathcal{M}(\ell),d), \end{align} where $\lambda_d^{+,\textnormal{LIN}}$ are the upper bound linear sieve coefficients with sifting parameter $z=N^{\frac{1}{5}}$, $F(s)=\frac{2e^{\gamma}}{s}$, and \begin{align*} V_K^{\textnormal{LIN}}(z,\ell)&=\prod_{\substack{p\in \mathcal{P}(\ell)\\ p< z\\p\nmid K}}\left(1-\frac{1}{\varphi(p)}\right)=\prod_{2< p<z}\left(1- \frac{1}{p-1}\right)\prod_{\substack{p\mid K\ell\\2<p<z}}\left(1-\frac{1}{p-1}\right)^{-1}.\nonumber \end{align*} Applying formula (4.6) of \cite{wu}, we get the asymptotic \begin {align}\label{eq48} &V_K^{\textnormal{LIN}}(z,\ell)=(1+o(1))\frac{2C_{4,1}C_{4,-1}e^{-\gamma}\mathfrak{f}(K \ell)}{\log z},\textnormal{where}\quad \mathfrak{f}(d)=\prod_{\substack{p\mid d\\p>2}}\left(1-\frac{1}{p-1}\right)^{-1}. \end{align} We take $s=5\rho_1\in [1,3]$ in the linear sieve. Then we have \begin{align}\label{eq75} \sum_{\ell \in \mathcal{L}}\sum_{d\leq N^{\rho_1}}\lambda_d^{+,\textnormal{LIN}}r(\mathcal{M}(\ell),d)&=\sum_{\substack{d\leq N^{\rho_1}\\(d,K)=1}}\lambda_d^{+,\textnormal{LIN}}\sum_{\substack{\ell\leq N^{\frac{3}{4}+\varepsilon}\\(\ell,d)=1\\(\ell,K)=\delta}}1_{\mathcal{L}}(\ell)r(\mathcal{M}(\ell),d), \end{align} since $1_{\mathcal{L}}(\ell)$ is supported on $\ell\leq 3K^2N^{1-\frac{1}{\sigma}}\leq N^{\frac{3}{4}+\varepsilon}$. Concerning the error sum in \eqref{eq1}, observe that \begin{align*} 1_{\mathcal{L}}(\ell)=\sum_{\substack{\ell=k\cdot \delta m\\N^{\frac{1}{\sigma}}\leq k\leq (3KN)^{\frac{1}{2}}\\k\leq \left(\frac{3KN}{m}\right)^{\frac{1}{2}}}}1_{\mathcal{P}_{4,-1}}(k)1_{\mathcal{P}_{4,1}^{*}}(m)1_{(m,\frac{K}{\delta})=1}, \end{align*} so $1_{\mathcal{L}}(\ell)$ is of convolution type (for the value of $\sigma$ we are considering), except for the cross condition $k\leq \left(\frac{3KN}{m}\right)^{\frac{1}{2}}$. We use Perron's formula in the form \begin{align*} 1_{(1,\infty)}(y)&=\frac{1}{\pi}\int_{-N^4}^{N^4}\frac{\sin(t \log y)}{t}dt+O\left(\frac{1}{N^4|\log y|}\right)\\ &=\frac{2}{\pi}\int_{N^{-5}}^{N^4}\frac{\sin(t \log y)}{t}dt+O\left(\frac{1}{N^4|\log y|}+\frac{|\log y|}{N^5}\right) \end{align*} for $N^{-3}<y\leq N^3, y\neq 1$ to dispose of the cross condition. We choose $y=\frac{3KN}{k^2m}$, which satisfies $|y-1|\geq \frac{1}{3KN^2}$ after altering $N$ by $\leq 1$ if necessary, so that the error term in Perron's formula becomes $O(\frac{K}{N^2})$. According to the addition formula for sine, we have \begin{align*} \sin (t\log y)=\sin(t\log (3KN)-t\log k^2)\cos(t\log m)-\cos(t\log (3KN)-t\log k^2) \sin(t\log m) \end{align*} which permits us to separate the variables $k$ and $m$. 
Then we have \begin{align*} 1_{\mathcal{L}}(\ell)=\frac{2}{\pi}\int_{N^{-4}}^{N^3}\frac{1}{t}\sum_{\substack{\ell=k\cdot \delta m\\N^{\frac{1}{\sigma}}\leq k\leq (3KN)^{\frac{1}{2}}}}(\alpha_k^{(1)}(t)\beta_m^{(1)}(t)-\alpha_k^{(2)}(t)\beta_m^{(2)}(t))\, dt+O\left(\frac{1}{N^{2-\varepsilon}}\right), \end{align*} where $|\alpha_k^{(j)}(t)|, |\beta_m^{(j)}(t)|\leq 1$ and $t\mapsto \alpha_k^{(j)}(t)$ and $t\mapsto \beta_m^{(j)}(t)$ are continuous and $\alpha_k^{(j)}(t)$ is supported on $N^{\frac{1}{\sigma}}\leq k\leq (3KN)^{\frac{1}{2}}$. Substituting this to \eqref{eq75}, Hypothesis \ref{h1} tells that \begin{align*} \sum_{\ell \in \mathcal{L}}\sum_{d\leq N^{\rho_1}}\lambda_d^{+,\textnormal{LIN}}r(\mathcal{M}(\ell),d)\ll \frac{\sum_{n\sim N}\omega_n}{(\log N)^{99}}+O(N^{\frac{1}{2}-\varepsilon}). \end{align*} We sum \eqref{eq1} over $\ell \in \mathcal{L}$ and make use of \eqref{eq48}, after which we have obtained \begin{align*} &\sum_{\ell \in \mathcal{L}}S(\mathcal{M}(\ell),\mathcal{P}(\ell),N^{\frac{1}{5}})\\ & \leq (F(s)+o(1))\cdot \frac{K}{\varphi(\frac{K}{\delta})}\sum_{n\sim N}\sum_{\ell \in \mathcal{L}}\frac{\omega_n}{\ell \log \frac{Kn}{\ell}} V_K^{\textnormal{LIN}}(N^{\frac{1}{6}},\ell)+O\left(\frac{\sum_{n\sim N}\omega_n}{(\log N)^{99}}\right)\\ &=\left(\frac{2e^{\gamma}}{5\rho_1}+o(1)\right)\cdot \frac{K}{\varphi(\frac{K}{\delta})}\sum_{\ell\in \mathcal{L}}\frac{\mathfrak{f}(K \ell)}{\ell \log \frac{KN}{\ell}}\cdot \sum_{n\sim N}\omega_n\cdot\frac{2C_{4,1}C_{4,-1}e^{-\gamma}}{\frac{1}{5}\log N}+O\left(\frac{\sum_{n\sim N}\omega_n}{(\log N)^{99}}\right). \end{align*} We analyze the sum over $\mathcal{L}$ in the above formula. Denoting $\mathcal{L}'=\{\frac{\ell}{\delta}:\, \ell\in \mathcal{L}\}$, it is \begin{align*} \sum_{\ell \in \mathcal{L}}\frac{\mathfrak{f}(K\ell)}{\ell \log \frac{KN}{\ell}}&=\left(\frac{1}{\delta}+o(1)\right)\sum_{\ell' \in \mathcal{L}'}\frac{\mathfrak{f}(K\ell')}{\ell'\log \frac{KN}{\ell'}}1_{(\ell',\frac{K}{\delta})=1}, \end{align*} since $\delta\mid K$. The previous sum can be written as \begin{align}\label{eq83} (1+o(1))\sum_{m\leq N^{1-\frac{2}{\sigma}+\varepsilon}}\frac{u(m)\mathfrak{f}(Km)1_{(m,\frac{K}{\delta})=1}}{m}\sum_{\substack{N^{\frac{1}{\sigma}}\leq p\leq (\frac{3KN}{m})^{\frac{1}{2}}\\p\equiv -1\hspace{-0.1cm} \pmod 4}}\frac{1}{p\log \frac{N}{pm}}, \end{align} where $u(m)$ is the characteristic function of $\mathcal{P}_{4,1}^{*}$. To evaluate this sum, we study the sum \begin{align}\label{eq83a} \sum_{m\leq x}u(m)\mathfrak{f}(Km)1_{(m,\frac{K}{\delta})=1}. \end{align} The sum can be written as \begin{align*} \mathfrak{f}(K)\sum_{m\leq x}u(m)\mathfrak{f}(\psi_K(m))1_{(m,\frac{K}{\delta})=1}, \quad \text{where}\quad \psi_K(m)=\prod_{\substack{p\mid m\\p\nmid K}}p, \end{align*} and the advantage is that $\mathfrak{f}(\psi_K(m))$ is a multiplicative function. 
By Wirsing's theorem \cite[Satz 1]{wirsing} applied to the nonnegative multiplicative function $h(m)=u(m)\mathfrak{f}(\psi_K(m))1_{(m,\frac{K}{\delta})=1}$ (which is bounded by $2$ at prime powers and fulfills $\sum_{p\leq x}h(p)\log p=(\frac{1}{2}+o(1))x$), we see that \eqref{eq83a} equals \begin{align*} &(\mathfrak{f}(K)+o(1))\frac{e^{-\frac{\gamma}{2}}}{\sqrt{\pi}}\frac{x}{\log x}\prod_{\substack{p\leq x\\p\nmid \frac{K}{\delta}\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1+\frac{h(p)}{p}+\frac{h(p^2)}{p^2}+\cdots\right)\\ &=(\mathfrak{f}(K)+o(1))\frac{e^{-\frac{\gamma}{2}}}{\sqrt{\pi}}\frac{x}{\log x}\prod_{\substack{p\leq x\\p\nmid K\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1+\frac{1}{p-2}\right)\prod_{\substack{p\mid K\\p\nmid \frac{K}{\delta}\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p}\right)^{-1}. \end{align*} Applying Wirsing's theorem reversely, this is \begin{align*} &(\mathfrak{f}(K)+o(1))\sum_{m\leq x}u(m)\mathfrak{f}(m)\cdot \prod_{\substack{p\mid K\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1+\frac{1}{p-2}\right)^{-1}\prod_{\substack{p\mid K\\p\nmid \frac{K}{\delta}\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p}\right)^{-1}. \end{align*} By \cite[Lemma 3]{wu}, we have \begin{align*} \sum_{m\leq x}u(m)\mathfrak{f}(m)=(1+o(1))\frac{A}{C_{4,1}}\frac{x}{(\log x)^{\frac{1}{2}}}. \end{align*} Now, using the same argument as in the proof of \cite[Lemma 5]{matomaki-m2+n2+1}, we compute that \eqref{eq83} equals \begin{align*} \frac{A+o(1)}{C_{4,1}(\log N)^{\frac{1}{2}}}\cdot \frac{1}{2}\int_{2}^{\sigma}\frac{\log(t-1)}{t(1-\frac{t}{\sigma})^{\frac{1}{2}}}dt\cdot \frac{\mathfrak{f}(K)}{\delta}\prod_{\substack{p\mid K\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1+\frac{1}{p-2}\right)^{-1}\prod_{\substack{p\mid K\\p\nmid \frac{K}{\delta}\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p}\right)^{-1}. \end{align*} \textbf{Concluding the proof.} Now we have \begin{align}\label{eq22} T\leq \frac{4AC_{4,-1}+o(1)}{(\log N)^{\frac{3}{2}}} \frac{ I_2(\rho_1,\sigma)K\mathfrak{f}(K)}{\delta\varphi(\frac{K}{\delta})}\prod_{\substack{p\mid K\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\hspace{-0.1cm}\left(1+\frac{1}{p-2}\right)^{-1}\hspace{-0.2cm}\prod_{\substack{p\mid K\\p\nmid \frac{K}{\delta}\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\hspace{-0.1cm}\left(1-\frac{1}{p}\right)^{-1}\sum_{n\sim N}\omega_n, \end{align} where \begin{align*} I_2(\rho_1, \sigma)=\frac{1}{2\rho_1}\int_{2}^{\sigma}\frac{\log(t-1)}{t(1-\frac{t}{\sigma})^{\frac{1}{2}}}dt. 
\end{align*} We claim that the local factors in \eqref{eq44} and \eqref{eq22} are identical, or in other words that \begin{align}\label{eq84}\begin{split} &\prod_{p\mid K}\left(1-\frac{1}{p}\right)^{-1}\prod_{\substack{p\mid K\\ p\equiv -1\hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p-1}\right)^{-1}\\ &=\prod_{p\mid \frac{K}{\delta}}\left(1-\frac{1}{p}\right)^{-1}\prod_{\substack{p\mid K\\p>2}}\left(1-\frac{1}{p-1}\right)^{-1}\prod_{\substack{p\mid K\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1+\frac{1}{p-2}\right)^{-1}\prod_{\substack{p\mid K\\p\nmid \frac{K}{\delta}\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p}\right)^{-1}.\end{split} \end{align} By the identity $(1+\frac{1}{p-2})^{-1}=1-\frac{1}{p-1}$, \eqref{eq84} is equivalent to \begin{align*} \prod_{p\mid K}\left(1-\frac{1}{p}\right)^{-1}=\prod_{p\mid \frac{K}{\delta}}\left(1-\frac{1}{p}\right)^{-1}\prod_{\substack{p\mid K\\p\nmid \frac{K}{\delta}\\p\equiv 1 \hspace{-0.1cm} \pmod 4}}\left(1-\frac{1}{p}\right)^{-1}, \end{align*} which in turn is equivalent to the nonexistence of a prime $p\not\equiv 1\hspace{-0.1cm} \pmod 4$ for which $p\mid K$, $p\nmid \frac{K}{\delta}$. If $p\geq 5$ were such a prime, we would have $p\mid \delta$, so $p\mid b-1$, which contradicts the definition of amenability. We also cannot have $p=2$ or $p=3$, since $2\mid \frac{K}{\delta}$ and $3\mid \frac{K}{\delta}$ for $\delta=(b-1,K)$ by amenability.\\ Thus no such $p$ exists and \eqref{eq84} holds. Furthermore, it is clear that \eqref{eq84} is at least $0.01\mathfrak{S}(L)$. Consequently, \begin{align*} S-T\geq (0.01+o(1))4AC_{4,-1}\mathfrak{S}(L)(I_1(\rho_2,\sigma)-I_2(\rho_1,\sigma))\frac{\sum_{n\sim N}\omega_n}{(\log N)^{\frac{3}{2}}}+O(N^{\frac{1}{2}}). \end{align*} Owing to the fact that $\text{H}(\rho_1,\rho_2,\sigma)$ is assumed to be true, we have $I_1(\rho_2,\sigma)-I_2(\rho_1,\sigma)\geq 10^{-10}$, and this completes the proof of Theorem \ref{t2} in view of \eqref{eq85} and \eqref{eq39}.\qedd \section{Preparation for the verifying the hypothesis}\label{Sec: decomposition} The sequence $(\omega_n)$ to which we will apply Theorem \ref{t2} will be determined by a function $\chi(n)$ having a Fourier series of the form \eqref{eq13}. In \eqref{eq13} it is natural to separate the phases $\alpha_i$ into major and minor arc parameters. This partition arises from the following lemma. \begin{lemma}\label{le11} Let $\alpha_1,\ldots, \alpha_{\mathcal{C}}$ be real numbers with $\mathcal{C}\ll 1$, and let $W\ll 1$ be as in \eqref{eq30}. Also let the constants $A,B\geq 1$ be related by $B=A(3\mathcal{C})^{\mathcal{C}}$. Then for any large $N$ there exists a positive integer $Q\leq (\log N)^{B}$ such that each $\alpha_k$ may be written as \begin{align*} \alpha_k&=W\frac{a_k}{q_k}+\varepsilon_k,\,\, (a_k,q_k)=1,\,\, 1\leq q_k\leq \frac{N}{(\log N)^{100B}},\,\,|\varepsilon_k|\leq \frac{(\log N)^{100B}W}{q_k N}, \end{align*} and either $q_k\mid Q$ or $q_k\geq \frac{q_k}{(q_k,Q^2)}\geq (\log N)^A$. \end{lemma} \textbf{Proof.} This is Lemma 3.2 in \cite{matomaki-shao}.\qedd\\ From now on, $A$ (and therefore also $B$) will be large enough quantities (say $A,B\geq 10^{10}$). Let us define the sequence $(\omega_n)$ to which we will apply Theorem \ref{t2} in order to prove Proposition \ref{prop2}. Let $\chi:\mathbb{Z}\to \mathbb{R}_{\geq 0}$ be any function with Fourier complexity $\leq \mathcal{C}$ (i.e., $\chi$ satisfies \eqref{eq13}). 
Given an integer $t$ with $|t|\leq 5N$, we choose \begin{align*} (\omega_n)_{n\sim \frac{N}{Q}}=(\chi(t-(Qn+c_0)))_{n\sim \frac{N}{Q}}, \end{align*} where $Q$ is determined by the $\alpha_i$ in \eqref{eq13} with the help of Lemma \ref{le11} and $c_0\in \mathcal{Q}$ with \begin{align*} \mathcal{Q}=\{c_0\hspace{-0.2cm}\hspace{-0.1cm} \pmod Q:\,\, (Wc_0+b,Q)=(Wc_0+b-1,s(Q))=1\}. \end{align*} Recall that $|\mathcal{Q}|$ is given by \eqref{eq62}.\\ From now on, let \begin{align*} x=\frac{N}{Q},\quad L(n)=QWn+Wc_0+b,\quad c_0\in \mathcal{Q}. \end{align*} To prove Proposition \ref{prop2} and hence Proposition \ref{prop_bohr} and Theorem \ref{theo_goldbach}, it suffices to show that for $W$ as in \eqref{eq30} and $\mathfrak{S}(L)$ as in Definition \ref{def3} we have \begin{align}\label{eq12} \sum_{\substack{n\sim x\\L(n)\in \mathbb{P}\\L(n)-1\in \mathcal{S}}}\chi(t-(Qn+c_0))\geq \frac{\delta_0\cdot \mathfrak{S}(L)}{(\log x)^{\frac{3}{2}}}\sum_{n\sim x}\chi(t-(Qn+c_0))+o\left(\frac{x}{(\log x)^{\frac{3}{2}}}\right), \end{align} since $L(n)$ is amenable and since by \eqref{eq62} \begin{align*} \mathfrak{S}(L)&\asymp \prod_{\substack{p\equiv -1\hspace{-0.1cm} \pmod 4\\ p\mid QW\\p\nmid W}}\left(1-\frac{1}{p}\right)^{-2}\prod_{\substack{p\not \equiv -1 \hspace{-0.1cm} \pmod 4\\ p\mid QW\\p\nmid W}}\left(1-\frac{1}{p}\right)^{-1}\\ &\cdot \prod_{\substack{p\equiv -1\hspace{-0.1cm} \pmod 4\\p\mid W}}\left(1-\frac{1}{p}\right)^{-2}\prod_{\substack{p\not \equiv -1\hspace{-0.1cm} \pmod 4\\p\mid W}}\left(1-\frac{1}{p}\right)^{-1}\\ &\asymp \left(\frac{W}{\varphi(W)}\right)^{\frac{3}{2}}\frac{Q}{|\mathcal{Q}|}. \end{align*} By Theorem \ref{t2} and the remark after it, formula \eqref{eq12} will follow once we have verified Hypothesis \ref{h1} for our sequence $(\chi(t-(Qn+c_0)))_{n\sim x}$ and linear function $L(n)$ and parameters \begin{align}\label{eq87} \rho_1=\frac{1}{2}-10\varepsilon,\quad \rho_2=\frac{3}{7}-10\varepsilon,\quad \text{and}\quad \sigma=3+\varepsilon. \end{align} By formula \eqref{eq13} for $\chi(n)$ and Lemma \ref{le11}, it suffices to verify Hypothesis \ref{h1} with the choices \eqref{eq87} for $(e(\xi n))_{n\sim x}$, where $\xi$ is an arbitrary real number satisfying, for some $Q\leq 2(\log x)^{B}$, \begin{align}\label{eq88} \left|\xi-\frac{QWa}{q}\right|\leq \frac{2(\log x)^{102B}}{qx}\,\, \text{for}\,\, (a,q)=1,\,\, q\leq \frac{x}{(\log x)^{99B}},\,\, \text{and}\,\, q\mid Q\,\, \text{or}\,\, \frac{q}{(q,Q^2)}\geq (\log x)^{A}. \end{align} Moreover, we may assume in \eqref{eq12} that \begin{align*} \sum_{n\sim x}\chi(t-(Qn+c_0))\gg \frac{x}{(\log x)\mathfrak{S}(L)}, \end{align*} since otherwise we have nothing to prove, and consequently it suffices to prove Hypothesis \ref{h1} for $(e(\xi n))_{n\sim x}$ with $(\sum_{n\sim x}\omega_n) (\log x)^{-100}$ replaced by $x(\log x)^{-200}$ in that hypothesis. \section{Bombieri-Vinogradov sums weighted by additive characters}\label{Sec: Bombieri} We will establish Hypothesis \ref{h1} in the setting of Section \ref{Sec: decomposition} subsequently in Section \ref{Sec: hypotheses}. For that purpose as well as for proving Theorem \ref{theo_alphap} in Section \ref{Sec: fractional parts}, we need the following Bombieri-Vinogradov type estimates for type I and II exponential sums. We employ for positive integers $q$ and $v$ the notation \begin{align*} q_{v}=\frac{q}{(q,v^2)}. \end{align*} \begin{lemma}\label{le8} Let $M\leq N^{0.4}$, $R\leq N^{0.1}$, and $\rho\leq \frac{1}{2}-\varepsilon$ for some $\varepsilon\in (0,\frac{1}{6})$.
Let $\xi$ be a real number with $|\xi-\frac{a}{q}|\leq \frac{1}{(qv)^2}$ for some coprime $a$ and $q\in [1,N]$ and some positive integer $v\leq N^{0.1}$. Then for any complex numbers $|\alpha_m|\leq \tau(m)^2\log m$ and any $t\in [N,2N]$ we have \begin{align*} &\sum_{0<|r|\leq R}\,\,\sum_{d\leq N^{\rho}}\max_{(c,dv)=1}\bigg|\sum_{\substack{N\leq mn\leq t\\mn\equiv c \hspace{-0.1cm} \pmod{dv}\\m\leq M}}\alpha_m e(\xi r mn)\bigg|\\ &\ll \left(\frac{RN}{v}\right)^{\frac{1}{2}}\left(RMN^{\rho}+\frac{RN}{vq_v}+q_v\right)^{\frac{1}{2}}(\log N)^{1000}. \end{align*} \end{lemma} \textbf{Proof.} We follow the proof of \cite[Lemma 8.3]{matomaki-shao}. It suffices to consider the sum over $0<r\leq R$. Our task is to estimate \begin{align*} S_r=\sum_{d\leq N^{\rho}}\max_{(c,dv)=1}\bigg|\sum_{\substack{N\leq mn\leq t\\mn\equiv c \pmod{dv}\\m\leq M}}\alpha_m e(\xi rmn)\bigg| \end{align*} for $r\leq R$. The inner sum in the definition of $S_r$ is a geometric sum in the variable $n$, so evaluating it provides the bound \begin{align*} S_r\ll \sum_{d\leq N^{\rho}}\sum_{m\leq M}|\alpha_m|\min\left\{\frac{RN}{rmdv},\frac{1}{\|r \xi md v\|}\right\}. \end{align*} Observe that $\left|v\xi-\frac{av}{q}\right|\leq \frac{1}{q^2}$. Based on this, writing $d'=rmd$ and using a standard bound for sums over fractional parts \cite[Lemma B.3]{matomaki-shao} (taking $x=\frac{RN}{v}$ in that lemma), we get \begin{align*} \sum_{r\leq R}S_r&\ll \sum_{d'\leq RMN^{\rho}}\tau(d')^5\min\left\{\frac{RN}{d' v},\frac{1}{\|v\xi d'\|}\right\}(\log N)\\ &\ll \left(\frac{RN}{vq_{v}^{\frac{1}{2}}}+\left(\frac{RN\cdot RMN^{\rho}}{v}\right)^{\frac{1}{2}}+\left(\frac{RN}{v}q_v\right)^{\frac{1}{2}}\right)(\log N)^{1000}\\ &\ll \left(\frac{RN}{v}\right)^{\frac{1}{2}}\left(RMN^{\rho}+\frac{RN}{vq_v}+q_v\right)^{\frac{1}{2}}(\log N)^{1000}, \end{align*} as wanted.\qedd \begin{lemma}\label{le9} Let $M\in [N^{\frac{1}{2}},N^{\frac{3}{4}}]$ and $\Delta_1,\Delta_2\geq 1$, $\Delta_1\Delta_2\leq N^{\frac{1}{2}}$, $\Delta_1 \Delta_2^2\leq \frac{M}{v}$ for some positive integer $v\leq N^{0.1}$. Let $\xi$ be a real number with $|\xi-\frac{a}{q}|\leq \frac{1}{(qv)^2}$ for some coprime $a$ and $q\in [1,N]$. Then for any complex numbers $|\alpha_m|,|\beta_m|\leq \tau(m)^2\log m$ and any integer $c'\neq 0$ and number $t\in [N,2N]$ we have \begin{align*} &\sum_{0<|r|\leq R}\,\,\sum_{\substack{d_1\sim \Delta_1}}\sum_{\substack{d_2\sim \Delta_2\\(d_2,c'd_1v)=1}}\max_{(c,d_1v)=1}\bigg|\sum_{\substack{N\leq mn\leq t\\mn\equiv c \hspace{-0.1cm}\pmod{d_1v}\\mn\equiv c'\hspace{-0.1cm} \pmod{d_2}\\m\sim M}}\alpha_m\beta_ne(\xi r mn)\bigg|\\ &\ll \frac{RN}{v}\min\{F_1,F_2\}(\log N)^{1000}, \end{align*} with \begin{align*} F_1&=\left(\frac{\Delta_1 Mv}{N}+\Delta_1\Delta_2^2\frac{v}{M}\right)^{\frac{1}{2}}+\left(\frac{1}{\Delta_1}+\frac{1}{q_{v}}+\frac{q_{v}v^2}{RN}\right)^{\frac{1}{8}},\\ F_2&=\Delta_1 \Delta_2\left(\frac{1}{q_v^{\frac{1}{2}}}+\frac{v}{M^{\frac{1}{2}}}+\frac{v^2M}{N}+\frac{q_{v}^{\frac{1}{2}}v}{(RN)^{\frac{1}{2}}}\right)^{\frac{1}{2}}. \end{align*} \end{lemma} \begin{remark} In Section \ref{Sec: hypotheses}, we will only need the case $R=1$, while the dependence on $v$ will be crucial. In Section \ref{Sec: fractional parts}, on the other hand, $v=1$ but the dependence on $R$ will be crucial. \end{remark} \textbf{Proof.} We follow the proof of \cite[Lemma 8.4]{matomaki-shao}, which in turn is based on an argument of Mikawa \cite{mikawa-bombieri}. It suffices to consider the case $r>0$. We will first prove the lemma in the case $F_1=\min\{F_1,F_2\}$. 
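(Both cases run through the mechanism already used in the proof of Lemma \ref{le8}: a linear exponential sum is a geometric series, which yields the classical bound $|\sum_{n\leq N}e(\theta n)|\leq \min\{N,\frac{1}{2\|\theta\|}\}$, and the resulting sums over moduli are then handled by \cite[Lemma B.3]{matomaki-shao}. The following short Python sketch, with arbitrary test parameters of our choosing, double-checks the geometric-series bound numerically.)
\begin{verbatim}
import cmath, random

def exp_sum(theta, N):
    # |sum_{1 <= n <= N} e(theta*n)|, where e(x) = exp(2*pi*i*x)
    return abs(sum(cmath.exp(2j * cmath.pi * theta * n)
                   for n in range(1, N + 1)))

random.seed(0)
for _ in range(5):
    theta, N = random.random(), 1000
    dist = abs(theta - round(theta))  # ||theta||, distance to nearest integer
    assert exp_sum(theta, N) <= min(N, 1 / (2 * dist)) + 1e-9
\end{verbatim}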
Let us write \begin{align*} I_r=\sum_{d_1\sim \Delta_1}\sum_{\substack{d_2\sim \Delta_2\\(d_2,c'd_1v)=1}}\max_{(c,d_1v)=1}\bigg|\sum_{\substack{N\leq mn\leq t\\mn\equiv c \hspace{-0.1cm} \pmod{d_1v}\\mn\equiv c'\pmod{d_2}\\m\sim M}}\alpha_m\beta_ne(\xi r mn)\bigg|, \end{align*} so that $\sum_{r\leq R}I_r$ is what we are interested in. Since $\Delta_1\Delta_2^2\leq \frac{M}{v}$, a formula on page 37 of \cite{matomaki-shao} shows (with $x=N$, $D=\Delta_1$, $\alpha=r\xi$ there) that \begin{align*} I_r^2\ll N(\log N)^{100}\left(\Delta_1\sum_{d_1\sim \Delta_1}\sum_{0<|j|\leq \frac{8\Delta_2^2 N}{\Delta_1 M v}}\tau_3(j)\min\left\{\frac{RN}{r(d_1v)^2|j|},\frac{1}{\|r \xi(d_1v)^2 |j|\|}\right\}+\frac{\Delta_1 M}{v}\right) \end{align*} (since the term $\frac{x^2}{Q^2}(\log x)^{-C+10}$ present in that formula of \cite{matomaki-shao} can be replaced with $\frac{DMx}{Q}(\log x)^{100}$ without changing anything in the proof). Using the Cauchy-Schwarz inequality, we obtain \begin{align}\label{eq65} &\frac{1}{(\log N)^{200}}\sum_{r\leq R}I_r\leq \frac{1}{(\log N)^{200}}R^{\frac{1}{2}}\left(\sum_{r\leq R}I_r^2\right)^{\frac{1}{2}}\nonumber\\ &\leq (RN)^{\frac{1}{2}}\left(\Delta_1 \sum_{d_1\sim \Delta_1}\sum_{0<|j|\leq \frac{8\Delta_2^2 N}{\Delta_1Mv}}\sum_{r\leq R}\tau_3(j)\min\left\{\frac{RN}{r(d_1v)^2 |j|},\frac{1}{\|r\xi (d_1v)^2 |j|\|}\right\}+\frac{\Delta_1 RM}{v}\right)^{\frac{1}{2}}\nonumber\\ &\ll (RN)^{\frac{1}{2}}\left(\Delta_1 \sum_{d_1\sim \Delta_1}\sum_{1\leq \ell\leq \frac{8\Delta_2^2 R N}{\Delta_1 Mv}}\tau_4(\ell)\min\left\{\frac{RN}{(d_1v)^2\ell},\frac{1}{\|v^2\xi d_1^2\ell\|}\right\}+\frac{\Delta_1 RM}{v}\right)^{\frac{1}{2}}, \end{align} after writing $\ell=rj$. When it comes to the sum above, we can estimate it using the lemma on page 6 of \cite{mikawa-bombieri} (with $\tau_3(\cdot)$ replaced by $\tau_4(\cdot)$), which states that \begin{align}\label{eq64} &\Delta_1\sum_{d_1\sim \Delta_1}\sum_{\ell\sim J}\tau_4(\ell)\min\left\{\frac{x}{d_1^2\ell},\frac{1}{\|\xi' d_1^2 \ell\|}\right\}\ll (\Delta_1^2 J+x^{\frac{3}{4}}(q'+\frac{x}{q'}+\frac{x}{\Delta_1})^{\frac{1}{4}})(\log x)^{100} \end{align} for $1\leq J\leq 10x$ and any real number $\xi'$ satisfying $|\xi'-\frac{a'}{q'}|\leq \frac{1}{q'^2}$ for some coprime $a'$ and $q'\leq x$. In the case $q'>x$, \eqref{eq64} continues to hold, by trivial estimates. We substitute \eqref{eq64} with $x=\frac{RN}{v^2}$, $\xi'=v^2\xi$ and $J\leq \frac{8\Delta_2^2RN}{\Delta_1 Mv}$ into \eqref{eq65} (we have $J\leq 10\frac{RN}{v^2}$ since $\Delta_1\Delta_2^2\leq \frac{M}{v}$), making use of our assumption on $\xi$, which implies that $\left|v^2\xi-\frac{\frac{av^2}{(q,v^2)}}{q_{v}}\right|\leq \frac{1}{q_v^2}$. This results in the claimed bound.\\ Suppose then that $F_2=\min\{F_1,F_2\}$. In this situation, we use the orthogonality of characters to bound the sum in Lemma \ref{le9} by \begin{align}\label{eq85a} &\sum_{r\leq R}\sum_{d_1\sim \Delta_1}\sum_{d_2\sim \Delta_2} \max_{\psi \hspace{-0.1cm}\hspace{-0.1cm} \pmod{d_1d_2}}\bigg|\sum_{\substack{N\leq mn\leq t\\mn\equiv c_{v}(d_1,d_2) \pmod v\\m\sim M}}\alpha_m \psi(m)\beta_n \psi(n)e(\xi r mn)\bigg|, \end{align} where $c_v(d_1,d_2)$ is a suitably chosen integer coprime to $v$.
Estimating the sums over $d_1$ and $d_2$ trivially, using the Cauchy-Schwarz inequality, and expanding the square, we find that \eqref{eq85a} is, for some $|\beta_n'|\leq \tau(n)^2 \log n$ and some $c_v$ coprime to $v$, \begin{align}\label{eq106} &\leq \Delta_1 \Delta_2 (RM)^{\frac{1}{2}}\bigg(\sum_{r\leq R}\sum_{m\leq M}\bigg|\sum_{\substack{\frac{N}{m}\leq n\leq \frac{t}{m}\\n\equiv c_vm^{-1}\hspace{-0.1cm}\pmod v}}\beta_n' e(\xi r mn)\bigg|^2 \bigg)^{\frac{1}{2}}(\log M)^{100}\nonumber\\ &= \Delta_1 \Delta_2 (RM)^{\frac{1}{2}}\bigg(\sum_{r\leq R}\sum_{\substack{\frac{N}{2M}\leq n_i\leq \frac{2N}{M}\\n_1\equiv n_2\pmod v\\\text{for}\,\, i\in \{1,2\}}}\beta_{n_1}'\overline{\beta_{n_2}'}\sum_{\substack{m\leq M\\\frac{N}{n_i}\leq m\leq \frac{t}{n_i}\\m\equiv c_vn_i^{-1}\hspace{-0.1cm}\pmod v\\\text{for}\,\,i\in \{1,2\}}}e(\xi r m(n_1-n_2))\bigg)^{\frac{1}{2}}(\log M)^{100}\nonumber\\ &\ll \Delta_1 \Delta_2 (RN)^{\frac{1}{2}}\left(RM+\sum_{r\leq R}\sum_{\substack{1\leq n\leq \frac{2N}{M}\\n\equiv 0\pmod v}}T(n)\min\left\{\frac{RN}{rnv}+1,\frac{1}{\|v\xi r n\|}\right\}\right)^{\frac{1}{2}}(\log M)^{101}, \end{align} where \begin{align*} T(n)=\frac{M}{N}\sum_{\substack{n=n_1-n_2\\n_1,n_2\leq \frac{2N}{M}}}\tau(n_1)^2\tau(n_2)^2. \end{align*} We can write $n=kv$ and $\ell=kr$ to bound \eqref{eq106} by \begin{align}\label{eq103} \ll \Delta_1 \Delta_2 (RN)^{\frac{1}{2}}\left(RM+\sum_{\ell\leq \frac{2RN}{Mv}}U(\ell)\min\left\{\frac{RN}{\ell v^2}+1,\frac{1}{\|v^2\xi \ell\|}\right\}\right)^{\frac{1}{2}}(\log N)^{101}, \end{align} where \begin{align*} U(\ell)=\sum_{\substack{\ell=\ell_1 \ell_2\\\ell_1\leq \frac{2N}{Mv}}}T(\ell_1 v). \end{align*} We apply \cite[Lemma B.3]{matomaki-shao} (with $k=20$) to \eqref{eq103}. The weight function $U(\ell)$ is not a divisor function, but the only property of the weight function needed in that lemma is a second moment bound. Therefore, \eqref{eq103} can be bounded by \begin{align}\label{eq105} &\ll \Delta_1 \Delta_2 (RN)^{\frac{1}{2}}\bigg(\frac{RN}{q_{v}^{\frac{1}{2}}v^2}+\frac{RN}{(v^2M)^{\frac{1}{2}}}+RM+\bigg(\frac{RNq_v}{v^2}\bigg)^{\frac{1}{2}}\bigg)^{\frac{1}{2}}(\log N)^{1000}, \end{align} once we prove that \begin{align}\label{eq107} \sum_{\ell\leq \frac{2RN}{Mv}}U(\ell)^2\ll \frac{RN}{Mv}(\log N)^{100}. \end{align} We calculate \begin{align}\label{eq104} &\sum_{\ell\leq \frac{2RN}{Mv}}\left(\sum_{\substack{\ell=\ell_1\ell_2\\\ell_1\leq \frac{2N}{Mv}}}T(\ell_1v)\right)^2\ll\frac{RN}{Mv}\sum_{\substack{\ell_1\leq \frac{2N}{Mv}\\\ell_1'\leq \frac{2N}{Mv}}}\frac{T(\ell_1v)T(\ell_1'v)}{[\ell_1,\ell_1']}\nonumber\\ &\ll \frac{RN}{Mv}\sum_{d\leq \frac{2N}{Mv}}\frac{1}{d}\sum_{\substack{\ell_1\leq \frac{2N}{dMv}\\\ell_1'\leq \frac{2N}{dMv}}}\frac{T(\ell_1 dv)T(\ell_1' dv)}{\ell_1\ell_1'}=\frac{RN}{Mv}\sum_{d\leq \frac{2N}{Mv}}\frac{1}{d}\left(\sum_{\ell\leq \frac{2N}{dMv}}\frac{T(\ell dv)}{\ell}\right)^2.
\end{align} We can estimate the sum inside the square using \begin{align*} &\sum_{\substack{n\leq \frac{2N}{M}\\n\equiv 0\pmod{c}}}\frac{T(n)}{n}\ll \frac{M}{N}\sum_{\substack{n_1\leq \frac{2N}{M}\\n_2\leq \frac{2N}{M}\\n_1\equiv n_2\pmod c\\n_1>n_2}}\frac{\tau(n_1)^2\tau(n_2)^2}{n_1-n_2}\\ &\ll\frac{M}{Nc}\sum_{1\leq a\leq c}\sum_{\substack{n_1'\leq \frac{2N}{Mc}\\n_2'\leq \frac{2N}{Mc}\\n_1'>n_2'}}\frac{\tau(cn_1'+a)^2\tau(cn_2'+a)^2}{n_1'-n_2'}\ll \frac{M}{Nc}\sum_{1\leq a\leq c}\sum_{n\leq \frac{2N}{Mc}}\tau(cn+a)^4\\ &\ll \frac{M}{Nc}\sum_{m\leq \frac{2N}{M}+c}\tau(m)^4\ll \frac{1}{c}(\log N)^{15}, \end{align*} for $c\leq \frac{2N}{M}$, where we used Hilbert's inequality \cite[Chapter 7]{montgomery} in the third-to-last step. Taking $c=dv$ and substituting into \eqref{eq104}, we see that \eqref{eq107} holds, as claimed. Therefore, we indeed have the bound \eqref{eq105} for \eqref{eq103}, and that bound can be rewritten as the desired bound $F_2$.\qedd \section{Factorizing sieve weights}\label{Sec: sieveweight} The linear and semilinear sieve weights will play a crucial role in verifying Hypothesis \ref{h1}, since we aim to split the summation over $d\leq x^{\rho}$ in that hypothesis into summations over $d_1\sim \Delta_1$, $d_2\sim \Delta_2$ for various values of $\Delta_1$ and $\Delta_2$. If such a factorization can be done, it provides more flexibility in our Bombieri-Vinogradov sums, and hence gives better bounds. This advantage can be seen from Lemma \ref{le9}, which often produces better bounds when $\Delta_1$ and $\Delta_2$ are of somewhat similar size, as opposed to the choice $\Delta_1=x^{\rho}$, $\Delta_2=1$. The following lemmas about the combinatorial structure of sieve weights have been tailored so that the estimate given by Lemma \ref{le9} will be $\ll Nv^{-1}(\log N)^{-1000}$ if $\Delta_1$ and $\Delta_2$ satisfy the conditions for $d_1$ and $d_2$ in Lemma \ref{le10} or \ref{le1} with $D=\frac{x^{1-\varepsilon^2}}{M}$, $\theta=0$, $R=1$ and $q$ suitably large, and additionally $\rho=\frac{1}{2}(1-4\theta)-\varepsilon$ in the case of Lemma \ref{le10} or $\rho=\frac{3}{7}(1-4\theta)-\varepsilon$ in the case of Lemma \ref{le1}. It should be remarked that in Section \ref{Sec: hypotheses} we will only need the case $\theta=0$ of the following lemmas, but for the proof of Theorem \ref{theo_alphap} we will choose $\theta=\frac{1}{80}-\varepsilon$. \subsection{Linear sieve weights}\label{sub: linear} \begin{lemma}\label{le10} Let $\varepsilon>0$ be small, $0\leq \theta\leq \frac{1}{30}$, and $\rho=\frac{1}{2}(1-4\theta)-\varepsilon$. Let \begin{align*} \mathcal{D}^{+,\textnormal{LIN}}=\{p_1\cdots p_r\leq x^{\rho}:\,\, z_1\geq p_1> \ldots > p_r,\,\,p_1\cdots p_{2k-2}p_{2k-1}^3\leq x^{\rho}\,\, \textnormal{for all}\,\, k\geq 1\} \end{align*} be the support of the upper bound linear sieve weights with level $x^{\rho}$ and sifting parameter $z_1\leq x^{\frac{1}{2}}$. Then, for any $D\in [x^{\frac{1}{5}},x^{\rho}]$, every $d\in \mathcal{D}^{+,\textnormal{LIN}}$ can be written as $d=d_1d_2$, where the positive integers $d_1$ and $d_2$ satisfy $d_1\leq D$, $d_1d_2^2\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D}$. Moreover, we can take either $d_1\geq x^{0.1}$ or $d_2=1$. \end{lemma} \textbf{Proof.} The proof is similar to the proof of \cite[Lemma 12.16]{friedlander} (which essentially says that the linear sieve weights $\lambda^{+,\textnormal{LIN}}_d$ are well-factorable for any sifting parameter $z\leq x^{\frac{1}{2}-\varepsilon}$).
We will actually show that any $d=p_1\cdots p_r\in \mathcal{D}^{+,\textnormal{LIN}}$ can be written as $d=d_1d_2$ with $d_1\leq D$, $d_2\leq \frac{x^{\rho}}{D}$ and either $d_1\geq x^{0.1}$ or $d_2=1$. After that statement has been proved, we have proved the lemma, because then $d_1d_2^2\leq \frac{x^{2\rho}}{D}\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D}$. We use induction on $r$ to prove the existence of such $d_1$ and $d_2$. For $r=1$, we can simply take $d_1=p_1$ and $d_2=1$, since $p_1\leq x^{\frac{\rho}{3}}\leq x^{\frac{1}{6}}$. If $r=2$, we can take $d_1=p_1p_2$, $d_2=1$, unless $p_1p_2>D$. In the case $p_1p_2>D$, in turn, the choice $d_1=p_1$, $d_2=p_2$ works, since $p_1\leq x^{\frac{1}{6}}$ and $p_2\leq \frac{x^{\rho}}{p_1p_2}\leq \frac{x^{\rho}}{D}$. Suppose then that $r\geq 3$, that the case $r-1$ has been proved, and consider the case $r$. We have $p_1\cdots p_{r-1}\in \mathcal{D}^{+,\textnormal{LIN}}$, so by the induction assumption $p_1\cdots p_{r-1}=d_1'd_2'$ with $d_1'\leq D$, $d_2'\leq \frac{x^{\rho}}{D}$ and either $d_1'\geq x^{0.1}$ or $d_2'=1$. We claim that we can take either $d_1=d_1'p_r$, $d_2=d_2'$ or $d_1=d_1'$, $d_2=d_2'p_r$. Firstly, if $d_1'< x^{0.1}$, then $d_2'=1$ and $d_1'=p_1\cdots p_{r-1}$. Since $r\geq 3$, this yields $p_1p_2<x^{0.1}$, so $p_2<x^{0.05}$. Now the choice $d_1=d_1'p_r$, $d_2=d_2'=1$ works because $d_1< x^{0.1}p_r\leq x^{0.15}\leq D$. Secondly, if in the opposite case $d_1'\geq x^{0.1}$ neither of the choices for $(d_1,d_2)$ works, then $d_1'd_2'p_r^2>x^{\rho}$. However, $d_1'd_2'p_r^2=p_1\cdots p_{r-1}p_r^2\leq x^{\rho}$ by the definition of $\mathcal{D}^{+,\textnormal{LIN}}$, so we have a contradiction and the induction works.\qedd \subsection{Semilinear sieve weights} \label{sub: semilinear} \begin{lemma}\label{le1} Let $\varepsilon>0$ be small, $0\leq \theta\leq \frac{1}{30}$, and $\rho=\frac{3}{7}(1-4\theta)-\varepsilon$. Let \begin{align*} \mathcal{D}^{-,\textnormal{SEM}}=\{p_1\cdots p_r\leq x^{\rho}:\,\, z_2\geq p_1> \ldots > p_r,\,\,p_1\cdots p_{2k-1}p_{2k}^2\leq x^{\rho}\,\, \textnormal{for all}\,\, k\geq 1\} \end{align*} be the support of the lower bound semilinear sieve weights with level $x^{\rho}$ and sifting parameter $z_2\leq x^{\frac{1}{3}-2\theta-2\varepsilon^2}$. Then, for any $D\in [x^{\frac{1}{3}-2\theta-2\varepsilon^2},x^{\rho}]$, every $d\in \mathcal{D}^{-,\textnormal{SEM}}$ can be written as $d=d_1d_2$, where the positive integers $d_1$ and $d_2$ satisfy $d_1\leq D$, $d_1d_2^2\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D}$. Moreover, we can take either $d_1\geq x^{0.1}$ or $d_2=1$. \end{lemma} \begin{remark}The exponent $\rho=\frac{3}{7}(1-4\theta)-\varepsilon$ is optimal in Lemma \ref{le1}. Namely, if $\rho=\frac{3}{7}(1-4\theta)+3\varepsilon$, then the lemma is false for $D=x^{\frac{3}{7}(1-4\theta)}$ and $p_1p_2p_3\in \mathcal{D}^{-,\textnormal{SEM}}$, $p_1,p_2,p_3\sim \frac{1}{2}x^{\frac{1}{7}(1-4\theta)+\varepsilon}$. \end{remark} \begin{remark} We remark that an argument almost identical to the proof of Lemma \ref{le1} below shows that the lemma holds also for the set \begin{align*} \mathcal{D}^{+,\textnormal{SEM}}=\{p_1\cdots p_r\leq x^{\rho}:\,\, x^{\frac{1}{2}}\geq p_1\geq \ldots \geq p_r,\,\,p_1\cdots p_{2k-2}p_{2k-1}^2\leq x^{\rho}\,\, \textnormal{for all}\,\, k\geq 1\}, \end{align*} which is the support of the upper bound semilinear weights, when $\rho=\frac{2}{5}(1-4\theta)-\varepsilon$, $\theta\leq \frac{1}{40}$, and all the other parameters are as before.
This observation will be used in the proof of Theorem \ref{theo_sievebombieri}. This exponent is also optimal, as is seen by taking $\rho=\frac{2}{5}(1-4\theta)+2\varepsilon$ and $D=x^{\frac{2}{5}(1-4\theta)}$, $p_1p_2\in \mathcal{D}^{+, \textnormal{SEM}}$, $p_1,p_2\sim \frac{1}{2}x^{\frac{1}{5}(1-4\theta)+\varepsilon}$. \end{remark} \textbf{Proof of Lemma \ref{le1}.} The proof resembles some arguments related to Harman's sieve \cite[Chapter 3]{harman-sieves}. Let $d=p_1\cdots p_r \in \mathcal{D}^{-,\textnormal{SEM}}$. The claim is that the set $\{p_1,\ldots,p_r\}$ can be partitioned into two subsets $S_1$ and $S_2$ in such a way that the products $P_1$ and $P_2$ of the elements of $S_1$ and $S_2$ satisfy $P_1\leq D$, $P_1P_2^2\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D}$, and additionally $P_1\geq x^{0.1}$ or $P_2=1$. Note that for $r=1$ one can take $S_1=\{p_1\}$ and $S_2=\emptyset$. Assume then that $r\geq 2$. If $p_1\cdots p_r\leq D$, we may take $S_1=\{p_1,\ldots,p_r\}$, $S_2=\emptyset$. Indeed, then $P_1\leq D$, $P_2=1$ and $P_1P_2^2\leq D\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D}$. Now we may assume that $p_1\cdots p_r>D$. Since $p_1\leq D$, we can select the largest $j$ for which $p_1\cdots p_j\leq D$. We have $j\leq r-1$ and $p_{j+1}\leq p_2\leq x^{\frac{\rho}{3}}$, so \begin{align*} p_1\cdots p_j=\frac{p_1\cdots p_{j+1}}{p_{j+1}}\geq \frac{D}{x^{\frac{\rho}{3}}}. \end{align*} We claim that the choice $S_1=\{p_1,\ldots p_j\}$, $S_2=\{p_{j+1},\ldots,p_r\}$ works. First of all, we have $P_1\geq \frac{D}{x^{\frac{\rho}{3}}}\geq x^{0.1}$. Supposing that the claim does not hold for $S_1$ and $S_2$, we have $(P_1P_2)^2>P_1\frac{x^{1-4\theta-2\varepsilon^2}}{D}$. Using $P_1P_2\leq x^{\rho}$ and $P_1\geq \frac{D}{x^{\frac{\rho}{3}}}$, this yields $x^{2\rho}> x^{1-4\theta-\frac{\rho}{3}-2\varepsilon^2}$, from which we solve $\rho>\frac{3}{7}(1-4\theta)-\frac{6}{7}\varepsilon^2$, a contradiction to our choice of $\rho$.\qedd \section{Verifying the Hypothesis}\label{Sec: hypotheses} \subsection{Splitting variables}\label{sub: splitting} Based on Section \ref{Sec: decomposition}, the proof of Hypothesis \ref{h1} for the sequence $(\omega_n)_{n\sim x}$ and linear function $L(n)$ defined in that section has been reduced to showing that \begin{align} \sum_{\substack{d\leq x^{\rho_2}\\(d,QW)=1}}\hspace{-0.1cm}\lambda_{d}^{-,\textnormal{SEM}}\bigg(\hspace{-0.1cm}\sum_{\substack{n\sim x\\L(n)\in \mathbb{P}\\L(n)\equiv 1\hspace{-0.1cm} \pmod d}}e(\xi n)-\frac{1}{\varphi(d)}\frac{QW}{\varphi(QW)}\sum_{n\sim x}\frac{e(\xi n)}{\log(QWn)}\bigg)\quad \text{and}\label{eq41} \end{align} \begin{align} \sum_{\substack{d\leq x^{\rho_1}\\(d,QW)=1}}\hspace{-0.1cm}\lambda_d^{+,\textnormal{LIN}}\hspace{-0.1cm}\sum_{\substack{\ell \leq x^{1-\varepsilon}\\(\ell,QW)=\delta\\(\ell,d)=1}}g(\ell)\bigg(\hspace{-0.1cm}\sum_{\substack{n\sim x\\L(n)=\ell p+1\\L(n)\equiv 0\hspace{-0.1cm} \pmod d}}e(\xi n)-\frac{1}{\varphi(d)}\frac{QW}{\varphi(\frac{QW}{\delta})}\sum_{n \sim x}\frac{e(\xi n)}{\ell \log \frac{QWn}{\ell}}\bigg)\label{eq93} \end{align} are $\ll x(\log x)^{-200}$, where $\delta=(Wc_0+b-1,QW)$, $(g(\ell))_{\ell\geq 1}$ is a sequence of convolution type (with parameter $\sigma$), the sieve weights $\lambda_d^{+,\textnormal{LIN}}, \lambda_d^{-,\textnormal{SEM}}$ have respective sifting parameters $z_1\leq x^{\frac{1}{5}+\varepsilon}$, $z_2\leq x^{\frac{1}{3+\frac{\varepsilon}{2}}}$, and $\rho_1,\rho_2$, $\sigma$ are as in \eqref{eq87}, and $\xi$ is subject to \eqref{eq88}. 
It would actually suffice to replace $\ell\leq x^{1-\varepsilon}$ by $\ell\leq x^{0.9+\varepsilon}$ above, but this would not simplify the argument.\\ As mentioned in Section \ref{Sec: sieveweight}, we wish to split the sum over $d$ into a double sum. This is enabled by Lemmas \ref{le10} and \ref{le1}. If $D$ is as in Lemma \ref{le1} with $0\leq \theta\leq \frac{1}{30}$, we may write \begin{align}\label{eq89} |\lambda_d^{-,\textnormal{SEM}}|&\leq \min_{D}\sum_{\substack{d=d_1d_2\\d_1\leq D\\d_1d_2^2\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D}\\(d_1,d_2)=1\\d_1\geq x^{0.1}\,\textnormal{or}\,d_2=1}}1\leq \left(\frac{\log x}{\log 2}\right)^2\min_{D}\max_{\Delta_1,\Delta_2}\sum_{\substack{d=d_1d_2\\d_1\sim \Delta_1\\d_2\sim \Delta_2\\(d_1,d_2)=1}}1, \end{align} where the maximum and minimum are over those $\Delta_1, \Delta_2\geq 1$ and $D\geq 1$ that satisfy \begin{align}\label{eq90}\begin{split} &D\in [x^{\frac{1}{3}-2\theta-2\varepsilon^2},x^{\rho_2}],\,\,\Delta_1\leq D,\,\, \Delta_1\Delta_2^2\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D},\,\, \Delta_1\Delta_2\leq x^{\rho_2},\\ &\text{and either}\,\, \Delta_1\geq x^{0.1}\,\,\text{or}\,\, \Delta_2=1. \end{split} \end{align} By Lemma \ref{le10}, formula \eqref{eq89} continues to hold with $\lambda_{d}^{-,\textnormal{SEM}}$ replaced with $\lambda_{d}^{+,\textnormal{LIN}}$ and \eqref{eq90} replaced with \begin{align}\label{eq91}\begin{split} &D\in [x^{\frac{1}{5}},x^{\rho_1}],\,\,\Delta_1\leq D,\,\, \Delta_1\Delta_2^2\leq \frac{x^{1-4\theta-2\varepsilon^2}}{D},\,\, \Delta_1\Delta_2\leq x^{\rho_1},\\ &\text{and either}\,\, \Delta_1\geq x^{0.1}\,\,\text{or}\,\, \Delta_2=1. \end{split} \end{align} We take $\theta=0$ in this section, but in Section \ref{Sec: fractional parts} we will employ the same formulas with $\theta>0$. As a conclusion, we see that \eqref{eq41} and \eqref{eq93} are bounded by $(\frac{\log x}{\log 2})^2$ times \begin{align} &\sum_{\substack{d_1\sim \Delta_1\\(d_1,QW)=1}}\sum_{\substack{d_2\sim \Delta_2\\(d_2,QW)=1\\(d_1,d_2)=1}}\hspace{-0.1cm}\bigg|\hspace{-0.1cm}\sum_{\substack{n\sim x\\L(n)\in \mathbb{P}\\L(n)\equiv 1\hspace{-0.1cm} \pmod{d_1d_2}}}\hspace{-0.1cm}e(\xi n)-\frac{QW}{\varphi(d_1d_2)\varphi(QW)}\sum_{n\sim x}\frac{e(\xi n)}{\log(QWn)}\bigg|\quad \text{and}\label{eq15}\\ &\sum_{\substack{d_1\sim \Delta_1\\(d_1,QW)=1}}\sum_{\substack{d_2\sim \Delta_2\\(d_2,QW)=1\\(d_1,d_2)=1}}\bigg|\sum_{\substack{\ell \leq x^{1-\varepsilon}\\(\ell,QW)=\delta\\(\ell,d_1d_2)=1}}g(\ell)\bigg(\hspace{-0.1cm}\sum_{\substack{n\sim x\\L(n)=\ell p+1\\L(n)\equiv 0\hspace{-0.1cm} \pmod{d_1d_2}}}\hspace{-0.1cm}e(\xi n)-\frac{QW}{\varphi(d_1d_2)\varphi(\frac{QW}{\delta})}\sum_{n \sim x}\frac{e(\xi n)}{\ell \log \frac{QWn}{\ell}}\bigg)\bigg|\label{eq92}, \end{align} respectively, where $\Delta_1$ and $\Delta_2$ are any numbers constrained by \eqref{eq90} or \eqref{eq91}, depending on whether we consider \eqref{eq15} or \eqref{eq92}. At this point, it is also natural to split into two cases depending on whether $\xi$ lies on a major arc or minor arc (that is, whether $q\mid Q$ or $\frac{q}{(q,Q^2)}\geq (\log x)^{A}$ holds in \eqref{eq88}). \subsection{Major arcs for the semilinear sieve} \label{sub: maj sem} We first assume the major arc condition $q\mid Q$ in the definition of $\xi$ in \eqref{eq88}. 
By partial summation, \eqref{eq41} becomes \begin{align*} =\int_{x}^{2x}e(\pm\|\xi\| t)\,d\bigg\{\sum_{\substack{d\leq x^{\rho_2}\\(d,QW)=1}}\lambda_d^{-,\textnormal{SEM}} \bigg(\sum_{\substack{x\leq n\leq t\\L(n)\in \mathbb{P}\\L(n)\equiv 1\hspace{-0.1cm} \pmod d}}1-\frac{QW}{\varphi(QW)}\frac{1}{\varphi(d)}\sum_{\substack{x\leq n\leq t}}\frac{1}{\log(QW n)}\bigg)\bigg\}. \end{align*} Denoting the function inside $d\{\ldots\}$ by $G(t)$, integration by parts shows that the previous expression is \begin{align}\label{eq38} =G(2x)e(\pm2\|\xi\|x)\mp2\pi i \|\xi\|\int_{x}^{2x}e(\pm\|\xi\| t)G(t)dt\ll (1+\|\xi\|x)\max_{x\leq t\leq 2x}|G(t)|. \end{align} Since $\frac{1}{\log(QWn)}=\frac{1}{QW}\int_{QWn}^{QW(n+1)}\frac{du}{\log u}+O(\frac{1}{n})$, putting $c_1=Wc_0+b$ we have \begin{align*} G(t)&\leq\sum_{\substack{d\leq x^{\rho_2}\\(d,QW)=1}}|\lambda_d^{-,\textnormal{SEM}}|\bigg|\sum_{\substack{QWx\leq p\leq QWt\\p\equiv c_1\hspace{-0.1cm} \pmod{QW}\\p\equiv 1 \hspace{-0.1cm} \pmod d}}1-\frac{1}{\varphi(QWd)}\int_{QWx}^{QWt}\frac{du}{\log u}\bigg|+O(x^{\frac{1}{2}})\\ &\leq \sum_{\substack{d\leq x^{\rho_2}\\(d,QW)=1}}\max_{(r,QWd)=1}\left|\pi(QWt;QWd,r)-\frac{1}{\varphi(QWd)}\textnormal{Li}(QWt)\right|\\ &\quad +\sum_{\substack{d\leq x^{\rho_2}\\(d,QW)=1}}\max_{(r,QWd)=1}\left|\pi(QWx;QWd,r)-\frac{1}{\varphi(QWd)}\textnormal{Li}(QWx)\right|+O(x^{\frac{1}{2}})\\ &\ll \frac{x}{(\log x)^{1000B}} \end{align*} by the Bombieri-Vinogradov theorem \cite[Theorem 17.1]{iwaniec-kowalski}. As $\xi$ is on a major arc, by \eqref{eq88} we have $\|\xi\|\leq \frac{2(\log x)^{102B}}{x}$, so \eqref{eq38} is $\ll x(\log x)^{-1000}$. Therefore, the major arc case for the semilinear sieve has been dealt with. \subsection{Major arcs for the linear sieve} Again assume $q\mid Q$ in \eqref{eq88}. After applying partial summation, \eqref{eq93} takes the form \begin{align*} \int_{x}^{2x}e(\pm \|\xi\| t)\,d\bigg\{\hspace{-0.1cm}\sum_{\substack{d\leq x^{\rho_1}\\(d,QW)=1}}\lambda_d^{+,\textnormal{LIN}}\bigg(\hspace{-0.1cm}\sum_{\substack{x\leq n\leq t\\L(n)=\ell p+1\\L(n)\equiv 0\hspace{-0.1cm} \pmod d\\ \ell\leq x^{1-\varepsilon}\\(\ell,QW)=\delta\\(\ell,d)=1}}\hspace{-0.1cm}g(\ell)-\frac{QW}{\varphi(d)\varphi(\frac{QW}{\delta})}\sum_{\substack{x\leq n\leq t\\ \ell\leq x^{1-\varepsilon}\\(\ell,QW)=\delta\\(\ell,d)=1}}\frac{g(\ell)}{\ell \log \frac{QWn}{\ell}}\bigg)\bigg\}, \end{align*} so we want this to be $\ll x(\log x)^{-202}$. Proceeding as in Subsection \ref{sub: maj sem}, it suffices to prove that for $t\in [x,2x]$ \begin{align*} \sum_{\substack{d\leq x^{\rho_1}\\(d,QW)=1}}\bigg|\sum_{\substack{x\leq n\leq t\\L(n)=\ell p+1\\L(n)\equiv 0\hspace{-0.1cm} \pmod d\\ \ell\leq x^{1-\varepsilon}}}g(\ell)1_{(\ell,QW)=\delta,\,\,(\ell,d)=1}-\frac{QW}{\varphi(d)\varphi(\frac{QW}{\delta})}\sum_{\substack{x\leq n\leq t\\ \ell\leq x^{1-\varepsilon}\\(\ell,QW)=\delta\\(\ell,d)=1}}\frac{g(\ell)}{\ell \log \frac{QWn}{\ell}}\bigg| \end{align*} is $\ll x(\log x)^{-1000B}$.\\ We start by analyzing the second sum inside the absolute values in the previous expression.
Since $QW\ll (\log x)^{B+1}$ and $\ell\leq x^{1-\varepsilon}$, a change of variables and the prime number theorem give \begin{align*} \frac{QW}{\varphi(\frac{QW}{\delta})}\sum_{x\leq n\leq t}\frac{1}{\ell \log \frac{QWn}{\ell}}&=\frac{QW}{\varphi(\frac{QW}{\delta})}\int_{x}^{t}\frac{du}{\ell \log \frac{QWu}{\ell}}+O(QW)\\ &=\frac{1}{\varphi(\frac{QW}{\delta})}\int_{\frac{QWx}{\ell}}^{\frac{QWt}{\ell}}\frac{du}{\log u}+O(QW)\\ &=\frac{1}{\varphi(\frac{QW}{\delta})}\sum_{\substack{QWx\leq \ell p\leq QWt}}1+O\left(\frac{x}{\ell}(\log x)^{-3000B}\right). \end{align*} The error term remains $\ll x(\log x)^{-2000B}$ after multiplying by $\frac{|g(\ell)|}{\varphi(d)}$ and summing over $d\leq x^{\rho_1}$ and $\ell\leq x^{1-\varepsilon}$. Hence, what we wish to show is that \begin{align}\label{eq69} \sum_{\substack{d\leq x^{\rho_1}\\(d,QW)=1}}\bigg|\sum_{\substack{QWx\leq \ell p\leq QWt\\\ell p\equiv -1\hspace{-0.1cm} \pmod{d}\\\ell p\equiv c_1-1\hspace{-0.1cm} \pmod{QW}\\\ell\leq x^{1-\varepsilon}\\(\ell,QW)=\delta\\(\ell,d)=1}}g(\ell)-\frac{1}{\varphi(\frac{QWd}{\delta})}\sum_{\substack{QWx\leq \ell p\leq QWt\\\ell\leq x^{1-\varepsilon}\\(\ell,QW)=\delta\\(\ell,d)=1}}g(\ell)\bigg| \end{align} is $\ll \frac{x}{(\log x)^{1000B}}$ for $t\in [x,2x]$ and $c_1=Wc_0+b$. Since $(\ell,QW)=\delta$, $(\ell,d)=1$ and $(d,\delta)=1$, the congruences $\ell p\equiv -1\hspace{-0.1cm} \pmod d$, $\ell p\equiv c_1-1\hspace{-0.1cm} \pmod{QW}$ can be rewritten as $\ell' p\equiv -\delta^{-1}\hspace{-0.1cm} \pmod{d}$, $\ell' p\equiv \frac{c_1-1}{\delta} \hspace{-0.1cm} \pmod{\frac{QW}{\delta}}$ with $\ell'=\frac{\ell}{\delta}$. By the Chinese remainder theorem, these congruences are equivalent to $\ell' p\equiv c \hspace{-0.1cm} \pmod{\frac{QWd}{\delta}}$ for some $c$ depending on $Q,W, d$ and $\delta$ and coprime to $\frac{QWd}{\delta}$. Concerning the second sum inside absolute values in \eqref{eq69}, we wish to add the constraint $(\ell' p,\frac{QWd}{\delta})=1$ to that summation (where again $\ell'=\frac{\ell}{\delta}$). We know that $(\ell',\frac{QW}{\delta})=(\ell',d)=1$, and clearly $p\geq x^{\varepsilon}$ in \eqref{eq69}, so $(p,QW)=1$. Therefore, we have shown that we may insert the constraint $(\ell' p,\frac{QWd}{\delta})=1$ if the case $p\mid d$ has a small enough contribution to the aforementioned sum. That case contributes at most \begin{align*} \sum_{\substack{p\mid d\\p\geq x^{\varepsilon}}}\sum_{\ell\leq \frac{2QWx}{p}}|g(\ell)|\ll_{\varepsilon}x^{1-\frac{\varepsilon}{2}}, \end{align*} which is $\ll x^{1-\varepsilon^2}$ when multiplied by $\frac{1}{\varphi(\frac{QWd}{\delta})}$ and summed over $d\leq x^{\rho_1}$.
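(The Chinese remainder step used above is completely explicit; as a minimal Python sketch, with placeholder residues and moduli of our choosing, one can combine two congruence conditions with coprime moduli as follows.)
\begin{verbatim}
def crt(r1, m1, r2, m2):
    # x = r1 (mod m1) and x = r2 (mod m2), with (m1, m2) = 1;
    # pow(m1, -1, m2) is the modular inverse (Python >= 3.8).
    t = (r2 - r1) * pow(m1, -1, m2) % m2
    return (r1 + m1 * t) % (m1 * m2)

# e.g. crt(2, 5, 3, 7) == 17, and 17 = 2 (mod 5), 17 = 3 (mod 7).
\end{verbatim}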
Summarizing, our aim has been reduced to showing that \begin{align}\label{eq70} \sum_{\substack{d\leq x^{\rho_1}\\(d,QW)=1}}\max_{(c,\frac{QWd}{\delta})=1}\bigg|\sum_{\substack{\frac{QWx}{\delta}\leq \ell' p\leq \frac{QWt}{\delta}\\\ell' p\equiv c \hspace{-0.1cm} \pmod{\frac{QWd}{\delta}}\\\ell'\leq x^{1-\varepsilon}/\delta}}g(\delta\ell')-\frac{1}{\varphi(\frac{QWd}{\delta})}\sum_{\substack{\frac{QWx}{\delta}\leq \ell' p\leq \frac{QWt}{\delta}\\(\ell' p,\frac{QWd}{\delta})=1\\\ell'\leq x^{1-\varepsilon}/\delta}}g(\delta \ell')\bigg| \end{align} is $\ll \frac{x}{(\log x)^{1000B}}$ for $t\in [x,2x]$.\\ To obtain this estimate, we apply \cite[Theorem 17.4]{iwaniec-kowalski} to the sequences $(\alpha_{\ell'})_{\ell' \leq x^{1-\varepsilon}/\delta}=(g(\delta \ell'))_{\ell'\leq x^{1-\varepsilon}/\delta}$ and $(\beta_k)_{k\geq 1}=(1_{\mathbb{P}}(k))_{k\geq 1}$ -- that theorem is applicable since the sequence $(1_{\mathbb{P}}(k))_{k\geq 1}$ is well-distributed in the sense of formula (17.13) of \cite{iwaniec-kowalski} (with $\Delta=(\log x)^{-20000B}$ there) by the Siegel-Walfisz theorem. Now, since in \eqref{eq70} we have $\ell'\geq x^{\frac{\varepsilon}{2}}$, $p\geq x^{\varepsilon}$, $\rho_1<\frac{1}{2}$ and $|\alpha_{\ell'}|\leq \tau(\ell')^2\log \ell'$, the claimed Bombieri-Vinogradov type estimate follows immediately from the theorem cited above. \subsection{Minor arcs for the semilinear sieve} \label{sub: min sem} We assume then that $\xi$ is on a minor arc, meaning that $\frac{q}{(q,Q^2)}\geq (\log x)^A$ in \eqref{eq88}. We study the sum \eqref{eq15}. Using partial summation, we see that \begin{align*} \sum_{n\sim x}\frac{e(\xi n)}{\log(QWn)}\ll \max_{x\leq t\leq 2x}\left|\sum_{x\leq n\leq t}e\left(\xi n\right)\right|\ll \frac{1}{\|\xi\|}. \end{align*} We have $(q,QW)\leq W(q,Q)\leq \frac{Wq}{(\log x)^{A}}<q$, so $q\nmid QW$. Taking this and \eqref{eq88} into account, $\|\xi\|\geq \frac{1}{q}-\frac{2(\log x)^{102B}}{q x}\geq \frac{1}{2q}$, so the second expression inside absolute values in \eqref{eq15} is $\ll \frac{q}{\varphi(d)}\ll \frac{x}{(\log x)^{99B}\varphi(d)}$. Hence it contributes $\ll x(\log x)^{-98B}$ when summing over $d$.\\ When it comes to the first expression inside absolute values in \eqref{eq15}, it equals \begin{align*} \sum_{\substack{n\sim x\\L(n)\in \mathbb{P}\\L(n)\equiv 1\hspace{-0.1cm} \pmod{d_1d_2}}}e(\xi n)=e\left(\frac{-\xi c_1}{QW}\right)\sum_{\substack{p\sim QWx\\p\equiv c_1\hspace{-0.1cm} \pmod{QW}\\p\equiv 1\hspace{-0.1cm} \pmod{d_1d_2}}}e\left(\frac{\xi}{QW}p\right)+O(QW), \end{align*} where the error $O(QW)$ remains $\ll x^{\frac{1}{2}}$ when summed over $d\leq x^{\rho_2}$. With partial summation, we may bound the sum on the right-hand side by \begin{align*} \bigg|\sum_{\substack{n\sim QWx\\n\equiv c_1\hspace{-0.1cm} \pmod{QW}\\n\equiv 1\hspace{-0.1cm} \pmod{d_1d_2}}}\hspace{-0.1cm}\Lambda(n)e\left(\frac{\xi}{QW}n\right)\bigg|+\int_{QWx}^{2QWx}\hspace{-0.1cm}\sum_{\substack{QWx\leq n\leq t\\n\equiv c_1\hspace{-0.1cm} \pmod{QW}\\n\equiv 1\hspace{-0.1cm} \pmod{d_1d_2}}}\hspace{-0.1cm}\Lambda(n)e\left(\frac{\xi}{QW}n\right)\, \frac{dt}{t\log^2 t}+O(x^{\frac{1}{2}+\varepsilon}), \end{align*} the error coming from the values of $n$ that are prime powers, and the error being $\ll x^{1-\varepsilon^2}$ after summing over $d\leq x^{\rho_2}$. 
This means that it suffices to prove \begin{align}\label{eq94} \sum_{\substack{d_1\sim \Delta_1\\(d_1,QW)=1}}\sum_{\substack{d_2\sim \Delta_2\\(d_2,QW)=1\\(d_1,d_2)=1}}\bigg|\sum_{\substack{QWx\leq n\leq t\\n\equiv c_1\hspace{-0.1cm}\pmod{QW}\\n\equiv 1 \hspace{-0.1cm}\pmod{d_1d_2}}}\Lambda(n)e\left(\frac{\xi}{QW}n\right)\bigg|\ll \frac{x}{(\log x)^{1000}} \end{align} uniformly for $t\in [QWx,2QWx]$. We may now apply Vaughan's identity (in the form of \cite[Proposition 13.4]{iwaniec-kowalski} with $y=z=(QWx)^{\frac{1}{3}}$ there), which transforms the sum inside absolute values in \eqref{eq94} (up to error $O(x^{\frac{1}{3}+\varepsilon})$) into a sum of $\ll (\log x)^{10}$ type I and type II sums of the form \begin{align*} \widetilde{R}_{d_1d_2}^{\text{I}}(t)=\hspace{-0.2cm}\sum_{\substack{QWx\leq mn\leq t\\mn\equiv c_1 \hspace{-0.1cm} \pmod{QW}\\ mn\equiv 1\hspace{-0.1cm} \pmod{d_1d_2}\\m\asymp M}}\alpha_m e\left(\frac{\xi m n}{QW}\right)\,\, \text{and}\,\, \widetilde{R}_{d_1d_2}^{\text{II}}(t)=\hspace{-0.2cm}\sum_{\substack{QWx\leq mn\leq t\\mn\equiv c_1 \hspace{-0.1cm} \pmod{QW}\\ mn\equiv 1\hspace{-0.1cm} \pmod{d_1d_2}\\m\asymp M}}\alpha_m \beta_n e\left(\frac{\xi m n}{QW}\right), \end{align*} with $|\alpha_m|,|\beta_m|\leq \tau(m)^2\log m$ some complex numbers and $M\leq (2QWx)^{\frac{1}{3}}$ in the case of $\widetilde{R}^{\text{I}}_{d_1d_2}(t)$, while $M\in [(QWx)^{\frac{1}{3}},(2QWx)^{\frac{2}{3}}]$ in the case of $\widetilde{R}^{\text{II}}_{d_1d_2}(t)$. Moreover, we may assume in the latter case that $M\in [(QWx)^{\frac{1}{2}},(2QWx)^{\frac{2}{3}}]$ by flipping the roles of the variables if necessary. We may replace the type I and type II sum with the (possibly larger) sums \begin{align}\begin{split}\label{eq100} R_{d_1d_2}^{\text{I}}(t)&=\max_{(c,d_1d_2QW)=1}\bigg|\sum_{\substack{QWx\leq mn\leq t\\mn\equiv c \hspace{-0.1cm} \pmod{d_1d_2QW}\\m\asymp M}}\alpha_m e\left(\frac{\xi}{QW}m n\right)\bigg|\quad \text{and}\\ \quad R_{d_1d_2}^{\text{II}}(t)&=\max_{(c,d_1QW)=1}\bigg|\sum_{\substack{QWx\leq mn\leq t\\ mn\equiv c\hspace{-0.1cm} \pmod{d_1QW}\\mn\equiv 1 \hspace{-0.1cm}\pmod{d_2}\\m\asymp M}}\alpha_m \beta_n e\left(\frac{\xi}{QW} m n\right)\bigg|. \end{split} \end{align} We are now in a position to apply the Bombieri-Vinogradov lemmas \ref{le8} and \ref{le9}. Note that, by \eqref{eq88}, we either have $|\xi-\frac{QWa}{q}|\leq \frac{1}{(QWq)^2}$ or $q>\frac{x}{2(\log x)^{102B}(QW)^2}$. If the latter happens, we have $|e(\frac{\xi}{QW} mn)-e(\frac{a}{q}mn)|\leq |\frac{\xi}{QW}-\frac{a}{q}|mn\leq \frac{8(QW)^3(\log x)^{204B}}{x}$ for $mn\leq 2QWx$. This implies that $e(\frac{\xi}{QW} mn)$ can be replaced by $e(\frac{a}{q}mn)$ in the type I and II sums. In conclusion, we can assume in any case that $|\xi-\frac{QWa}{q}|\leq \frac{1}{(QWq)^2}$.\\ The type I Bombieri-Vinogradov sums cause no problems, as Lemma \ref{le8} with the choices $R=1$, $N=QWx$, $v=QW$, $M= x^{\frac{1}{3}+\varepsilon}$, $\rho\leq \frac{1}{2}-\varepsilon$ tells at once that \begin{align*} \sum_{\substack{d_1\sim \Delta_1\\(d_1,QW)=1}}\sum_{\substack{d_2\sim \Delta_2\\(d_2,QW)=1\\(d_1,d_2)=1}}R_{d_1d_2}^{\text{I}}(t)\ll \frac{x}{(\log x)^{\frac{A}{10}}}, \end{align*} since $\frac{q}{(q,(QW)^2)}\geq W^{-2}(\log x)^{A}$ and $\Delta_1\Delta_2\leq x^{\rho_2}$.\\ We know that $(QWx)^{\frac{1}{2}}\leq M\leq (2QWx)^{\frac{2}{3}}$ in the sum $R_{d_1d_2}^{\text{II}}(t)$. 
We divide the analysis of this sum into three cases.\\ \textbf{Case 1.} Assume that $M\geq x^{1-\rho_2-\varepsilon^2}$, $\Delta_1\geq (\log x)^{\frac{A}{10}}$. Take $D=\frac{x^{1-\varepsilon^2}}{M}$. We know that $x^{\frac{1}{3}-\varepsilon^2}(\log x)^{-B}\leq D\leq x^{\rho_2}$ by the bound on $M$. In view of \eqref{eq90} with $\theta=0$, this means in particular that $\Delta_1\leq \frac{x^{1-\varepsilon^2}}{M}$ and $\Delta_1\Delta_2^2\leq \frac{x^{1-2\varepsilon^2}}{D}= Mx^{-\varepsilon^2}$. Now we apply Lemma \ref{le9} (in the case of $F_1$) with $R=1$, $N=QWx$, $v=QW$, $\rho=\rho_2\leq \frac{3}{7}-\varepsilon$ to deduce that \begin{align*} &\sum_{\substack{d_1\sim \Delta_1\\(d_1,QW)=1}}\sum_{\substack{d_2\sim \Delta_2\\(d_2,QW)=1\\(d_1,d_2)=1}}R_{d_1d_2}^{\text{II}}(t)\\ &\ll x\left(\left(\frac{1}{\Delta_1}+\frac{W^2}{(\log x)^{A}}+(\log x)^{-99B}(QW)^2\right)^{\frac{1}{8}}+\left(\frac{\Delta_1 M}{x}+\Delta_1\Delta_2^2\frac{QW}{M}\right)^{\frac{1}{2}}\right)(\log x)^{1000}, \end{align*} which is $\ll \frac{x}{(\log x)^{\frac{A}{100}}}$ for $A$ large enough by the lower bound on $\Delta_1$.\\ \textbf{Case 2.} Assume then that $M\geq x^{1-\rho_2-\varepsilon^2}$, $\Delta_1< (\log x)^{\frac{A}{10}}$. Since $\Delta_1<x^{0.1}$, we know that $\Delta_2=1$, so applying Lemma \ref{le9} (in the case of $F_2$) we obtain, for $A$ large enough, \begin{align*} \sum_{\substack{d_1\sim \Delta_1\\(d_1,QW)=1}}\sum_{\substack{d_2\sim \Delta_2\\(d_2,QW)=1\\(d_1,d_2)=1}}R_{d_1d_2}^{\text{II}}(t)\ll x(\log x)^{\frac{A}{5}}\left(\frac{W}{(\log x)^{\frac{A}{2}}}+\frac{QW}{M^{\frac{1}{2}}}+\frac{(QW)^2M}{x}+\frac{(QW)^{\frac{1}{2}}}{(\log x)^{\frac{99B}{2}}}\right)^{\frac{1}{2}}, \end{align*} and this is again $\ll \frac{x}{(\log x)^{\frac{A}{100}}}$ for $A$ large.\\ \textbf{Case 3.} Lastly, assume that $M<x^{1-\rho_2-\varepsilon^2}$. Then we estimate \eqref{eq41} instead of \eqref{eq15}. This amounts to just replacing $d_1\sim \Delta_1$, $d_2\sim \Delta_2$ with $d_1\leq x^{\rho_2}$, $d_2=1$ throughout this subsection. We have $x^{\rho_2}\leq \frac{x^{1-\varepsilon^2}}{M}$ and $x^{\rho_2}\leq Mx^{-\varepsilon^2}$, so we can bound the type II sums in the same way as for $M\geq x^{1-\rho_2-\varepsilon^2}$ (considering again the cases $\Delta_1\geq (\log x)^{\frac{A}{10}}$ and $\Delta_1< (\log x)^{\frac{A}{10}}$ separately), so also Case 3 contributes $\ll \frac{x}{(\log x)^{\frac{A}{100}}}$.\\ Consequently, we have shown that the contribution of the minor arcs for the semilinear sieve is small enough. \subsection{Minor arcs for the linear sieve} We assume again $\frac{q}{(q,Q^2)}\geq (\log x)^{A}$. We first look at the second expression inside absolute values in \eqref{eq92}. We have by partial summation \begin{align*} \sum_{n\sim x}\frac{e(\xi n)}{\ell \log\frac{QWn}{\ell}}\ll \frac{1}{\ell\|\xi\|} \end{align*} for $\ell\leq x^{1-\varepsilon}$ just as in Subsection \ref{sub: min sem}.
We showed earlier that $\frac{1}{\|\xi\|}\ll \frac{x}{(\log x)^{99B}}$ when $\frac{q}{(q,Q^2)}\geq (\log x)^{A}$, so the second expression inside absolute values in \eqref{eq92} is $\ll \frac{x}{\ell\varphi(d)(\log x)^{98B}}$, which is $\ll x(\log x)^{-97B}$ after summing over $d\leq x^{\rho_1}$ and over $\ell\leq x^{1-\varepsilon}$ weighted by $|g(\ell)|$.\\ We may write the first expression inside absolute values in \eqref{eq92} as \begin{align}\label{eq73} e\left(\frac{-(c_1-1)\xi}{QW}\right)\hspace{-0.1cm}\sum_{\substack{\ell p\sim QWx\\\ell p\equiv c_1-1\hspace{-0.1cm} \pmod{QW}\\\ell p\equiv -1\hspace{-0.1cm} \pmod d\\\ell\leq x^{1-\varepsilon}}}\hspace{-0.1cm}g(\ell)e\left(\frac{\xi}{QW}\ell p\right)+O(QW), \end{align} and the error $O(QW)$ is $\ll x^{\frac{1}{2}}$ after summing over $d\leq x^{\rho_1}$. We have ignored the conditions $(\ell,QW)=\delta, (\ell,d)=1$ above, since if either of them fails, $\ell p\equiv c_1-1 \hspace{-0.1cm} \pmod{QW}$, $\ell p\equiv -1\pmod d$ is impossible.\\ Crucially, our assumption is that the sequence $(g(\ell))_{\ell\geq 1}$ is of convolution type, so the sum in \eqref{eq73} can be rewritten as \begin{align*} \sum_{\substack{km p\sim QWx\\k m p\equiv c_1-1\hspace{-0.1cm} \pmod{QW}\\k m p\equiv -1\hspace{-0.1cm} \pmod d\\ k m\leq x^{1-\varepsilon}}}\alpha_k\beta_m e\left(\frac{\xi}{QW} k m p\right), \end{align*} where $(\alpha_k)$ is supported on $x^{\frac{1}{\sigma}}\leq k\leq (Qx)^{1-\frac{1}{\sigma}}$ for $\sigma=3+\varepsilon$. Putting \begin{align*} \beta^{*}_r=\sum_{r=mp}\beta_m \end{align*} and splitting the previous sum dyadically, it becomes $\ll \log x$ sums of the form \begin{align*} \sum_{\substack{k r\sim QWx\\k r\equiv c_1-1\hspace{-0.1cm} \pmod{QW}\\k r\equiv -1\hspace{-0.1cm} \pmod d\\k\asymp M}}\alpha_k\beta^{*}_r e\left(\frac{\xi}{QW} k r\right), \end{align*} where $x^{\frac{1}{\sigma}}\leq M\leq (Qx)^{1-\frac{1}{\sigma}}$, and by changing the roles of the variables, we may further assume that $(QWx)^{\frac{1}{2}}\leq M\leq Qx^{1-\frac{1}{\sigma}}$. Now our bilinear sums are exactly of the same form as in \eqref{eq100} (but with different $M$). Furthermore, we may assume that $|\xi-\frac{QWa}{q}|\leq \frac{1}{(QWq)^2}$ for the same reason as in Subsection \ref{sub: min sem}. If $M\geq x^{1-\rho_1-\varepsilon^2}$, denoting $D=\frac{x^{1-\varepsilon^2}}{M}\in [x^{\frac{1}{5}},x^{\rho_1}]$, we again see that $\Delta_1\Delta_2^2\leq Mx^{-\varepsilon^2}$ in \eqref{eq91} (with $\theta=0$). Therefore, we may apply the very same estimates as in the Cases 1 and 2 of Subsection \ref{sub: min sem}. If $M<x^{1-\rho_1-\varepsilon^2}$, we can apply precisely the same argument as in Case 3 of the previous subsection, since $x^{\rho_1}\leq \frac{x^{1-\varepsilon^2}}{M}$ and $x^{\rho_1}\leq Mx^{-\varepsilon^2}$. 
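Since the admissibility of the choice $D=\frac{x^{1-\varepsilon^2}}{M}$ in the two minor arc subsections rests on elementary exponent comparisons, we record a small sanity check of this bookkeeping (a sketch with $\theta=0$ and the $\varepsilon$-terms dropped; all quantities are exponents of $x$, and since $D$ is monotone in $M$ it suffices to test the endpoints of the ranges of $M$ used above).
\begin{verbatim}
from fractions import Fraction as F

checks = [
    # (sieve, range of D from eq90/eq91, endpoint exponents of M)
    ("semilinear", (F(1, 3), F(3, 7)), (F(4, 7), F(2, 3))),
    ("linear",     (F(1, 5), F(1, 2)), (F(1, 2), F(2, 3))),
]
for name, (D_lo, D_hi), Ms in checks:
    for M in Ms:
        D = 1 - M  # the choice D = x^{1-eps^2}/M, as an exponent
        assert D_lo <= D <= D_hi, (name, M)
print("Case 1 choice of D is admissible at both endpoints of M")
\end{verbatim}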
Summarizing, we have shown that the minor arcs for the linear sieve contribute $\ll x(\log x)^{-\frac{A}{100}}$, which is small enough for large $A$.\\ We have now concluded the proof of Theorem \ref{theo_goldbach}, in view of Theorem \ref{t2} and Proposition \ref{prop2}.\qedd\\ \textbf{Proof of Theorem \ref{theo_sievebombieri}:} We take $Q=W=1$ and $L(n)=n$ in \eqref{eq41} and replace $L(n)\equiv 1 \pmod d$ by $L(n)\equiv b \pmod d$ (with $b\neq 0$ an arbitrary integer) there and note that the proof that \eqref{eq41} is $\ll_C x(\log x)^{-C}$ is verbatim the same as the minor arc argument for the semilinear sieve in this section, provided that $\xi$ is any real number with $|\xi-\frac{a}{q}|\leq \frac{1}{q^2}$ for some coprime $a$ and $q\in [(\log x)^{1000C},x(\log x)^{-1000C}]$. This proves Theorem \ref{theo_sievebombieri} in the case of lower bound sieve weights. The case of upper bound sieve weights follows very similarly by replacing $\lambda_{d}^{-,\textnormal{SEM}}$ with $\lambda_{d}^{+,\textnormal{SEM}}$ and making use of a remark after Lemma \ref{le1} (which is where the value $\rho_{+}=\frac{2}{5}-\varepsilon$ comes from).\qedd \section[The distribution of fractional parts]{The distribution of $\xi p$ modulo $1$} \label{Sec: fractional parts} We show that our considerations on primes $x^2+y^2+1$ in Bohr sets imply a result about the distribution of irrational multiples of such primes, in the form of Theorem \ref{theo_alphap}.\\ To prove Theorem \ref{theo_alphap}, it suffices to show that, given an irrational $\xi>0$, there exist infinitely many integers $N\geq 1$ such that some prime $p\sim N$ of the form $x^2+y^2+1$ satisfies $\|\xi p+\kappa\|\leq \frac{N^{-\theta}}{2}$. Let $\chi_0$ be a $1$-periodic function which is a lower bound for the characteristic function of $[-\frac{\eta}{2}, \frac{\eta}{2}]$ with $\eta=N^{-\theta}$. Specifically, as in \cite{matomaki-bombieri}, we choose $\chi_0$ so that \begin{align*} &0\leq \chi_0(t)\leq 1,\quad \chi_0(t)=0\quad \text{when}\quad t\not \in \left[-\frac{\eta}{2},\frac{\eta}{2}\right],\\ &\chi_0(t)=\frac{\eta}{2}+\sum_{|r|>0}c(r)e(rt)\quad \text{with}\,\,c(r)\ll \eta,\\ &\text{and}\,\,\sum_{|r|>R}|c(r)|\ll R^{-1}\quad \text{for} \quad R= \eta^{-1}(\log \eta^{-1})^C \end{align*} for some large constant $C$. This construction goes back to Vinogradov's work. What we want to show is that \begin{align}\label{eq25} \sum_{\substack{p\sim N\\p\in \mathcal{S}+1}}\chi_0(\xi p+\kappa)\geq \delta_0 \frac{\eta N}{(\log N)^{\frac{3}{2}}} \end{align} for some absolute constant $\delta_0>0$ and infinitely many $N$. From now on, we choose a large integer $q$ satisfying $|\xi-\frac{a}{q}|\leq \frac{1}{q^2}$ for some $a$ coprime to $q$ (there are infinitely many such $q$) and take \begin{align}\label{eq66} N=q^2,\,\, R= \eta^{-1}(\log \eta^{-1})^C\asymp N^{\theta}(\log N^{\theta})^{C}.
\end{align} Concerning the term on the right-hand side of \eqref{eq25}, we note that \begin{align*} \sum_{n\sim N}\chi_0(\xi n+\kappa)-\frac{\eta}{2} N &\ll \eta \sum_{0<|r|\leq R} \left|\sum_{n\sim N}e(\xi r n)\right| +\frac{N}{R}\\ &\ll \eta \sum_{0<|r|\leq R}\frac{1}{\|\xi r\|}+\eta N(\log N)^{-C}\\ &\ll \eta q\log{2q}+\eta N(\log N)^{-C}\\ &\ll \eta N(\log N)^{-C} \end{align*} for $2\varepsilon\leq \theta\leq \frac{1}{2}-\varepsilon$, so \eqref{eq25} takes the form \begin{align}\label{eq63} \sum_{\substack{p\sim N\\p\in \mathcal{S}+1}}\chi_0(\xi p+\kappa)\geq \frac{\delta_1}{(\log N)^{\frac{3}{2}}}\sum_{n\sim N}\chi_0(\xi n +\kappa) \end{align} for some absolute constant $\delta_1>0$. This is what we set out to prove.\\ \textbf{Proof of Theorem \ref{theo_alphap}.} Pick any amenable linear polynomial, such as $L(n)=Kn+5$ with $K=6^4$. By applying Theorem \ref{t2} to $\omega_n=\chi_0(K\xi n+\kappa+5\xi)$ and $L(n)$, we see that \eqref{eq63} will follow (with $N$ replaced by $\frac{N}{K}$) once we establish Hypothesis \ref{h1} (with $\delta=(K,5-1)=4$) for this sequence $(\omega_n)$ and some parameters satisfying $\text{H}(\rho_1,\rho_2,\sigma)$ under the conditions \eqref{eq66}. Taking the definition of $\chi_0(\cdot)$ into account and making use of the classical Bombieri-Vinogradov theorem, it suffices to prove Hypothesis \ref{h1} for $\omega_n'=\sum_{0<|r|<R}c(r)e(K\xi r n)$ (with the choices \eqref{eq66}). Hence, what we must show is that \begin{align*} &\sum_{\substack{d\leq N^{\rho_2}\\(d,K)=1}}|\lambda_{d}^{-,\textnormal{SEM}}|\sum_{0<|r|<R}\bigg|\sum_{\substack{n\sim N\\Kn+5\in \mathbb{P}\\Kn+4\equiv 0\hspace{-0.1cm} \pmod d}}e(K\xi r n)-\frac{K}{\varphi(Kd)}\sum_{n\sim N}\frac{e(K\xi rn)}{\log(Kn)}\bigg|\quad \text{and}\\ &\sum_{\substack{d\leq N^{\rho_1}\\(d,K)=1}}|\lambda_{d}^{+,\textnormal{LIN}}|\sum_{0<|r|<R}\bigg|\sum_{\substack{\ell\leq N^{1-\varepsilon}\\(\ell,d)=1\\(\ell,K)=\delta}}g(\ell)\bigg(\sum_{\substack{n\sim N\\Kn+4=\ell p\\Kn+5\equiv 0\hspace{-0.1cm} \pmod d}}e(K\xi rn)-\frac{K}{\varphi(Kd)}\sum_{n\sim N}\frac{e(K\xi rn)}{\ell \log \frac{Kn}{\ell}}\bigg)\bigg| \end{align*} are $\ll \frac{N}{(\log N)^{100}}$, where $\lambda_d^{-,\textnormal{SEM}}$ has sifting parameter $z_2\ll N^{\frac{1}{\sigma}}$, while $\lambda_d^{+,\textnormal{LIN}}$ has sifting parameter $z_1\ll N^{\frac{1}{5}}$. We know that $|K\xi-\frac{a'}{q'}|\leq \frac{6^4}{q'^2}$ for some coprime $a'$ and $q'\asymp N^{\frac{1}{2}}$, so the minor arc arguments from Section \ref{Sec: hypotheses} allow replacing the previous Bombieri-Vinogradov sums (up to error $\ll N^{1-\varepsilon}$) with the sums \begin{align}\begin{split} \label{eq99} &\sum_{\substack{d\leq N^{\rho_2}\\(d,K)=1}}|\lambda_{d}^{-,\textnormal{SEM}}|\sum_{0<|r|<R}\bigg|\sum_{\substack{n\sim N\\Kn+5\in \mathbb{P}\\Kn+4\equiv 0\hspace{-0.1cm} \pmod d}}e(K\xi r n)\bigg|\quad \text{and}\\ &\sum_{\substack{d\leq N^{\rho_1}\\(d,K)=1}}|\lambda_{d}^{+,\textnormal{LIN}}|\sum_{0<|r|<R}\bigg|\sum_{\substack{\ell\leq N^{1-\varepsilon}\\(\ell,d)=1\\(\ell,K)=\delta}}g(\ell)\sum_{\substack{n\sim N\\Kn+4=\ell p\\Kn+5\equiv 0\hspace{-0.1cm} \pmod d}}e(K\xi rn)\bigg|. 
\end{split} \end{align} Splitting the variables as in Subsection \ref{sub: splitting} and again employing the minor arc arguments from Section \ref{Sec: hypotheses}, the sums in \eqref{eq99} reduce to $\ll (\log N)^{10}$ sums of the same form as in Lemmas \ref{le8} and \ref{le9} with \begin{align*} R\leq N^{\theta}(\log N)^{C},\quad v=1,\quad q\asymp N^{\frac{1}{2}},\quad M\ll N^{\frac{1}{3}} \end{align*} in the type I case, while \begin{align*} R\leq N^{\theta}(\log N)^{C},\quad v=1,\quad q\asymp N^{\frac{1}{2}},\quad M\in [N^{\frac{1}{2}},N^{\frac{2}{3}+\varepsilon^2}],\quad \Delta_1,\Delta_2 \quad \text{subject to} \quad \eqref{eq90} \end{align*} (with $x$ replaced by $N$ in \eqref{eq90}) in the type II sums arising from the semilinear sieve weights and \begin{align*} R\leq N^{\theta}(\log N)^{C},\quad v=1,\quad q\asymp N^{\frac{1}{2}},\quad M\in [N^{\frac{1}{2}},N^{\frac{3}{4}-\varepsilon}],\quad \Delta_1,\Delta_2 \quad \text{subject to} \quad \eqref{eq91} \end{align*} (with $x$ replaced by $N$ in \eqref{eq91}) in the type II sums arising from the linear sieve weights.\\ From now on, we fix the values \begin{align*} \rho_1=\frac{1}{2}(1-4\theta)-\varepsilon,\quad \rho_2=\frac{3}{7}(1-4\theta)-\varepsilon,\quad \sigma=\frac{1}{\frac{1}{3}-2\theta}+\varepsilon. \end{align*} The bound offered by Lemma \ref{le8} for the type I sums we face is evidently $\ll N^{1-\varepsilon^2}$ for $\theta\leq \frac{1}{30}$. This takes care of the type I sums.\\ We turn to the type II sums that are of the same form as in Lemma \ref{le9}. Utilizing Lemma \ref{le9}, such Bombieri-Vinogradov sums are bounded by \begin{align}\label{eq97} \ll RN(\log N)^{1000}\left(\left(\frac{\Delta_1 M}{N}+\frac{\Delta_1\Delta_2^2}{M}\right)^{\frac{1}{2}}+\left(\frac{1}{\Delta_1}+\frac{1}{N^{\frac{1}{2}}}\right)^{\frac{1}{8}}\right) \end{align} when $\Delta_1\Delta_2\leq N^{\frac{1}{2}}$ and $\Delta_1\Delta_2^2\leq M$. For $R\leq N^{\theta}(\log N)^{C}$, the estimate \eqref{eq97} is $\ll N^{1-0.1\varepsilon^2}$, provided that \begin{align}\label{eq98} \Delta_1\leq \frac{N^{1-2\theta-\varepsilon^2}}{M}, \quad \Delta_1\Delta_2^2\leq MN^{-2\theta-\varepsilon^2},\quad \Delta_1\geq N^{0.1},\quad \theta\leq \frac{1}{80}-\varepsilon. \end{align} We deal with the type II sums in three cases. We will use $\rho$ to denote either $\rho_1$ or $\rho_2$.\\ \textbf{Case 1:} Suppose that $M\geq N^{1-\rho-2\theta-\varepsilon^2}, \Delta_1\geq N^{0.1}$. By taking $D=\frac{N^{1-2\theta-\varepsilon^2}}{M}$ in \eqref{eq90}-\eqref{eq91} and using the fact that $\frac{1}{\sigma}\leq \frac{1}{3}-2\theta-2\varepsilon^2$, we can indeed achieve \eqref{eq98} as long as $D\in [N^{\frac{1}{5}},N^{\rho}]$ in the case of the linear sieve and $D\in [N^{\frac{1}{3}-2\theta-2\varepsilon^2},N^{\rho}]$ in the case of the semilinear sieve. The inequality $D\leq N^{\rho}$ holds due to our lower bound on $M$. The inequality $D\geq N^{\frac{1}{5}}$ holds for $M\leq N^{\frac{3}{4}}$, which is true in the linear case. Similarly, the inequality $D\geq N^{\frac{1}{3}-2\theta-2\varepsilon^2}$ reduces to $M\leq N^{\frac{2}{3}+\varepsilon^2}$, and this holds in the semilinear case. Therefore, in this case \eqref{eq98} is always valid, which means that our type II sums are $\ll N^{1-0.1\varepsilon^2}$, which is what we wanted.\\ \textbf{Case 2:} Suppose that $M\geq N^{1-\rho-2\theta-\varepsilon^2}, \Delta_1<N^{0.1}$. In this case we know that $\Delta_2=1$ from \eqref{eq90} and \eqref{eq91}. 
Now, choosing $F_2$ in Lemma \ref{le9}, we obtain for the type II Bombieri-Vinogradov sum the bound \begin{align*} \ll RN\Delta_1\left(\frac{1}{N^{\frac{1}{4}}}+\frac{1}{M^{\frac{1}{2}}}+\frac{M}{N}+\frac{N^{\frac{1}{4}}}{(RN)^{\frac{1}{2}}}\right)^{\frac{1}{2}}\ll RN\Delta_1N^{-\frac{1}{8}}\ll N^{0.999} \end{align*} when $\theta\leq \frac{1}{50}$.\\ \textbf{Case 3:} Suppose finally that $M< N^{1-\rho-2\theta-\varepsilon^2}$, $\Delta_1\geq N^{0.1}$. As in Case 3 of Subsection \ref{sub: min sem}, we may take $\Delta_1=N^{\rho}$, $\Delta_2=1$. Again we require this choice to satisfy \eqref{eq98}. The first constraint in \eqref{eq98} follows directly from our upper bound on $M$. Since $M\geq N^{\frac{1}{2}}$, the second constraint in \eqref{eq98} holds for $\rho\leq \frac{1}{2}-2\theta-\varepsilon^2$, which certainly holds for our choices of $\rho_1$ and $\rho_2$. This means that also in Case 3 we get good enough bounds for the type II sums.\\ Combining the analyses of Cases 1--3, we see that Theorem \ref{theo_alphap} will follow with exponent $\theta$ if $\text{H}(\rho_1,\rho_2,\sigma)$ is true for $\sigma=\frac{1}{\frac{1}{3}-2\theta}+\varepsilon$, $\rho_1=\frac{1}{2}(1-4\theta)-\varepsilon$ and $\rho_2=\frac{3}{7}(1-4\theta)-\varepsilon$, provided that $\theta\leq \frac{1}{80}-\varepsilon$. By continuity, it suffices to check $\text{H}(\frac{1}{2}(1-4\theta),\frac{3}{7}(1-4\theta),\frac{1}{\frac{1}{3}-2\theta})$ for $\theta=\frac{1}{80}$, and this holds by a numerical computation (the difference between the left- and right-hand sides of \eqref{eq96} is then $>10^{-3}$). This completes the proof of Theorem \ref{theo_alphap}.\qedd
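Numerically checking $\text{H}(\rho_1,\rho_2,\sigma)$ amounts to comparing two explicit one-dimensional integrals. As an illustration, the following Python sketch evaluates $I_2(\rho_1,\sigma)$ from \eqref{eq22}, using the substitution $u=\sqrt{1-t/\sigma}$ to remove the endpoint singularity; this is a sketch only, and the companion integral $I_1$ from \eqref{eq96}, which we do not restate here, would be evaluated in the same way.
\begin{verbatim}
import math
from scipy.integrate import quad

def I2(rho1, sigma):
    # I_2 = (1/(2*rho1)) * int_2^sigma log(t-1)/(t*sqrt(1-t/sigma)) dt;
    # with t = sigma*(1-u^2) the integrand below is smooth on [0, u0].
    u0 = math.sqrt(1 - 2 / sigma)
    f = lambda u: 2 * math.log(sigma * (1 - u * u) - 1) / (1 - u * u)
    return quad(f, 0, u0)[0] / (2 * rho1)

theta = 1 / 80
print(I2((1 - 4 * theta) / 2, 1 / (1 / 3 - 2 * theta)))
\end{verbatim}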
\section{Introduction} In 1974, Hawking \cite{Hawking} proved that a black hole can emit particles from its event horizon with a temperature proportional to its surface gravity, and that the radiant spectrum is a pure thermal one, which implies the loss of information once the black hole has evaporated away and disappeared completely \cite{ILP}. Though a complete resolution of the information loss paradox must lie within the framework of quantum gravity and/or the unitary theory of string/M-theory, Hawking argued that the information could come out if the outgoing radiation were not exactly thermal but had subtle corrections. Recently, Parikh and Wilczek \cite{PW} put forward a semi-classical tunneling method to investigate Hawking radiation of the static Schwarzschild and Reissner-Nordstr\"{o}m black holes; they found that the radiant spectrum of the black hole is not a pure thermal one and that the derived tunneling rate is related to the change of the Bekenstein-Hawking entropy. In their methodology, Hawking radiation is treated as a tunneling process with the tunneling potential barrier produced by the outgoing particle itself. The key trick to calculate the tunneling rate is to find a coordinate system well-behaved at the event horizon. However, this method has so far been limited to the tunneling rate of uncharged massless particles \cite{PW,TRA,ZZY}. For black holes with a charge, the emitted outgoing particles can be charged as well, so not only energy conservation but also charge conservation must be considered \cite{KWZ}. On the other hand, research on charged black holes with a positive cosmological constant and with a global monopole has become important for the following reasons: (1) the recently observed accelerating expansion of our universe indicates that the cosmological constant might be positive \cite{AEU}; (2) the conjectured de Sitter/conformal field theory (CFT) correspondence \cite{dSCFT}; (3) there might exist topological defects in the early universe \cite{TSY}; etc. Motivated by the reasons mentioned above, in this Letter we extend Parikh's method to investigate the Hawking radiation of charged particles via tunneling from the Reissner-Nordstr\"{o}m-de Sitter black hole with a global monopole, whose Arnowitt-Deser-Misner (ADM) mass is $(1 -8\pi\eta^2)$ times the mass parameter. Our result shows that the emission rate of the charged particle is connected with the Bekenstein-Hawking entropy, and the corrected radiant spectrum is not a pure thermal one, but is consistent with an underlying unitary theory. Our Letter is outlined as follows: In Section \ref{GPRG}, we introduce the generalized Painlev\'{e} coordinate transformation and present the radial geodesic equation of charged particles. In Sections \ref{TREH} and \ref{TRCH}, we investigate Hawking radiation as tunneling from the event horizon and the cosmological horizon, and compute the tunneling rate from these two horizons, respectively. Finally, we give some discussion of our results.
\section{Generalized Painlev\'{e} coordinate transformation and the radial geodesics of charged particles} \label{GPRG} The line element of a Reissner-Nordstr\"{o}m-de Sitter black hole with a global monopole is \cite{GS} \begin{equation} ds^2 = -\Delta dt_R^2 +\Delta^{-1}dr^2 +(1 -8\pi\eta^2)r^2(d\theta^2 +\sin^2\theta d\phi^2) \, , \label{RNdSM} \end{equation} where $\Delta = 1 -2M/r +Q^2/r^2 -(\Lambda/3)r^2$, $\eta$ is a symmetry breaking constant related to the global monopole, $M$ is the mass parameter, $Q$ is the charge of the black hole, $\Lambda$ is a positive cosmological constant, and $t_R$ is the coordinate time for the black hole. In general, the black hole has an inner horizon (IH), an event horizon (EH) and a cosmological horizon (CH), all of them satisfying the horizon equation $\Delta = 0$. In this Letter we shall consider the most general case, in which none of these horizons coincides with another. To remove the coordinate singularity in the metric (\ref{RNdSM}), we introduce a generalized Painlev\'{e} coordinate transformation \begin{equation} dt_R = dt \mp \frac{\sqrt{1 -\Delta}}{\Delta}dr \, , \end{equation} and obtain the Painlev\'{e}-like line element of the Reissner-Nordstr\"{o}m-de Sitter black hole with a global monopole as follows \begin{equation} ds^2 = -\Delta dt^2 \pm 2\sqrt{1 -\Delta}dtdr +dr^2 +(1 -8\pi\eta^2)r^2(d\theta^2 +\sin^2\theta d\phi^2) \, , \label{metric} \end{equation} where the plus (minus) sign denotes the space-time line element of the charged massive outgoing (ingoing) particles at the EH (CH). The Painlev\'{e}-like coordinate system in Eq. (\ref{metric}) has many attractive features. First, the metric is well behaved at the EH and the CH; second, it satisfies Landau's condition of coordinate clock synchronization; third, the new form of the line element is stationary, but not static. These features are useful for investigating the tunneling radiation of charged massive particles across the horizons. It should be pointed out that, unlike the asymptotically flat case, the Painlev\'{e}-like coordinates for an asymptotically non-flat space-time are not unique. In fact, there is another form for the metric (\ref{RNdSM}) \begin{eqnarray} ds^2 &=& -\Delta dt^2 \pm 2\sqrt{1 -\Delta/(1 -\Lambda r^2/3)}dtdr +(1 -\Lambda r^2/3)^{-1}dr^2 \nonumber \\ &&\qquad +(1 -8\pi\eta^2)r^2(d\theta^2 +\sin^2\theta d\phi^2) \, , \nonumber \end{eqnarray} which approaches the de Sitter space in the vacuum case where $\eta = 0$. Now, let us work with the metric in the new form (\ref{metric}) and obtain the radial geodesics of the charged massive particles, which differ from those of the uncharged massless particles that follow the radial null geodesics \begin{equation} \dot{r} = \frac{dr}{dt} = \pm 1 \mp \sqrt{1 -\Delta} \, . \end{equation} According to de Broglie's hypothesis, from the definition of the phase velocity $v_p$ and the group velocity $v_g$, we have \begin{equation} v_p = \frac{1}{2}v_g \, . \end{equation} Since the tunneling process is an instantaneous effect and the metric in the line element (\ref{metric}) satisfies Landau's condition of coordinate clock synchronization, the coordinate time difference of two events that take place simultaneously in different places is \begin{equation} dt = -\frac{g_{tr}}{g_{tt}}dr_c \, , \qquad\qquad (d\theta = d\phi = 0) \, , \end{equation} where $r_c$ is the radial location of the tunneling particle.
So the group velocity can be expressed as \begin{equation} v_g = \frac{dr_c}{dt} = -\frac{g_{tt}}{g_{tr}} = \pm \frac{r^2 -2Mr +Q^2 -(\Lambda/3)r^4}{\sqrt{2Mr^3 -Q^2r^2 +(\Lambda/3)r^6}} \, , \end{equation} and therefore the phase velocity (the radial geodesic) is \begin{equation} \dot{r} = v_p = -\frac{g_{tt}}{2g_{tr}} = \pm \frac{r^2 -2Mr +Q^2 -(\Lambda/3)r^4}{2\sqrt{2Mr^3 -Q^2r^2 +(\Lambda/3)r^6}} \, , \end{equation} where the $+$ ($-$) sign denotes the phase velocity of the charged particles tunneling across the EH (CH). During the process of a charged massive particle tunneling across the potential barrier, the self-interaction effect of the electro-magnetic field on the emitted particles should not be ignored; the temporal component of the electro-magnetic potential is \begin{equation} A_t = \pm \frac{Q}{r} \, . \end{equation} In the remaining two sections, we shall discuss Hawking radiation from the event horizon and the cosmological horizon, respectively, and calculate the tunneling rate from each horizon. Since the overall picture of tunneling radiation for this metric is very involved, to simplify the discussion we will consider only the outgoing radiation from the EH and ignore the incoming radiation from the CH when we deal with the black hole event horizon; when dealing with the CH case, we shall consider only the incoming radiation from the CH and ignore the outgoing radiation from the EH. \section{Tunneling rate of charged particles at the EH} \label{TREH} According to energy conservation and charge conservation, one can assume that the total ADM mass and charge of the hole-particle system are held fixed whereas the mass and the charge of the hole are allowed to fluctuate: the black hole mass and charge become $M -\omega$ and $Q -q$ when a particle with energy $\omega$ and charge $q$ has evaporated from the EH. Considering a charged particle that tunnels out from the EH along the radial direction, we get the new line element of the black hole in the EH case \begin{equation} ds^2 = -\Delta^{\prime} dt^2 +2\sqrt{1 -\Delta^{\prime}}dtdr +dr^2 +(1 -8\pi\eta^2)r^2(d\theta^2 +\sin^2\theta d\phi^2) \, , \end{equation} where $\Delta^{\prime} = 1 -2(M -\omega)/r +(Q -q)^2/r^2 -(\Lambda/3)r^2$. Accordingly, the radial geodesic of the charged massive particle tunneling out from the EH is \begin{equation} \dot{r} = \frac{r^2 -2(M -\omega)r +(Q -q)^2 -(\Lambda/3)r^4}{2\sqrt{2(M -\omega)r^3 -(Q -q)^2r^2 +(\Lambda/3)r^6}} \, , \label{ME1} \end{equation} and the non-zero component of the electro-magnetic potential becomes \begin{equation} A_t = \frac{Q -q}{r} \, . \end{equation} When the charged particle tunnels out, the effect of the electro-magnetic field should be taken into account, so the matter-gravity system consists of the black hole and the electro-magnetic field outside the hole. As the Lagrangian function of the electro-magnetic field corresponding to the generalized coordinates described by $A_{\mu}$ is $-(1/4)F_{\mu\nu}F^{\mu\nu}$, we find that the generalized coordinate $A_{\mu} = (A_t, 0, 0, 0)$ is an ignorable coordinate.
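To make the horizon structure and the radial geodesic above concrete, the following Python sketch (with purely illustrative values of $M$, $Q$, $\omega$, $q$ and $\Lambda$ in geometrized units; these numbers are not drawn from the text) solves the horizon equation $\Delta^{\prime} = 0$ numerically and evaluates the phase velocity of Eq. (\ref{ME1}), which vanishes at the horizons.
\begin{verbatim}
# Sketch (illustrative parameters, geometrized units G = c = 1):
# find the horizons Delta' = 0 after emission and evaluate the
# outgoing phase velocity rdot of Eq. (ME1) between EH and CH.
import numpy as np

M, Q, Lam = 1.0, 0.5, 0.01       # assumed black hole parameters
omega, q = 0.05, 0.02            # assumed energy/charge of particle
Mp, Qp = M - omega, Q - q        # shifted mass and charge

# Delta' = 1 - 2Mp/r + Qp^2/r^2 - (Lam/3) r^2 = 0, times r^2:
# -(Lam/3) r^4 + r^2 - 2 Mp r + Qp^2 = 0
roots = np.roots([-Lam / 3.0, 0.0, 1.0, -2.0 * Mp, Qp**2])
horizons = np.sort(roots[np.abs(roots.imag) < 1e-9].real)
r_IH, r_EH, r_CH = horizons[horizons > 0]
print(f"r_IH = {r_IH:.4f}, r_EH = {r_EH:.4f}, r_CH = {r_CH:.4f}")

def rdot(r):
    """Phase velocity of Eq. (ME1) for the outgoing particle."""
    num = r**2 - 2 * Mp * r + Qp**2 - (Lam / 3.0) * r**4
    den = 2.0 * np.sqrt(2 * Mp * r**3 - Qp**2 * r**2
                        + (Lam / 3.0) * r**6)
    return num / den

for r in (1.001 * r_EH, 0.5 * (r_EH + r_CH)):
    print(f"rdot({r:.3f}) = {rdot(r):.5f}")  # -> 0 as r -> r_EH
\end{verbatim}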
In order to eliminate the degree of freedom corresponding to $A_{\mu}$, the imaginary part of the action for the charged massive particle should be written as \begin{eqnarray} \textrm{Im} S &=& \textrm{Im}\int_{t_i}^{t_f}\big(L -P_{A_t}\dot{A_t}\big)dt = \textrm{Im}\int_{r_{ie}}^{r_{fe}}\big(P_r\dot{r} -P_{A_t}\dot{A_t}\big)\frac{dr}{\dot{r}} \nonumber \\ &=& \textrm{Im}\int\limits_{r_{ie}}^{r_{fe}}\Bigg[\int\limits_{(0, ~0)}^{(P_r, P_{A_t})} \Big(\dot{r}~dP_r^{\prime} -\dot{A_t}~dP_{A_t}^{\prime}\Big)\Bigg]\frac{dr}{\dot{r}} \, , \label{IA1} \end{eqnarray} where $r_{ie}$ and $r_{fe}$ represent the locations of the EH before and after the particle with energy $\omega$ and charge $q$ tunnels out. According to Hamilton's canonical equations of motion, we have \begin{eqnarray} \dot{r} &=& \frac{dH}{dP_r}\Big|_{(r; A_t, P_{A_t})} \, , \qquad dH|_{(r; A_t, P_{A_t})} = (1 -8\pi\eta^2)d\big(M -\omega\big) \, , \nonumber \\ \dot{A_t} &=& \frac{dH}{dP_{A_t}}\Big|_{(A_t; r, P_r)} \, , \qquad dH|_{(A_t; r, P_r)} = (1 -8\pi\eta^2)\frac{Q -q}{r}d(Q -q) \, , \label{HCE1} \end{eqnarray} where $\omega$ and $q$ are the energy and the charge of the emitted particle. Because of the existence of a global monopole in the black hole background, the total ADM mass and the total charge in the EH case are $M_{\infty} = (1 -8\pi\eta^2)M$ \cite{MC} and $Q_{\infty} = (1 -8\pi\eta^2)Q$, respectively. [For the sake of convenience, we take the mass and charge of the particle measured at infinity as $\omega_{\infty} = (1 -8\pi\eta^2)\omega$ and $q_{\infty} = (1 -8\pi\eta^2)q$.] Eq. (\ref{HCE1}) represents the energy change of the hole due to the loss of mass and charge when a particle tunnels out. Substituting Eqs. (\ref{ME1}) and (\ref{HCE1}) into Eq. (\ref{IA1}) and switching the order of integration, we obtain \begin{eqnarray} \textrm{Im} S &=& \textrm{Im} \int\limits_{r_{ie}}^{r_{fe}} \int\limits_{(1 -8\pi\eta^2)(M, ~Q)}^{(1 -8\pi\eta^2)(M -\omega, ~Q -q)} \Big[dH|_{(r; A_t, P_{A_t})} -dH|_{(A_t; r, P_r)}\Big] \frac{dr}{\dot{r}} \nonumber \\ &=& \textrm{Im} \int\limits_{(1 -8\pi\eta^2)(M, ~Q)}^{(1 -8\pi\eta^2)(M -\omega, ~Q -q)} \int\limits_{r_{ie}}^{r_{fe}} \frac{2\sqrt{2(M -\omega^{\prime})r^3 -(Q -q^{\prime})^2r^2 +(\Lambda/3)r^6}}{r^2 -2(M -\omega^{\prime})r +(Q -q^{\prime})^2 -(\Lambda/3)r^4} \nonumber \\ &&\qquad \times (1 -8\pi\eta^2)\Big[d(M -\omega^{\prime}) -\frac{Q -q^{\prime}}{r}d(Q -q^{\prime})\Big] dr \, . \label{IE1} \end{eqnarray} Since $1 -2(M -\omega^{\prime})/r +(Q -q^{\prime})^2/r^2 -(\Lambda/3)r^2 = 0$ is the horizon equation satisfied after the particle with energy $\omega^{\prime}$ and charge $q^{\prime}$ has tunneled out, there exists a single pole in Eq. (\ref{IE1}). Let us carry out the integral by deforming the contour around the pole so as to ensure that the positive energy solutions decay in time, and get \begin{equation} \textrm{Im} S = -(1 -8\pi\eta^2)\textrm{Im}\int_{r_{ie}}^{r_{fe}}(i\pi r)dr = -\frac{\pi}{2}(1 -8\pi\eta^2)\big(r_{fe}^2 -r_{ie}^2\big) \, . \end{equation} So the relationship between the tunneling rate and the imaginary part of the particle's action is \begin{equation} \Gamma \sim e^{-2\textrm{Im} S} = e^{\pi(1 -8\pi\eta^2)(r_{fe}^2 -r_{ie}^2)} = e^{(A_{fe} -A_{ie})/4} = e^{\Delta S_{EH}} \, , \label{TA1} \end{equation} where $A_{ie}$ and $A_{fe}$ denote the event horizon areas before and after the charged particle tunnels out, and $\Delta S_{EH}$ is the change of the Bekenstein-Hawking entropy. From Eq.
(\ref{TA1}), we find that the tunneling rate at the EH is related to the Bekenstein-Hawking entropy and is consistent with an underlying unitary theory. \section{Tunneling rate of charged particles at the CH} \label{TRCH} In this section, we will discuss the Hawking radiation of charged particles via tunneling at the CH. In contrast to the tunneling behavior in the EH case discussed in the last section, the particle is now found to tunnel into the CH. So when the particle with energy $\omega$ and charge $q$ tunnels into the CH, the new line element reads \begin{equation} ds^2 = -\Delta^{\prime\prime} dt^2 -2\sqrt{1 -\Delta^{\prime\prime}}dtdr +dr^2 +(1 -8\pi\eta^2)r^2(d\theta^2 +\sin^2\theta d\phi^2) \, , \end{equation} where $\Delta^{\prime\prime} = 1 -2(M +\omega)/r +(Q +q)^2/r^2 -(\Lambda/3)r^2$. Using the same method, the phase velocity (the radial geodesic) of the charged particle tunneling into the CH can be expressed as \begin{equation} \dot{r} = -\frac{r^2 -2(M +\omega)r +(Q +q)^2 -(\Lambda/3)r^4}{2\sqrt{2(M +\omega)r^3 -(Q +q)^2r^2 +(\Lambda/3)r^6}} \, , \label{ME2} \end{equation} and the electro-magnetic potential accordingly becomes \begin{equation} A_t = -\frac{Q +q}{r} \, . \end{equation} According to Hamilton's canonical equations of motion, when a particle with energy $\omega$ and charge $q$ is absorbed by the CH of the black hole, we get \begin{eqnarray} dH|_{(r; A_t, P_{A_t})} &=& -(1 -8\pi\eta^2)d\big(M +\omega\big) \, , \nonumber \\ dH|_{(A_t; r, P_r)} &=& -(1 -8\pi\eta^2)\frac{Q +q}{r}d(Q +q) \, , \label{HCE2} \end{eqnarray} where $\omega$ and $q$ are the energy and the charge of the absorbed particle. Due to the presence of a global monopole in the black hole background, the total ADM mass and the total charge in the CH case are $M_{\infty} = -(1 -8\pi\eta^2)M$ and $Q_{\infty} = -(1 -8\pi\eta^2)Q$, respectively. In the same way, the imaginary part of the action for the charged massive particle incoming from the CH can be expressed as \begin{eqnarray} \textrm{Im} S &=& \textrm{Im}\int_{t_i}^{t_f}\big(L -P_{A_t}\dot{A_t}\big)dt = \textrm{Im}\int_{r_{ic}}^{r_{fc}}\big(P_r\dot{r} -P_{A_t}\dot{A_t}\big)\frac{dr}{\dot{r}} \nonumber \\ &=& \textrm{Im} \int\limits_{-(1 -8\pi\eta^2)(M, ~Q)}^{-(1 -8\pi\eta^2)(M +\omega, ~Q +q)} \int\limits_{r_{ic}}^{r_{fc}} \frac{2\sqrt{2(M +\omega^{\prime})r^3 -(Q +q^{\prime})^2r^2 +(\Lambda/3)r^6}}{r^2 -2(M +\omega^{\prime})r +(Q +q^{\prime})^2 -(\Lambda/3)r^4} \nonumber \\ &&\qquad \times (1 -8\pi\eta^2)\Big[d(M +\omega^{\prime}) -\frac{Q +q^{\prime}}{r}d(Q +q^{\prime})\Big] dr \, . \label{IE2} \end{eqnarray} In Eq. (\ref{IE2}), $r_{ic}$ and $r_{fc}$ are the locations of the CH before and after the particle with energy $\omega$ and charge $q$ is absorbed by the CH. Since $1 -2(M +\omega^{\prime})/r +(Q +q^{\prime})^2/r^2 -(\Lambda/3)r^2 = 0$ is the horizon equation satisfied after the particle tunnels into the CH, there exists a single pole in Eq. (\ref{IE2}). Deforming the contour around the pole and carrying out the integral, we have \begin{equation} \textrm{Im} S = -(1 -8\pi\eta^2)\textrm{Im}\int_{r_{ic}}^{r_{fc}}(i\pi r)dr = -\frac{\pi}{2}(1 -8\pi\eta^2)\big(r_{fc}^2 -r_{ic}^2\big) \, .
\end{equation} So the tunneling rate at the CH is \begin{equation} \Gamma \sim e^{-2\textrm{Im} S} = e^{\pi(1 -8\pi\eta^2)(r_{fc}^2 -r_{ic}^2)} = e^{(A_{fc} -A_{ic})/4} = e^{\Delta S_{CH}} \, , \label{TA2} \end{equation} where $A_{ic}$ and $A_{fc}$ are the cosmological horizon areas before and after the charged massive particle tunnels into the CH, and $\Delta S_{CH}$ is the change of the Bekenstein-Hawking entropy at the CH. From Eq. (\ref{TA2}), we learn that the tunneling rate at the CH of the Reissner-Nordstr\"{o}m-de Sitter black hole with a global monopole is connected with the Bekenstein-Hawking entropy. \section{Summary and Discussions} In summary, we find that when a charged massive particle tunnels across the event horizon (EH) or the cosmological horizon (CH) of a Reissner-Nordstr\"{o}m-de Sitter black hole with a global monopole, the radiant spectrum is not purely thermal; the tunneling rate is related to the change of the Bekenstein-Hawking entropy corresponding to each horizon and is consistent with an underlying unitary theory. Thus, purely thermal Hawking radiation can be viewed as an idealization only; it is possible for radiation that is not precisely thermal to carry information out during the radiation process of black holes, in accordance with an underlying unitary theory. The result obtained in this paper provides further evidence to support Parikh's tunneling picture, which might serve as a mechanism to deal with the information loss paradox. We would like to point out that a large class of previous results in the literature are included as special cases of the one obtained here. In particular, the results obtained in Refs. \cite{PW,TRA} can be recovered. For example, in the case where $\Lambda = 0$ and $\eta = 0$, the Reissner-Nordstr\"{o}m-de Sitter black hole with a global monopole reduces to the Reissner-Nordstr\"{o}m black hole. Considering an uncharged massless particle with energy $\omega$ tunneling across the event horizon, we know that $r_i = M +\sqrt{M^2 -Q^2}$ and $r_f = M -\omega +\sqrt{(M -\omega)^2 -Q^2}$ are the horizons of the black hole before and after the emission of the particle. According to Eq. (\ref{TA1}), the tunneling rate is \begin{equation} \Gamma \sim e^{-2\textrm{Im} S} = e^{2\pi\big[(M -\omega)^2 +(M -\omega)\sqrt{(M -\omega)^2 -Q^2} -M^2 -M\sqrt{M^2 -Q^2}\big]} = e^{\Delta S_{BH}} \, , \end{equation} which is the same as that obtained in Ref. \cite{PW}. In another special case, where $\Lambda = 0$, $Q = 0$, and $\eta = 0$, the black hole metric considered here reduces to that of the Schwarzschild black hole, and the event horizons before and after a particle with energy $\omega$ is emitted are $r_i = 2M$ and $r_f = 2(M -\omega)$, respectively. According to Eq. (\ref{TA1}), the tunneling rate at the event horizon reduces to \begin{equation} \Gamma \sim e^{-2\textrm{Im} S} = e^{-8\pi\omega(M -\omega/2)} = e^{\Delta S_{BH}} \, , \end{equation} which coincides with Parikh's result in the Schwarzschild black hole case. In addition, the discussion made here can be directly extended to the anti-de Sitter case \cite{AdSM} by changing the sign of the cosmological constant to a negative one, and can also be easily generalized to the case of higher-dimensional spherically symmetric black holes. \section*{Acknowledgments} S.-Q.Wu was supported by a starting fund from Central China Normal University and by the Natural Science Foundation of China.
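As a quick numerical cross-check of the two special-case reductions above, the short Python sketch below (with illustrative values of $M$, $Q$ and $\omega$ in geometrized units, and $\eta = \Lambda = 0$) verifies that $\pi(r_f^2 - r_i^2)$ reproduces both quoted exponents.
\begin{verbatim}
# Sketch (illustrative values, eta = Lambda = 0): verify that
# pi (r_f^2 - r_i^2) reproduces both quoted tunneling exponents.
import math

M, omega = 1.0, 0.1

# Schwarzschild case (Q = 0): r_i = 2M, r_f = 2(M - omega).
lhs = math.pi * ((2 * (M - omega))**2 - (2 * M)**2)
rhs = -8 * math.pi * omega * (M - omega / 2)
assert math.isclose(lhs, rhs)

# Reissner-Nordstrom case: r = M' + sqrt(M'^2 - Q^2).
Q = 0.5
r_i = M + math.sqrt(M**2 - Q**2)
r_f = (M - omega) + math.sqrt((M - omega)**2 - Q**2)
lhs = math.pi * (r_f**2 - r_i**2)
rhs = 2 * math.pi * ((M - omega)**2
                     + (M - omega) * math.sqrt((M - omega)**2 - Q**2)
                     - M**2 - M * math.sqrt(M**2 - Q**2))
assert math.isclose(lhs, rhs)
print("Both special-case exponents are reproduced.")
\end{verbatim}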
\section{INTRODUCTION} Over the last three decades, and despite substantial progress, direct evidence of interactions of galactic dark matter (DM) with Standard Model (SM) particles has been persistently lacking. To increase the probability of capturing a particle with very feeble interactions, detectors have typically advanced by lowering their energy thresholds, by reducing backgrounds, and by increasing their target masses up to the ton-scale; see References~\cite{Battaglieri:2017aum,Schumann:2019eaa} for reviews. Now, the encroachment of the inevitable background of astrophysical neutrinos will prove to be the final---and in many cases, insurmountable---obstacle to the largest direct detection experiments~\cite{Billard:2013qya}. This state of affairs has driven a resurgence in interest towards a comparatively little-studied experimental technique. It was first recognized by Spergel~\cite{Spergel:1987kx} that direct DM searches would be subject to a unique directional signature. The relative motion of the Solar System with respect to the Milky Way's DM halo should give rise to an anisotropic flux of DM with a peak incoming direction pointing back along the galactic plane, towards the constellation of Cygnus. A signal with a fixed galactic direction is not known to be mimicked by any cosmic or terrestrial background, and it is likely that any detected signal that was aligned in the direction opposing our galactic rotation would have to be related to the Milky Way's halo in some way. Moreover, unlike many other kinds of DM signals, which can vary considerably between experiments and particle candidates, the directionality of the flux is expected for almost all DM models~\cite{Kavanagh:2015jma, Catena:2015vpa}, and is highly robust against astrophysical uncertainties~\cite{OHare:2019qxc}. Directional detectors are uniquely capable of discriminating against the otherwise irreducible background of astrophysical neutrinos~\cite{Grothaus:2014hja, O'Hare:2015mda}---and a directional detector should generally enable the identification of a DM signal with far fewer events under any kind of background~\cite{Copi:1999pw, Morgan:2004ys, Billard:2009mf, Green:2010zm, Mayet:2016zxu, Vahsen:2020pzb}. Despite this strong motivation, directional detection is experimentally challenging, and the community, while growing, is still relatively small. The majority of the experimental directional detection community has converged on the gas time projection chamber (TPC) as the optimum technology. The \textsc{Cygnus}\xspace collaboration~\cite{Vahsen:2020pzb} has been formed from the convergence of several gas TPC collaborations that have run successful small-scale experiments in the past~\cite{Santos:2011kf,Baracchini:2020btb,Battat:2016xxe,Yakabe:2020rua,Vahsen:2011qx}. Gas TPC proponents have grown in number in recent years, and so has the readiness of many advanced readout technologies to detect keV-scale electron and nuclear recoils, discriminate between them, and reconstruct their directions. A large part of the inspiration for this progress has been the quest for dark matter; however, a slew of other physics goals---from measurements of neutrinos~\cite{Vahsen:2020pzb} to fundamental and applied physics---are also considered to be well suited for a future large-scale gas TPC. Several excellent review articles on directional detection have been written in recent years~\cite{Ahlen:2009ev, Mayet:2016zxu,Battat:2016pap}.
We highlight in particular Reference~\cite{Sciolla_2009}, which predates the other reviews but provides additional valuable perspectives on select topics, including the Lindhard model for the energy loss of low-energy particles. More recently, these efforts have culminated in a ton-scale directional gas TPC design outlined as part of the \textsc{Cygnus}\xspace project~\cite{Vahsen:2020pzb}. Given that the motivation for a directional detector is growing, and that the experimental community is converging, it is timely to revisit the motivation and scope of directional recoil detection and carefully consider the present opportunities and remaining challenges. This review is structured from general to specific, and gradually transitions from an objective overview of the field into a more subjective presentation of the main challenges, ending with our personal view on optimal technologies for addressing these. In Section~\ref{sec:motivation} we introduce the diverse physics motivation for performing directionally sensitive recoil experiments. Then, in Section~\ref{sec:detectors} we describe the basic physics of recoils in gas targets, and consider several broad technological approaches before listing specific examples of demonstrated or proposed detectors. We focus on gas TPCs as the optimum approach for directional detection. In Section~\ref{sec:performance} we describe the required capabilities to achieve different physics goals, limiting ourselves mostly to gas TPCs. We find that TPCs with high-definition readouts (HD TPCs) can meet the performance requirements. In Section~\ref{hd_recoil_imaging} we illustrate physics measurements that can only be performed with such detectors. Finally, in Section~\ref{sec:summary}, we briefly summarize this review and present our recommendations for future work in the field. \section{PHYSICS MOTIVATION}\label{sec:motivation} A summary of the physics case for a directional recoil detector is presented graphically in {\bf Figure~\ref{fig:summary}}. We will come back to this summary in Section~\ref{sec:summaryphysics} after we have discussed the full physics motivation for directional detection. \subsection{Dark matter} The search for DM remains the most compelling motivation for pursuing directional experiments. Let us recap why we believe DM signals to be generically directional. The commonly agreed-upon first approximation of a galaxy like our Milky Way is of a rotating disk embedded inside a spherical, isotropic, and non-rotating DM halo. Since we operate experiments in a reference frame that is moving at a velocity $\mathbf{v}_\mathrm{lab}$ with respect to the rest frame of the DM halo, the distribution of DM velocities that we observe, $f_{\rm lab}(\mathbf{v})$, is obtained by boosting the galactic velocity distribution: $f_{\rm lab}(\mathbf{v},t) = f_{\rm gal}(\mathbf{v} + \mathbf{v}_\mathrm{lab}(t))$. Many of the characteristic signals of DM are due to this boost into our frame of reference. For instance, the time dependence of $\mathbf{v}_\mathrm{lab}(t)$ (due to the Earth-Sun relative motion) makes the flux modulate annually; and because $|\mathbf{v}_\mathrm{lab}(t)|$ (due to the Sun-halo relative motion) is larger than the expected width of $f_{\rm gal}(|\mathbf{v}|)$, the flux will also be strongly anisotropic. \begin{figure}[t] \centering \includegraphics[width=0.955\textwidth]{summary.png} \caption{The physics case for a directional gas TPC, organized in terms of DM and neutrino physics, as well as other fundamental and applied physics.
The cases are presented in order of the total size of a gas TPC experiment that would be needed, in terms of $N$, the number of 10 m$^3$ TPC modules operating close to atmospheric pressure. These volumes are not precise beyond an order of magnitude. We use a different scale for $N$ in the ``other physics'' column since these goals can be achieved with much smaller-scale experiments.} \label{fig:summary} \end{figure} The anisotropy of the flux of DM particles is often touted as a smoking gun, resting on only a select few basic assumptions. This review is dedicated to assessing the feasibility of detecting such a signal experimentally. However, given that the entire field rests upon these assumptions, it is worth taking time to critically assess how confident we are in them. \begin{summary}[Requirements for a directional DM signal that points back towards Cygnus] \begin{enumerate} \item The local dark matter density $\rho_0$ is nonzero. \item The solar velocity points along the galactic plane. \item The DM halo is not co-rotating at a similar speed to galactic rotation. \end{enumerate} \end{summary} Firstly, the measurement of the density of unseen matter in the solar neighborhood has a long history that dates back to the work of Kapteyn in 1922~\cite{Kapteyn:1922zz}---predating even Zwicky's famous observations of the Coma cluster. The density of dark matter around us in the Milky Way can be inferred from the motions of stars, using them as tracers of the total gravitational potential. The inferred local DM density resulting from a variety of methods and datasets is typically $\rho_0\sim0.4$--$0.6$~GeV~cm$^{-3}$~\cite{deSalas:2020hbh}. These estimates are still heavily dominated by systematics but are, importantly, nonzero. Secondly, the direction of the DM anisotropy towards Cygnus relies only on the assumption that the motion of the galactic disk points us in that direction. The solar velocity is of fundamental importance in galactic astronomy and astrometry in order to make sense of stellar parallaxes and proper motions, hence astronomers have conceived of numerous ways to precisely measure it~\cite{Bovy:2020}. The Solar System moves almost perfectly along the Galactic plane, at around $246 \pm 1 \, \textrm{ km s}^{-1}$. Even accounting for the aberration of the Earth's direction of motion due to its orbit around the Sun ($\sim 30\,\textrm{ km s}^{-1}$) is not enough to cause the peak expected DM flux to ever point outside of the constellation of Cygnus. The final assumption is also generally believed to be true, albeit with a slightly greater degree of uncertainty: the DM halo must not co-rotate with the galactic disk. If the DM halo did co-rotate, this would not doom all detection efforts, but it could substantially wash out directional signals. Triaxial halos like the Milky Way's~\cite{Iorio:2019} are formed hierarchically and will therefore typically have some angular momentum, which would manifest as a figure rotation, or ``tumbling'', on Gyr timescales~\cite{Bryan:2007}. The figure rotation of the Milky Way has not been measured, but it should not be anomalously faster than the typical rotations seen in simulated Milky Way analogs, which are currently at the cusp of what could be observed via their influence on stellar streams~\cite{Valluri:2020lsq}. Put together, the assumption of a directional DM signal pointing back towards Cygnus seems rather robust.
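The effect of the boost introduced above is simple to see numerically. The following toy Python Monte Carlo (a sketch with assumed, illustrative numbers: an isotropic Gaussian halo with a 1d velocity dispersion of 156 km s$^{-1}$, and $|\mathbf{v}_\mathrm{lab}| = 246$ km s$^{-1}$ directed towards Cygnus) estimates the fraction of DM particles that arrive from the hemisphere centered on Cygnus.
\begin{verbatim}
# Toy sketch: boost an isotropic Gaussian halo into the lab frame
# and count arrival directions. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
sigma_v = 156.0                       # km/s, assumed 1d dispersion
v_lab = np.array([246.0, 0.0, 0.0])   # km/s, x-axis towards Cygnus

v_gal = rng.normal(0.0, sigma_v, size=(100_000, 3))  # halo frame
v_obs = v_gal - v_lab     # f_lab(v) = f_gal(v + v_lab)

# A particle arrives from the Cygnus hemisphere if its lab-frame
# velocity has a negative component along the Cygnus axis.
frac = np.mean(v_obs[:, 0] < 0.0)
print(f"arriving from the Cygnus hemisphere: {frac:.1%}")  # ~94%
\end{verbatim}
Even this crude counting (which ignores flux weighting and any energy threshold) already shows a strong head-on excess.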
\subsection{Directional signals of WIMP-like dark matter}\label{sec:wimps} \begin{marginnote}[] \entry{Weakly interacting massive particle (WIMP)}{a loosely defined particle candidate for DM that is usually assumed to be produced thermally in the early Universe.} \end{marginnote} The WIMP, supersymmetric or otherwise, remains a popular and widely-studied example of particle-like dark matter~\cite{Arcadi:2017kky}. The most common laboratory test of WIMP DM involves searching for their scattering with nuclei. The event rate of nuclear recoils as a function of recoil energy ($E_r$) and direction ($\hat{\mathbf{q}}$) is given by integrating over the DM flux, $v f_{\rm lab}(\mathbf{v},t)$, multiplied by some differential scattering cross section $\textrm{d}\sigma/\textrm{d}E_r$ as follows, \begin{equation}\label{eq:WIMPRate} \frac{\textrm{d}^2 R}{\textrm{d}E_r\textrm{d}\Omega_q}(E_r,t) = \frac{\rho_0}{2 \pi m_\chi m_N} \int_{v > v_\textrm{min}}{v^2 \delta(\mathbf{v}\cdot\hat{\mathbf{q}} - v_{\rm min}) \, f_{\rm lab}(\mathbf{v},t) \frac{\textrm{d} \sigma}{\textrm{d}E_r}(E_r,v)} \, \textrm{d}^3 v \,. \end{equation} This formula will hold for all 2$\rightarrow$2 elastic nuclear scattering processes. We have also divided by the nuclear mass $m_N$ to get the event rate per unit detector mass. We only integrate over DM velocities kinematically permitted to produce a given recoil energy and direction. This consideration introduces both the low speed cutoff $v>v_{\rm min}(E_r)$, and the delta function, which enforces the non-relativistic kinematic relationship for the DM-nucleus scattering angle with respect to the initial DM velocity $\mathbf{v}$ (both defined in the lab frame), \begin{equation} \frac{1}{v} \mathbf{v} \cdot \hat{\mathbf{q}} = \cos{\theta} = \sqrt{\frac{m_N E_r}{2 v^2 \mu^2_{\chi N}}} = \frac{v_{\rm min}}{v} \, , \end{equation} where $m_\chi$ is the DM mass, and $\mu_{\chi N}$ is the DM-nucleus reduced mass. The size of the DM cross section, and its dependence on recoil energy, DM velocity, particle identity, interaction type, and spin, are all model dependent. In general, this cross section is calculated from the squared matrix element, the transition probability for the DM-nucleus interaction. The most common treatment is to assume contact interactions which results in a cross section constructed from two operators, the identity ($\mathcal{O}_{\rm SI} = \mathbb{1}$) and one built from the DM and nuclear spins ($\mathcal{O}_{\rm SD} = \mathbf{S}_\chi \cdot \mathbf{S}_n$), often referred to as spin independent (SI) and spin dependent (SD), respectively. For SI and SD interactions, the matrix element introduces no additional $v$-dependence, meaning $\textrm{d}\sigma/\textrm{d}E_r \propto v^{-2}$. In these cases, the event rate inherits all of its direction dependence from the integral transform of the DM velocity distribution implied by Equation~(\ref{eq:WIMPRate}), which is known as the Radon transform~\cite{Gondolo:2002np}. \begin{figure}[t] \centering \includegraphics[width=0.97\textwidth]{Skymaps_extended.pdf} \caption{{\it Upper panel:}~the directional event rates from a 9 GeV DM particle (blue) and solar neutrinos (red) displayed in galactic coordinates $(l,b)$ in which the plane of the galaxy runs horizontally. We are moving towards the direction $(l,b) \approx (90^\circ,0^\circ)$, which means that this distribution of DM \emph{arrival} angles also peaks towards this direction. Solar neutrinos always originate from the ecliptic. 
{\it Lower panels:}~in the coordinates of a detector at a fixed location on earth, the DM dipole translates into a directional oscillation. The Earth's rotation axis is tilted by an angle of 39--46$^\circ$ with respect to the galactic plane depending on date. We sketch typical event rates as a function of some angle $\phi$ on a 2d readout plane for two detectors separated by $180^\circ$ of longitude, or equivalently the same detector 12 hours later. Both the DM and the neutrino signals oscillate in angle over the day, but will always be separated from one another. Any local backgrounds will \emph{not} oscillate, so would be flat distributions in the lower right panels. Therefore this directional oscillation is also a powerful and experimentally observable signature of DM and neutrinos.} \label{fig:Skymaps} \end{figure} The angular nuclear recoil distribution from an $m_\chi = 9$~GeV WIMP undergoing SI scattering with $^{19}$F nuclei can be seen in the blue contours of {\bf Figure~\ref{fig:Skymaps}}. The event rate has been integrated over the range $E_r \in [8, 50]$~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace and remains roughly stationary in the galactic coordinates shown, given by longitude and latitude ($l,b$). The prominent dipole signature of DM-induced nuclear recoils is inherited from the boost of the velocity distribution, which is now centered on $-\mathbf{v}_\mathrm{lab}$. Since the Radon transform largely retains this directionality, the most probable recoil direction is also $\hat{\bf{q}} = -{\bf v}_{\rm lab}$. In contrast, opposing directions $\hat{\bf q} \approx + \mathbf{v}_\mathrm{lab}$, either have to come from the very high-speed tail of the velocity distribution, which is exponentially suppressed; or must have very large scattering angles, which have low recoil energies that are typically sub-threshold. This results in a very strong $\mathcal{O}(10)$ anisotropy in directions, if one takes the ratio between the integrated event rates in the two opposing hemispheres, and even higher if one selects smaller angles around $\hat{\mathbf{q}} = \pm \hat{\mathbf{v}}_{\rm lab}$. The event rate will also become more strongly peaked towards Cygnus for higher recoil energies. This is because the low-speed cutoff for a given recoil energy, $v_{\rm min}$, increases with $E_r$. For higher energies, the only DM particles fast enough to scatter above $v_{\rm min}$ are those arriving from a head-on direction, aligning with $\mathbf{v}_\mathrm{lab}$. The generic signal shown in {\bf Figure~\ref{fig:Skymaps}} is common to both $\mathcal{O}_{\rm SI}$ and $\mathcal{O}_{\rm SD}$ interaction operators. However these are not the only possible operators that could describe a DM-nucleus interaction. The most general effective field theory (EFT) construction of a non-relativistic DM-nucleus interaction could, in principle, incorporate any operators preserving Galilean, Hermitian, and time-reversal symmetries~\cite{Fan:2010gt, Fitzpatrick:2012ix, Anand:2013yka}. This results in a total of 15 terms (each, for protons and neutrons) built from combinations of momentum transfer, transverse velocity, DM spin, and nuclear spin operators. In particular, operators that depend upon the DM transverse velocity, $v_{\perp}^{2}=v^{2}-q^{2}/4 \mu_{\chi N}^{2}$, introduce additional $v^2$-dependence not found in the SI and SD cross section expressions. 
These cases will introduce terms that depend upon the second moment of the Radon transform and lead to signals with additional ring-like features that slightly diminish the strength of the dipole~\cite{Kavanagh:2015jma,Catena:2015vpa}. Differentiating these kinds of features would be extremely difficult in nondirectional experiments. \subsection{Directionality for dark matter discovery} \vspace{1em} \noindent {\bf Rejecting isotropic backgrounds.} The strength of directional detection for DM discovery relies on the fact that no known backgrounds are believed to mimic (or even have any relation to) the directionality of a signal originating from the galactic halo. In fact, most backgrounds (with the notable exception of solar neutrinos) should be close to isotropic~\cite{Mei:2005gm}. To get a feeling for the effectiveness of directional information for DM discovery, we can calculate a rough estimate for how many DM recoil directions would need to be measured to tell that the signal was \emph{not isotropic} (see References~\cite{Copi:1999pw, Morgan:2004ys, Copi:2002hm} for other approaches). To detect a dipole anisotropy, the most basic requirement would be a contrast in event numbers in the forward/backward hemispheres ($N_{\rm fw} - N_{\rm bw}$) greater than the typical 3$\sigma$ random deviation expected under isotropy, 3$\sqrt{N_{\rm fw}+N_{\rm bw}}$. Rearranging this requirement in terms of the event rates in each hemisphere ($R_{\rm fw, bw}$) gives the formula, \begin{equation} N_{\rm iso} \approx \left( 3 \, \frac{R_{\rm fw} + R_{\rm bw}}{R_{\rm fw} - R_{\rm bw}} \right)^2 \, . \label{eq:reject_isotropy}\end{equation} Taking the example of a $^{19}$F-based experiment with a $\sim$3~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace energy threshold and zero background, the numbers of events required to reject isotropy for DM masses $m_\chi = 10,\,100,\,1000$~GeV are $N_{\rm iso} \approx 12,\,16,\,17$. Fewer events are required for the lowest masses because all the recoils scattering above threshold are from the high-speed tail of the distribution, which is the most anisotropic part. This simple non-parametric estimate already results in a promisingly low required number of events; however, it is sensitive to isotropic background contamination. For example, if we assume signal events only make up a fraction $\lambda$ of the total number of events, this increases the value of $N_{\rm iso}$ by a factor $(1+1/\lambda)$, which could raise the required number for discovery up to $\mathcal{O}(100)$ for $\lambda\lesssim0.2$, as illustrated in the sketch below.
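This counting estimate is simple enough to sketch in a few lines of Python. The forward/backward rates used here are placeholders chosen to realize the $\mathcal{O}(10)$ hemisphere anisotropy quoted earlier, and the background dilution factor $(1+1/\lambda)$ is applied exactly as stated above.
\begin{verbatim}
# Sketch of the isotropy-rejection estimate, Eq. (reject_isotropy).
# The 10:1 hemisphere contrast is a placeholder realizing the
# O(10) anisotropy quoted earlier for WIMP recoils.
def n_iso(R_fw, R_bw):
    """Events needed for a 3-sigma forward/backward contrast."""
    return (3.0 * (R_fw + R_bw) / (R_fw - R_bw)) ** 2

n0 = n_iso(R_fw=10.0, R_bw=1.0)
print(f"zero background: N_iso ~ {n0:.0f}")   # ~13 events

# Signal fraction lam of all events inflates N_iso by (1 + 1/lam):
for lam in (0.5, 0.2):
    print(f"lam = {lam}: N_iso ~ {n0 * (1.0 + 1.0 / lam):.0f}")
\end{verbatim}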
\vspace{1em} \noindent {\bf Confirmation of a galactic signal.} The test of isotropy presented above is highly simplistic since it reduces the signal down to only two angular bins. However, it reflects one of the key conceptual advantages of directional detection, which is that non-parametric statistical tests can be extremely powerful. More sophisticated tests described in the literature~\cite{Mayet:2016zxu} allow for unbinned recoil directions. These tests result in slightly smaller required event numbers, but importantly they do not require any additional modeling or assumptions beyond that of the background being roughly isotropic. Going one step further and confirming that the signal aligns with Cygnus requires around a factor of two more events~\cite{Green:2010zm}---still significantly smaller than the numbers of events required to make a similar statement with a non-directional experiment. Nonetheless, modeling the signal and background would still be the most desirable strategy in practice. This would allow the kinematic correlation between recoil direction and energy to be included, and would result in even lower required numbers to point towards Cygnus, at the cost of more model dependence~\cite{Billard:2009mf}. \vspace{1em} \noindent {\bf DM discovery via sidereal modulation.} So far we have assumed that the galactic dipole is a measurable signal. This implicitly assumes that all three components of each recoil vector, including its sign or ``head/tail'', can be measured. Actual detectors, however, may not be sensitive to all three components, and obtaining a head/tail signature is often particularly challenging. A lack of complete three-dimensional recoil vectors implies that individual events cannot be unambiguously rotated into galactic coordinates, and any alignment with Cygnus cannot be checked. This would seem problematic when thinking about the simplistic DM discovery arguments we have presented so far. \begin{marginnote}[] \entry{Head/tail sensitivity}{a nuclear recoil detector's ability to distinguish between a recoil direction vector $\hat{\mathbf{q}}$ and the opposite vector $-\hat{\mathbf{q}}$.}\end{marginnote} \begin{marginnote}[] \entry{Sidereal day}{a measure of the Earth's rotational period with respect to the fixed stars. In contrast to the solar day which is measured with respect to the Sun.} \end{marginnote} Even in these cases, however, a form of directional discovery is possible. This alternative strategy relies on the rotation of the Earth to fill the gap in information, as depicted in the lower panels of {\bf Figure~\ref{fig:Skymaps}}. For example, if only a 2d projection of each recoil direction were measurable, then the projected dipole signature would rotate over the course of one sidereal day. A signal that modulated with the sidereal day, by definition, would have to be of galactic origin and unrelated to the Earth-Sun system. Any local systematic effects exhibiting a daily modulation (for instance with temperature) would presumably have to follow the solar day. Accounting for this diurnal variation and contrasting it with the background can allow 2d and 1d experiments to regain sensitivity~\cite{Billard:2014ewa}. \begin{figure}[t] \checkoddpage \edef\side{\ifoddpage l\else r\fi}% \makebox[\textwidth][\side]{% \begin{minipage}[t]{1.2\textwidth} \centering \includegraphics[width=0.97\textwidth]{NuFloor.pdf} \caption{Effect of directionality and other detector capabilities when setting limits on DM cross sections in the presence of the neutrino background. Left: discovery limits versus DM mass for a fixed detector exposure. Right: discovery limits versus exposure for a fixed DM mass. The vertical white lines show where the two panels intersect. In both panels, the only difference between the different lines is the information that is used in the analysis. The lowest line (green) uses all available information (3d directionality, recoil energy, and event time) whereas the highest line (black) assumes the most minimal amount of information possible (the number of events only).
We shade underneath the orange curve to highlight the range of WIMP models that are inaccessible without directional information for a given exposure and target.}\label{fig:NuFloor} \end{minipage}% }% \end{figure} \vspace{1em} \noindent {\bf DM discovery under the neutrino background.} Ultimately, the best prospects for discovery and characterization of a signal will be achieved when all recoil direction, time, and energy information are incorporated into a complete model. This is most clearly demonstrated when the dominant background is not isotropic, and there turns out to be a highly notable example: the solar neutrino background. The keV-scale nuclear recoils from coherent neutrino-nucleus elastic scattering (CE$\nu$NS\xspace) of solar neutrinos will be the most problematic background for the upcoming generation of WIMP searches. The CE$\nu$NS\xspace background from $^{8}$B neutrinos is particularly troublesome because the resulting spectrum of nuclear recoil energies resembles that of DM for $m_\chi = $~5--10 GeV. In fact, spectral matching occurs at many other DM masses that each overlap with different neutrino fluxes. This mimicry of the DM signal by an otherwise irreducible background is what gives rise to the well-known ``neutrino floor''~\cite{Billard:2013qya}. Without substantial improvements to the already precisely known neutrino fluxes, progress of direct DM detection towards smaller cross sections will be limited for the next decade and beyond. Directionality is an attractive prospect for circumventing the neutrino floor because the distinct angular distributions of DM and solar neutrino recoils allow for optimum discrimination between the two~\cite{Grothaus:2014hja, O'Hare:2015mda}; see {\bf Figure~\ref{fig:Skymaps}}. The angular recoil distributions are distinct due to the separation between the path of the Sun (the ecliptic) and the constellation Cygnus across the sky. We display a quantitative demonstration of this in {\bf Figure~\ref{fig:NuFloor}}. In both panels, we show the discovery limits (defined as the median cross section that could be detected at 3$\sigma$) for $^{19}$F-based experiments with a 1~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace energy threshold and ton-scale target masses. The left-hand panel fixes the exposure (100 ton-year) but shows the limit as a function of DM mass, whereas the right-hand panel fixes the DM mass (9 GeV) but shows the limit versus exposure. The most important result shown by this figure is that the directional limits (green, red, blue) scale almost as $\sim 1/\textrm{Exposure}$, whereas the nondirectional limits all scale as (at best) $1/\sqrt{\textrm{Exposure}}$. \subsection{Directional signals for probing beyond-WIMP dark matter} Directionality is a broadly model-independent prediction that should be present in most DM models. We highlight examples of non-WIMP models where directional detectors appear better suited than their non-directional counterparts for detection or model characterization. \vspace{1em} \noindent {\bf Modified DM-nucleus kinematics.} One of the longest-standing elaborations on the WIMP was proposed almost two decades ago~\cite{TuckerSmith:2001hy}, but is still the subject of investigation~\cite{Eby:2019mgs,Zurowski:2020dxe,Bramante:2016rdh}. So-called ``inelastic DM'' models introduce an excited state for the DM particle that it can either be excited to, or deexcited from, during a collision with a nucleus.
These models modify the formula for $v_{\rm min}$ to account for the mass splitting between these states. In inelastic DM models with an available excited state, the distribution of recoils would be more focused towards Cygnus since slower particles would not be able to scatter with enough energy to get excited. Such a signal would be much more readily observed in the angular distribution~\cite{Finkbeiner:2009ug}. So untangling the mass spectrum of DM and distinguishing elastic from inelastic interactions would be a unique advantage of a directional detector~\cite{Lisanti:2009vy}. \vspace{1em} \noindent {\bf DM in detectors with large volumes.} For some DM models, certain kinds of directional recoil detectors are attractive, but not for their directional sensitivity. Directional recoil measurements typically prefer lower-density targets; hence, directional detectors will require large total volumes to reach competitive exposures. It turns out that the DM event rate for certain classes of model scales with the geometric \emph{size} of the detector rather than the total mass. One example of this is when the DM is strongly interacting and extremely heavy. In these models, the flux of particles is low, but if one does cross the detector, the probability of it generating multiple scattering events is very high~\cite{Bramante:2018qbc,Bramante:2019yss}. In this case, the number of events would scale with the cross-sectional area of the experiment. This was studied recently in Reference~\cite{Clark:2020mna}, which considered the reach of the $\sim$1 m$^2$-scale XENON1T detector for DM masses up to $10^{18}$~GeV. Since the masses are so high, the momentum imparted in each scattering event is negligible compared to the DM's kinetic energy. This means the multiple scatters would be essentially collinear and would be even more anisotropically distributed than WIMPs. Another instance of this is the case of ``luminous DM''~\cite{Feldstein:2010su}, which is related to inelastic DM but has the added feature of electromagnetic emission from the decay of the excited DM state. This idea was studied recently in the context of large-scale directionally sensitive detectors~\cite{Eby:2019mgs}. To gain novel sensitivity, the experiment would need to be equipped with photodetectors to identify the scintillation emitted when a DM particle that was excited in a prior interaction inside the Earth then decays inside the detector volume. \vspace{1em} \noindent {\bf Fluxes of DM from other directions.} The flow of DM from Cygnus is a robust prediction; however, in some models this population of particles may simply be impossible to detect, especially if the particles are much lighter than the typical GeV-scale WIMP. However, many have wondered if a sub-population of these light particles could be boosted to detectable energies. Such scenarios need not be contrived into existence, but could be a guaranteed prediction of certain models and the primary way they would be detectable. One such scenario is ``cosmic ray upscattered DM''~\cite{Bringmann:2018cvk, Alvey:2019zaa, Dent:2019krz, Guo:2020oum, Dent:2020syp}. The DM in this case would be a standard WIMP-like particle; however, the upscattering by GeV cosmic rays in the galaxy allows experiments to reclaim sensitivity to sub-GeV masses that would normally generate signals well below threshold.
The flux of upscattered DM would inherit directionality from the spatial distribution of DM in the galaxy as well as some of the directionality of cosmic rays in the interstellar medium~\cite{Guo:2020oum}. The large cross sections of models for which this effect is relevant also mean that the DM will be attenuated noticeably by the Earth~\cite{Bringmann:2018cvk}. This will cause a daily modulation of the flux and will suppress upward-going DM arrival directions. Another boosted population of DM was suggested recently, specifically in the context of directional recoil detectors~\cite{Baracchini:2020owr}. The model in question involves an MeV-scale particle that could represent a viable light DM candidate~\cite{DeRocco:2019jti} but would also be generated in abundance during supernovae. The diffuse flux of semi-relativistic particles from galactic supernovae would generate nuclear recoil signals comparable in energy to a cold population of GeV-scale WIMPs. However, this flux should peak towards the Galactic center, around $90^\circ$ away from the expected DM flux. The discrimination of these two fluxes is only possible with a directional experiment. \vspace{1em} \noindent {\bf Electron recoils.} Finally, directional detectors present novel opportunities for probing non-WIMP models via electron recoils. Bosonic DM candidates, such as dark photons~\cite{An:2014twa} and axion-like particles~\cite{Jaeckel:2010ni}, could undergo absorption processes in atoms, resulting in the emission of electrons with energies equal to the DM mass~\cite{Derevianko:2010kz}. Therefore, electrons from keV-scale mass particles are readily observable in most DM searches. The key issue is how to separate these signal electrons from all other sources of electron recoils. A major advantage of directional detectors in this context is the ability to not just discriminate electrons from nuclear recoils, but to discriminate many sources of electron recoil from each other. \subsection{Directional signals of the dark matter halo} An isotropic Gaussian is a convenient first approximation for the DM velocity distribution $f(\mathbf{v})$. Several pieces of observational evidence, however, already suggest it may be inaccurate in specific ways~\cite{Evans:2018bqy}. Uncertainty in the velocity distribution reduces the reliability of predicted direct detection event rates. Understanding these astrophysical uncertainties is important when setting limits on DM, but becomes essential when measuring DM properties. This is where directional measurements become extremely useful. Incorporating directional information allows for far superior measurements of DM particle properties while also mitigating astrophysical uncertainties~\cite{Lee:2012pf, OHare:2014nxd,Kavanagh:2016xfi}. The velocity distribution in itself is of great post-discovery interest. While the Gaussian distribution of the standard halo model (SHM) is monolithic, the real velocity distribution will likely possess complexity and substructure acquired over the Milky Way's tumultuous 13 Gyr lifetime~\cite{Necib:2018iwb, Evans:2018bqy,Bozorgnia:2019mjk,Necib:2019zka}. Such substructure has already been revealed in the dark halo's stellar counterpart~\cite{Kr18, Be18, HelmiReview, Naidu2020} by the revolutionary dataset from the {\it Gaia}\xspace mission~\cite{GaiaDR2}.
Some of the most relevant substructures will be in the form of tidal streams and unmixed debris, which are generic predictions of hierarchical structure formation and have been observed abundantly in the Milky Way's outer halo already. DM streams are possibly the most exciting form of substructure for directional detectors since they are kinematically localized around a single incoming direction~\cite{OHare:2014nxd}. The signature of this kind of substructure would be almost invisible in the recoil energy spectrum, but very prominent in the angular spectrum~\cite{OHare:2018trr,Adhikari:2020gxw}. Certainly, any direct measurement of this structure in DM experiments would be of profound importance to galactic astronomy. \subsection{Directional signals of neutrinos}\label{sec:neutrinos} Experiments seeking direct signals of WIMP-like DM can naturally serve a dual purpose as detectors of astrophysical or terrestrial sources of neutrinos, addressing a diverse catalog of potentially novel physics. Since DM recoil detectors are optimized to detect $\mathcal{O}(1$--$100)$ keV recoil energies and require at least $\mathcal{O}(\textrm{few})$ events per ton-year of detector mass, the natural sources that are realistically within reach are (in order of detectability) solar~\cite{Billard:2013qya}, nearby galactic supernova~\cite{Lang:2016zhv}, and geological~\cite{Leyton_geo} neutrinos. Artificial fluxes of neutrinos could also be measured if a DM detector is placed near a neutrino source, such as a beamline, beam dump, or a nuclear reactor. Recoil detectors are sensitive to both coherent neutrino-nucleus elastic scattering (CE$\nu$NS\xspace\footnote{pronounced ``sevens''}) and neutrino-electron elastic scattering. While the latter is a valuable channel for probing astrophysical neutrinos~\cite{Tomas:2003xn}, CE$\nu$NS\xspace has so far only been measured by COHERENT using a stopped pion neutrino source~\cite{Akimov:2017ade, Akimov:2020pdx}. The elastic scattering angle between the neutrino direction and the recoil direction for a particle of mass $m$ is~\cite{Vogel:1989iv}, \begin{equation}\label{eq:nuscatteringangle} \cos{\theta} = \frac{E_\nu + m}{E_\nu}\sqrt{\frac{E_r}{E_r+2 m}} \, . \end{equation} The neutrino-electron and neutrino-nucleus recoils will generally be well-correlated with the original neutrino direction. CE$\nu$NS\xspace is a flavor-blind interaction proceeding via a neutral current, and at low momentum transfer is coherently enhanced by a factor that depends approximately on the number of neutrons in the target nucleus~\cite{Freedman:1973yd}. Neutrino-electron scattering, on the other hand, has both charged and neutral current contributions, and the cross sections for $\nu_e$ and $\bar{\nu}_e$ are the highest by almost an order of magnitude.
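As a concrete illustration of Equation~(\ref{eq:nuscatteringangle}), the short Python sketch below evaluates the recoil angle for a 10 MeV neutrino (the $^8$B energy scale discussed below) producing a keV-scale nuclear recoil on $^{19}$F and a 100 keV electron recoil; the $^{19}$F mass used is an approximate value.
\begin{verbatim}
# Sketch: recoil angles from Eq. (nuscatteringangle) for a 10 MeV
# neutrino. Masses/energies in MeV; the ^19F mass is approximate.
import math

def cos_theta(E_nu, E_r, m):
    return (E_nu + m) / E_nu * math.sqrt(E_r / (E_r + 2.0 * m))

E_nu = 10.0                        # MeV
m_F, m_e = 19 * 931.494, 0.511     # MeV

for label, E_r, m in [("19F recoil, E_r = 5 keV", 5e-3, m_F),
                      ("electron recoil, E_r = 100 keV", 0.1, m_e)]:
    theta = math.degrees(math.acos(cos_theta(E_nu, E_r, m)))
    print(f"{label}: theta ~ {theta:.0f} deg")   # ~48 and ~72 deg
\end{verbatim}
Both angles lie well within the forward hemisphere, consistent with the statement that the recoils remain correlated with the original neutrino direction.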
\begin{table}[t]\centering \ra{1.3} \caption{Approximate expected numbers of neutrino-induced nuclear and electron recoils assuming a 1000 m$^3$ target volume, 1 atmosphere of pressure, and an exposure time of 1 year.}\label{tab:nurates} \begin{tabularx}{\textwidth}{l|YYY|YYY|YYY} \hline\hline {\bf Nuclear recoils} & \multicolumn{3}{c|}{SF$_6$} & \multicolumn{3}{c|}{CF$_4$} & \multicolumn{3}{c}{He}\\ Threshold [\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace] & 1 & 5 & 10 & 1 & 5 & 10 & 1 & 5 & 10 \\ \hline Solar (mainly $^8$B) & 73 & 15 & 2 & 54 & 16 & 3 & 3 & 2 & 1 \\ 3 kpc supernova & 25 & 18 & 12 & 18 & 13 & 10 & 0.6 & 0.5 & 0.5 \\ \hline \hline \multicolumn{10}{c}{}\\ \hline \hline {\bf Electron recoils} & \multicolumn{3}{c|}{SF$_6$} & \multicolumn{3}{c|}{CF$_4$} & \multicolumn{3}{c}{He}\\ Threshold [keV] & 5 & 500 & 1000 & 5 & 500 & 1000 & 1 & 500 & 1000 \\ \hline Solar (Total) & 537 & 42 & 4 & 438 & 34 & 3 & 102 & 8 & 0.8 \\ Solar (CNO) & 15 & 5 & 0.6 & 12 & 4 & 0.5 & 3 & 0.9 & 0.1\\ Geoneutrinos & 0.2 & $<$0.1 & $<$0.1 & 0.2 & $<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 & $<$0.1 \\ \hline \hline \end{tabularx} \end{table} \vspace{1em} \noindent {\bf Solar neutrinos.} The event rates for the most relevant sources of neutrino-induced nuclear and electron recoils are shown in Table~\ref{tab:nurates} for a range of thresholds and possible target gases. For both nuclear and electron recoils, the dominant natural source of neutrinos for a DM recoil experiment will be the Sun. The Sun produces several well-understood fluxes of neutrinos from a variety of processes involved in nuclear fusion. Most CE$\nu$NS\xspace recoils will be from the $E_\nu\sim10$~MeV neutrinos from the decay of $^8$B nuclei. These are not the highest energy neutrinos emitted by the Sun---those being the neutrinos from $^3$He-proton fusion---but they are the only ones that can generate a sizeable rate of nuclear recoils at keV energies. For electron recoils, however, the kinematics result in much higher recoil energies at constant neutrino energy than in the case of nuclear recoils. This makes the electron recoil signature a very promising target for the directional detection community. In this case, the most substantial contribution will be from $pp$ fusion, which generates the vast majority of the total solar flux. Unfortunately, $pp$ and $^8$B neutrinos are not the most interesting type of solar neutrino astrophysically, since both fluxes are known rather precisely~\cite{Vinyoles:2016djt, Bergstrom:2016cbh}. Instead, one of the most sought-after solar fluxes are the neutrinos emitted in the Sun's ``CNO cycle''. Three fluxes of neutrinos, labeled $^{13}$N, $^{15}$O and $^{17}$F, have only just been observed by Borexino after a heroic background modeling effort~\cite{Agostini:2020mfq}. CNO neutrinos are almost entirely hidden under backgrounds, both from their fellow, more abundant solar neutrinos and from radioactive contaminants. Yet they are a highly prized signal from a solar physics standpoint. A firm measurement of the CNO flux would help understand a long-standing disagreement between two models for the Sun's heavy element content~\cite{Villante:2019tcd}. This quantitative issue is subtle but has far-reaching consequences for astronomy, since almost all determinations of astronomical elemental abundances rely upon the solar abundances. The measurement of low energy solar neutrinos via directional electron recoils is, surprisingly, not a new idea.
Largely forgotten work from the 1990s~\cite{Seguinot:1992zu,Arpesella:1996uc} proposed the use of a TPC filled with high densities of gases like He and CF$_4$ to detect solar-neutrino electron recoils $\gtrsim$100 keV. While most fluxes generating high numbers of electron recoils are now well-measured, the detection of CNO neutrinos is an intriguing possibility. The most obvious novel aspect of directionality is background rejection. Unfortunately, in the case of CNO neutrinos, the major backgrounds will be \emph{other} solar neutrinos. However, directionality offers another advantage when dealing with a signal originating from a single known direction: given the position of the Sun, the combined measurement of recoil energy and direction permits, in principle, an event-by-event reconstruction of the neutrino energy spectrum. A modern gas TPC with a 1000 m$^3$ volume at atmospheric pressure or higher could make \emph{directional} measurements down to $\mathcal{O}(10)$~keV energies, much lower than Borexino's current threshold of $\sim$160 keV. \vspace{1em} \noindent {\bf Geoneutrinos.} Antineutrinos from the Earth have very low energies, $\lesssim 4.5$~MeV, hence electron recoils are required for detection. Fluxes are very low, meaning 100 to 1000 ton-year exposures are needed to make a scientifically useful observation~\cite{Leyton_geo}. Directional detectors would enable rejection of solar neutrino backgrounds, and utilizing elastic scattering would provide sensitivity to lower neutrino energies than experiments like KamLAND~\cite{Araki:2005qa} and Borexino~\cite{Bellini:2010hy}, which rely on capture via inverse beta decay. Crucial lower-energy geoneutrino sources such as $^{40}$K have gone undetected because of the 1.8 MeV threshold of inverse beta decay. A measurement of these sources could help constrain the radioactive contribution to the Earth's surface heat flow~\cite{se-1-5-2010, Gando:1900zz}. A 10 ton-scale detector operating for 10 years would be capable of a 95\%~CL measurement of the $^{40}$K flux~\cite{Leyton_geo} and would go some way towards understanding this problem. \vspace{1em} \noindent {\bf Galactic supernovae.} The $\sim$10~MeV energies of neutrinos from supernovae make them a prime target for DM detectors~\cite{Lang:2016zhv}. A detection via CE$\nu$NS\xspace would probe the all-flavor burst flux, thereby providing a normalizing measure of the total luminosity, which could be compared against flavor-dependent measurements made by other neutrino observatories. For a 1000 m$^3$ gas experiment, the nuclear recoil event rate within a $\sim$10~s burst window could be similar to a year's worth of solar neutrinos as long as the supernova occurred within around 3~kpc of the Earth (the average galactic supernova distance is estimated to be around 10 kpc~\cite{Mirizzi:2006xx}). The main advantage of directionality for the detection of supernova neutrinos via CE$\nu$NS\xspace is pointing, which would potentially provide a valuable service to follow-up electromagnetic observations~\cite{Kharusi:2020ovw}. \vspace{1em} \noindent {\bf Artificial neutrinos.} Physics possibilities with an artificial-neutrino-source CE$\nu$NS\xspace experiment are extensive. If an experiment were placed close enough to a nuclear reactor, it would enjoy a generous flux of $\bar{\nu}_e$. Stopped pion sources are also available and were recently used by COHERENT for the first measurement of CE$\nu$NS\xspace~\cite{Akimov:2017ade, Akimov:2020pdx}.
A potentially more fruitful application of directional detectors would be to operate near a beam dump. In such a setup, even a small-scale gas TPC could make the first directional measurement of CE$\nu$NS\xspace. This idea is being pursued for $\nu$BDX-DRIFT~\cite{nuBDX-DRIFT}, a proposal to place a negative-ion TPC behind the NuMI proton beam dump at Fermilab, with the longer-term goal of operating a TPC at the DUNE Near Detector Complex. Early estimates suggest that a 1 m$^3$ TPC could achieve a low-background directional measurement of CE$\nu$NS\xspace with around a year of operation. \vspace{1em} \noindent {\bf Beyond-the-SM neutrino interactions.} Measurements using artificial neutrino sources such as reactors, stopped pions, or beam dumps all offer a potential gateway to beyond-the-SM physics measurements. These could include the detection of up-scattered heavy neutrinos, axion-like particles~\cite{Brdar:2020dpr, Dent:2019ueq}, and light DM candidates~\cite{Dutta:2019nbn}, which may produce novel signatures in angular spectra. With even higher statistics, the additional information present in the angular distribution could also greatly help constrain and disentangle the wide range of additional mediators that could be involved in CE$\nu$NS\xspace~\cite{Abdullah:2020iiv}. Though the measured CE$\nu$NS\xspace cross section is consistent with the SM, there is still room for beyond-the-SM corrections below experimental bounds~\cite{AristizabalSierra:2019ykk}. In the context of DM detectors, the effects of new mediators taking part in CE$\nu$NS\xspace have been considered, for example, in References~\cite{Cerdeno:2016sfi, Bertuzzo:2017tuf,Boehm:2018sux, AristizabalSierra:2017joc, Boehm:2020ltd}. As well as providing opportunities for discovery, the added uncertainty in the CE$\nu$NS\xspace background also presents problems for conventional recoil detectors. As we discussed earlier, the height of the neutrino floor is controlled by the neutrino event rate and its uncertainty. Non-standard interactions and additional mediators have the potential to increase both. In particular, the event rate at low energies relevant for GeV and sub-GeV WIMP searches is precisely where there is substantial room for large deviations from the SM. Conducting a directional search to unravel these subtleties and distinguish them from a potential DM signal is therefore even more warranted. \subsection{Summary of the physics case for a directional recoil detector}\label{sec:summaryphysics} {\bf Figure~\ref{fig:summary}} summarizes the rich physics potential of a directional recoil experiment, assuming it is realized as a gas TPC. \vspace{1em} \noindent {\bf Dark matter.} Directional DM searches with gas TPCs have reached the 1~m$^3$ scale, but most have higher energy thresholds than is desirable. Achieving improved directional sensitivity (Section~\ref{sec:angularperformance}) and particle identification (Section~\ref{sec:particleID}) should be feasible in the near future. Hence, a competitive low-mass DM search with an ultralow threshold is a natural first goal for a program of gas TPCs. The next goal---setting competitive SD WIMP limits---is also a natural one for TPCs using fluorine-based gases. A 10~m$^3$ detector should be sufficient to produce the world's best SD cross-section limits~\cite{Vahsen:2020pzb}. \vspace{1em} \noindent {\bf Neutrinos.} The first directional measurement of CE$\nu$NS\xspace should be possible in the near future with a small-scale TPC placed near a neutrino source.
At 1000~m$^3$, an atmospheric-pressure TPC should already see in excess of $\mathcal{O}(10)$ nuclear recoils and $\mathcal{O}(100)$ electron recoils from solar neutrinos every year. With larger volumes, it may be possible to point to galactic supernovae out to 10~kpc, or even study the angular distribution of geoneutrinos. \vspace{1em} \noindent {\bf Other physics.} To reach the large volumes required for the DM and neutrino physics suggested above, it will first be necessary to demonstrate the required performance at smaller scales. There is a host of demonstrated and proposed applications of smaller-scale TPCs that will enable a rich research program. Small gas TPCs are already operating as directional neutron background detectors~\cite{Jaegle:2019jpx}. Other proposed applications include topics as diverse as neutron imaging, passive detection of special nuclear material, fuel rod monitoring, and medical physics. TPCs with HD readout, specifically, are also uniquely promising for verifying the physics of low-energy nuclear recoils, which all DM experiments rely on, but which is not well constrained. For example, it may be possible to perform a direct verification of the Migdal effect. Due to the increasing relevance of this effect to searches for DM, this may be one of the most interesting immediate physics goals for smaller-scale TPCs. \section{DETECTING RECOIL DIRECTIONS}\label{sec:detectors} \begin{figure}[t] \checkoddpage \edef\side{\ifoddpage l\else r\fi}% \makebox[\textwidth][\side]{% \begin{minipage}[t]{1.2\textwidth} \begin{minipage}{.33\textwidth} \centering \textbf{Fluorine recoil} \includegraphics[trim = 5mm 0mm 12mm 0mm, clip,width=0.99\textwidth]{fluorine.pdf} \end{minipage}% \begin{minipage}{.33\textwidth} \centering \textbf{Helium recoil} \includegraphics[trim = 5mm 0mm 12mm 0mm, clip,width=0.99\textwidth]{recoil.pdf} \end{minipage} \begin{minipage}{.33\textwidth} \centering \textbf{Electron recoil} \includegraphics[trim = 5mm 0mm 12mm 0mm, clip,width=0.99\textwidth]{electron.pdf} \end{minipage} \caption{Simulation illustrating true and reconstructed recoil directions. Black points show ionized electrons created by a 41~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace fluorine recoil (left), a 25~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace helium recoil (middle), and a 20~keV electron recoil (right) in atmospheric pressure He:SF$_6$ gas. Note that the electron recoil is about one order of magnitude longer than the two nuclear recoils. Due to ionization quenching, the ionization is nearly the same in these three events, despite the different recoil energies. Blue points show the same ionized electrons after a diffusion of $\sigma_{x,y,z} = 393\,\upmu{\rm m}$, typical for a gas TPC. The reconstructed nuclear recoil direction (red) clearly differs from the true recoil direction (green). The curved recoil trajectory and the diffuse nature of the charge cloud both contribute to this measurement error. In the case of fluorine, the short recoil length and secondary recoils make the direction measurement particularly hard. For electron recoils, a straight-line track fit is clearly not applicable --- a dedicated curled-track fitter would be required.} \label{fig:recoil} \end{minipage}} \end{figure} Having reviewed the motivation for directional recoil detectors, we now consider how directional information is created by the nuclear and electronic recoil process. We then consider broad classes of technologies as well as specific detectors that can extract this information.
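To build some intuition for the measurement error visible in {\bf Figure~\ref{fig:recoil}}, the following toy sketch (our own construction with illustrative numbers; not the reconstruction used to produce the figure) diffuses an idealized straight 1~mm track by $\sigma = 393~\upmu$m and refits the recoil axis as the leading principal component of the resulting charge cloud:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Idealized 1 mm straight track of 300 ionization electrons along +x (mm).
n_e = 300
true_dir = np.array([1.0, 0.0, 0.0])
points = np.outer(rng.uniform(0.0, 1.0, n_e), true_dir)

# Apply Gaussian diffusion of sigma = 0.393 mm to each coordinate.
cloud = points + rng.normal(0.0, 0.393, size=(n_e, 3))

# Fit the recoil *axis* as the leading principal component of the cloud.
_, _, vt = np.linalg.svd(cloud - cloud.mean(axis=0), full_matrices=False)
fit_axis = vt[0]

# Axial angle between fit and truth (sign ignored; real reconstructions
# must also use the dE/dx profile to assign the head/tail sense).
cos_ax = min(abs(float(fit_axis @ true_dir)), 1.0)
print(f"axial angle = {np.degrees(np.arccos(cos_ax)):.1f} deg")
\end{verbatim}
Rerunning with a smaller $\sigma$, or with a longer track, shows how the axial error shrinks as the track length grows relative to the diffusion scale, which is the basic trade-off discussed throughout this section.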
\subsection{Ionization distributions from recoils}\label{sec:recoilphysics} {\bf Figure~\ref{fig:recoil}} shows the typical primary ionization trails created by 20~\text{k\text{e\kern-0.15ex V}$_\mathrm{ee}$}\xspace recoils of different types. The energy loss processes of recoils in this low energy regime were first described by Lindhard {\it et al.}~\cite{osti_4701226}; Reference~\cite{Sciolla_2009} provides a thorough review of this physics. At these low energies the energy loss, $\mathrm{d}E/\mathrm{d}x$, is described by the stopping side of the Bragg peak, i.e.\ it decreases as the ion slows down.\begin{marginnote} \entry{Bragg peak}{the energy loss of a fast, charged particle moving through a medium rises to a maximum value --- the Bragg peak --- before falling steeply as it comes to a stop.} \end{marginnote} Energy loss of fast ions occurs mostly by exciting and ionizing atoms along their path (electronic $\mathrm{d}E/\mathrm{d}x$), but as they slow down the dominant process becomes elastic nuclear scattering (nuclear $\mathrm{d}E/\mathrm{d}x$). The ratio of electronic to total $\mathrm{d}E/\mathrm{d}x$---referred to as the nuclear quenching factor---is a function of recoil energy, the type of recoiling atom, and the composition of the medium. Generally, the transition at which nuclear scattering becomes the dominant channel for energy loss occurs at much higher energies for heavier nuclei. This complexity in the energy loss of recoiling nuclei is important for DM experiments in general, but is even more crucial for directional experiments. In directional experiments, the negative slope of $\mathrm{d}E/\mathrm{d}x$ along the trajectory past the Bragg peak provides a measurable attribute to determine the vector head/tail of the recoil track. Naively, head/tail sensitivity should improve when quenching is taken into account, since most directional technologies rely on measuring ionization along tracks and quenching makes the ionization density fall even more steeply towards the end of the track. In practice this is not the case, however, since the nuclear energy loss results in secondary recoils, which also lose energy in both nuclear and electronic channels, resulting in a cascade. The end result is that the energy loss of the primary nuclear recoil is diffused into the surrounding region lateral to its direction. This dilutes the head/tail signature and produces a shortened projection of the track along the detection planes, both from multiple scattering and the diversion of energy lateral to the main track. This was explicitly shown in References~\cite{Majewski:2009an,Deaconu:2017vam}, where simulations based on Stopping and Range of Ions in Matter (SRIM)~\cite{SRIM} were used to study the limitations of head/tail reconstruction in directional DM searches. Being a statistical process, multiple scattering results in fluctuations in the energy loss and range of the ion, as well as deviations of the recoil's path from its initial direction --- generically referred to as ``straggling''. The deviations from a straight path due to multiple scattering can be described statistically by defining an angular resolution. In general, this quantity has a number of contributions, such as diffusion and the resolution of the detector readouts; however, the physics of energy loss described here poses a fundamental limit. There are multiple conventions for angular resolution; we propose one suitable for comparing directional detectors in Section~\ref{sec:angularperformance}. Another consequence of nuclear energy loss is how it affects discrimination between electron and nuclear recoils.
The classic method relies on the differences in $\mathrm{d}E/\mathrm{d}x$ between these particles. Discrimination via this method must break down at low energies due to quenching, even if the effects of diffusion on short tracks could be neglected; the ionization $\mathrm{d}E/\mathrm{d}x$ of nuclear recoils converges towards that of electrons if quenching grows at low energies, as it is expected to. This is exacerbated by the effective $\mathrm{d}E/\mathrm{d}x$ of electrons, which appears to grow as they slow down, due to the rapid increase in scattering that causes their tracks to curl up at the ends of their trajectories. This is seen in both the simulated ({\bf Figure~\ref{fig:recoil}}) and measured electron tracks. The result is that the slope of the $\mathrm{d}E/\mathrm{d}x$ along the track has the opposite sign for low energy nuclear recoils relative to electrons. This is critical for detecting the Migdal effect with directional detectors, as described in Section~\ref{sec:migdal}. \subsection{Recoil imaging versus indirect direction measurement}\label{sec:recoilimaging_vs_indirect} \begin{figure}[t] \checkoddpage \edef\side{\ifoddpage l\else r\fi}% \makebox[\textwidth][\side]{% \begin{minipage}[t]{1.0\textwidth} \centering \includegraphics[width=1.0\textwidth]{DetectorClassTable.pdf} \caption{Categorization of different directional detection strategies, ordered left to right from the lowest to the highest degree of directional information they provide. We also color-code each strategy according to its technological readiness.} \label{fig:DetectorClassTable} \end{minipage} } \end{figure} There are two broad strategies for detecting recoil directions: directly imaging the recoil track, and indirect methods. We will first describe the primary reasoning behind this distinction, before listing some specific examples of proposed or demonstrated technologies that fall into this categorization. These are also listed in {\bf Figure~\ref{fig:DetectorClassTable}}. We can refine this categorization even further according to whether the detector can sense recoil directions at the event level or only via statistical distributions of events; we will describe this in more detail when we discuss performance in Section~\ref{sec:performance}. Recoil imaging entails directly observing one or more components of the recoil trajectory. Referring again to {\bf Figure~\ref{fig:recoil}}, we see that such a capability implies two detector requirements. First, the detector readout segmentation must be smaller than the recoil length of interest, so that multiple space points along the track are obtained. Second, any potential diffusion of the recoil trajectory information must also be small compared to the recoil length, so that the trajectory is not washed out. Additionally, for electron recoils, good sensitivity to low densities of energy deposition is also required.\begin{marginnote} \entry{Micropattern gaseous detectors (MPGDs)}{gas avalanche devices, such as the gas electron multiplier (GEM), with 100-$\upmu$m-level feature size, enabled by modern photo-lithographic fabrication techniques.} \end{marginnote} These requirements for achieving recoil imaging are satisfied in gas and solid targets, but not in liquids. In low-density gas TPCs, keV-scale nuclear recoil lengths are $\mathcal{O}$(mm), while the segmentation of modern readouts, such as micropattern gaseous detectors (MPGDs)~\cite{Oed:1988jh,Sauli:1999nt}, and diffusion are both $\mathcal{O}(100~\upmu$m).
In condensed matter, recoils are about three orders of magnitude shorter, while the diffusion of ionization is comparatively large. Nevertheless, such detectors can detect the topology and $\mathrm{d}E/\mathrm{d}x$ of higher-energy recoils and utilize this for particle identification. This is the case in DAMIC~\cite{Aguilar-Arevalo:2015lvd}, which can detect ionization energies in silicon as low as $50$~eV in $25~\upmu{\rm m} \times 25~\upmu{\rm m}$ pixels, but is diffusion limited for low energy recoils whose physical track lengths are shorter than $15~\upmu{\rm m}$. Drifting of ionization for near-real-time track imaging thus appears feasible only in gas. In solid targets, because the atoms do not move, there is also the option of performing ultra-high-resolution recoil imaging via other means, but not in real time. Nuclear emulsions~\cite{Gorbunov:2020wfj} are an example of this strategy. Given the target mass advantage of condensed matter over gas, and the technological challenges of recoil imaging in the former, it is highly desirable to seek entirely different strategies for obtaining directional recoil information. In contrast to directionality via recoil imaging, we can define \emph{indirectly directional} detectors as those which utilize a variable that has a recoil-direction-dependent response. Anisotropic scintillators, for example~\cite{Belli:2020hit}, have a light yield for recoils that depends on the relative orientation of the crystal axis and the recoil. For a single event, there is thus an ambiguity between energy and angle. Nevertheless, from a large data set, variations in angle can be inferred via a sidereal daily modulation in the distributions of detector responses. \subsection{Recoil Imaging Detectors}\label{sec:recoil_imaging_detectors} \begin{figure}[t] \checkoddpage \edef\side{\ifoddpage l\else r\fi}% \makebox[\textwidth][\side]{% \begin{minipage}[t]{1.2\textwidth} \centering \includegraphics[width=0.9\textwidth]{recoils.pdf} \caption{Example of 3d ionization distributions measured with a high definition time projection chamber (HD TPC). Left: an alpha particle track. Right: four superimposed (likely helium) recoil tracks, induced with a neutron source. Each 3d box shown indicates the amount of ionization recorded in a 2d \SI{250}{\micro\meter} $\times$ \SI{50}{\micro\meter} pixel of the TPC readout plane. The vertical coordinate is assigned using the arrival time of the charge on the readout plane. Images taken from Reference~\cite{Jaegle:2019jpx}.}\label{fig:tpc_event} \end{minipage}% }% \end{figure} \begin{marginnote}[] \entry{Time projection chamber (TPC)}{particle detector capable of imaging ionization in three dimensions by drifting it onto a readout plane, where two-dimensional projections are recorded at high rate~\cite{NygrenTPC}.} \end{marginnote} \vspace{1em} \noindent {\bf Gas TPCs.} The most mature technology used for directional DM searches is the gaseous TPC. This technology provides tremendous flexibility, enabling a broad range of operating pressures, $\sim$0.1--1 bar, and the ability to tailor the experiment for the DM parameter space of interest by tuning the gas mixture. At low pressures, the low energy nuclear recoil tracks expected from DM interactions can reach lengths of a few mm: long enough to be resolved and have their directions reconstructed. Gas mixtures can be chosen to include light or heavy targets as needed to optimize for the DM mass range of interest.
Gases can also be chosen to include DM target nuclei with large nuclear spin (e.g.~fluorine) or large numbers of nucleons (e.g.~xenon) to enhance sensitivity to SD or SI interactions, respectively. TPC readouts (discussed below) can provide 1d, 2d or 3d track reconstruction with a granularity of $\sim$200~$\upmu$m or better for each track component. For the lateral track components ($x$-$y$) this became possible with the advent of MPGD technologies, whereas for the $z$-component---along the drift direction---the standard approach is to use pulse-shape timing together with the drift velocity. {\bf Figure~\ref{fig:tpc_event}} shows examples of such TPC measurements. In that case, the ionization was imaged in 2d with a high definition pixel chip readout, at a rate of 80~MHz. A number of advances over the past decade have improved the sensitivity of TPCs for directional DM searches. One important discovery early on~\cite{Martoff:2000wi} was the effect of adding electronegative components to the gas mixture. This enables negative ion drift (NID), which results in very low diffusion in the thermal regime and a factor $10^3$ slower drift speeds compared to electron drift. The former leads to a better ability to resolve the short low-energy ionization tracks even over long drift lengths; lengthening this dimension is also a more cost-effective path to scaling up the TPC volume. The slow NID speeds provide $<$100 $\upmu$m pixelization of the track along the drift direction with simple off-the-shelf electronics, resulting in exquisite resolution of this component at low cost. \begin{marginnote}[] \entry{Fiducialization}{rejection of background events, typically originating from radioactive contamination of detector surfaces, by reconstructing the absolute spatial position of such events and vetoing a specific spatial region.} \end{marginnote} Another critical advance has been the discovery of several methods to fiducialize events in the $z$-direction. This proved challenging due to the lack of a ``t-zero'' reference time for when the event occurred in the TPC, which could otherwise be used together with the drift speed to reconstruct its $z$ location. One method followed the serendipitous discovery of secondary negative ion species in NID gas mixtures that, due to their different drift speeds relative to the primary's, provide an event-by-event reconstruction of $z$ to sub-cm precision. These ``minority carriers'' were discovered in both CS$_2$:O$_2$ \cite{Snowden-Ifft:2014taa} and SF$_6$ \cite{Phan:2016veo} and, with the former, led to a transformation in the field by demonstrating zero-background limits in directional DM searches \cite{Battat:2014van, Battat:2016xxe}. A second method, which determines the drift distance $z$ by measuring the transverse diffusion along the track, has also been demonstrated~\cite{Lewis:2014poa}. This technique should work in either electron or NID gases, but requires a detector readout segment size smaller than the typical diffusion scale, i.e.\ it requires an HD TPC. With all these advances in gas TPCs, the biggest challenge for directional DM searches remains the low-density target. Over the past decade the best SD limits set by directional experiments have been surpassed by nondirectional ones by many orders of magnitude. To remain competitive, while maintaining all of the desired features (Section~\ref{sec:performance}) required to detect directionality on an event-level basis, the obvious path was to scale up the detector volumes by many orders of magnitude.
This approach has been reassessed recently, however, due to the looming neutrino floor and the lack of hints consistent with the standard WIMP paradigm, which have motivated a search for new classes of DM candidates. Many of these lie in the sub-10 GeV mass range and fall under the umbrella of ``light DM''. This is an area in which directional experiments using lighter target nuclei such as helium and hydrogen, and possibly also exploiting the electron recoil signature, could become competitive, even with target volumes that could be reached with current technologies. The major requirement for directional light DM searches, where the directional thresholds need to be as low as possible, is spatial resolution. As there are now many options for readouts that satisfy this requirement (see below), the choice comes down to cost, scalability, and other considerations such as backgrounds. \vspace{1em} \noindent {\bf Dark matter TPC projects and readouts.} We provide a brief historical overview of gas TPC projects, which have enjoyed the most interest. A more comprehensive review of directional detection technologies and relevant TPC readouts can be found in Reference~\cite{Battat:2016pap}. The first directional DM detector was a low pressure $\sim$100 liter gas TPC with an optically read out parallel plate avalanche chamber (PPAC)~\cite{Buckland:1994gc}. Two gas mixtures were used, 20 Torr CH$_4$ (H target) and 50 Torr of P-10 (90:10 Ar:CH$_4$, Ar as the target), with a $\sim$7\% additive of triethylamine (TEA) vapor used in both to enhance the photon yield. The PPAC gave very high gas gains, 10$^5$--10$^6$, and a light yield peaking in the UV. The two-dimensional track images in the PPAC plane were recorded with a multi-stage optical system that consisted of a UV grade lens, an image intensifier, a second lens, and a CCD camera. The TPC itself was placed inside a superconducting magnet with a 4.5~kG $B$-field parallel to the drift $E$-field. The $B$-field served to reduce transverse diffusion to $<$1~mm over 1~m of drift, and it also deflected electron tracks, producing helical/spherical shapes when projected on the image plane. The resulting topological features were used to reach a 99.8\% rejection of gamma/electron events with a 75\% nuclear recoil efficiency, above an energy threshold of about 6~\text{k\text{e\kern-0.15ex V}$_\mathrm{ee}$}\xspace. Although the authors of Reference~\cite{Buckland:1994gc} demonstrated many important features required for directionality with this detector, it was never deployed underground. The Directional Recoil Identification from Tracks (DRIFT) experiment~\cite{Battat:2016xxe} was the first directional DM experiment to take underground data, and it continued to do so with several generations of detectors over a decade-long program. The DRIFT detector was based on a m$^3$-scale TPC divided by a central cathode into two halves, each read out with a multiwire proportional chamber (MWPC). Signals from the MWPC anode wires (2 mm pitch) and their pulse-shape timing provided two components of ionization tracks, $\Delta x$ and $\Delta z$, respectively. DRIFT pioneered the use of NID with CS$_2$ gas mixtures, which provided thermal diffusion and slow drift. These features enabled 2d tracking with head/tail reconstruction in 1d ($z$)~\cite{ref:DRIFT_APP2007,Battat:2016xaw} and a gamma/electron rejection factor of $\sim 10^{-7}$ for energies of 18--150 \text{k\text{e\kern-0.15ex V}$_\mathrm{ee}$}\xspace \cite{Battat:2016xxe}.
DRIFT developed novel techniques to reduce and mitigate against backgrounds from radon and its progeny~\cite{Battat:2014oqa, Battat:2015rna}, which allowed full fiducialization of the detector volume~\cite{Battat:2014van}. Culminating this effort, DRIFT set a series of zero-background DM limits in CS$_2$:CF$_4$:O$_2$ gas mixtures with underground data taken at the Boulby Mine in the UK~\cite{Battat:2016xxe}. Their most competitive limit, set using 54.7 live-days, was $\sigma^{\rm SD}_p < 2.8 \times 10^{-37}$~cm$^2$ at $m_\chi = 100$~GeV. A combination of the coarse granularity and low S/N of the MWPCs, a low target mass (34 g of fluorine in total), and relatively high energy thresholds (about 35 \text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace for fluorine recoils) limited DRIFT's directional and DM sensitivity. The advances over the past decade in MPGDs and commercially available scientific-grade CCD/CMOS sensors have led to a number of new TPC-based directional experiments. The Dark Matter Time Projection Chamber (DMTPC) collaboration built a series of prototypes with CCD-based optical readouts that imaged 2d tracks at the surface of a mesh avalanche stage~\cite{Deaconu:2015vbk}. They set a limit of $\sigma^{\rm SD}_p < 2\times10^{-33}$~cm$^2$ at $m_\chi = 115$~GeV, from a surface run in pure CF$_4$ gas~\cite{Ahlen:2010ub}. A 1~m$^3$ detector was constructed, but to our knowledge not deployed~\cite{Leyton:2016nit}. The CYGNO collaboration employs thin GEMs read out optically with CMOS cameras. They plan to augment the 2d optically imaged tracks using pulse-shape timing with a PMT to measure the third dimension. With several prototypes they have performed R\&D using 1 bar He:CF$_4$ gas mixtures that are being optimized for light DM and solar neutrinos~\cite{Baracchini:2020btb}. Their short-term program involves deploying a $\sim$m$^3$-scale demonstrator in the Gran Sasso National Laboratory, with scale-ups to $\sim$10 m$^3$ in the future. Electronic readouts using Micromegas, GEMs, and other novel MPGDs for gas amplification, combined with strips or pixels, are also being used both for R\&D and in underground experiments. The NEw generation WIMP-search with Advanced Gaseous tracking device Experiment (NEWAGE) collaboration has deployed several generations of TPC detectors in the Kamioka Mine. Their technology is based on a micro pixel chamber ($\upmu$-PIC) combined with GEMs and strip readouts, which provides them with vector 3d tracking. From measurements in 76 Torr CF$_4$ they report an electron/gamma rejection of $\sim 10^{-5}$, a correct head/tail sense determination of 53.4\%, and an angular resolution of 36$\pm$4$^\circ$, all for 50--100 \text{k\text{e\kern-0.15ex V}$_\mathrm{ee}$}\xspace~\cite{Yakabe:2020rua}. Although limited by radon backgrounds, they have used directionality to set several limits, the latest being $\sigma^{\rm SD}_p < 4.3\times10^{-34}$~cm$^2$ at $m_\chi = 150$~GeV~\cite{Yakabe:2020rua}. The MIcro-tpc MAtrix of Chambers (MIMAC) experiment uses a Micromegas pixel readout TPC with a special gas mixture (70:28:2 CF$_4$:CHF$_3$:C$_4$H$_{10}$) tuned for SD sensitivity and other properties that allow full 3d tracking \cite{Santos:2011kf}. MIMAC is located in the Modane underground laboratory in France, but DM limits have not yet been published.
The Directional Dark Matter Detector (D$^3$) project, an R\&D collaboration between Lawrence Berkeley National Laboratory and the University of Hawaii, has constructed small TPC prototype detectors with high-definition pixel charge readout based on application-specific integrated circuit (ASIC) chips. Eight of the latest-generation detectors, also known as the ``BEAST TPCs''~\cite{Jaegle:2019jpx}, were deployed for directional neutron background measurements at the SuperKEKB collider, using a He:CO$_2$ target gas mixture. While the target mass was minute and the detectors were running in low-gain neutron mode, a DM limit extending down to as low as 4~GeV was set as a feasibility demonstration~\cite{phdthorpe}. Events from these detectors are shown in {\bf Figure~\ref{fig:tpc_event}}. Since they can efficiently detect single electrons at higher gain settings, excellent low-mass DM sensitivity is expected. While considerable cost and effort would be required to scale up such detectors to competitive masses, larger-scale pixel-based readout planes are already being fabricated and tested for tracking detectors at future colliders. This is an R\&D synergy that could prove useful for the field. In fact, GridPix detectors~\cite{Ligtenberg:2020ofy}, based on pixel ASICs that are directly combined with a gas amplification MPGD structure, have already demonstrated exquisite imaging of nuclear recoils~\cite{Kohli:2017qzo}, with even finer spatial segmentation than in {\bf Figure~\ref{fig:tpc_event}}. Given the abundance of available TPC charge readout technologies, it is not straightforward to determine the best strategy for a large-scale detector. The recent \textsc{Cygnus}\xspace design study~\cite{Vahsen:2020pzb} is the first attempt at such a technology comparison; it suggested that $x$/$y$ strips with $\mathcal{O}(100\,\upmu{\rm m})$ segmentation provide the best cost/performance tradeoff. An optimized strip readout should enable HD charge readout near the resolution obtained with pixel ASICs, but at substantially reduced cost and complexity. Based on this, two (40 liter and 1000 liter) ``\textsc{Cygnus}\xspace HD demonstrator'' detectors, utilizing CERN strip Micromegas readout and CERN SRS DAQ systems, are now under construction~\cite{vahsen_aps_2020}. \vspace{1em} \noindent {\bf Nuclear Emulsions.} The low density of gas-based experiments is the primary factor working against them as the obvious strategy for directional recoil detection. Technologies that can image recoil tracks in high density materials are therefore highly motivated. However, as we discussed earlier, an increase in density must always be matched by an increase in spatial resolution due to the rapidly shrinking track lengths. One long-standing technology that is both high density and could permit the necessary high spatial resolution is nuclear emulsions. Nuclear emulsions consist of photographic plates in which small crystals or grains are dispersed. The nuclear emulsion best developed for low energy nuclear recoils consists of a polymer layer dispersed with silver halide (AgBr) crystals. The crystal grains would seed nm-scale silver clusters in response to a track left by a recoil. After a suitable exposure time has elapsed, the emulsions must then be developed, after which 2d projections of recoil tracks can be identified and measured with an optical or x-ray microscope. The Nuclear Emulsions for WIMP Search (NEWSdm) collaboration~\cite{Gorbunov:2020wfj} is pursuing this idea with an automated optical scanning system.
They employ a Nano Imaging Tracker~\cite{FineGrained}, which can measure the positions of single grains with an accuracy of 10~nm. Nuclear emulsions are capable of 2d (and potentially 3d) event-level recoil imaging; however, the presence of any head/tail signature is unclear. Event time assignment is not possible with this technology. \vspace{1em} \noindent {\bf Crystal defects.} Another potential solid state directional detector involves the imaging of crystal damage, in particular in diamond. Nitrogen vacancy (NV) centers in diamond are defects consisting of nitrogen impurities neighboring a vacancy in the crystal lattice. The defects are highly sensitive to electromagnetic fields and to the local crystal strain. Spectroscopically measuring NV centers in diamond has been suggested as a potentially promising way to image nm-scale crystal damage that could be left by a recoil~\cite{Rajendran:2017ynw}. This technology would benefit from the high densities of crystals and may allow fully three-dimensional recoil imaging, with perhaps even a plausible head/tail signature. A recent study~\cite{Marshall:2020azl} expanded upon this idea and suggested a modular design strategy that, if realized, would bring the available quantity of directional information up to the same level as gas TPCs while enjoying a high inherent target mass. \vspace{1em} \noindent {\bf 2d materials.} When imaging recoils in higher density targets, the ultimate recoil trajectory will be less correlated with the true initial recoil direction due to the high rate of interactions in the medium. A way to sidestep this problem may be to exploit materials in which the medium itself is confined to two dimensions. Such 2d targets could be fabricated from semiconductor materials in which the excitation energy is on the order of 1~eV, allowing even light DM particles to generate measurable electronic events. In the setup envisaged in Reference~\cite{Hochberg:2016ntt}, a 2d detector comprises an array of pixels containing back-to-back layers of graphene and a substrate, all placed inside an applied electric field to transport excited valence electrons to a calorimeter. This configuration can automatically exploit modulation-based directionality using the contrast in event numbers in the upper and lower layers of pixels. However, it is proposed that if one could monitor the conductivity of the graphene pixel arrays quickly enough relative to the drift time of the electron, one could reconstruct the precise locations of the pixels at which the electron interacts, as well as its time-of-flight. As long as the experiment operated in a sufficiently high vacuum, this information could in principle be used to reconstruct the full three-dimensional recoil vector. No experimental implementation of this idea has yet appeared; however, the relic neutrino experiment PTOLEMY~\cite{Betti:2019ouf} proposes to use tritiated graphene, so this effort could be complementary. \vspace{1em} \noindent {\bf DNA strands.} A novel re-imagining of a recoil imaging detector, proposed in Reference~\cite{Drukier:2012hj}, makes use of a forest of DNA or RNA strands hung vertically from a nanometer-thick gold foil. An incoming particle would collide with, and expel, a gold atom from the foil, and the recoil would then travel through the DNA forest, severing several strands. The strands would be constructed with base-pair sequences that precisely encode their $(x,y,z)$ positions in the detector volume.
Using the well-established biotechnology known as polymerase chain reaction, it would be possible to amplify the severed strands once collected, reconstruct the position of each strand break, and thereby determine the coordinates of each severing event to nm precision. This represents an entirely different method of imaging the nuclear recoil axis, one that negates the effects of diffusion. However, without any published demonstrations of this idea, it is not clear whether it is practical, nor to what extent head/tail and timing information could be recorded. \subsection{Indirect Directionality} \vspace{1em} \noindent {\bf Anisotropic scintillators.} Solid scintillators (e.g.~NaI and CsI) are commonly used in particle detection, and specifically in DM detection. Some scintillators, such as ZnWO$_4$, have been shown to exhibit a response that depends on the recoil direction relative to the crystal axes. In principle, this scintillation anisotropy can be used to infer the nuclear recoil track direction without direct reconstruction of the track geometry. Several groups have explored the possibility of using anisotropic scintillators for a DM search~\cite{Shimizu:2002ik,Cappella:2013rua,Sekiya:2003wf}. The anisotropy of ZnWO$_4$ in particular has recently been confirmed via measurement with a neutron gun for the ADAMO project~\cite{Belli:2020hit}, but only for energies higher than 70~\text{k\text{e\kern-0.15ex V}$_\mathrm{ee}$}\xspace. However, even if a strong anisotropy were discovered at lower energies, this form of directionality would be indirect, and achieving event-level directionality would likely be impossible. A DM search using anisotropic scintillators would have to exploit the daily modulation in events as the DM wind rotated with respect to the crystal axes. \vspace{1em} \noindent {\bf Columnar Recombination.} An indirect measure of recoil directionality called columnar recombination may be present in high pressure gaseous xenon or liquid argon (LAr) experiments. The effect arises when the behavior of an ionization cloud generated by a recoil event depends on its orientation with respect to an applied electric field. When a primary ionization cloud drifts in the field, some of the ions and electrons will recombine. The amount of ionization or scintillation that is ultimately detected from the event may then depend on the axial angle between the recoil track and the electric field. In principle, tracks parallel to the field would produce more scintillation, and perpendicular tracks more ionization. This effect was suggested to be of potential use for DM searches by Nygren~\cite{Nygren:2013nda}. Subsequently, it has been investigated experimentally using nuclear recoils in LAr by SCENE~\cite{Cao:2014gns}. However, the measured directional asymmetry in the scintillation yield of neutron-induced recoils was small and only statistically significant for their 57 keV beam. Nevertheless, since planned LAr experiments such as DarkSide-20k~\cite{Aalseth:2017fik} and Argo~\cite{Sanfilippo:2019amq} are anticipated to reach the neutrino floor in the next few decades, a directional signal in this kind of experiment would be highly sought after. Unfortunately, since columnar recombination provides information about only one track dimension, has no head/tail signature, and must be obtained through a combination of two recoil energy observables, it ultimately appears to be insufficient for the discovery of DM~\cite{OHare:2020lva}.
\section{DIRECTIONAL DETECTOR PERFORMANCE}\label{sec:performance} The ideal DM detector, whether directional or not, will have a high target mass and a low energy threshold, so as to maximize the probability of observing DM signals. To avoid becoming background limited, a low-background (underground) environment, highly radiopure components, and good background rejection capabilities are also required. An ideal {\it directional} DM detector is subject to the same requirements, but would in addition need to measure each recoil's 3d vector direction, energy, topology, and time of occurrence. Below, we use back-of-the-envelope arguments and simulations to derive quantitative performance requirements for each of these observables. Since we, perhaps subjectively, believe that gas TPCs are the closest to providing the required performance, we will use these detectors to illustrate state-of-the-art performance and to suggest directions for future work. Directional recoil detection is still a young and growing field, and it is not straightforward to compare angular performance across different detector types. In much of the literature to date~\cite{Ahlen:2009ev, Mayet:2016zxu, Battat:2016pap}, directional detectors have been classified and simulated as head/tail sensitive or not, and as 1d, 2d, or 3d, depending on how many projections of nuclear recoils are detected. To give a more holistic view of the field we introduce a new classification, extending the concepts introduced in Reference~\cite{Vahsen:2020pzb}. The complete scheme is depicted in {\bf Figure~\ref{fig:DetectorClassTable}}. We already introduced the distinction between recoil imaging and indirect detection of the recoil direction. In comparing the physics sensitivity of detectors, a second, slightly different classification is useful. This classification is based on information content: we can classify detectors by their ability to gain directionality at the event level, or statistically via modulating signals. The first category, event-level directional detectors, includes most, but not all, proposed directional detectors. These detectors directly reconstruct (or infer some component of) the recoil vector event-by-event. Examples of detector technologies in this category are gas time projection chambers, nuclear emulsions, crystal defect spectroscopy, and detectors based on 2d graphene targets. Detectors that infer the recoil direction event-by-event by measuring two different physical quantities, for example the energy and one recoil trajectory component (gas TPC with 1d readout), or the ionization and scintillation energy (liquid noble gas TPC with columnar recombination), also belong in this category. On the other hand, if directional information is only present at the level of a statistical distribution of recoils, then a discovery or rejection of isotropy can only be performed using the modulation of that recoil distribution. Examples of this category are anisotropic scintillators. However, we note that if a second recoil observable were available for each event, such as energy in a different channel (ionization, heat), the observational degeneracy between energy and angle might be broken. In that case, the detector could obtain directional information at the event level. Both event-level and modulation-based directional detectors can be said to have directional sensitivity, and all such detectors could, in principle, verify the galactic origin of a DM signal to a greater or lesser extent.
Event-level directional detectors could exploit both the dipole feature (upper panel of {\bf Figure~\ref{fig:Skymaps}}) and the sidereal modulation (lower panels of {\bf Figure~\ref{fig:Skymaps}}), whereas modulation-based detectors are forced to rely only on the latter. Both methods of directionality have powerful and unique signatures of DM that should not be mimicked by any background or systematic. One other important distinction between event-level and modulation-based directional detectors is that the latter do not measure direction and energy independently. In the context of neutrino measurements, this means that if the neutrino source location is known (as in the case of the Sun, a supernova, or a neutrino beam), then independent recoil energy and direction measurements can be combined to calculate the neutrino energy event-by-event. This powerful capability is not available in modulation-based detectors. \subsection{Directional performance of event-level directional detectors}\label{sec:angularperformance} The performance of an event-level directional detector will depend upon how much information about each recoil is measured. An ideal detector would measure the full three-dimensional vector corresponding to the true initial recoil direction, i.e.\ the direction immediately after the scattering process that produced the recoil. This direction is shown as a green arrow in {\bf Figure~\ref{fig:recoil}}. However, the direction of the entire recoil track (red in {\bf Figure~\ref{fig:recoil}}) will generally differ from the initial recoil direction due to several effects. First, the recoil does not travel in a straight line due to scattering (also known as straggling for nuclear recoils), so that even a well-measured average recoil direction will deviate from the true initial recoil direction. Second, detector limitations, such as charge diffusion and the finite segmentation of the readout, will further smear the measured recoil vector. Third, and often by design, a particular detector technology may not be able to measure all three components of the recoil vector, or its sign. Importantly, these effects are all energy-dependent, and each leads to worse directionality at lower energies. Despite these considerable complications, we can still describe the directional performance of any event-level directional detector with two simple and independent quantities: the effective 3d angular resolution and the head/tail recognition efficiency. Because several choices and conventions for these quantities exist, we will first carefully define them, adopting the same conventions we introduced in Reference~\cite{Vahsen:2020pzb}. \vspace{1em} \noindent {\bf Angular resolution.} We take this to be the mean difference between the true initial recoil axis and the measured recoil axis, quantified by a single angle in three-dimensional space. This corresponds to the angle between the red and green dashed lines in {\bf Figure~\ref{fig:recoil}}, but does not consider the sign of the two vectors. Because this difference is the angle between two {\it axes} (as opposed to two vectors), it ranges from 0 to only 90 degrees. As a result, the angular resolution ranges from 0 to 1 radian (approximately 57 degrees). Note that the upper limit of 1 radian is the average angle between two randomly chosen axes in 3d, i.e.\ it corresponds to having no (axial) directional sensitivity.
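The 1-radian limit is easy to check numerically. The following minimal sketch (our own illustration) draws pairs of independent isotropic 3d directions and computes the mean axial angle between them:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Independent isotropic 3d directions from normalized Gaussian vectors.
u = rng.normal(size=(n, 3))
v = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Axial angle: take |cos| so the angle lies between 0 and 90 degrees.
ang = np.arccos(np.clip(np.abs(np.sum(u * v, axis=1)), 0.0, 1.0))
print(ang.mean())  # converges to 1.000 radian (about 57 degrees)
\end{verbatim}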
\vspace{1em} \noindent {\bf Head/tail recognition efficiency.} We define this to be the fraction of events where the scalar product of the reconstructed recoil direction and the true initial direction is positive. A value of 0.5 corresponds to completely random head/tail assignment by the detector, and 1 is the best possible performance. \begin{figure}[t] \centering \includegraphics[trim = 0mm 0mm 0mm 0mm, clip, width=1.0\textwidth]{nrequired.pdf} \caption{Impact of an event-level nuclear recoil detector's directional performance on solar neutrino/DM discrimination. The color scale shows the required number of detected fluorine (left) and helium (right) recoils to exclude a solar neutrino background hypothesis at 90\% C.L., versus angular resolution and head/tail recognition efficiency, as defined in Section~\ref{sec:angularperformance}. This particular simulation assumes $m_\chi = 10$~GeV, a He:SF$_6$ target gas, and an energy threshold of 1~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace. The top left of each plot corresponds to an idealized detector, while the bottom right corresponds to no directional sensitivity. The shape of the contours shows that both angular resolution and head/tail efficiency are required for optimal discrimination between WIMPs and solar neutrinos. That said, a detector with only good head/tail recognition (top right corner) performs significantly better than a detector with only good angular resolution (bottom left corner).} \label{fig:required_ang_performance} \end{figure} \vspace{1em} \noindent {\bf Required performance.} The above quantities form a finite parameter space and have the benefit of being robust and easy to measure directly, both in experiment and in simulation, without any need for parameterization or fitting. Note that both definitions remain valid and useful even if not all recoil projections are measured. This means that for a given recoil energy, any event-level directional detector can be viewed as one point in the two-dimensional performance parameter space shown in {\bf Figure~\ref{fig:required_ang_performance}}. To show the impact of performance, the figure shows the number of observed DM-helium or DM-fluorine recoils required to reject a solar neutrino background hypothesis.\footnote{For simplicity, this analysis utilizes only the recoil angle distributions in galactic coordinates integrated over one year. Incorporating event time and recoil energy information would allow for even fewer required events. The results also depend on the statistical testing methodology. We choose to focus on the big picture here, and defer more detailed descriptions to future work.} We see that a good performance target is an angular resolution of 30 degrees or lower and a head/tail efficiency of 80\% or better. This would result in $\lesssim$10 DM recoils needed to exclude a neutrino hypothesis. The shape of the contour in the figure also shows that head/tail recognition is especially important. \vspace{1em} \noindent {\bf Performance in practice.} Angular performance is strongly energy dependent. For example, the \textsc{Cygnus}\xspace simulation of optimized gas TPCs~\cite{Vahsen:2020pzb} suggests that an angular resolution of 10$^\circ$ and a head/tail efficiency of nearly 100\% are feasible for helium recoils $\gtrsim50$~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace. At lower energies, even a highly idealized detector is limited by the primary ionization distribution of the recoils to about 28$^\circ$ resolution and 70\% head/tail efficiency.
A realistic gas TPC with diffusion loses most directional sensitivity at 1~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace. Since solar neutrinos and $\mathcal{O}({\rm GeV})$-mass WIMPs generate most nuclear recoils below 10~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace, the greatest challenge for future detectors will be to extend good directional performance to low energies. In designing future detectors, the contribution of TPC readout performance to angular resolution can be reliably predicted; see Equation 5 in Reference~\cite{Vahsen:2015oya}. For mm-length nuclear recoils, this leads to the requirement of highly segmented detectors, with feature size $\mathcal{O} (100\,\upmu {\rm m})$, and low diffusion. The contribution from the spatial shape of the primary ionization distribution, especially below 10 \text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace, has large uncertainties, however, and the same is true for the head/tail efficiency. Because these directly affect the designs of future detectors, it is imperative for the field to validate the commonly used simulation tools at the lowest energies. Validation work using helium nuclei for energies above 50~\text{k\text{e\kern-0.15ex V}$_\mathrm{ee}$}\xspace, and carbon and fluorine above 10~\text{k\text{e\kern-0.15ex V}$_\mathrm{ee}$}\xspace, can be found in Reference~\cite{Deaconu:2017vam}. Fluorine recoil measurements going down to 6~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace can be found in Reference~\cite{Tao:2019wfh}. For progress in this direction, recoil imaging detectors with low pressure, high definition (HD) readouts, and minimal diffusion are required. \subsection{Directional performance of modulation-based directional detectors} To date, no demonstration has been made of modulation-based directionality at recoil energies relevant for a DM search. However, proposed modulation-based detectors have a natural upper hand in achieving high exposures, in that they are based on liquid or solid targets. It is therefore interesting to compare the numbers of DM events required for discovery, to see which strategy is optimal. The difficulty is that the expected DM recoil energy spectrum is quite broad, with width $\sigma_E$. Therefore, many events, and an asymmetric detector response that is significant compared to this width, are required to detect a DM signal with this strategy. We are not aware of quantitative studies of this in the literature, so we perform a back-of-the-envelope calculation assuming Gaussian statistics. We assume the detector has a direction-dependent energy response, and that all detector data are grouped into two bins based on sidereal time. We assume the time at which the bins are divided is chosen such that, due to the recoil directions, we expect higher event energies in one bin and lower in the other. With a few additional simplifications, the total number of detected events required to reject an isotropic background, in the presence of signal only, at the $s$-$\sigma$ level is \begin{equation}\label{eq:indirect} n = \frac{4\,s^2}{c^2} \,, \end{equation} where $c=\Delta E/ \sigma_E$ is the ratio of the difference in mean energies between the two bins to the width of the average energy distribution. This width will be a convolution of the DM recoil spectrum and the detector response. Assuming, for example, $c=1$\%, we find that of order 360,000 events are required for a 3$\sigma$-level exclusion of isotropy.
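One way to arrive at Equation~\ref{eq:indirect} under these assumptions (a back-of-the-envelope reconstruction, not a rigorous statistical treatment) is as follows: splitting the $n$ events evenly between the two time bins, each bin mean carries a standard error of $\sigma_E/\sqrt{n/2}$, so the standard error on the difference of the two bin means is $2\sigma_E/\sqrt{n}$, and the significance of a mean-energy difference $\Delta E$ is
\begin{equation*}
s = \frac{\Delta E}{2\sigma_E/\sqrt{n}} = \frac{c\sqrt{n}}{2} \quad\Longrightarrow\quad n = \frac{4\,s^2}{c^2} \, .
\end{equation*}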
We described earlier (Equation \ref{eq:reject_isotropy}) that for an idealized event-level directional detector, as few as 10 events are required to reject an isotropic background at the same level of significance. If we, for example, assume the comparison is between a recoil imaging detector with an SF$_6$ gas target at atmospheric pressure and an indirectly directional detector with a liquid xenon (LXe) target utilizing S2 only, and further assume SI WIMP-nucleon scattering (benefiting xenon), then we find that a LXe detector of similar physical size to the gas detector would be needed to achieve similar directional sensitivity. This example is hypothetical, as the gas detector would likely need lower pressure, and the parameter $c$ for LXe is unknown. This illustrates, however, that a common argument made against directional gas detectors---size---may not necessarily be valid, given that a large gas TPC operating at room temperature should be easier to construct and less costly than a cryogenic liquid noble-gas detector of the same size. To decide on the optimal strategy, directional performance measurements at the relevant energies are still needed. We also suggest that a more careful comparison of event-level and modulation-based directionality be carried out in future work. \subsection{Energy thresholds and tension between target mass and directionality} While a lower energy threshold is generally better for DM searches, in the context of directional detection there are three relevant energy thresholds: the energy threshold for nuclear event detection, for particle ID, and for directionality, ordered from what are typically lowest to highest. For some detectors, including gas TPCs, there can also be a minimum charge density below which events are not detected. For the near-term goal of distinguishing solar neutrinos and DM, sub-10-\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace event detection thresholds are required. Detecting events in this energy range is relatively straightforward for modern particle detectors; for example, a high gain gas detector can easily detect a single electron, corresponding to $\sim 25$~eV of ionization energy. The real challenge is therefore to achieve directionality and rejection of internal backgrounds in the sub-10-keV energy range. For gas-based detectors specifically, one of the biggest design trade-offs is that the directional threshold and the particle ID threshold both improve with lower gas density, while the detector target mass and hence DM sensitivity are reduced. Low density operation---achieved either via pressures of $\mathcal{O}(10-100)$~Torr, or via low-$Z$ gases at atmospheric pressure---is required to achieve adequate directionality. In existing and proposed designs, directionality still tends to gradually roll off below 50~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace, but may be useful for DM/neutrino discrimination down to approximately 6~\text{k\text{e\kern-0.15ex V}$_\mathrm{r}$}\xspace~\cite{Vahsen:2020pzb}. Further improvements should be investigated. However, particle identification capabilities tend to deteriorate \emph{exponentially} towards lower energies~\cite{Vahsen:2020pzb}, so in the end this is often expected to be the factor that determines the effective energy threshold for analysis. Finally, if we wish to optimize a future detector for solar neutrinos, as well as DM, we must consider the recoil energy thresholds required to detect the fluxes of interest.
We see from Table~\ref{tab:nurates} that a threshold of around 5~keV is required to detect a reasonable fraction of CNO neutrinos in the electron recoil channel. However, electron tracks have lower charge density than nuclear recoils, making detection more challenging. The ideal detector would therefore have high efficiency for detecting single primary electrons. \subsection{Particle identification}\label{sec:particleID} Preliminary simulations of $1000~{\rm m}^3$ gas detectors~\cite{Vahsen:2020pzb} suggest that internal backgrounds will be dominated by electron recoils from Compton scattering of gamma rays from radio-impurities and the detector environment. It may be necessary to reduce such backgrounds via particle ID at the reconstruction level by factors of $\sim 10^4$--$10^5$. Experimental work and simulations (see Reference~\cite{Ghrear:2020pzk} and citations therein) suggest this is feasible with high-definition gas TPCs at 10~keV$_\mathrm{ee}$, and perhaps even substantially lower energies. Above this energy, electron rejection improves exponentially. It is important for the field to demonstrate such electron rejection capability experimentally, and as a function of energy, as this may determine the practical energy threshold of large detectors. We note without providing further details that the same particle ID capabilities can also be used to identify the recoiling nucleus. \subsection{Energy resolution} Energy resolution is relevant to directional detection in a number of ways. For an idealized gas TPC, the fractional energy resolution is given by \begin{equation} \frac{\sigma_E}{E}=\sqrt{\frac{F+f}{n}},\label{eq:energy_resolution} \end{equation} where $n$ is the number of ionized electrons, $F$ is the so-called Fano factor which quantifies primary ionization fluctuations, and $f$ is the relative gain variance of the gas amplification device~\cite{thorpe2020}. In practice, this typically leads to a quantitative resolution of order \begin{equation} \frac{\sigma_E}{E} = 10\% \sqrt{\frac{5.9~{\rm keV_{ee}}}{E}} \, , \end{equation} which appears sufficient for particle ID in the context of rejecting electrons and retaining nuclear recoils via their difference in specific ionization, $\mathrm{d}E/\mathrm{d}x$. Finite energy resolution will also smear out any indirect or modulation-based directionality, thereby requiring more events for a given observation. This is seen in Equation~\ref{eq:indirect}, where $c$ is inversely proportional to the signal's energy spread, which increases with worse energy resolution. In the future, a recoil-imaging detector could be used to reconstruct the energy spectrum of a known neutrino source. We have performed a preliminary study and determined that the gas detector resolution quoted above is sufficient for reconstructing CNO and $^8$B solar neutrino energies. This is particularly promising with electron recoils, which have higher energy and rate than nuclear recoils, at fixed neutrino energy. The higher electron recoil energies also improve the resolution of the reconstructed neutrino energy spectrum. The \textsc{Cygnus}\xspace collaboration~\cite{Vahsen:2020pzb} is now investigating the detector requirements for this measurement. While the contribution $f$ in Equation~\ref{eq:energy_resolution} can be minimized by optimizing the detector, it could be reduced to zero by utilizing readouts capable of counting individual electrons.
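To build intuition for Equation~\ref{eq:energy_resolution}, the short numeric sketch below evaluates the idealized resolution; the $W$-value, Fano factor and gain variance are illustrative assumptions for a generic gas, and, as the 10\% figure above suggests, real detectors sit somewhat above this limit.

\begin{verbatim}
import math

# Idealized TPC energy resolution: sigma_E/E = sqrt((F + f) / n),
# with n = E / W primary electrons. All parameter values are assumed.
W = 30.0e-3  # keV per electron-ion pair (assumed)
F = 0.2      # Fano factor (assumed)
f = 0.2      # relative gain variance of the amplification stage (assumed)

for E in (1.0, 5.9, 20.0):  # recoil energies in keV_ee
    n = E / W               # mean number of primary electrons
    res = math.sqrt((F + f) / n)
    print(f"E = {E:5.1f} keV: n = {n:4.0f} e-, sigma_E/E = {res:.1%}")
\end{verbatim}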
In the limit where every primary electron is counted, the energy resolution would be determined only by primary ionization fluctuations. This was first attempted in Reference \cite{Sorensen:2012qc} with oxygen charge carriers. It may be achievable now with SF$_6$ or CS$_2$+$O_2$ NID and modern, high-speed charge readouts based on MPGDs~\cite{Ligtenberg:2020ofy,Kohli:2017qzo}. This is a natural performance limit to push for and investigate. \subsection{Event time} Ideally, a directional detector will not just measure 3d recoil vectors but will also record exact event times. Only if event times are known can a measured directional signal be transformed into galactic coordinates, and the DM dipole signature (displayed in {\bf Figure~\ref{fig:Skymaps}}) be searched for directly. The same is true of the Sun in the context of solar neutrinos. It is natural then to ask how good the time resolution must be to enable these techniques, and what happens when no timing information is present. In effect, these coordinate transformations rely on timing to deduce the spin angle of the Earth, $\alpha$, which then gives the recoil angle, $\theta$, with respect to the nominal particle source. Any uncertainty in time, $\sigma_t$, would then create an additional uncertainty in these angles, \begin{equation} \sigma_\alpha = \sigma_\theta = \sigma_t \times \frac{360^{\circ}}{24~{\rm hours}} \, . \end{equation} To make this timing-induced angular uncertainty smaller than the intrinsic recoil angle resolution of the detector, we would require, say, $\sigma_\alpha < 10^{\circ}$, resulting in a required time resolution of $\sigma_t < 40$~minutes. This requirement is easily met by modern particle detectors. For TPCs, even in the most pessimistic case of NID without absolute position measurement in the drift direction, the maximal drift time uncertainty would be of order 10~ms~\cite{Phan:2016veo}. In the limit of no timing information whatsoever, the detector becomes time-integrating. A detailed study of this scenario~\cite{OHare:2017rag} found that time integration does not wash out all directional information, but instead causes an effective exposure penalty of a factor of 2 for the rejection of isotropic backgrounds, and of almost an order of magnitude for probing below the neutrino floor. However, this is the best-case scenario, in which both head/tail and complete 3d recoil tracks are measurable. If this information is not available, then much larger penalties in directional sensitivity are incurred. For this reason, strategies to reclaim time information, or to mitigate the lack of it, become important, especially because the technologies that lack timing typically also have limited directionality. For instance, in the crystal defect detector~\cite{Marshall:2020azl}, a prompt scintillation or phonon signal can trigger the removal of the subregion of the detector bulk where the event took place. In the proposed DNA detector, this issue was not addressed~\cite{Drukier:2012hj}, although one could envision a system of microfluidics acting as a ``conveyor belt'' to transport the broken strands out of the detector. Obtaining timing information is most problematic in nuclear emulsions, where the post-exposure development of the tracks is complex and time-consuming. NEWSdm proposes to mount their detector and shielding on a rotating stand, so that the detector always points towards Cygnus. This strategy does not reclaim any time information; rather, it removes the need for it by keeping the signal fixed from the perspective of the detector.
In other words, if the detector is tracking Cygnus, the detector will always directly measure the recoil angle with respect to Cygnus, providing access to the DM dipole. In principle there should then be no penalty in exposure. It appears, however, that this strategy could only optimize sensitivity for one target at a time, which would seem to complicate concurrent DM searches and solar neutrino measurements. This option would also somewhat increase project cost and complexity. We note in closing that, in the case of applied physics and calibration measurements, substantially better timing performance may be beneficial. One important example is quenching factor measurements with neutron beams, where delayed coincidence timing is used to identify matching recoil events in two detectors~\cite{Lenardo:2019vkn}. This highlights the need for smaller-scale prototypes with substantially better performance than may be required in a large, cost-optimized DM or neutrino experiment such as \textsc{Cygnus}\xspace~\cite{Vahsen:2020pzb}. \subsection{Summary of performance requirements} In summary, we found that a directional recoil detector targeting both solar neutrinos and $\mathcal{O}(10~{\rm GeV})$ DM masses requires event-level directionality with angular resolution $\leq30^\circ$ and excellent head/tail sensitivity, ideally down to recoil energies of $\mathcal{O}(5~{\rm keV})$. A $1000~{\rm m}^3$ detector volume would require that internal electron backgrounds be reduced by factors of at least $\mathcal{O}(10^5)$, also down to $\mathcal{O}(5~{\rm keV})$. Fractional energy resolution of order 10\% at 5.9~keV appears sufficient, and even poor timing resolution, of order 0.5~h, should suffice. These requirements are consistent with what was considered an optimistic performance scenario at the conclusion of a previous optimization study~\cite{Billard:2011zj}, which focused on fluorine recoils in CF$_4$. The main difference in our requirements is the need for good energy resolution, which would be needed to reconstruct neutrino spectra, but which is likely also required to achieve electron background rejection sufficient for large detectors. \section{THE CASE FOR HIGH DEFINITION RECOIL IMAGING}\label{hd_recoil_imaging} Of all technologies on the table, gas TPCs are the closest to meeting the performance requirements we arrived at in Section~\ref{sec:performance}. Yet the optimal operating configuration in terms of gas mixture, pressure, readout segmentation, and drift length needs further study. One promising approach is high definition (HD) charge readout, meaning electronic readout with high spatial segmentation via MPGDs. High segmentation will almost certainly be required to achieve sufficient discrimination between nuclear and electron recoils. In the optimal case, an HD TPC would count every single electron in 3d with near unity efficiency, 100~$\upmu$m-scale segmentation, and the smallest possible diffusion---implying NID. Pixel ASIC readouts are already close to achieving this~\cite{Ligtenberg:2020ofy}, but are probably not cost-effective for detectors beyond the m$^3$ scale. For larger detectors, strip readout appears more realistic, but if NID is used, this may first require development of optimized readout electronics. As outlined in Section~\ref{sec:motivation}, there are wide applications of TPC detectors even at very small scales.
An HD TPC capable of extracting all available primary ionization information would additionally enable the most precise measurements of low energy recoils possible. This will include measurements of recoil range, longitudinal and transverse straggling, and ionization quenching in gases of interest. Such measurements would allow not only validation and precise tuning of simulation tools and theoretical models for low energy nuclear recoils, but also searches for deviations---expected or otherwise---from the nominal nuclear recoil and electron recoil signatures. Such measurements would demonstrate the feasibility of directional technologies, while benefiting the wider field of DM detection, including collaborations with non-directional detectors, which cannot resolve ionization distributions. Plans for such HD TPC recoil measurements are already in preparation. To provide a tangible example that highlights the expected impact of this work, we end by describing one such measurement in more detail. \subsection{The Migdal effect}\label{sec:migdal} \begin{figure}[t] \includegraphics[width=\textwidth]{MultipanelARc.jpg} \caption{{\it Left panel}: a 160~keV$_\mathrm{ee}$ nuclear recoil track, showing the reconstructed direction (arrow) derived from its $\mathrm{d}E/\mathrm{d}x$ profile. {\it Middle panel}: an example Migdal event constructed by taking a composite of the 160~keV$_\mathrm{ee}$ nuclear recoil image with that of a $\sim$5.2~keV electron track, with their interaction points overlaid. Due to the large difference in the $\mathrm{d}E/\mathrm{d}x$ of the electron and nuclear recoil, the intensity along the electron track was scaled up by a factor of 5 for visualization purposes, before co-adding to produce the image. The reconstructed directions (arrows) derived from the $\mathrm{d}E/\mathrm{d}x$ profiles are used to identify each particle in the Migdal event and its interaction point (yellow dot). {\it Right panel}: the $\mathrm{d}E/\mathrm{d}x$ profile of the full Migdal event. Here we show the projected intensity along the major axis of the reconstructed track for both electron (blue dashed) and nuclear (orange dashed) recoils, as well as their sum (black solid). Here the true scaling between the electron and nuclear recoils was used.} \label{fig:MigdalAR} \end{figure} When performing a naive two-body nuclear scattering calculation, it is typically assumed that the electron cloud follows the recoiling nucleus instantaneously. This approximation implies that at low enough recoil energies the resulting ionization signal becomes unobservably small. However, the nucleus and the atomic electron cloud are distinct entities, and taking the so-called ``Migdal approach'' of treating them as such reveals a potentially interesting new source of ionization for very low energy nuclear recoils~\cite{Baur_1983, Vegh_1983, Ruijgrok, Vergados, Sharma, Ibe:2017yqa}, as well as other detectable signals~\cite{Kouvaris:2016afs, McCabe:2017rln}. If we model the nucleus and electron cloud separately, the electrons will lag behind the nucleus during a scattering event. In the frame of the nucleus, the electron cloud is seen to experience a small boost, which can excite or ionize an electron. The effect is small but can become the dominant source of ionization at very low recoil energies.
For example, in xenon or germanium, the maximum kinetic energy of a recoiling atom from, say, a 1 GeV DM particle would be $\sim$0.1 keV---far below experimental thresholds~\cite{Ibe:2017yqa}. Nuclear quenching will reduce the measurable energy further, compounding the problem. Yet the Migdal prediction of the rare emission of a $\sim$keV electron would clearly be detected. So in the context of DM searches, simply invoking this effect can improve bounds for sub-GeV DM masses~\cite{Dolan:2017xbu}. Most remarkable among these are EDELWEISS~\cite{Armengaud:2019kfj} and XENON~\cite{Aprile:2019jmx}, which lowered their mass reach down to 45 and 85 MeV, respectively. While calculations of the Migdal effect exist~\cite{Ibe:2017yqa, Liu:2020pat, Liang:2020ryg}, the process itself has never been measured.\footnote{The Migdal effect covers a broad range of phenomena, from $\alpha$- and $\beta$-decay, to neutron scattering. Although experiments have measured it in the former two processes \cite{Migdal-alpha,Migdal-beta1,Migdal-beta2}, they have not done so for the latter, which best approximates the light DM interaction.} This raises doubts about the validity of the effect, especially since theoretical atomic physics calculations are performed under specific assumptions, which may break down in liquids or molecular targets. A possible route towards a first experimental verification could involve a directional measurement, as has recently been proposed by the MIGDAL collaboration~\cite{MIGDALcollab}. Such a measurement would be advantageous for a conclusive identification of the effect because of the additional handle on the kinematic relationship between the Migdal electron and the recoiling nucleus that directional information provides. Of the available directional technologies we have discussed, recoil imaging with HD gas TPCs stands out as the ideal strategy for the study of the Migdal effect. A low pressure TPC with a highly segmented ionization detector could provide both the high signal-to-noise and the fine-granularity 3d track reconstruction needed to give detailed information on the low energy tracks. In contrast to DM or neutrino searches, an experiment sensitive to this rare effect (with a probability of $10^{-5} - 10^{-4}$ per nuclear recoil) would not require large volumes. Instead, one could focus on designing the best technology without the worry of scaling up and the associated cost and complexity. The challenge for such a measurement is to fully detect the low $\mathrm{d}E/\mathrm{d}x$ electron tracks, which requires high resolution and signal-to-noise approaching single primary electron detection. The detection of electron tracks down to a few keV has been demonstrated in Reference~\cite{Phan:2017sep} using a small TPC operating in 25--100 Torr of CF$_4$. An electron track and a nuclear recoil track imaged with this TPC are shown in {\bf Figure~\ref{fig:MigdalAR}}. One can see how the order-of-magnitude lower $\mathrm{d}E/\mathrm{d}x$ of the electron compared to the nuclear recoil could be used to distinguish them. The direction of each particle can also be deduced from the $\mathrm{d}E/\mathrm{d}x$ profile, with the nuclear recoil's falling towards the head of the track, and the electron's rising. This is a fortuitous difference that can be used to find the common vertex between them. For a recoil imaging Migdal experiment, either optical or electronic MPGD-based readouts could work, since most atoms of interest for DM searches can be found in scintillating gases.
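The $\sim$0.1~keV estimate quoted above follows from simple two-body elastic scattering kinematics, $E_R^{\rm max} = 2\mu^2 v^2/m_N$; a minimal sketch is given below, where the DM velocity and target mass are illustrative assumptions.

\begin{verbatim}
# Maximum nuclear recoil energy from elastic DM-nucleus scattering:
# E_R^max = 2 mu^2 v^2 / m_N, mu = reduced mass. Inputs are assumptions.
C_KM_S = 3.0e5      # speed of light in km/s
GEV_TO_KEV = 1.0e6

def e_r_max_keV(m_chi_GeV, m_N_GeV, v_km_s):
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)
    return 2.0 * mu ** 2 * (v_km_s / C_KM_S) ** 2 / m_N_GeV * GEV_TO_KEV

# 1 GeV DM on xenon (m_N ~ 122 GeV), v ~ escape + Earth ~ 780 km/s
print(f"E_R^max ~ {e_r_max_keV(1.0, 122.0, 780.0):.2f} keV")  # ~0.11 keV
\end{verbatim}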
Beyond the choice of readout, what matters most is that the detector has the highest 3d track resolution possible, to measure the effect down to the energies relevant for DM searches. In this regard, an ideal detector would be a TPC with fine-granularity MPGD readouts operating with NID. \section{SUMMARY AND OUTLOOK}\label{sec:summary}\label{sec:outlook} There is an emerging worldwide community interested in the directional detection of nuclear, and more recently, electron recoils. We have shown that the physics case for DM and solar neutrino recoil directionality is robust and compelling. A detector optimized for both signals would need a nuclear recoil angular resolution of order 30$^\circ$ or better, and excellent head/tail sensitivity down to recoil energies of about 5~keV. Excellent particle identification capabilities are also needed, which will likely require HD recoil imaging. Typical gas detector energy resolution and very modest event time resolution appear sufficient. The detector requirements to measure electron recoil signatures from neutrinos still need further study. HD recoil-imaging gas TPCs, operating at the performance limit of unity single electron efficiency and minimal achievable diffusion, should also allow novel measurements of keV recoil physics, potentially including the first experimental verification of the Migdal effect. The findings would be critical to reduce simulation uncertainties and to reliably optimize the design of large-scale DM and neutrino detectors. \begin{summary}[SUMMARY POINTS] \begin{enumerate} \item A dipole in the galactic-frame recoil direction distribution is a robust and surprisingly model-independent signature of DM. \item This provides powerful motivation for directional experiments, which can either directly measure the dipole, or indirectly measure it via a sidereal daily modulation. \item Gas TPCs and nuclear emulsions are the most advanced directional recoil detection technologies at the time of writing. \item A ton-scale directional gas TPC could measure solar neutrinos in both nuclear and electron recoil channels while searching for DM. \item HD nuclear recoil imaging has wide applications beyond astroparticle physics. \item The ideal recoil-imaging gas TPC---which operates at the expected performance limits of the technology---has not yet been constructed.\vspace{-1em} \end{enumerate} \end{summary} \begin{issues}[FUTURE ISSUES] \begin{enumerate} \item The fundamental performance limits of gas TPCs should be experimentally demonstrated: a single-electron counting detector with negative ion drift (NID). \item A single-electron counting NID TPC should be used to validate simulations of keV-scale nuclear and electron recoils. \item Simulation tools that generate the 3d topology of low-energy nuclear recoils should be developed and made publicly available. \item Proponents of new directional technologies should demonstrate directional performance versus recoil energy. \item The potential of liquid noble gas detectors for directional detection should be demonstrated, and compared with that of demonstrated technologies. \item The neutrino physics potential of directional recoil detectors should be studied further and optimized in conjunction with their DM discovery capabilities. \item The physics reach of directional electron recoil detectors should be studied further.
\item Strawman designs for directional electron recoil detectors should be developed.\vspace{-1em} \end{enumerate} \end{issues} \section*{DISCLOSURE STATEMENT} The authors are not aware of any affiliations, memberships, funding, or financial holdings that might be perceived as affecting the objectivity of this review. \section*{ACKNOWLEDGMENTS} SEV acknowledges support from the U.S. Department of Energy (DOE) via Award Number DE-SC0010504. The work of CAJO is supported by The University of Sydney. DL acknowledges support from the U.S. Department of Energy via Award Number DE-SC0019132. \bibliographystyle{bibi}
\section{Introduction}\label{sec:intro} Many modern applications require image annotation to search, access and navigate the huge amount of visual data stored in personal collections or shared online. Whenever you want to retrieve photos from a particular concert, recall that pleasant summer day on which you napped in your comfortable hammock, or look up a person, it is automatic image annotation that enables a plethora of useful applications. The exponential growth of media on sharing platforms, such as Flickr or Facebook, has led to the availability of a huge quantity of images that are enjoyed by millions of people. In such a huge sea of data, it is indispensable to teach computers to correctly label the visual content and help us search and browse image collections. In this paper, we tackle the challenging task of automatic image annotation. Given an image, we want to assign a set of relevant labels by taking into account image appearance and possibly some prior knowledge on the joint distribution of visual features and labels. Due to its importance, this is a very active subject of research \cite{lavrenko-2003,monay-2004,carneiro-2007,mei-2008,zhang-2010,dzhang-2012,2pknn-2012,gong2013deep}. Previous works typically use images and associated labels to build classifiers and then assign relevant labels to novel images. The early works usually rely on images labeled by domain experts \cite{duygulu-2002,monay-2004,carneiro-2007,guillaumin-2009,tousch-2012}, while recently several approaches use weak labels such as user-generated tags in social networks \cite{mcauley-2012,johnson-2015,arxiv2015-li} or query terms in search engines \cite{wang-2008,feifei-2010}. Regardless of the source of the labeling, non-parametric models which rely on a nearest-neighbor based voting scheme have received a lot of attention for automatic image annotation \cite{makadia-2008,guillaumin-2009,xli-2009,znaidia-2013,ballan-2015}. The main reason is that these methods have the ability to adapt to complex patterns as more training data become available. To annotate a new image, they apply a common strategy: first, they retrieve similar images in the training set, and second, they rank labels according to their frequency in the retrieval set. Automatic image annotation is thus achieved by transferring the most frequent labels in the neighborhood to the test image. This is essentially a lazy learning paradigm in which the image-to-label association is delayed until test time. In contrast, discriminative models such as support vector machines \cite{xqi-2007,grangier-2008,sahbi-2010,svmvt-2013} or fully supervised end-to-end deep networks \cite{gong2013deep} require the vocabulary of labels to be defined in advance. This is particularly problematic in a large-scale scenario, such as images on social networks, in which you may have thousands of labels that may also change or increase over time. Several issues may arise in a nearest-neighbor approach. The set of retrieved images may contain many incorrect labels, mostly because of the so-called \emph{semantic gap} \cite{cbir-2000}. This happens because visual features may not be powerful enough at abstracting the visual content of the image. Thus such algorithms tend to retrieve just the images whose features are very close in the visual space, while the semantic content is not well preserved. Researchers have tried to cope with this issue by improving visual features.
To this end, the most significant improvement has been the shift from handcrafted features to end-to-end feature learning, leading to current state-of-the-art convolutional neural network representations \cite{krizhevsky2012imagenet,simonyan2014very,ilsvrc}. Nearest-neighbor methods may also suffer when images are not paired with enough label information, leading to a poor statistical quality of the retrieved neighborhood. This is mostly due to the fact that label frequencies are usually unbalanced. Modern methods address this issue by introducing label penalties and metric learning \cite{guillaumin-2009,xli-2009,2pknn-2012}. \begin{figure} \centering \includegraphics[width=1\columnwidth]{eyecatch_s-crop.pdf} \caption{Labels associated with the images can be used to re-arrange the visual features and recover the semantics not captured by the original features. For instance, the sunset images with the red border should be closer to images of clouds and sea, according to the text space. A projection $\Phi(v; t)$ is learned to satisfy correlations in the visual and textual space.} \label{fig:intro} \end{figure} The image representation can be improved also by shifting to a completely different perspective, namely moving towards a multimodal representation. A way of bridging the semantic gap might be to design representations that account not just for the image pixels, but also for their textual representation. Here we follow this approach by constructing a framework in which the correlation between visual features and labels is maximized. To this end, we present an automatic image annotation approach that relies on Kernel Canonical Correlation Analysis (KCCA) \cite{hardoon-2004}. Our approach strives to create a semantic embedding through the connection of visual and textual modalities. This embedding lives in a latent space that we refer to as the \emph{semantic space}. Images are mapped to this space by jointly considering the visual similarity between images in the original visual space, and label similarities. The projected images are then used to annotate new images by using a nearest-neighbor technique or other standard classifiers. Figure \ref{fig:intro} illustrates our pipeline. The main take-home message is that, as illustrated in the figure, the neighborhood of each image will contain more images associated with the same label (e.g. \dquote{sunset}) in the semantic space than in the original visual space (see for example the images with the red border). \subsection{Main Contributions} (1) The key contribution of our work is to improve image representations using a simple multimodal embedding based on KCCA. This approach has several advantages over parametric supervised learning. First, by combining a visual and a textual view of the data, we reduce the semantic gap; thus we can obtain higher similarities for images which are also semantically similar, according to their textual representation. Second, we do not need to predetermine the vocabulary of labels. This makes the approach well suited for nearest-neighbor methods, which for the specific task of image annotation are more robust to label noise. A slight disadvantage of our method is its inherent batch nature, although, as shown in our experimental results, learning the semantic projection is also possible on a subset of the training data.
(2) Previous works that learn multimodal representations from language and imagery exist \cite{srivastava2012dbm}, including prior uses of CCA and KCCA \cite{hardoon-2004,rasiwasia-2010,hwang-2012,gong-2013}. However, we are the first to propose a framework that combines the two modalities into a joint semantic space that is better exploitable by state-of-the-art nearest-neighbor models. Interestingly enough, in our framework the textual information is only needed at training time, thus allowing labels to be predicted also for unlabeled images. (3) We provide extensive experimental validation. Our approach is tested on medium- and large-scale datasets, i.e. IAPR-TC12 \cite{iaprtc12}, ESP-GAME \cite{espgame}, MIRFlickr-25k \cite{mirflickr} and NUS-WIDE \cite{nuswide}. We show that our framework is able to leverage recently developed CNN features in order to improve the performance even further. Additionally, we introduce a tag denoising step that allows KCCA to effectively learn the semantic projections also from user-generated tags, which are available at no cost in a social media scenario. The scalability of the method is also validated with subsampling experiments. \smallskip This paper builds on our previous contribution on cross-modal image representations \cite{ballan-2014} and improves it in many ways. We report new experimental evaluations covering the large NUS-WIDE dataset, validate our pipeline with modern convolutional neural network features, extend our original approach with a new text filtering method that allows the semantic space to be computed from noisy and sparse tags, such as those from social media, and report new insights on several key aspects such as the performance and scalability of our approach when subsampling the training set. \section{Related Work}\label{sec:related} \subsection{Automatic Image Annotation: Ideas and Main Trends} Automatic image annotation is a long-standing area of research in computer vision, multimedia and information retrieval \cite{arxiv2015-li}. Early works often used mixture models to define a joint distribution over image features and labels \cite{lavrenko-2003,feng-2004,carneiro-2007}. In these models, training images are used as non-parametric density estimators over the co-occurrence of labels and images. Other popular probabilistic methods employed topic models, such as pLSA or LDA, to represent the joint distribution of visual and textual features \cite{barnard-2003,monay-2004,yxiang-2009}. Being generative models, they maximize the data likelihood; they are usually expensive or require simplifying assumptions that can be suboptimal for predictive performance. Discriminative models such as support vector machines (SVM) and logistic regression have also been used extensively \cite{grangier-2008,sahbi-2010,svmvt-2013,izadinia2015deep}. In these works, each label is considered separately and a specific model is trained on a per-label basis. In testing, these models are used to predict whether a new image should be labeled with the corresponding label. While they are very effective, a major drawback is that they require the vocabulary of labels to be defined in advance. Thus, these approaches do not handle well large-scale scenarios in which you may have thousands of labels and the vocabulary may shift over time. Despite their simplicity, a class of approaches that has gained a lot of attention is that of nearest-neighbor based methods \cite{makadia-2008,guillaumin-2009,2pknn-2012,ballan-2015}.
Their underlying intuition is that similar images are likely to share common labels. Many of these methods start by retrieving a set of visually similar images and then implement a label transfer procedure to propagate the most common training labels to the test image. The most recent works usually also implement a refinement procedure, such as metric learning \cite{guillaumin-2009,2pknn-2012} or graph learning \cite{jliu-2009,jtang-2011,zhu-2014,fsu-2015}, in order to weight rare and common labels differently or to capture the semantic correlation between labels. They are usually computationally intensive and do not model the intermodal correlation between visual features and labels. In contrast, we introduce a framework in which textual and visual data are mapped to a common semantic space in which labels can be transferred more effectively. \subsection{Towards More Powerful Visual Representations} The most recent breakthrough in computer vision came from end-to-end feature learning through convolutional neural networks. In their seminal paper, Krizhevsky \textit{et al.}~ \cite{krizhevsky2012imagenet} demonstrated unprecedented improvement in large-scale image classification on ImageNet \cite{deng2009imagenet} using CNNs. These networks are composed of a hierarchy of layers, alternating convolutions and subsampling. They require high-quality supervision with minimal noise in labeling. Since then, many researchers have applied deep learning to other visual recognition tasks such as object detection and image parsing \cite{girshick2014rich}. Deeper architectures have recently been proposed, showing further gains in image classification accuracy (e.g. \cite{simonyan2014very}). Another interesting property of these architectures is that they have the ability to learn representations that can be transferred and used in many other tasks, such as attribute prediction and image retrieval \cite{razavian2014cnn}. Convolutional neural networks (CNNs) have also recently been applied to automatic image annotation \cite{gong2013deep}, showing significant improvement in terms of precision and recall. On top of these powerful features, a number of recent works have used more advanced encoding schemes in order to improve feature generalization. For instance, VLAD encoding is applied in \cite{gong2014multi} to pool multi-scale CNN features computed over different windows, while Fisher Vector encoding applied to dense multi-scale CNN activations is used in \cite{yoo-cvprw-2015}. This was further improved in \cite{uricchio-2015} by applying Fisher Vector encoding to sparse boxes, selected by objectness or at random. However, all these approaches only focus on the visual modality. \subsection{Cross-media and Multimodal Representations} A number of approaches have been developed for learning multimodal representations from images and labels \cite{lavrenko-2003,carneiro-2007,mcauley-2012,srivastava2012dbm,frome2013devise,guillaumin-2010}. In particular, we highlight that previous uses of CCA and its variants exist, particularly for the task of cross-modal image retrieval \cite{hardoon-2004,rasiwasia-2010,hwang-2012,gong-2013,habibian2015discovering,ccir} and multi-view learning~\cite{twocca,mvisl}. This class of methods is often used to learn multi-view embeddings in a unimodal setting. For example, Yang~\textit{et al.}~ \cite{twocca} use CCA to learn a common representation from two views in the image space.
A more general approach is presented in~\cite{mvisl}, where a latent representation of samples is learned from multiple views. Their framework can also be applied to combine visual features or imagery captured in different conditions. Hardoon \textit{et al.}~ were the first to apply KCCA to image retrieval with textual queries \cite{hardoon-2004}. Subsequently, Rasiwasia \textit{et al.}~ \cite{rasiwasia-2010} proposed to employ LDA and CCA to perform cross-modal retrieval on text and images, obtaining improved results over single modalities. In \cite{hwang-2012}, a method to learn the importance of textual objects is proposed. The authors show that features such as word frequency and relative and absolute label rank are helpful to evaluate the importance of textual information. Multi-modal learning has been applied to improve ranking in image retrieval by fusing visual features and click features in~\cite{ccir}. A three-way CCA is proposed in \cite{gong-2013} to address the limited expressiveness of CCA. They show that adding a third view representing categories or clustered labels can improve retrieval performance. Murthy \textit{et al.}~\cite{murthy-2015} propose to combine CNN features and word embeddings using CCA, but their approach is only tested on small-scale datasets using expert labels. Embeddings carry many advantages; nonetheless, learning such coupled representations may be extremely computationally expensive. Recently, there have been some attempts at making such approaches scalable~\cite{frome2013devise, weston2010large}. These online methods usually have a low memory footprint and scale very well to large datasets. Nonetheless, they are not designed to tackle multi-label image annotation and they are not able to learn from noisy examples such as tags extracted from social media. Differently from prior work, we tackle the specific problem of multi-label image annotation. For this task, only visual features are available at test time. Thus, our approach exploits labels only at training time. To this end, we learn a re-organization of the visual space into a semantic space where images that share similar labels are closer. Moreover, when combined with a nearest-neighbor scheme, our approach can predict labels that were not available at training time, when the projections were learned. \begin{figure*}[!t] \centering \includegraphics[width=0.8\textwidth]{pipeline_new-crop.pdf} \caption{Overview of our approach. Image and textual features are projected onto a common \emph{semantic space} in which nearest-neighbor voting is used to perform label transfer.} \label{fig:approach} \end{figure*} \section{Approach}\label{sec:approach} Our key intuition is that the semantic gap of visual features can be reduced by constructing a semantic space that comprises the fusion of visual and textual information. To this end, we learn a transformation that embeds textual and visual features into a common multimodal representation. The transformation is learned using KCCA \cite{hardoon-2004}. This algorithm strives to provide a common representation for two views of the same data. Similarly to \cite{hardoon-2004,hwang-2012}, we use KCCA to connect the visual and textual modalities into a common \emph{semantic space}, but differently from them, which focus on cross-modal retrieval, our framework is designed to effectively tackle the particular problem of image annotation. Moreover, we are able to construct the semantic space even from noisy labels, such as user tags.
Advanced nearest-neighbor methods are then used to perform label transfer. An overview of the approach is shown in Fig.~\ref{fig:approach}. Throughout the paper, we use the term \emph{labels} when we refer to generic textual information. We explicitly use the terminology \emph{expert labels} and \emph{user tags} when we refer only to the expert-provided labels or the tags provided by users in social networks, respectively. We now proceed by detailing the visual and textual representations, how KCCA is used to build the semantic space, and finally our label transfer procedures. \subsection{Visual Features} We use a deep convolutional neural network pre-trained on ImageNet~\cite{deng2009imagenet} with the VGG-Net architecture presented in \cite{simonyan2014very} (using 16 layers)\footnote{In our preliminary experiments we found that this configuration gives the best results on all our datasets, although other networks gave similar results.}. We use the activations of the last fully connected layer as image features. Such a representation proved to be good for several visual recognition and classification tasks \cite{razavian2014cnn,girshick2014rich}. Given an image $I_i$, we first warp it to $224\times224$ in order to fit the network architecture and subtract the training-image mean. We use this normalized image to extract the activations of the first fully connected layer. Let $\phi^V({I_i})$ be the extracted feature of $I_i$. We use the ArcCosine kernel: \begin{equation} \label{eq:kernel_visual} K^V_n(\phi^V(I_i), \phi^V(I_j)) = \frac{1}{\pi} ||\phi^V(I_i)||^n ||\phi^V(I_j)||^n J_n(\theta) \end{equation} where $J_n$ is defined according to the selected order of the kernel. Following \cite{cho2009kernel}, we set $n=2$, which gives: \begin{equation} J_2(\theta) = 3 \sin \theta \cos \theta + (\pi - \theta)(1 + 2 \cos^2 \theta) \end{equation} where $\theta$ is the angle between the inputs $\phi^V(I_i)$, $\phi^V(I_j)$. This kernel provides a representation that is better suited to neural network activations and gives better results. We also tried other kernels such as linear and radial basis function, obtaining a slightly inferior performance ($\sim$1\%).
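For concreteness, a minimal NumPy sketch of the order-2 ArcCosine kernel of Eq.~\ref{eq:kernel_visual} is given below; it assumes the CNN activations are stored row-wise and have nonzero norm.

\begin{verbatim}
import numpy as np

def arccos_kernel_order2(X, Y):
    """Order n=2 ArcCosine kernel between row-wise feature matrices
    X (n x d) and Y (m x d), following Eq. (kernel_visual) with J_2."""
    nx = np.linalg.norm(X, axis=1)          # ||phi(I_i)||
    ny = np.linalg.norm(Y, axis=1)
    cos = np.clip((X @ Y.T) / np.outer(nx, ny), -1.0, 1.0)
    theta = np.arccos(cos)
    J2 = (3.0 * np.sin(theta) * np.cos(theta)
          + (np.pi - theta) * (1.0 + 2.0 * cos ** 2))
    return np.outer(nx ** 2, ny ** 2) * J2 / np.pi
\end{verbatim}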
\subsection{Textual Features} Depending on how labels are generated, \emph{i.e.} expert labels or user-generated tags, we should use different approaches. While expert labels can be trusted, user-generated tags are noisy and require a more robust representation. \begin{figure*} \centering \subfigure[CNN Features]{\label{fig:kcca:a}\includegraphics[width=0.5\columnwidth]{CNN-Tsne.jpg}} \hspace{2pt} \subfigure[KCCA + Expert Labels]{\label{fig:kcca:b}\includegraphics[width=0.5\columnwidth]{KCCA-GT-Tsne.jpg}} \subfigure[KCCA + User Tags]{\label{fig:kcca:c}\includegraphics[width=0.5\columnwidth]{KCCA-Tags-Tsne-Rotated.jpg}} \caption{t-SNE visualization of images on MIRFlickr-25K with different features. Each color corresponds to a different label.} \label{fig:kcca} \end{figure*} \subsubsection{Expert Labels}\label{sec:expertlabels} For expert labels, we use simple binary indicator vectors as textual features. Let $D$ be the vocabulary size, \emph{i.e.} the number of labels used for annotation. We map the label set of a particular image $I_i$ to a $D$-dimensional feature vector $\phi^T(I_i)=[w^i_1,\cdots,w^i_D]$, where $w^i_k$ is $1$ if the image has been annotated with the corresponding $k$-th label $l_k$, and $0$ otherwise. This results in a highly sparse representation. Then we use a linear kernel, which corresponds to counting the number of labels in common between two images: \begin{equation} K^T(\phi^T(I_i), \phi^T(I_j)) = \sum_{k=1}^D w^i_k w^j_k. \end{equation} The basic idea is that we are considering the co-occurrences of labels in order to measure the similarity between two images. Nonetheless, this representation models each label independently of the others. It has been shown in previous works that exploiting semantic relations by weighting each label differently can improve performance \cite{johnson-2015,hu-2015}. Therefore, we explore two textual kernels that consider semantic relations between labels: an ontology-based textual kernel with bag-of-words \cite{shawe2004kernel} and one that exploits the more recent continuous word vector representation \cite{mikolov-2013}. For the bag-of-words semantic kernel, the idea is to weight each label in a linear kernel by using a similarity matrix $S \in \mathbb{R}^{D \times D}$ as: \begin{equation} K^T(\phi^T(I_i), \phi^T(I_j)) = \phi^T(I_i) S \phi^T(I_j)^\intercal. \end{equation} We set the elements of $S$ as the Lin similarity \cite{lin1998information} between each pair of labels, using WordNet. This measure has been used successfully in several works to suggest similar labels (see \cite{arxiv2015-li}). Regarding the continuous word vector kernel, Mikolov \textit{et al.}~ \cite{mikolov-2013} recently showed that it is possible to learn a word representation from a large-scale corpus in an unsupervised way. The learned word vector features were shown to capture semantics in the form of linguistic regularities in several applications \cite{frome2013devise,murthy-2015}. Given the learned representation $\zeta(l_k) \in \mathbb{R}^P$ of a label $l_k$, we represent the set of labels of an image $I_i$ using average pooling \begin{equation} \phi^T(I_i) = \frac{1}{N} \sum_{k=1}^D w^i_k \cdot \zeta(l_k), \end{equation} where $N = \sum_{k=1}^D w^i_k$ is the number of labels of $I_i$. Finally, we apply a linear kernel to this representation: \begin{equation} K^T(\phi^T(I_i), \phi^T(I_j)) = \phi^T(I_i) \phi^T(I_j)^\intercal. \end{equation} We compare the performance obtained with these three textual representations in Sect. \ref{sec:results_txtkernels}.
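The three textual kernels above are straightforward to compute from the binary label matrix; the sketch below assumes a precomputed Lin-similarity matrix and pretrained word vectors, both of which are external inputs.

\begin{verbatim}
import numpy as np

# T: (n, D) binary label indicators; S: (D, D) Lin similarities from
# WordNet (assumed precomputed); Z: (D, P) word vectors (assumed
# pretrained, e.g. with word2vec).

def linear_label_kernel(Ti, Tj):
    return Ti @ Tj.T                      # counts shared labels

def semantic_bow_kernel(Ti, Tj, S):
    return Ti @ S @ Tj.T                  # Lin-weighted co-occurrences

def word2vec_kernel(Ti, Tj, Z):
    # average-pool the word vectors of each image's labels (N = #labels)
    Pi = Ti @ Z / np.maximum(Ti.sum(1, keepdims=True), 1)
    Pj = Tj @ Z / np.maximum(Tj.sum(1, keepdims=True), 1)
    return Pi @ Pj.T                      # linear kernel on pooled vectors
\end{verbatim}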
\subsubsection{Denoising User-generated Tags} For user-generated tags, we should first reduce the labeling noise. To this end, we perform a ``pre-propagation'' step based on visual similarity. The purpose of this tag denoising step is two-fold: first, we need to improve the quality of the tags of each training image in order to learn a proper embedding; second, we need to cope with the sparsity of user tags. For the first issue, our assumption is that, by gathering a neighborhood of visually similar images, the more frequent tags will fade out noisy tags in favor of content-related ones. Regarding the sparsity issue, images are usually labeled with few tags and in extreme cases they can have no tags at all. For this reason, the visual information is the most reliable information we can exploit. Thus, we shall obtain a cleaner tag feature vector $\hat{\phi}^T(I_i) = [\hat{w}_{i,1},\cdots,\hat{w}_{i,D}]$ and then compute the textual kernel $K^T$. We start from the representation $\phi^T(I_i)=[w^i_1,\cdots,w^i_D]$, where $w^i_k$ is $1$ if the image $I_i$ has been annotated with the corresponding tag $t_k$, and $0$ otherwise. For each image $I_i$ we consider the $R=100$ most similar images, according to the visual kernel $K^V$ (the same pre-computed in Eq. \ref{eq:kernel_visual}), and compute the new tag vector: \begin{equation} \hat{\phi}^T(I_i) = \frac{\sum_{k=1}^R x_k \phi^T(I_k)}{\sum_{k=1}^R x_k} \end{equation} where $x_k=\exp(-\frac{||\phi^V(I_i) - \phi^V(I_k)||^2}{\sigma})$ is an exponentially decreasing weight computed from image similarities. We set $\sigma$ to the mean of the distances. This improved tag vector can be seen as an approximation of the probability mass function of tags among the nearest-neighbor images. We use the exp-$\chi^2$ kernel: \begin{equation} K^T(\hat{\phi}^T(I_i),\hat{\phi}^T(I_j))= \exp\left(-\frac{1}{2C}\sum_{k=1}^{D}{\frac{(\hat{w}_{i,k}-\hat{w}_{j,k})^2}{(\hat{w}_{i,k}+\hat{w}_{j,k})}}\right) \end{equation} where $C$ is set to the mean of the $\chi^2$ distances. We demonstrate in Sect. \ref{sec:results_tags} that this pre-propagation step is essential to learn the semantic embedding properly, as clearly shown by the results reported in Table \ref{tab:denoising_mirflickr}. \subsection{Kernel Canonical Correlation Analysis}\label{sec:kcca} Given two views of the data, such as the ones provided by visual and textual features, we can construct a common multimodal representation. We first briefly describe CCA and then explain its kernel extension, KCCA. CCA seeks to utilize data consisting of paired views to simultaneously find projections from each feature space such that the correlation between the projected representations is maximized. More formally, given $N$ training pairs of visual and textual features $\{(\phi^V(I_1),\phi^T(I_1)),\dots,(\phi^V(I_N),\phi^T(I_N))\}$, the goal is to simultaneously find directions $z_V^*$ and $z_T^*$ that maximize the correlation of the projections of $\phi^V$ onto $z_V^*$ and $\phi^T$ onto $z_T^*$. This is expressed as: \begin{align} z_V^*,z_T^* = \arg\max_{z_V,z_T}\frac{\mathrm{E}[ \langle \phi^V,z_V\rangle \langle \phi^T,z_T\rangle ]}{\sqrt{\mathrm{E}[ \langle \phi^V,z_V\rangle^2] \mathrm{E}[\langle \phi^T,z_T\rangle^2 ]}} \nonumber \\= \arg\max_{z_V,z_T}\frac{z_V^\intercal C_{vt} z_T}{\sqrt{z_V^\intercal C_{vv} z_V z_T^\intercal C_{tt} z_T}} \end{align} where $ \mathrm{E} [\cdot] $ denotes the empirical expectation, while $C_{vv}$ and $C_{tt}$ respectively denote the auto-covariance matrices for $\phi^V$ and $\phi^T$, and $C_{vt}$ denotes the between-sets covariance matrix. The CCA algorithm can only model linear relationships. As a result, KCCA has been introduced to allow projecting the data into a higher-dimensional feature space by using the kernel trick \cite{hardoon-2004}. Thus, the problem is now to search for solutions of $z_V^*$ and $z_T^*$ that lie in the span of the $N$ training instances $\phi^V(I_i)$ and $\phi^T(I_i)$: \begin{align} z_V^* = \sum_{i=1}^N\alpha_{i}\phi^V(I_i), \qquad z_T^* = \sum_{i=1}^N\beta_{i}\phi^T(I_i). \end{align} The objective of KCCA is to identify the weights $\alpha,\beta \in \mathbb{R}^N$ that maximize: \begin{equation} \alpha^*,\beta^* = \arg\max_{\alpha,\beta}\frac{\alpha^\intercal K^V K^T \beta}{\sqrt{\alpha^\intercal (K^V)^2 \alpha \beta^\intercal (K^T)^2 \beta}} \end{equation} where $K^V$ and $K^T$ denote the $N \times N$ kernel matrices over a sample of $N$ pairs. As shown by Hardoon \textit{et al.}~ \cite{hardoon-2004}, learning should be regularized in order to avoid trivial solutions.
Hence, we penalize the norms of the projection vectors and obtain the generalized eigenvalue problem: \begin{equation} \label{eq:kcca_regularization} (K^V + \kappa I)^{-1}K^T(K^T + \kappa I)^{-1}K^V\alpha = \lambda^2 \alpha \end{equation} where $\kappa \in [0, 1]$. The top $M$ eigenvectors of this problem yield bases $A=\left[\alpha_1\dots\alpha_M\right]$ and $B=\left[\beta_1\dots\beta_M\right]$ that we use to compute the semantic projections of training and test kernels. For each pair $(\alpha_j, \beta_j)$ of the given bases, the corresponding eigenvalue $r_j$ measures the correlation between projected input pairs. A higher $r_j$ corresponds to a higher correlation; thus it is convenient to give more weight to the most correlated dimensions. According to this principle, we obtain the final features as: \begin{equation} \label{eq:semantic_space} \psi(I) = (K^V A) R \end{equation} where $R = \text{diag}([r_1, \ldots, r_M])$. Note that $\psi$ has no dependency on the textual space. Thus, projecting new test images requires only their visual features $\phi^V$, making our approach suitable for automatic image annotation. In Figure \ref{fig:kcca} we show t-SNE embeddings \cite{jmlr2014-maaten} of the CNN features and their projection into the semantic space. These plots qualitatively show that KCCA improves the separation of the classes, both in the case of expert labels and of user-generated tags. This leads to a more accurate manifold reconstruction and, as our experiments will confirm, a significant improvement in performance.
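A dense linear-algebra sketch of this procedure is given below; it is meant only to make Eqs.~\ref{eq:kcca_regularization} and \ref{eq:semantic_space} concrete, whereas a production implementation would use the partial Gram-Schmidt (incomplete Cholesky) approximation of Hardoon \textit{et al.}~ \cite{hardoon-2004} for scalability.

\begin{verbatim}
import numpy as np
from scipy.linalg import eig, solve

def kcca_fit(Kv, Kt, kappa=0.1, M=128):
    """Solve Eq. (kcca_regularization) for the visual weights A and
    the correlations r. Kv, Kt: (N, N) training kernel matrices."""
    N = Kv.shape[0]
    I = np.eye(N)
    L = solve(Kv + kappa * I, Kt)        # (Kv + kI)^-1 Kt
    Rm = solve(Kt + kappa * I, Kv)       # (Kt + kI)^-1 Kv
    w, V = eig(L @ Rm)                   # eigenvalues lambda^2, vectors alpha
    top = np.argsort(-w.real)[:M]
    A = V[:, top].real                   # bases [alpha_1 ... alpha_M]
    r = np.sqrt(np.clip(w.real[top], 0.0, None))
    return A, r

def kcca_project(Kv_rows, A, r):
    """Eq. (semantic_space): psi = (K^V A) R; visual-only at test time.
    Kv_rows: (n_test, N) kernel values against the training images."""
    return (Kv_rows @ A) * r             # column-wise scaling = diag(r)
\end{verbatim}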
\subsection{Label Transfer}\label{sec:kcca-annot} \begin{figure} \centering \subfigure[Baseline]{\label{fig:neighbors:a}\includegraphics[width=0.46\columnwidth]{nn_example_cnn.jpg}} \hspace{2pt} \subfigure[Our Method]{\label{fig:neighbors:b}\includegraphics[width=0.46\columnwidth]{nn_example_kcca.jpg}} \caption{Nearest neighbors found with the baseline representation (a) and with our proposed method (b) for a water image (first highlighted in blue in both figures) from the MIRFlickr-25K dataset. Training images with ground-truth label \textit{water} are highlighted with a green border. Nearest neighbors are sorted by decreasing similarity.} \label{fig:neighbors} \end{figure} The constructed semantic space ensures that images that are similar in the visual or in the textual space also have similar features. This property is especially useful for the class of nearest-neighbor methods, since they rely on the intuition that similar images share common labels. We show examples of this property in Figure \ref{fig:neighbors}, where we compare the neighbors retrieved for the same query using the baseline visual features and the semantic space features from our method. The query, depicted in a blue box, is an image of water where green and red lights produce a fascinating visual effect. The other images are the most similar images retrieved in one of the two settings. We draw a green box around images that carry the correct label ``water''. We see that the neighbors retrieved in the baseline space share some visual similarity: they mostly have green and red colors, and some line or dotted patterns that mimic the query image. However, only one image is really about water. Our method, instead, successfully retrieves 8 of 11 images with the label water, even if they are quite dissimilar in the visual space. Indeed, it is impossible with the images in Figure \ref{fig:neighbors:a} to obtain a meaningful neighborhood, since the correct label ``water'' is not frequent enough to be relevant in the final label ranking. A quantitative characterization of this behavior can be obtained by comparing the sets of labels of images in the neighborhood of a test image with the correct labels of the image itself. We run an experiment on NUS-WIDE, measuring this similarity using the Jaccard similarity. Specifically, for each image $\hat{x}$ of the test set, we retrieve the $K$ most similar images $\{x_1, x_2, \ldots, x_K\}$ using the visual features and then compute the mean Jaccard similarity between their sets of labels as:$$\frac{1}{K} \sum_{i=1}^K J(\hat{\mathcal{Y}}, \mathcal{Y}_i) = \frac{1}{K} \sum_{i=1}^K \frac{|\hat{\mathcal{Y}} \cap \mathcal{Y}_i|}{|\hat{\mathcal{Y}}| + |\mathcal{Y}_i| - |\hat{\mathcal{Y}} \cap \mathcal{Y}_i|},$$ where $\hat{\mathcal{Y}}$ and $\mathcal{Y}_i$ are, respectively, the sets of labels of $\hat x$ and $x_i$. We compute this measure for each test image and average it over the test set into a final similarity index, as reported in Fig.~\ref{fig:jaccard}. \begin{figure} \centering \includegraphics[width=\columnwidth]{mean_jaccard-crop.pdf}\vspace{-5pt} \caption{Mean Jaccard similarity between the label set of a test image and the label sets of images in the neighborhood built using visual and KCCA features, as the neighborhood size varies.}\label{fig:jaccard} \end{figure} The higher Jaccard similarity yielded by KCCA features with respect to the baseline visual features shows that the neighbors retrieved using KCCA have a label distribution which is closer to that of the query. Following this key idea, we have used four nearest-neighbor voting algorithms in our semantic space in order to automatically annotate images. Nevertheless, we expect that other general classes of learning algorithms may also take advantage of the semantic space. To this end, we also consider the off-the-shelf SVM classifier. Given an image and a vocabulary of labels, each algorithm performs automatic image annotation by applying a particular \emph{relevance function} \cite{arxiv2015-li}, as defined in the following. \medskip\subsubsection{Nearest-Neighbor Voting}\label{nn-vote} The most straightforward approach is to project the test image onto the semantic space, and then identify its $K$ nearest neighbors. Here we rank the vocabulary labels according to their frequency in the retrieval set. Thus, the relevance function is defined as: \begin{equation} f_{KNN}(I,t) := k_t \end{equation} where $k_t$ is the number of images labeled as $t$ in the neighborhood of $I$. \medskip\subsubsection{Tag Relevance}\label{nn-li} Li \textit{et al.}~ \cite{xli-2009} proposed a relevance measure based on the consideration that if several people label visually similar images using the same labels, then these labels are more likely to reflect objective aspects of the visual content. Following this idea, it can be assumed that, given a query image, the more frequently a tag occurs in the neighbor set, the more relevant it might be. However, some frequently occurring labels are unlikely to be relevant to the majority of images. To account for this fact, the proposed tag relevance measurement takes into account both the number of images with tag $t$ in the visual neighborhood of $I$ (namely $k_t$) and in the entire collection: \begin{equation} f_{TagVote}(I, t) := k_t - {K} \frac{n_t}{|\mathcal{S}|}\label{eq:tagvote} \end{equation} where $n_t$ is the number of images labeled with $t$ in the entire collection $\mathcal{S}$ and $K$ is the number of neighbors retrieved.
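A compact implementation of the TagVote relevance of Eq.~\ref{eq:tagvote} in the semantic space might look as follows; the cosine similarity and the neighborhood size are our own illustrative choices.

\begin{verbatim}
import numpy as np

def tagvote_scores(psi_test, psi_train, T_train, K=100):
    """f_TagVote (Eq. tagvote) computed for every test image and label.
    psi_*: KCCA projections; T_train: (N, D) binary label matrix."""
    a = psi_test / np.linalg.norm(psi_test, axis=1, keepdims=True)
    b = psi_train / np.linalg.norm(psi_train, axis=1, keepdims=True)
    nn = np.argsort(-(a @ b.T), axis=1)[:, :K]    # K nearest neighbors
    k_t = T_train[nn].sum(axis=1)                 # label counts k_t
    prior = K * T_train.sum(axis=0) / T_train.shape[0]  # K * n_t / |S|
    return k_t - prior          # rank labels of each image by this score
\end{verbatim}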
\medskip\subsubsection{TagProp}\label{nn-tagprop} Guillaumin \textit{et al.}~ \cite{guillaumin-2009} proposed an image annotation algorithm whose main idea is to learn a weighted nearest-neighbor model, automatically finding the optimal metric that maximizes the likelihood of a probabilistic model. The method can learn rank-based or distance-based weights: \begin{equation} f_{TagProp}(I, t) := \sum_j^K \pi_j \cdot \mathcal{I}(I_j,t) \end{equation} where $K$ is the number of neighbors retrieved, $\mathcal{I}$ is the indicator function that returns 1 if $I_j$ is labeled with $t$, and 0 otherwise; $\pi_j$ is a learned weight that accounts for the importance of the $j$-th neighbor $I_j$. In addition, the model can be extended with a logistic per-tag model to promote rare labels and suppress the frequent ones. \medskip\subsubsection{2PKNN}\label{nn-2pknn} Verma and Jawahar \cite{2pknn-2012} formulated the problem in a probabilistic framework and proposed a two-phase approach: given a test image, a first phase is employed to construct a balanced neighborhood. Then, a second phase uses image distances to perform the actual estimation of the tag relevance. Given a test image $I$ and a vocabulary of $D$ labels, the first phase collects a set of neighborhoods $\mathcal{N}(I)$ composed, for each label $t$ in the vocabulary, of the $M$ nearest training images annotated with $t$. In the second phase, the balanced neighborhood is used to estimate the relevance of tag $t$ to $I$: \begin{equation} f_{2PKNN}(I, t) := \sum_{I_j\in \mathcal{N}(I)} \exp(-d(I, I_j)) \cdot \mathcal{I}(I_j, t) \end{equation} where $d(I,I_j)$ is a distance function between images $I$ and $I_j$. Since the distance function is parametrized with a trainable weight for each dimension, the algorithm presented in \cite{2pknn-2012} also performs metric learning, similarly to TagProp (we refer to the complete algorithm as 2PKNN-ML). We only consider the version without metric learning, since our implementation of 2PKNN-ML performs worse than 2PKNN. \medskip\subsubsection{SVM}\label{svm} For each label, a binary linear SVM classifier is trained using L2-regularized least-squares regression, similarly to \cite{verbeek-2010}. Independently of the source of labels, be it expert labels or user tags, the images with the label are treated as positive samples and the others as negative samples. To efficiently train our classifiers we use stochastic gradient descent (SGD). The relevance function is thus: \begin{equation} \label{eq:svm-af} f_{SVM}(I,t) := b + \langle w_t, \psi(I) \rangle, \end{equation} where $w_t$ are the weights learned for label $t$ and $b$ is the intercept.
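As a sketch of the per-label training described above, one can use scikit-learn's SGD-based regressor with a squared loss and L2 penalty (assuming scikit-learn $\geq$ 1.0 for the loss name; all hyperparameters here are illustrative):

\begin{verbatim}
import numpy as np
from sklearn.linear_model import SGDRegressor

def train_label_models(psi_train, T_train, alpha=1e-4):
    """One L2-regularized least-squares model per label, fit with SGD
    on +/-1 targets (positives carry the label, negatives do not)."""
    models = []
    for t in range(T_train.shape[1]):
        y = 2.0 * T_train[:, t] - 1.0
        reg = SGDRegressor(loss="squared_error", penalty="l2",
                           alpha=alpha, max_iter=1000, tol=1e-4)
        models.append(reg.fit(psi_train, y))
    return models

def svm_relevance(models, psi_test):
    """f_SVM(I, t) = b + <w_t, psi(I)> for every label t."""
    return np.stack([m.predict(psi_test) for m in models], axis=1)
\end{verbatim}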
\begin{table}[t] \centering \caption{Datasets Statistics.} \label{tab:datasets} \resizebox{0.45\textwidth}{!}{ \begin{tabular}{cccccc} \toprule & & & & \textbf{Expert} & \textbf{User}\\ \textbf{Dataset} & \textbf{Images} & \textbf{Labels} & \textbf{Tags} & \textbf{Labels} & \textbf{Tags}\\ \midrule IAPR-TC12 & 19,627 & 291 & - & \checkmark & - \\ ESP-GAME & 20,770 & 268 & - & \checkmark & - \\ MIRFlickr-25k & 25,000 & 18 & 1,386 & \checkmark & \checkmark \\ NUS-WIDE & 269,648 & 81 & 5,018 & \checkmark & \checkmark \\ \bottomrule \end{tabular} } \end{table} \medskip\textbf{ESP-GAME.}~The ESP-GAME dataset \cite{espgame} was built through an online game. Two players, not communicating with each other, describe images through labels and obtain points when they agree on the same terms. Since the image is the only medium the players see, they are pushed to propose visually meaningful labels. Following previous work, we used the same split of \cite{guillaumin-2009}, consisting of $18,689$ images for training and $2,081$ for testing. There is an average of $4.68$ annotated labels per image out of $268$ total candidates. \medskip\textbf{IAPR-TC12.}~This dataset was introduced in \cite{iaprtc12} for cross-language information retrieval. It is a collection of $19,627$ images comprising natural scenes such as sports, people, animals, cities or other contemporary scenes. Like previous work, we used the same setting as in \cite{guillaumin-2009}. It consists of $17,665$ training images and $1,962$ testing images. Each image is annotated with an average of $5.7$ labels out of $291$ candidates. \medskip\textbf{MIRFlickr-25K.}~The MIRFlickr-25K dataset \cite{mirflickr} has been introduced to evaluate keyword-based image retrieval. It contains $25,000$ images downloaded from Flickr, $12,500$ images for training and the same amount for testing. For each image, the presence of $18$ labels is available both as expert labels and as user tags (we consider the same labels as in \cite{verbeek-2010}). Images are annotated with an average of $2.78$ expert labels and $8.94$ user tags. Note that tags corresponding to the expert labels are very scarce in this dataset. Besides tag annotations, EXIF information and other metadata such as GPS are available. While the ground-truth labels are exact, the user tags are weak, noisy and overly personalized. Moreover, not all of them are relevant to the image content. We used the same training and test sets as in previous work \cite{verbeek-2010}. \medskip\textbf{NUS-WIDE.}~The NUS-WIDE dataset \cite{nuswide} is composed of $269,648$ images retrieved from Flickr. Similarly to MIRFlickr, $81$ labels are provided as expert labels as well as user tags. Images are annotated with an average of $2.40$ expert labels and $8.48$ user tags. NUS-WIDE is one of the largest datasets of images collected from social media. The sparsity of labels and user tags is one of the main challenges in exploiting this dataset as a training set. Moreover, the distribution of labels is unbalanced, with a few concepts present in almost 80\% of the images: \dquote{sky}, \dquote{clouds}, \dquote{person} and \dquote{water}. Following previous work, we discard images without any expert label \cite{gong2013deep}, leaving us with $209,347$ images that we further split into $\sim$125K for training and $\sim$80K for testing, by using the split provided by the authors of the dataset. \begin{table*}[t!]
\centering \caption{Results of our method compared to the state of the art on IAPR-TC12 and ESP-GAME, using \textbf{expert labels}.} \label{tab:comparison_iapr_esp_gt} \resizebox{0.72\textwidth}{!}{ \begin{tabular}{lcccccccccc} \toprule & \multicolumn{1}{l}{} & \multicolumn{4}{c}{\textbf{IAPR-TC12}} && \multicolumn{4}{c}{\textbf{ESP-GAME}} \\ \cmidrule{3-6} \cmidrule{8-11} \multicolumn{1}{l}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{Visual Feat}} & \multicolumn{1}{c}{\textbf{MAP}} & \multicolumn{1}{c}{\textbf{Prec@5}} & \multicolumn{1}{c}{\textbf{Rec@5}} & \multicolumn{1}{c}{\textbf{N+}} && \multicolumn{1}{c}{\textbf{MAP}} & \multicolumn{1}{c}{\textbf{Prec@5}} & \multicolumn{1}{c}{\textbf{Rec@5}} & \multicolumn{1}{c}{\textbf{N+}} \\ \cmidrule{1-11} \multicolumn{11}{l}{\textbf{\emph{State of the art:}}}\\ [3pt] \multicolumn{1}{l}{MBRM \cite{feng-2004}} & \multicolumn{1}{c}{HC} & - & 24 & 23 & \multicolumn{1}{c}{223} && - & - & - & \multicolumn{1}{c}{-} \\ \multicolumn{1}{l}{JEC-15 \cite{makadia-2008}} & \multicolumn{1}{c}{HC} & - & 29 & 19 & \multicolumn{1}{c}{211} && - & - & - & \multicolumn{1}{c}{-} \\ \multicolumn{1}{l}{TagProp \cite{guillaumin-2009}} & \multicolumn{1}{c}{HC} & \textbf{40} & 46 & 35 & \multicolumn{1}{c}{266} && \textbf{28} & 39 & 27 & \multicolumn{1}{c}{239} \\ \multicolumn{1}{l}{GS \cite{zhang-2010}} & \multicolumn{1}{c}{HC} & - & 32 & 29 & \multicolumn{1}{c}{252} && - & - & - & \multicolumn{1}{c}{-} \\ \multicolumn{1}{l}{RF-opt \cite{fu-2012}} & \multicolumn{1}{c}{HC} & - & 44 & 31 & \multicolumn{1}{c}{253} && - & - & - & \multicolumn{1}{c}{-} \\ \multicolumn{1}{l}{2PKNN-ML \cite{2pknn-2012}} & \multicolumn{1}{c}{HC} & - & \textbf{54} & \textbf{37} & \multicolumn{1}{c}{\textbf{278}} && - & 53 & {27} & \multicolumn{1}{c}{{252}} \\ \multicolumn{1}{l}{KSVM-VT \cite{svmvt-2013}} & \multicolumn{1}{c}{HC} & - & 47 & 29 & \multicolumn{1}{c}{268} && - & \textbf{55} & 25 & \multicolumn{1}{c}{\textbf{259}} \\ \multicolumn{1}{l}{SKL-CRM \cite{moran-2014}} & \multicolumn{1}{c}{HC} & - & 47 & 32 & \multicolumn{1}{c}{274} && - & 41 & 26 & \multicolumn{1}{c}{248} \\ \multicolumn{1}{l}{CCA-KNN \cite{murthy-2015}} & \multicolumn{1}{c}{VGG16} & - & 41 & 34 & \multicolumn{1}{c}{273} && - & 44 & \textbf{32} & \multicolumn{1}{c}{254} \\ \multicolumn{1}{l}{RLR \cite{izadinia2015deep}} & \multicolumn{1}{c}{Alexnet} & - & 46 & \textbf{41} & \multicolumn{1}{c}{277} && - & - & - & \multicolumn{1}{c}{-} \\ \midrule \multicolumn{11}{l}{\textbf{\emph{Baselines:}}}\\ [3pt] \multicolumn{1}{l}{NNvot} & \multicolumn{1}{c}{VGG16} & 36 & 39 & 29 & \multicolumn{1}{c}{239} && 28 & 31 & 28 & \multicolumn{1}{c}{232} \\ \multicolumn{1}{l}{TagRel} & \multicolumn{1}{c}{VGG16} & 35 & 34 & 35 & \multicolumn{1}{c}{262} && 30 & 29 & 31 & \multicolumn{1}{c}{240} \\ \multicolumn{1}{l}{TagProp} & \multicolumn{1}{c}{VGG16} & 38 & 40 & 32 & \multicolumn{1}{c}{257} && 32 & 34 & 32 & \multicolumn{1}{c}{241} \\ \multicolumn{1}{l}{2PKNN} & \multicolumn{1}{c}{VGG16} & \textbf{41} & \textbf{41} & \textbf{39} & \multicolumn{1}{c}{\textbf{276}} && \textbf{36} & \textbf{43} & \textbf{36} & \multicolumn{1}{c}{\textbf{257}} \\ \multicolumn{1}{l}{SVM} & \multicolumn{1}{c}{VGG16} & 34 & 31 & 29 & \multicolumn{1}{c}{221} && 31 & 29 & 30 & \multicolumn{1}{c}{224} \\ \midrule \multicolumn{11}{l}{\textbf{\emph{Our Approach:}}}\\ [3pt] \multicolumn{1}{l}{KCCA + NNvot} & \multicolumn{1}{c}{VGG16} & 40 & 44 & 34 & \multicolumn{1}{c}{250} && 34 & 38 & 34 & \multicolumn{1}{c}{240} \\ \multicolumn{1}{l}{KCCA + TagRel} & 
\multicolumn{1}{c}{VGG16} & 40 & 41 & 37 & \multicolumn{1}{c}{259} && 35 & 33 & 37 & \multicolumn{1}{c}{249} \\ \multicolumn{1}{l}{KCCA + TagProp} & \multicolumn{1}{c}{VGG16} & 41 & 44 & 34 & \multicolumn{1}{c}{257} && 37 & 38 & 36 & \multicolumn{1}{c}{247} \\ \multicolumn{1}{l}{KCCA + 2PKNN} & \multicolumn{1}{c}{VGG16} & \textbf{43} & \textbf{49} & \textbf{38} & \multicolumn{1}{c}{\textbf{278}} && \textbf{39} & \textbf{45} & \textbf{39} & \multicolumn{1}{c}{\textbf{260}} \\ \multicolumn{1}{l}{KCCA + SVM} & \multicolumn{1}{c}{VGG16} & 41 & 44 & 35 & \multicolumn{1}{c}{252} && 37 & 38 & 37 & \multicolumn{1}{c}{251} \\ \bottomrule \end{tabular} } \end{table*} \subsection{Evaluation Protocol}\label{measures} The performance of automatic image annotation on these datasets has been measured with different metrics. Therefore, for each dataset, we carefully follow the protocols of previous work. We employ four popular metrics to assess the performance of our algorithm and compare to existing approaches. Image annotation is usually addressed by predicting a fixed number of labels, $n$, per image (e.g. $n=3$, $n=5$). We compute precision (Prec@$n$) and recall (Rec@$n$) by averaging these two metrics over all the labels. Since the number of ground-truth labels of an image may be smaller or larger than $n$, and this setup constrains us to predict exactly $n$ labels, perfect precision and recall cannot be obtained. We also report results using Mean Average Precision (MAP), which takes into account all labels for every image and evaluates the full ranking. First, we rank all test images according to the predicted relevance to compute AP for each label; then we report the mean value of AP over all labels. Finally, we report N+, the number of labels with non-zero recall. N+ is an interesting metric when the set of labels has a moderate to high cardinality; otherwise it tends to saturate easily, providing little information about a method. It has to be noted that each metric evaluates very different properties of each method. Therefore a method rarely dominates the competition on every metric. Some methods, by design, provide better recall or precision than others. For IAPR-TC12 and ESP-GAME, the standard protocol is to report Prec@5, Rec@5 and N+~\cite{duygulu-2002,makadia-2008}. For completeness we report MAP on these two datasets although, as can be seen in Table~\ref{tab:comparison_iapr_esp_gt}, only a few previous works report this metric. For MIRFlickr, considering that annotated labels are used to perform image retrieval, the few existing works report only the MAP \cite{verbeek-2010}. We also report Prec@5 and Rec@5. Considering the low cardinality of the tag vocabulary ($18$), N+ is not reported for this dataset. For NUS-WIDE, performance is usually reported either as MAP or as precision and recall. Since NUS-WIDE has a lower average number of labels per image than IAPR-TC12 and ESP-GAME, we report results with $n=3$ labels, as in \cite{gong2013deep,johnson-2015}. \subsection{Implementation Details and Baselines} In order to avoid degeneracy with non-invertible Gram matrices and to increase computational efficiency, we approximate the Gram matrices using the Partial Gram-Schmidt Orthogonalization (PGSO) algorithm provided by Hardoon \textit{et al.}~ \cite{hardoon-2004}. In all the experiments we have empirically fixed $\kappa = 0.5$ (see Eq. \ref{eq:kcca_regularization}), since it gave the best performance in early experiments on IAPR-TC12.
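At its core, PGSO is a pivoted incomplete Cholesky factorization of the kernel matrix. The following minimal sketch is our schematic reconstruction of such a factorization (names and stopping rule are ours), not the code of \cite{hardoon-2004}:

\begin{verbatim}
import numpy as np

def incomplete_cholesky(K, max_rank, tol=1e-6):
    # Pivoted incomplete Cholesky of a PSD kernel matrix K:
    # returns G of shape (n, m), m <= max_rank, with K ~= G @ G.T
    n = K.shape[0]
    G = np.zeros((n, max_rank))
    d = np.diag(K).astype(float).copy()  # residual diagonal (energy)
    for m in range(max_rank):
        i = int(np.argmax(d))            # pivot on largest residual
        if d[i] <= tol:                  # almost all energy captured
            return G[:, :m]
        G[:, m] = (K[:, i] - G[:, :m] @ G[i, :m]) / np.sqrt(d[i])
        d -= G[:, m] ** 2
    return G
\end{verbatim}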
We use approximate kernel matrices given by the PGSO algorithm, where we consider at most $4,096$ dimensions (i.e. the dimension of the semantic space). Thus, the dimensionality of $\psi(I)$ in Eq. \ref{eq:semantic_space} is $4,096$. In this case, the distance between two images is defined as the cosine distance between $\psi$ features. Since our approach is based on a semantic space built from visual data and the available labels, we consider as baselines the label transfer methods trained on the bare visual features. The distance between two images $I_q$ and $I_i$ is defined as $d(I_q, I_i) = 1-K^V (I_q, I_i)$, where $K^V$ is the visual kernel described in Eq.~\ref{eq:kernel_visual}, normalized with values in $[0,1]$. The number of nearest neighbors $K$ and the parameter $C$ of the SVM were fixed by performing a 3-fold cross-validation on the training set for each dataset. \subsection{Experiment 1: Performance with Expert Labels}\label{sec:results_gt} \begin{table}[t!] \centering \caption{Results of our method compared to the state of the art on the MIRFlickr-25K dataset, using \textbf{expert labels}.} \label{tab:comparison_mirflickr_gt} \resizebox{0.92\columnwidth}{!}{ \begin{tabular}{lcccc} \toprule & & \multicolumn{3}{c}{\textbf{MIRFlickr-25K}} \\ \cmidrule{3-5} \multicolumn{1}{l}{\textbf{Methods}} & \multicolumn{1}{c}{\textbf{Visual Feat}} & \textbf{MAP} & \textbf{Prec@5} & \multicolumn{1}{c}{\textbf{Rec@5}} \\ \midrule \multicolumn{5}{l}{\textbf{\emph{State of the art:}}}\\ [3pt] \multicolumn{1}{l}{TagProp \cite{verbeek-2010}} & \multicolumn{1}{c}{HC} & 46.5 & - & - \\ \multicolumn{1}{l}{SVM \cite{verbeek-2010}} & \multicolumn{1}{c}{HC} & 52.3 & - & - \\ \multicolumn{1}{l}{Autoencoder \cite{srivastava2012dbm}} & \multicolumn{1}{c}{HC} & 60.0 & - & - \\ \multicolumn{1}{l}{DBM \cite{srivastava2012dbm}} & \multicolumn{1}{c}{HC} & 60.9 & - & - \\ \multicolumn{1}{l}{MKL \cite{guillaumin-2010}} & \multicolumn{1}{c}{HC} & \textbf{62.3} & - & - \\ \midrule \multicolumn{5}{l}{\textbf{\emph{Baselines:}}}\\ [3pt] \multicolumn{1}{l}{NNvot \hide{\cite{makadia-2008}} } & \multicolumn{1}{c}{VGG16} & 69.9 & 44.7 & 69.2 \\ \multicolumn{1}{l}{TagRel \hide{\cite{xli-2009}}} & \multicolumn{1}{c}{VGG16} & 68.9 & 41.5 & 72.1 \\ \multicolumn{1}{l}{TagProp \hide{\cite{guillaumin-2009}} } & \multicolumn{1}{c}{VGG16} & 70.8 & \textbf{45.5} & 70.1 \\ \multicolumn{1}{l}{2PKNN \hide{\cite{2pknn-2012}} } & \multicolumn{1}{c}{VGG16} & 66.5 & 46.4 & 70.9 \\ \multicolumn{1}{l}{SVM \hide{\cite{guillaumin-2009}} } & \multicolumn{1}{c}{VGG16} & \textbf{72.7} & 38.8 & \textbf{72.4} \\ \midrule \multicolumn{5}{l}{\textbf{\emph{Our Approach:}}}\\ [3pt] \multicolumn{1}{l}{KCCA + NNvot} & \multicolumn{1}{c}{VGG16} & 72.9 & 46.1 & 73.1 \\ \multicolumn{1}{l}{KCCA + TagRel} & \multicolumn{1}{c}{VGG16} & 70.7 & 45.2 & 72.6 \\ \multicolumn{1}{l}{KCCA + TagProp} & \multicolumn{1}{c}{VGG16} & 73.0 & 44.6 & 74.1 \\ \multicolumn{1}{l}{KCCA + 2PKNN} & \multicolumn{1}{c}{VGG16} & 67.7 & \textbf{47.3} & 74.6 \\ \multicolumn{1}{l}{KCCA + SVM} & \multicolumn{1}{c}{VGG16} & \textbf{73.0} & 38.9 & \textbf{75.0} \\ \bottomrule \end{tabular} } \end{table} \begin{table}[t!]
\centering \caption{Results on the NUS-WIDE dataset using \textbf{expert labels}.} \label{tab:comparison_nuswide_gt} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{lcccc} \toprule & & \multicolumn{3}{c}{\textbf{NUS-WIDE}} \\ \cmidrule{3-5} \multicolumn{1}{l}{\textbf{Methods}} & \multicolumn{1}{c}{\textbf{Visual Feat}} & \textbf{MAP} & \textbf{Prec@3} & \multicolumn{1}{c}{\textbf{Rec@3}} \\ \midrule \multicolumn{5}{l}{\textbf{\emph{State of the art:}}}\\ [3pt] \multicolumn{1}{l}{CNN + SoftMax \cite{gong2013deep}} & \multicolumn{1}{c}{RGB} & - & 31.7 & \multicolumn{1}{c}{31.2} \\ \multicolumn{1}{l}{CNN + WARP \cite{gong2013deep}} & \multicolumn{1}{c}{RGB} & - & 31.7 & \multicolumn{1}{c}{35.6} \\ \multicolumn{1}{l}{CNN + NNvot \cite{johnson-2015}} & \multicolumn{1}{c}{BLVC} & 44.0 & \textbf{44.4} & \multicolumn{1}{c}{30.8} \\ \multicolumn{1}{l}{CNN + logistic \cite{johnson-2015}} & \multicolumn{1}{c}{BLVC} & \textbf{45.8} & 40.9 & \multicolumn{1}{c}{\textbf{43.1}} \\ \multicolumn{1}{l}{MIE Ranking \cite{ren-2015}} & \multicolumn{1}{c}{BLVC} & - & 37.9 & \multicolumn{1}{c}{38.9} \\ \multicolumn{1}{l}{MIE Full Model \cite{ren-2015}} & \multicolumn{1}{c}{BLVC} & - & 37.8 & \multicolumn{1}{c}{40.2} \\ \midrule \multicolumn{5}{l}{\textbf{\emph{Baselines:}}}\\ [3pt] \multicolumn{1}{l}{NNvot \hide{\cite{makadia-2008}}} & \multicolumn{1}{c}{VGG16} & 49.3 & 39.6 & \multicolumn{1}{c}{44.0} \\ \multicolumn{1}{l}{TagRel \hide{\cite{xli-2009}} } & \multicolumn{1}{c}{VGG16} & 49.2 & 32.1 & \multicolumn{1}{c}{50.3} \\ \multicolumn{1}{l}{TagProp \hide{\cite{guillaumin-2009}} } & \multicolumn{1}{c}{VGG16} & \textbf{50.9} & \textbf{41.3} & \multicolumn{1}{c}{44.6} \\ \multicolumn{1}{l}{2PKNN \hide{\cite{2pknn-2012}} } & \multicolumn{1}{c}{VGG16} & 48.0 & 39.7 & \multicolumn{1}{c}{52.2} \\ \multicolumn{1}{l}{SVM \hide{\cite{guillaumin-2009}} } & \multicolumn{1}{c}{VGG16} & 50.2 & 34.6 & \multicolumn{1}{c}{\textbf{60.6}} \\ \midrule \multicolumn{5}{l}{\textbf{\emph{Our Approach:}}}\\ [3pt] \multicolumn{1}{l}{KCCA + NNvot} & \multicolumn{1}{c}{VGG16} & 51.7 & 40.2 & \multicolumn{1}{c}{50.5} \\ \multicolumn{1}{l}{KCCA + TagRel} & \multicolumn{1}{c}{VGG16} & 51.4 & 34.4 & \multicolumn{1}{c}{\textbf{57.2}} \\ \multicolumn{1}{l}{KCCA + TagProp} & \multicolumn{1}{c}{VGG16} & \textbf{52.2} & 45.2 & \multicolumn{1}{c}{49.2} \\ \multicolumn{1}{l}{KCCA + 2PKNN} & \multicolumn{1}{c}{VGG16} & 50.7 & \textbf{53.0} & \multicolumn{1}{c}{47.0} \\ \multicolumn{1}{l}{KCCA + SVM} & \multicolumn{1}{c}{VGG16} & 51.8 & 43.3 & \multicolumn{1}{c}{48.4} \\ \bottomrule \end{tabular} } \end{table} As a first experiment we analyze the performance of our method when the semantic space is built from expert labels. In Tables \ref{tab:comparison_iapr_esp_gt}, \ref{tab:comparison_mirflickr_gt} and \ref{tab:comparison_nuswide_gt} we report the performance of the state of the art and of the five methods, run both in the visual feature space and in the semantic space. Our best result is superior to the state of the art on NUS-WIDE and MIRFlickr-25K, while it is comparable to more tailored methods on IAPR-TC12 and ESP-GAME. Table \ref{tab:comparison_iapr_esp_gt} shows the performance of the state-of-the-art methods, the baselines and our approach on IAPR-TC12 and ESP-GAME. We first note that the majority of previous works report results with 15 handcrafted features (HC) \cite{guillaumin-2009}, while we use the more recent VGG16 CNN activations, the same as \cite{murthy-2015}.
By exploiting this feature, simple nearest-neighbor methods like NNvot and TagRel reach higher Prec@5 and Rec@5 than the similar JEC-15 \cite{makadia-2008}, which uses a combination of HC features. Our baseline TagProp has a slightly inferior performance to that reported in \cite{guillaumin-2009}, probably due to the lower number of learnable parameters, since it uses one single feature versus $15$. Comparing our approach with the baselines, we observe that all metrics consistently report higher values when label transfer is applied in the semantic space. This suggests that classes in the semantic space are easier to separate. We reach our best result on IAPR-TC12 and ESP-GAME with KCCA + 2PKNN, still inferior to 2PKNN-ML \cite{2pknn-2012}, which additionally applies metric learning. \begin{figure}[!t] \centering \includegraphics[width=0.85\columnwidth]{diffs_map-crop} \caption{MAP difference of the five methods trained with KCCA on ESP-GAME, IAPR-TC12, MIRFlickr-25k and NUS-WIDE. KCCA is trained using \textbf{expert labels}.} \label{fig:diffs_performance} \end{figure} Table \ref{tab:comparison_mirflickr_gt} shows our results on the MIRFlickr-25k dataset. Again, we first note that simply switching from HC features to VGG16 yields a large boost in MAP. Focusing on the TagProp and SVM baselines, which are directly comparable with previous work \cite{verbeek-2010}, MAP increases from $46.5$ to $70.8$ and from $52.3$ to $72.7$, respectively. This is consistent with recent literature suggesting that CNN activations are far more powerful than handcrafted features. We also report the experimental results of \cite{srivastava2012dbm}, obtained using autoencoders and multimodal Deep Boltzmann Machines, and of \cite{guillaumin-2010} (semi-supervised multimodal kernel learning), which were the previous state-of-the-art results on this dataset. Applying our KCCA-based framework to the five methods results in a general improvement of all metrics, especially for the four nearest-neighbor schemes. The best MAP is obtained by KCCA $+$ SVM, which reaches a score of $73.0$, higher than the best baseline. Interestingly, KCCA $+$ NNvot and KCCA $+$ TagProp reach scores of $72.9$ and $73.0$, also higher than the best baseline (SVM, $72.7$). We can observe that our semantic space improves both Rec@5 and Prec@5: specifically, an average increase of 3.1 in Rec@5 and of 2.1 in Prec@5 can be measured over all five baseline methods. We report in Table \ref{tab:comparison_nuswide_gt} the results of the comparison on the large-scale NUS-WIDE dataset. Previous works used BLVC (Caffe reference model) features (e.g. \cite{johnson-2015}) while we use VGG16, but this does not produce significant differences in performance. Moreover, Gong~\textit{et al.}~ \cite{gong2013deep} attempted to train the network from scratch, obtaining inferior performance with respect to features pre-trained on ImageNet \cite{johnson-2015,gong2013deep}. A higher Rec@3 score is observed in all our experiments with respect to the state of the art. This suggests that our approach is able to cope with unbalanced label distributions, and improves the recall of rare labels. KCCA $+$ TagProp is the overall best method on this dataset, even superior to SVM, which is commonly recognized as better than kNN-based methods for classification. In summary, our framework is always able to improve performance on all datasets with every metric. This is an important result since each particular metric captures different properties.
On smaller datasets, such as IAPR-TC12 and ESP-GAME, metric-learning-based approaches~\cite{2pknn-2012,guillaumin-2009} take more advantage from using 15 different but weaker features than from a single, stronger one, as we do. However, on larger and more challenging datasets, such as MIRFlickr and NUS-WIDE, this effect is largely moderated. Finally, Figure \ref{fig:diffs_performance} shows the difference in MAP between the semantic space and the baseline, for all five methods. We highlight that the improvement is generally higher on IAPR-TC12 and ESP-GAME, where fewer training examples are available. In particular, SVM has the largest gain, followed by the simpler NNvot and TagRel. This might be because these methods suffer on rare concepts due to sample insufficiency. \subsection{Experiment 2: Performance with User Tags}\label{sec:results_tags} \begin{table*}[!ht] \centering \caption{Results on the MIRFlickr-25k and NUS-WIDE datasets using \textbf{user tags}.} \label{tab:comparison_user_tags} \resizebox{0.72\textwidth}{!}{ \begin{tabular}{lcccccccc} \toprule & & \multicolumn{3}{c}{\textbf{MIRFlickr-25k}} && \multicolumn{3}{c}{\textbf{NUS-WIDE}} \\ \cmidrule{3-5} \cmidrule{7-9} \multicolumn{1}{l}{\textbf{Methods}} & \multicolumn{1}{c}{\textbf{Visual Feat}} & \textbf{MAP} & \textbf{Prec@5} & \textbf{Rec@5} && \textbf{MAP} & \textbf{Prec@5} & \textbf{Rec@5} \\ \midrule \multicolumn{9}{l}{\textbf{\emph{State of the art:}}}\\ [3pt] \multicolumn{1}{l}{SVM v \cite{verbeek-2010}} & \multicolumn{1}{c}{HC} & 35.4 & - & - && - & - & - \\ \multicolumn{1}{l}{SVM v+t \cite{verbeek-2010}} & \multicolumn{1}{c}{HC} & 37.9 & - & - && - & - & - \\ \multicolumn{1}{l}{TagProp \cite{verbeek-2010}} & \multicolumn{1}{c}{HC} & 38.4 & - & - && - & - & - \\ \multicolumn{1}{l}{FisherBoxes \cite{uricchio-2015}} & \multicolumn{1}{c}{VGG128} & \textbf{54.8} & - & - && \textbf{39.7} & - & - \\ \midrule \multicolumn{5}{l}{\textbf{\emph{Baselines:}}}\\ [3pt] \multicolumn{1}{l}{NNVot \hide{\cite{makadia-2008}}} & \multicolumn{1}{c}{VGG16} & \textbf{59.3} & 34.2 & 67.1 & & \textbf{43.1} & 30.1 & \multicolumn{1}{c}{46.3} \\ \multicolumn{1}{l}{TagRel \hide{\cite{xli-2009}} } & \multicolumn{1}{c}{VGG16} & 59.2 & 34.8 & \textbf{68.0} & & 42.5 & 27.9 & \multicolumn{1}{c}{49.7} \\ \multicolumn{1}{l}{TagProp \hide{\cite{guillaumin-2009}} } & \multicolumn{1}{c}{VGG16} & 58.1 & 33.5 & 66.0 & & 42.8 & 28.4 & \multicolumn{1}{c}{\textbf{50.2}} \\ \multicolumn{1}{l}{2PKNN \hide{\cite{2pknn-2012}} } & \multicolumn{1}{c}{VGG16} & 51.4 & 35.9 & 67.1 & & 41.2 & \textbf{37.5} & \multicolumn{1}{c}{43.7} \\ \multicolumn{1}{l}{SVM \hide{\cite{guillaumin-2009}} } & \multicolumn{1}{c}{VGG16} & 43.8 & \textbf{40.0} & 50.8 & & 35.5 & 30.4 & \multicolumn{1}{c}{45.2} \\ \midrule \multicolumn{5}{l}{\textbf{\emph{Our Approach:}}}\\ [3pt] \multicolumn{1}{l}{KCCA + NNvot} & \multicolumn{1}{c}{VGG16} & \textbf{60.6} & 35.4 & \textbf{68.8} & & \textbf{43.7} & 36.3 & \multicolumn{1}{c}{48.0} \\ \multicolumn{1}{l}{KCCA + TagRel} & \multicolumn{1}{c}{VGG16} & 59.8 & 37.2 & 68.5 & & 43.5 & 29.0 & \multicolumn{1}{c}{\textbf{55.1}}\\ \multicolumn{1}{l}{KCCA + TagProp} & \multicolumn{1}{c}{VGG16} & 59.7 & 33.6 & 67.4 & & 42.9 & 29.3 & \multicolumn{1}{c}{51.3} \\ \multicolumn{1}{l}{KCCA + 2PKNN} & \multicolumn{1}{c}{VGG16} & 56.8 & \textbf{42.9} & 65.4 & & 42.0 & \textbf{56.9} & \multicolumn{1}{c}{34.0} \\ \multicolumn{1}{l}{KCCA + SVM} & \multicolumn{1}{c}{VGG16} & 47.1 & 37.5 & 56.5 & & 41.6 & 37.9 & \multicolumn{1}{c}{47.6} \\ \bottomrule \end{tabular} } 
\end{table*} We now turn our attention to the more difficult setting of noisy user tags. Instead of using expert labels, we rely on user tags as training labels and repeat the same experiments of Section \ref{sec:results_gt}. Only MIRFlickr-25k and NUS-WIDE provide user tags; therefore, we report results on these two datasets. Table \ref{tab:comparison_user_tags} shows the performance of the state of the art, the baselines and our approach on MIRFlickr-25k and NUS-WIDE. As previously noted, changing the features from HC to VGG16 has a strong positive impact. Comparing the methods run in the semantic space to the baselines run on the bare visual features, we observe that every metric is generally improved. FisherBoxes \cite{uricchio-2015} uses improved features with the same TagProp algorithm as our baseline. Since our TagProp MAP is higher than that of FisherBoxes, this suggests that VGG16 features alone are more powerful than combinations of VGG128 boxes. SVM is inferior to nearest-neighbor techniques in terms of MAP, while having comparable precision and recall. Consistently with the expert-label results, 2PKNN performs poorly on NUS-WIDE. In its first phase, few images per label are selected, which reduces its ability to address the high visual variability of images with frequent labels. We also note that all scores are lower than those reported with expert labels in Table \ref{tab:comparison_mirflickr_gt} and Table \ref{tab:comparison_nuswide_gt}. In particular, the SVM MAP is the most hampered. This is expected given the noise in user tags, and was also noted in previous work \cite{verbeek-2010}. In Figure \ref{fig:diffs_performance_tags} we report the relative MAP difference between our technique and the baselines for the five methods. We observe that the largest gains are obtained with 2PKNN and SVM. We believe this is due to the fact that 2PKNN and SVM have numerous learning parameters, which are likely to generate complex boundaries in the presence of label noise. In contrast, the other three schemes have few or no parameters at all. This suggests that features in the semantic space also have some robustness to tag noise. \begin{figure} \centering \includegraphics[width=0.85\columnwidth]{diffs_map_tags-crop} \caption{MAP relative difference of the five methods trained with KCCA on ESP-GAME, IAPR-TC12, MIRFlickr-25k and NUS-WIDE. KCCA is trained with \textbf{user tags}.} \label{fig:diffs_performance_tags} \end{figure} \begin{table}[!ht] \centering \caption{Ablation study on the denoising method. Results are in terms of MAP.} \label{tab:denoising_mirflickr} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{lccc} \toprule & \multicolumn{3}{c}{\textbf{MIRFlickr-25k}} \\ \cmidrule{2-4} \multicolumn{1}{l}{\textbf{Methods}} & \multicolumn{1}{c}{\textbf{Baseline}} & \textbf{KCCA - NoPreProp} & \textbf{KCCA} \\ \midrule \multicolumn{1}{l}{NNvot \cite{makadia-2008} } & 59.3 & 56.2 & \textbf{60.6} \\ \multicolumn{1}{l}{TagRel \cite{xli-2009} } & 59.2 & 54.5 & \textbf{59.8} \\ \multicolumn{1}{l}{TagProp \cite{guillaumin-2009} } & 58.1 & 54.9 & \textbf{59.7} \\ \multicolumn{1}{l}{2PKNN \cite{2pknn-2012} } & 51.4 & 42.9 & \textbf{56.8} \\ \multicolumn{1}{l}{SVM \cite{guillaumin-2009} } & 43.8 & 41.3 & \textbf{47.1} \\ \bottomrule \end{tabular} } \end{table} We believe that such robustness is partially due to the denoising algorithm. To confirm this, we perform an ablation study on MIRFlickr-25k with the same settings as before, except that we omit the pre-propagation step.
We report in Table \ref{tab:denoising_mirflickr} the MAP of three different cases: (i) the baseline methods (Baseline); (ii) our approach without the pre-propagation step (KCCA - NoPreProp); (iii) our full approach (KCCA). We observe that skipping the denoising step leads to an inferior MAP, even lower than the baseline. This confirms that, in the presence of excessive sparsity like that of MIRFlickr-25k, KCCA alone is unable to improve the visual features. \subsection{Experiment 3: Performance with different Textual Features}\label{sec:results_txtkernels} In this section, we compare the performance of the three proposed textual kernels, defined in Section~\ref{sec:expertlabels}, on expert labels: a bag-of-words linear kernel (\emph{Linear}), a semantic ontology-based kernel (\emph{Ontology}) and a continuous word vector kernel (\emph{Word2Vec}). We perform an experiment with the same settings as Experiment 1 (Section~\ref{sec:results_gt}), but with the Linear kernel swapped for the Ontology or Word2Vec kernel. For the Ontology kernel we use WordNet as the underlying ontology, while for Word2Vec we employ word vectors pre-trained on news articles. In Table \ref{tab:comparison_txt_kernels} we report results on the two largest datasets, MIRFlickr-25k and NUS-WIDE; similar results were obtained on ESP-GAME and IAPR-TC12. \begin{table*}[!ht] \centering \caption{Results of our method with the Linear, Ontology and Word2Vec textual kernels on MIRFlickr-25k and NUS-WIDE, using \textbf{expert labels}.} \label{tab:comparison_txt_kernels} \resizebox{0.72\textwidth}{!}{ \begin{tabular}{lcccccccc} \toprule & \multicolumn{1}{l}{} & \multicolumn{3}{c}{\textbf{MIRFlickr-25k}} && \multicolumn{3}{c}{\textbf{NUS-WIDE}} \\ \cmidrule{3-5} \cmidrule{7-9} \multicolumn{1}{l}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{Textual Kernel}} & \multicolumn{1}{c}{\textbf{MAP}} & \multicolumn{1}{c}{\textbf{Prec@5}} & \multicolumn{1}{c}{\textbf{Rec@5}} && \multicolumn{1}{c}{\textbf{MAP}} & \multicolumn{1}{c}{\textbf{Prec@5}} & \multicolumn{1}{c}{\textbf{Rec@5}} \\ \cmidrule{1-9} \multicolumn{9}{l}{\textbf{\emph{Baselines:}}}\\ [3pt] \multicolumn{1}{l}{NNvot} & \multicolumn{1}{c}{-} & 69.9 & 44.7 & 69.2 && 49.3 & 39.6 & \multicolumn{1}{c}{44.0} \\ \multicolumn{1}{l}{TagRel} & \multicolumn{1}{c}{-} & 68.9 & 41.5 & 72.1 && 49.2 & 32.1 & \multicolumn{1}{c}{50.3} \\ \multicolumn{1}{l}{TagProp} & \multicolumn{1}{c}{-} & 70.8 & 45.5 & 70.1 && \textbf{50.9} & \textbf{41.3} & \multicolumn{1}{c}{44.6} \\ \multicolumn{1}{l}{2PKNN} & \multicolumn{1}{c}{-} & 66.5 & \textbf{46.4} & 70.9 && 48.0 & 39.7 & \multicolumn{1}{c}{52.2} \\ \multicolumn{1}{l}{SVM} & \multicolumn{1}{c}{-} & \textbf{72.7} & 38.8 & \textbf{72.4} && 50.2 & 34.6 & \multicolumn{1}{c}{\textbf{60.6}} \\ \midrule \multicolumn{9}{l}{\textbf{\emph{Our Approach:}}}\\ [3pt] \multicolumn{1}{l}{KCCA + NNvot} & \multicolumn{1}{c}{Linear} & \textbf{72.9} & 46.1 & 73.1 && \textbf{51.7} & 40.2 & \multicolumn{1}{c}{\textbf{50.5}} \\ \multicolumn{1}{l}{KCCA + NNvot} & \multicolumn{1}{c}{Ontology} & 72.5 & 46.6 & 72.3 && 51.2 & \textbf{46.7} & 46.3 \\ \multicolumn{1}{l}{KCCA + NNvot} & \multicolumn{1}{c}{Word2Vec} & 72.3 & \textbf{46.9} & \textbf{73.4} && 50.6 & 40.8 & 50.1 \\ \cmidrule{1-9} \multicolumn{1}{l}{KCCA + TagRel} & \multicolumn{1}{c}{Linear} & 70.7 & 45.2 & 72.6 && \textbf{51.4} & 34.4 & \multicolumn{1}{c}{\textbf{57.2}} \\ \multicolumn{1}{l}{KCCA + TagRel} & \multicolumn{1}{c}{Ontology} & 70.6 & \textbf{47.4} & 73.9 && 49.5 & \textbf{35.9} & 54.3 \\ 
\multicolumn{1}{l}{KCCA + TagRel} & \multicolumn{1}{c}{Word2Vec} & \textbf{70.9} & 47.2 & \textbf{74.2} && 49.8 & 34.9 & 57.0 \\ \cmidrule{1-9} \multicolumn{1}{l}{KCCA + TagProp} & \multicolumn{1}{c}{Linear} & \textbf{73.0} & 44.6 & \textbf{74.1} && \textbf{52.2} & \textbf{45.2} & \multicolumn{1}{c}{49.2} \\ \multicolumn{1}{l}{KCCA + TagProp} & \multicolumn{1}{c}{Ontology} & 72.7 & 44.6 & 73.7 && 51.7 & \textbf{45.2} & 48.1 \\ \multicolumn{1}{l}{KCCA + TagProp} & \multicolumn{1}{c}{Word2Vec} & 72.9 & \textbf{45.3} & 73.8 && 51.6 & 40.9 & \textbf{50.6} \\ \cmidrule{1-9} \multicolumn{1}{l}{KCCA + 2PKNN} & \multicolumn{1}{c}{Linear} & \textbf{67.7} & \textbf{47.3} & 74.6 && \textbf{50.7} & \textbf{53.0} & \multicolumn{1}{c}{47.0} \\ \multicolumn{1}{l}{KCCA + 2PKNN} & \multicolumn{1}{c}{Ontology} & 65.7 & 44.1 & \textbf{76.1} && 49.2 & 46.3 & 51.1 \\ \multicolumn{1}{l}{KCCA + 2PKNN} & \multicolumn{1}{c}{Word2Vec} & 66.2 & 44.2 & 75.7 && 48.9 & 47.3 & \textbf{51.4} \\ \cmidrule{1-9} \multicolumn{1}{l}{KCCA + SVM} & \multicolumn{1}{c}{Linear} & \textbf{73.0} & 38.9 & \textbf{75.0} && \textbf{51.8} & 43.3 & \multicolumn{1}{c}{\textbf{48.4}} \\ \multicolumn{1}{l}{KCCA + SVM} & \multicolumn{1}{c}{Ontology} & 71.4 & 39.3 & 73.0 && 51.4 & \textbf{44.7} & 46.7 \\ \multicolumn{1}{l}{KCCA + SVM} & \multicolumn{1}{c}{Word2Vec} & 71.8 & \textbf{39.5} & 74.1 && 50.2 & 42.7 & 47.7 \\ \bottomrule \end{tabular} } \end{table*} We observe that all methods perform better than the baseline when using our approach, regardless of the textual kernel. Some combinations of kernels and methods favor one metric over the others, although the Linear kernel almost always has the best MAP. Nevertheless, these slight differences in performance do not suggest the superiority of one kernel over the others. We believe that further studies on how to integrate label relations in KCCA are required, leaving open the problem of choosing the best textual kernel for KCCA. \subsection{Experiment 4: Varying the Size of the Neighborhood} \begin{figure} \centering \includegraphics[width=0.31\columnwidth]{espgame_0_5_0_2_2-crop} \includegraphics[width=0.31\columnwidth]{iaprtc12_0_5_0_2_2-crop}\\ \includegraphics[width=0.31\columnwidth]{mirflickr_0_5_0_2_2-crop} \includegraphics[width=0.31\columnwidth]{nuswide_0_5_0_2_2-crop} \caption{MAP of NN-voting, TagRel and TagProp trained with KCCA on ESP-GAME, IAPR-TC12, MIRFlickr-25k and NUS-WIDE, varying the number of nearest neighbors. KCCA is trained with \textbf{expert labels}. Dashed lines represent baseline methods.} \label{fig:MAP_nearest_neighbor_gt} \end{figure} \begin{figure} \centering \includegraphics[width=0.31\columnwidth]{mirflickr_2_5_0_2_2-crop} \includegraphics[width=0.31\columnwidth]{nuswide_2_5_0_2_2-crop} \caption{MAP evaluation for NN-voting, TagRel and TagProp trained with KCCA on MIRFlickr-25k and NUS-WIDE, varying the number of nearest neighbors. KCCA is trained with \textbf{user tags}. Dashed lines represent baseline methods.} \label{fig:MAP_nearest_neighbor_tags} \end{figure} Nearest-neighbor methods proved to perform well in all the settings we considered. Although they are simple and do not require much training, they still depend on choosing the right number $K$ of nearest neighbors. Thus, we evaluate how $K$ affects the performance of both our approach and the baselines, as sketched below. Since SVM does not use neighbors, we only perform this evaluation on NNvot, TagRel, TagProp and 2PKNN.
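The sweep itself is straightforward; the following schematic (with hypothetical helper names, shown only to fix the protocol) records the MAP obtained for each neighborhood size:

\begin{verbatim}
def map_vs_k(annotate, ks, train_set, test_set):
    # For each neighborhood size K, annotate the test set with the
    # given label-transfer method and record the resulting MAP.
    # `annotate` and `mean_average_precision` are hypothetical helpers.
    scores = {}
    for k in ks:
        predictions = annotate(train_set, test_set, k=k)
        scores[k] = mean_average_precision(predictions, test_set.labels)
    return scores
\end{verbatim}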
We report in Figures \ref{fig:MAP_nearest_neighbor_gt} and \ref{fig:MAP_nearest_neighbor_tags} the MAP scores obtained using the expert labels and the user tags, respectively. As can be seen from both figures, the KCCA variants of the nearest-neighbor methods (solid lines) systematically achieve better MAP than the baselines, for any number of neighbors used. As expected, MAP scores are lower when using user tags (Figure \ref{fig:MAP_nearest_neighbor_tags}). Nevertheless, a gain is observed for each method with any number of neighbors selected. This again confirms that features in the semantic space are better arranged, since images with similar semantics are closer in this space. \begin{figure*} \centering \resizebox{0.85\textwidth}{!}{ \begin{tabular}{b{1cm}ccccccccc} \toprule & & \multicolumn{2}{c}{\textbf{NNVot}} && \multicolumn{2}{c}{\textbf{TagRel}} && \multicolumn{2}{c}{\textbf{TagProp}} \\ \cmidrule{3-4} \cmidrule{6-7} \cmidrule{9-10} \multicolumn{1}{c}{\textbf{Image}} & \textbf{Exp Labels} & \textbf{Baseline} & \textbf{Our} && \textbf{Baseline} & \textbf{Our} && \textbf{Baseline} & \textbf{Our} \\ \midrule \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\includegraphics[width=0.15\textwidth]{21632.jpg}\end{tabular} } & \begin{tabular}[c]{@{}c@{}}desert\\ mountain\\ range\\ salt\\ sky\end{tabular} & \begin{tabular}[c]{@{}c@{}}beach\\ cloud\\ \textbf{mountain}\\ sea\\ \textbf{sky}\end{tabular} & \begin{tabular}[c]{@{}c@{}}cloud\\ \textbf{desert}\\ landscape\\ \textbf{mountain}\\ \textbf{sky}\end{tabular} && \begin{tabular}[c]{@{}c@{}}beach\\ cloud\\ sea\\ \textbf{sky}\\ wave\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{desert}\\ lake\\ landscape\\ \textbf{mountain}\\ \textbf{salt}\end{tabular} && \begin{tabular}[c]{@{}c@{}}beach\\ cloud\\ \textbf{mountain}\\ sea\\ \textbf{sky}\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{desert}\\ man\\ \textbf{mountain}\\ \textbf{salt}\\ \textbf{sky}\end{tabular} \\ \midrule \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\includegraphics[width=0.16\textwidth]{12399.jpg}\end{tabular}} & \begin{tabular}[c]{@{}c@{}}hammock\\ man\\ woman\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{man}\\ room\\ table\\ wall\\ \textbf{woman}\end{tabular} & \begin{tabular}[c]{@{}c@{}}front\\ house\\ \textbf{man}\\ wall\\ \textbf{woman}\end{tabular} && \begin{tabular}[c]{@{}c@{}}\textbf{man}\\ room\\ table\\ wall\\ \textbf{woman}\end{tabular} & \begin{tabular}[c]{@{}c@{}}front\\ \textbf{hammock}\\ \textbf{man}\\ wall\\ \textbf{woman}\end{tabular} && \begin{tabular}[c]{@{}c@{}}bottle\\ \textbf{man}\\ people\\ table\\ \textbf{woman}\end{tabular} & \begin{tabular}[c]{@{}c@{}}front\\ \textbf{hammock}\\ \textbf{man}\\ wall\\ \textbf{woman}\end{tabular} \\ \midrule \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\includegraphics[width=0.15\textwidth]{37492.jpg}\end{tabular}} & \begin{tabular}[c]{@{}c@{}}cap \\ flag \\ hair \\ man \\ polo \\ portrait \\ shirt\end{tabular} & \begin{tabular}[c]{@{}c@{}}boy \\ girl \\ hat \\ \textbf{man} \\ sky\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{cap} \\ front \\ \textbf{man} \\ sky \\ woman \end{tabular} && \begin{tabular}[c]{@{}c@{}}boy \\ \textbf{cap} \\ girl \\ \textbf{hair} \\ hat\end{tabular} & \begin{tabular}[c]{@{}c@{}}boy \\ \textbf{cap} \\ hat \\ \textbf{man} \\ \textbf{shirt}\end{tabular} && \begin{tabular}[c]{@{}c@{}}boy \\ \textbf{cap} \\ child \\ sky \\ sweater\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{cap} \\ \textbf{man} \\ \textbf{polo} \\ \textbf{portrait} \\ \textbf{shirt}\end{tabular} \\ \midrule 
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\includegraphics[width=0.12\textwidth]{25966.jpg}\end{tabular}} & \begin{tabular}[c]{@{}c@{}}man \\ sky \\ statue \\ view\end{tabular} & \begin{tabular}[c]{@{}c@{}}building \\ front \\ people \\ \textbf{sky} \\ tower \end{tabular} & \begin{tabular}[c]{@{}c@{}}front \\ \textbf{man} \\ \textbf{sky} \\ \textbf{statue} \\ tree\end{tabular} && \begin{tabular}[c]{@{}c@{}}building \\ column \\ \textbf{sky} \\ \textbf{statue} \\ tower\end{tabular} & \begin{tabular}[c]{@{}c@{}}base \\ building \\ \textbf{sky} \\ square \\ \textbf{statue}\end{tabular} && \begin{tabular}[c]{@{}c@{}}column \\ front \\ \textbf{man} \\ \textbf{sky} \\ \textbf{statue}\end{tabular} & \begin{tabular}[c]{@{}c@{}}front \\ \textbf{man} \\ \textbf{sky} \\ \textbf{statue} \\ \textbf{view}\end{tabular} \\ \bottomrule \end{tabular} } \caption{Qualitative results of the baseline methods and of our proposed representation on IAPR-TC12. Labels are ordered according to their relevance scores.} \label{fig:qualitative} \end{figure*} \subsection{Experiment 5: Scaling by Subsampling the Training Set} \begin{figure} \centering \includegraphics[width=0.49\columnwidth]{nuswide_0_3_0_2_2_subsampling-crop.pdf} \includegraphics[width=0.49\columnwidth]{nuswide_2_3_0_2_2_subsampling-crop.pdf} \caption{\textbf{Training KCCA with a subset of data.} MAP of the five methods trained with KCCA on NUS-WIDE, varying the number of images used for training the projections, with expert labels (on the left) and user tags (on the right). Dashed lines represent baseline methods.} \label{fig:MAPgtsubsample} \end{figure} One key issue with KCCA is that it can be onerous to scale the training to millions of images. The most expensive effort is carried out in the training phase, where the projection vectors are estimated. At test time, the computational cost is negligible, since it is only given by the multiplication of the features with the estimated projection vectors. As also noted by Hardoon \textit{et al.}~ \cite{hardoon-2004}, big training sets with large kernel matrices can lead to computational problems. Two main issues arise: \emph{i}) the high computational cost of solving the generalized eigenvalue problem, and \emph{ii}) the memory footprint of handling large kernel matrices. For the first issue, we compute only a reduced number of dimensions of the semantic space by using partial Gram-Schmidt orthogonalization (PGSO), i.e. we solve the generalized eigenvalue problem with an incomplete Cholesky factorization of the kernel matrices. This is a reasonable approximation because the projection is built up as the span of a subset of independent directions, and only a limited amount of energy is lost in the reconstruction. For the second issue, the memory footprint increases quadratically with the number of training images. In this section we explore the possibility of using a subsample of the training set to address this problem as well. To this end, we randomly select a subset of size $M$ from the original training set, use it to train KCCA, and obtain the projections. Then we use them to project the full training set and test the methods in this approximate semantic space. We run the experiment only on NUS-WIDE since it has the highest number of images. The whole experiment is repeated with five different splits in the two settings of using expert labels or user tags.
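Schematically, each run proceeds as follows (a sketch under our own naming; \texttt{fit\_kcca} stands for the KCCA training routine and is a hypothetical helper returning the projection onto the semantic space):

\begin{verbatim}
import numpy as np

def subsampled_semantic_features(train_feats, train_tags, M, fit_kcca,
                                 seed=0):
    # Learn KCCA projections on a random subset of size M, then use
    # them to project the FULL training set into the semantic space.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(train_feats), size=M, replace=False)
    project = fit_kcca(train_feats[idx], train_tags[idx])
    return project(train_feats)
\end{verbatim}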
Note that this setting is different from the one used in Sect.~\ref{sec:results_gt} and Sect.~\ref{sec:results_tags} for NUS-WIDE, where we used the split provided by the authors of the dataset. Figure \ref{fig:MAPgtsubsample} shows the MAP scores obtained with a subset of the training data. We report results by increasing $M$ from $100$ to the full training set size (with exponential steps). With more training data, we expect the quality of the projections to improve. Both with expert labels and with user tags, the more the training data, the better the projections obtained. We note that a minimum quantity of data is required to obtain a performance higher than the baseline; this corresponds to the point in the figure in which the corresponding dashed and solid lines intersect each other. The specific amount of training data required depends on the method and on the quality of the annotations. When expert labels are available, NNvot and TagRel obtain an improvement even with a very small number of training images. In contrast, TagProp requires more data to gain MAP because of its rank learning phase. This means that our approach can provide some improvements even when very few labeled images are available, but more data may be needed with advanced nearest-neighbor schemes. Considering the scenario of user tags, the three methods show similar performance with similar numbers of training images. This suggests that, differently from expert labels, the noise in user tags hampers performance, and more data is needed to reliably estimate good projections. \begin{figure} \centering \includegraphics[width=.85\columnwidth]{timings-crop.pdf} \caption{Timing of our approach varying the number of samples employed for learning KCCA. We report separately the time for visual kernel, textual kernel and KCCA computation. The time is dominated by the visual kernel computation.\label{fig:timing}} \end{figure} We evaluate the additional computational cost of our approach by timing the run of KCCA on NUS-WIDE in our sub-sampling experiment. It can be noted from Fig.~\ref{fig:timing} that the overall computation is dominated by the visual kernel computation. Since we approximate the kernel matrices with PGSO at a fixed rank, the running time required to compute the KCCA projections can only grow up to a fixed maximum value, independently of the number of samples. \subsection{Qualitative Analysis} Figure~\ref{fig:qualitative} shows four examples of annotations produced by our method on the IAPR-TC12 dataset. It can be seen that TagProp and TagRel perform better for both the baseline representation and the proposed semantic space. Thanks to the integration of labels into the semantic space, our technique allows nearest-neighbor methods to distinguish between visually similar but semantically different images. Look for instance at the first example: a salt desert. Baseline approaches wrongly predict that this might be a \dquote{beach} image, since the salt visually resembles sand. In contrast, our semantic space dismisses beach images and allows NN methods to find samples with \dquote{desert} and \dquote{salt}, thus obtaining a correct image labeling. Moreover, our method can also deal with information that was missing in the visual space. A good example is given by the second picture shown in Figure~\ref{fig:qualitative}. This image depicts two people and a \dquote{hammock}.
Since the label \dquote{hammock} is not among the $1$K concepts used to train the VGG16 network, similar hammock images are difficult to retrieve for the baseline methods. In contrast, our method has integrated this missing information into the semantic space, allowing TagRel and TagProp to find semantically similar images and predict the presence of the hammock correctly. The third and fourth images demonstrate that our technique is able to bring images with fine-grained labels closer together. For instance, the third image is a close-up of a person wearing several clearly visible garments. Baseline methods correctly find easy concepts like \dquote{man}, \dquote{cap} or \dquote{hair}, while label transfer methods operating in the semantic space can also predict more specific labels such as \dquote{shirt}, \dquote{polo} and \dquote{portrait}. Finally, the fourth image depicts a statue portrayed from below, against the blue sky. This image is correctly annotated with the difficult labels \dquote{man} and \dquote{view} only by TagProp when trained on the semantic space. \section{Conclusion}\label{sec:conclusions} This paper presents a novel automatic image annotation framework based on KCCA. Our work shows that it is indeed useful to integrate textual and visual information into a semantic space that is able to preserve correlation with the respective original features. Our method does not require the textual information at test time, and it is therefore suitable for label prediction on unlabeled images. We additionally propose a label denoising algorithm that makes it possible to exploit user tags in place of expert labels. This scenario is of extreme interest given the abundance of images with user tags that can be extracted from social media. Finally, we show that semantic projections can also be learned with a subset of the training set, making it possible to obtain some benefits even on large-scale datasets. We report extensive experimental results on all the classic automatic image annotation datasets, as well as on more recent datasets collected from Flickr. Our experiments show that label transfer in the semantic space allows consistent improvement over standard schemes that rely only on visual features. All the best performing image annotation methods have been shown to be able to exploit the proposed embedding. We believe that our framework will provide a strong baseline to compare and better understand future automatic image annotation algorithms. \section*{Acknowledgments} This work was supported by the MIUR project No. CTN01\_ 00034\_23154\_SMST. L. Ballan was supported by the EU's FP7 under the grant agreement No.~623930 (Marie Curie IOF). \section*{References} \bibliographystyle{model1-num-names}
\section{Functional setting}\label{sec-setting} In this section we introduce the functional setting to rewrite system \eqref{Navier} in abstract form. \subsection{The functional spaces} Let $D$ be a bounded domain in $\mathbb R^{d}$ ($d\ge 2$) with smooth boundary $\partial D$. For $1\le p<\infty$ we denote \[ L_{\sigma}^{p}= \text{ the closure in } [L^{p}(D)]^d \text{ of }\{ u\in [C_{0}^{\infty}(D)]^d, \ div \ u=0\} \] and \[ G^{p}= \{ \nabla q, q\in W^{1,p}(D) \}. \] We then have the following Helmholtz decomposition \[ [L^{p}(D)]^d= L_{\sigma}^{p} \oplus G^{p}, \] where the notation $\oplus$ stands for the direct sum. In the case $p=2$ the sum above reduces to the orthogonal decomposition and $L_{\sigma}^2$ is a separable Hilbert space, whose scalar product is denoted by $(\cdot,\cdot)$. \subsection{The Stokes operator} Let us recall some results on the Stokes operator (see, e.g., \cite{Soh}). Now we fix $p$. Let $P$ be the continuous projection from $[L^{p}(D)]^d$ onto $L_{\sigma}^{p}$ and let $\Delta$ be the Laplace operator in $L^p$ with zero boundary condition, so that $D(\Delta)=\{u \in [W^{2,p}(D)]^d: u|_{\partial D}=0\}$. Now, we define the Stokes operator $A$ in $L_{\sigma}^{p}$ by $A=-P\Delta$ with domain $H^{2,p}:=L_{\sigma}^{p}\cap D(\Delta)$. The operator $-A$ generates a bounded analytic semigroup $\{S(t)\}_{t\ge 0}$ of class $C_{0}$ in $L_\sigma^p$. In particular, for $p=2$ we set $H^2=H^{2,2}$ and the Stokes operator $A : H^{2}\rightarrow L_{\sigma}^2$ is an isomorphism; the inverse operator $A^{-1}$ is self-adjoint and compact in $L_{\sigma}^{2}$. Thus, there exists an orthonormal basis $\{e_{j}\}_{j=1}^\infty\subset H^{2}$ of $L^{2}_{\sigma}$ consisting of the eigenfunctions of $A^{-1}$ and such that the sequence of eigenvalues $\{\lambda_{j}^{-1}\}_{j=1}^\infty$, with $\lambda_{j}>0$, converges to zero as $j \to \infty$. In particular, $\lambda_j$ behaves as $j^{\frac 2d}$ for $j \to \infty$. Then, $\{e_{j}\}_j$ is also the sequence of eigenfunctions of $A$ corresponding to the eigenvalues $\{\lambda_{j}\}_j$. Moreover $A$ is a positive, selfadjoint and densely defined operator in $L_{\sigma}^{2}$. Using the spectral decomposition, we construct positive and negative fractional power operators $A^{\beta}$, $\beta\in \mathbb R$. For $\beta\ge 0 $ we have the following representation for $(A^{\beta}, D(A^{\beta}))$ as a linear operator in $L_{\sigma}^{2}$ \[ D(A^{\beta})= \big\{ v\in L_{\sigma}^{2}:\ \|v\|^2_{D(A^\beta)}= \sum_{j=1}^\infty \lambda_{j}^{2\beta} |(v,e_{j})|^{2} < \infty \big\}, \] \[ A^\beta v= \sum_{j=1}^\infty \lambda_{j}^\beta (v,e_{j}) e_{j}. \] For negative exponents, we get the dual space: $D(A^{-\beta})=(D(A^\beta))'$. We set $H^s=D(A^{\frac s2})$. Let us point out that the operator $A^{-\beta}$ is a Hilbert-Schmidt operator in $L^2_\sigma$ for any $\beta>\frac d4$; indeed, denoting by $\|\cdot \|_{\gamma(L^2_\sigma,L^2_\sigma)} $ the Hilbert-Schmidt norm, we have \[ \|A^{-\beta}\|_{\gamma(L^2_\sigma,L^2_\sigma)}^2 := \sum_{j=1}^\infty \|A^{-\beta}e_j\|^2_{L^2_\sigma} = \sum_{j=1}^\infty \lambda_j^{-2\beta} \sim \sum_{j=1}^\infty j^{-2\beta\frac 2d} \] and the latter series is convergent for $2\beta\frac 2d>1$. 
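For concreteness (this is simply the threshold above evaluated in the dimensions of interest, and not part of the original computation): in $d=2$ the operator $A^{-\beta}$ is Hilbert-Schmidt if and only if $\beta>\frac 12$, in $d=3$ if and only if $\beta>\frac 34$, while at the borderline value $\beta=\frac d4$ the series reduces to the divergent harmonic series,
\[
\|A^{-\frac d4}\|^2_{\gamma(L^2_\sigma,L^2_\sigma)} \sim \sum_{j=1}^\infty j^{-1}=\infty .
\]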
We also recall (see, e.g., \cite{W}) that for any $t>0$ we have \begin{equation}\label{stimaSpr} \| S(t) u \|_{{L^p_\sigma}}\le \frac{M}{t^{\frac{d}{2}(\frac 1r-\frac 1p) } } \| u \|_{L^r_\sigma} \ \text{ for } \ 1< r\le p < \infty \end{equation} \begin{equation}\label{stimaASp} \|A^{\alpha} S(t) u \|_{L^r_\sigma}\le \frac{M}{t^{\alpha}} \| u \|_{L^r_\sigma} \ \text{ for } \ 1< r < \infty, \, \alpha >0 \end{equation} for any $u \in L^r_\sigma$, where $M$ denotes different constants depending on the parameters. Moreover, we have the following result on the Hilbert-Schmidt norm of the semigroup, which we shall use later on. What is important is the behaviour for $t$ close to $0$, let us say for $t \in (0,1)$. \begin{lemma}\label{qsmall} We have \[ \|S(t)\|_{\gamma(H^{\frac d2};L^2_\sigma)} \le M(2-\ln t) \qquad\forall t\in (0,1) \] and for $q< \frac d2$ \begin{equation}\label{qd2} \|S(t)\|_{\gamma(H^q;L^2_\sigma)} \le \frac M{t^{\frac d4-\frac q2}} \qquad\forall t>0 \end{equation} \end{lemma} \begin{proof} The Hilbert-Schmidt norm of the semigroup can be computed explicitly. Recall that $\{\frac {e_j}{\lambda_j^{q/2}}\}_j$ is an orthonormal basis of $H^q$. Thus \[ \|S(t)\|_{\gamma(H^q,L^2_\sigma)}^2 = \sum_{j=1}^\infty \|S(t) \frac {e_j}{\lambda_j^{q/2}}\|_{L^2_\sigma}^2 = \sum_{j=1}^\infty \frac 1{\lambda_j^{q}} \|e^{-\lambda_j t} e_j\|_{L^2_\sigma}^2 = \sum_{j=1}^\infty \frac {e^{-2\lambda_j t}}{\lambda_j^{q}}. \] Since $\lambda_j \sim j^{\frac 2d}$ as $j\to \infty$, we estimate \[ \|S(t)\|_{\gamma(H^q,L^2_\sigma)}^2\le C \sum_{j=1}^\infty \frac {e^{-2j^{\frac 2d} t}}{j^{\frac{2q}d}}. \] Therefore we analyse the series $s_q(t)=\displaystyle \sum_{j=1}^\infty \frac {e^{-2j^{\frac 2d} t}}{j^{\frac{2q}d}}$. Let us consider different values of the parameter $q$. \\$\bullet$ When $q=\frac d2$ the series becomes \[ s_{\frac d2}(t)=\sum_{j=1}^\infty j^{-1} e^{-2j^{\frac 2d} t} = e^{-2t}+\sum_{j=2}^\infty j^{-1} e^{-2j^{\frac 2d} t} \le e^{-2t}+\int_1^\infty \frac 1x e^{-2x^{\frac 2d} t} dx. \] The integral is computed by means of the change of variable $x=y^d t^{-\frac d2}$, so as to get \[ \int_1^\infty \frac 1x e^{-2x^{\frac 2d} t} \text{d}x=\int_{\sqrt t}^\infty \frac dy e^{-2y^2}\text{d}y . \] Hence, for $t \in (0,1)$ we get \[ s_{\frac d2}(t)\le e^{-2t}+d \int_{\sqrt t}^1 \frac 1y \text{d}y +\int_1^\infty e^{-2y^2}\text{d}y \le 1-\frac d2 \ln t+C. \] $\bullet$ When $0\le q < \frac d2$, the sequence of addends is monotone decreasing and therefore we estimate the series by an integral: \[ \sum_{j=1}^\infty \frac {e^{-2j^{\frac 2d} t}}{j^{\frac{2q}d}} \le \int_0^\infty \frac {e^{-2x^{\frac 2d} t}}{x^{\frac{2q}d}}dx . \] Again, by the change of variable $x=y^d t^{-\frac d2}$ we calculate the integral and get \[ \sum_{j=1}^\infty \frac {e^{-2j^{\frac 2d} t}}{j^{\frac{2q}d}}\le t^{q-\frac d2} d\int_0^\infty y^{d-2q-1} e^{-2y^2}\text{d}y . \] The latter integral is convergent since $d-2q-1>-1$ by the assumption that $q<\frac d2$. Hence we get the bound \eqref{qd2} for the Hilbert-Schmidt norm of $S(t)$. \\$\bullet$ When $q<0$ the sequence of addends in the series $s_q(t)$ is first increasing and then decreasing. Let us notice that $t\mapsto s_q(t)$ (defined for $t>0$) is a continuous decreasing positive function converging to $0$ as $t \to +\infty$. Hence, to estimate it for $t\to 0^+$ it is enough to get an estimate over a sequence $t_n\to 0^+$. 
We choose this sequence in such a way that the maximal value of the function $a_t(x):= x^{-\frac {2q}d} e^{-2x^{\frac 2d} t}$ (defined for $x>0$) is attained at the integer value $n=(-\frac q{2t_n})^{\frac d2} \in \mathbb N$. In this way we can estimate the series by means of an integral: \[\begin{split} s_q(t_n)\equiv \sum_{j=1}^\infty a_{t_n}(j)&\le \int_1^n a_{t_n}(x)\text{d}x+ a_{t_n}(n)+ \int_n^\infty a_{t_n}(x)\text{d}x \\& = \int_1^\infty x^{-\frac{ 2q}d} e^{-2x^{\frac 2d} t_n} \text{d}x+n^{-\frac {2q}d} e^{-2 n^{\frac 2d} t_n} \\& \le d\Big(\int_0^\infty y^{d-1-2q}e^{-2y^2}\text{d}y\Big)\ t_n ^{q-\frac d2} +C_q t_n^q \end{split}\] where we have computed the integral by means of the change of variable $x=y^d t_n^{-\frac d2}$ as before. Hence, we get that \[ s_q(t_n)\le \tilde C t_n^{q-\frac d2} \;\text{ for any }n \] and therefore for $t \to 0^+$ \[ s_q(t)\le\frac C{t^{\frac d2-q}}. \] This proves \eqref{qd2} when $q<0$. \end{proof} \subsection{The bilinear term} Let us define the nonlinear term by $B(u,v)=-P[(u\cdot \nabla)v]$. Following \cite{Soh}, this is first defined on smooth divergence-free vector fields with compact support and one proves by integration by parts that \begin{equation} \langle B(u,v), z\rangle=-\langle B(u,z), v\rangle, \qquad \langle B(u,v), v\rangle=0 \end{equation} Then one specifies that $B$ is continuous with respect to suitable topologies. In particular, the H\"older inequality provides \[ \|B(u,v)\|_{H^{-1}}\le \|u\|_{L^4_\sigma} \|v\|_{L^4_\sigma} \] and thus $B:L^4_\sigma\times L^4_\sigma \to H^{-1}$ is continuous. Since $u$ is a divergence-free vector field, we also have the representation $B(u,v)= -P[ \text{div}\ (u\otimes v )]$ which will be useful later on (again this holds for smooth entries and is then extended to suitably regular $u$ and $v$). For short we shall write $B(u)$ instead of $B(u,u)$. \subsection{Fractional Brownian motion} First, we recall that a real fractional Brownian motion (fBm) $\{B^{{\mathcal H}}(t)\}_{t\in [0,T]}$ with Hurst parameter ${\mathcal H}\in(0,1)$ is a centered Gaussian process with covariance function \begin{equation} \label{cov} \mathbb{E}[ B ^{{\mathcal H}}(t) B ^{{\mathcal H}}(s)] :=R_{{\mathcal H}}(t,s)= \frac{1}{2} ( t^{2{\mathcal H}} + s^{2{\mathcal H}}- \vert t-s\vert ^{2{\mathcal H}}), \hskip0.5cm s,t \in [0,T]. \end{equation} For more details see \cite{N}. We are interested in the infinite dimensional fractional Brownian motion. We consider the separable Hilbert space $L^{2}_{\sigma}$ and its orthonormal basis $\{e_j\}_{j=1}^\infty$. Then we define \begin{equation}\label{cfBm} W^{{\mathcal H}}(t)=\sum_{j=1}^{\infty} e_{j} \beta_{j}^{{\mathcal H}}(t) \end{equation} where $\{ \beta_{j}^{{\mathcal H}} \}_j$ is a family of independent real fBm's defined on a complete filtered probability space $(\Omega, \mathbb F, \{\mathbb F_t\}_t, \mathbb P)$. This is the so-called $L^{2}_{\sigma}$-cylindrical fractional Brownian motion. Moreover, we consider a linear operator $\Phi$ defined in $L^2_\sigma$. Notice that the series in \eqref{cfBm} does not converge in $L^2_\sigma$. We need to define integrals of the form $\int_{0}^{t} S(t-s) \Phi dW^{{\mathcal H}}(s)$, appearing in the definition of mild solution; we will analyze this stochastic integral in Section \ref{sec-linear}. 
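Two elementary consequences of \eqref{cov} are worth recording as a sanity check (they are standard, see \cite{N}): the increments satisfy
\[
\mathbb{E}\big[ |B^{{\mathcal H}}(t)-B^{{\mathcal H}}(s)|^2 \big] = R_{{\mathcal H}}(t,t)+R_{{\mathcal H}}(s,s)-2R_{{\mathcal H}}(t,s)=|t-s|^{2{\mathcal H}},
\]
and for ${\mathcal H}=\frac 12$ one has $R_{\frac 12}(t,s)=\frac 12(t+s-|t-s|)=\min(s,t)$, so that \eqref{cfBm} reduces to a standard $L^2_\sigma$-cylindrical Wiener process.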
\subsection{Abstract equation} Applying the projection operator $P$ to \eqref{Navier} we get rid of the pressure term; setting $\nu=1$, equation \eqref{Navier} becomes \begin{equation}\label{eq-abst} \begin{cases} du(t) + Au(t)\ dt= B(u(t)) \ dt + \Phi \text{d} W^{{\mathcal H}}(t), & t>0 \\ u(0)=u_{0} \end{cases} \end{equation} We consider its mild solution. \begin{definition} A measurable function $u:\Omega\times [0,T]\rightarrow L_{\sigma}^{p}$ is a mild $L^p$-solution of equation \eqref{eq-abst} if \\ $\bullet$ $u \in C([0,T]; L^{p}_\sigma)$, $\mathbb P$-a.s. \\ $\bullet$ for all $t \in (0,T]$, we have \begin{equation}\label{mild} u(t)= S(t)u_{0}+ \ \int_{0}^{t} S(t-s)B(u(s)) \ ds + \int_{0}^{t} S(t-s) \Phi \text{d}W^{{\mathcal H}}(s) \end{equation} $\mathbb P$-a.s. \end{definition} \section{The linear equation}\label{sec-linear} Now we consider the linear problem associated with the Navier-Stokes equation \eqref{eq-abst}, that is \begin{equation}\label{eq-z} d z(t)+ Az(t) \ dt = \Phi dW^{{\mathcal H}}(t) \end{equation} When the initial condition is $z(0)=0$, its mild solution is the stochastic convolution \begin{equation}\label{z-proc} z(t)= \int_{0}^{t} S(t-s) \Phi \ dW^{{\mathcal H}}(s). \end{equation} To analyze its regularity we appeal to the following result. \begin{proposition}\label{pro-gen} Let $0<{\mathcal H}<1$. \\ If there exist $\lambda, \alpha \ge 0$ such that \begin{equation}\label{SPhi} \|S(t) \Phi \|_{\gamma(L^2_\sigma,L^2_\sigma)}\le \frac C{t^\lambda} \qquad\forall t>0 \end{equation} and \begin{equation} \lambda+\frac \alpha2<{\mathcal H} \end{equation} then $z$ has a version which belongs to $C([0,T];H^\alpha)$. \end{proposition} \begin{proof} This is a well-known result for ${\mathcal H}=\frac 12$. Moreover, the case ${\mathcal H}<\frac 12$ is proved in Theorem 11.11 of \cite{PDDM2006} and the case ${\mathcal H}>\frac 12$ in Corollary 3.1 of \cite{DPDM2002}, by assuming that the semigroup $\{S(t)\}_t$ is analytic. \end{proof} Now we use this result with $\alpha=d(\frac 12-\frac 1p)$ for $p>2$; by means of the Sobolev embedding $H^{d(\frac 12-\frac 1p)}(D)\subset L^p(D)$, this gives that $z$ has a version which belongs to $C([0,T];L^p_{\sigma})$. We have our regularity result for the stochastic convolution by assuming that $\Phi \in \mathcal L(L^2_\sigma,H^q)$ for some $q \in \mathbb R$, as e.g. when $\Phi= A^{-\frac q2}$. \begin{proposition}\label{pro-z} Let $0<{\mathcal H}<1$, $2<p<\infty$ and $\Phi \in \mathcal L(L^2_\sigma,H^q)$ for some $q \in \mathbb R$. If the parameters fulfil \begin{equation}\label{cond-dpq} \frac d2(1-\frac 1p)-\frac q2<{\mathcal H} \end{equation} then the process $z$ given by \eqref{z-proc} has a version which belongs to $C([0,T];H^{d(\frac 12-\frac 1p)})$. By Sobolev embedding this version is in $C([0,T];L^p_\sigma)$ too. \end{proposition} \begin{proof} According to Proposition \ref{pro-gen} we have to estimate the Hilbert-Schmidt norm of the operator $S(t) \Phi$. We recall that the product of a bounded linear operator and a Hilbert-Schmidt operator is Hilbert-Schmidt.
Bearing in mind Lemma \ref{qsmall}, when $q<\frac d2$ we get \begin{equation}\label{norm-operators2} \|S(t) \Phi \|_{\gamma(L^2_\sigma,L^2_\sigma)}\le \|S(t)\|_{\gamma(H^q,L^2_\sigma)} \|\Phi \|_{\mathcal L(L^2_\sigma,H^q)} \le \frac C{t^{\frac d4-\frac q2}} \end{equation} and when $q=\frac d2$ we get \begin{equation}\label{norm-operators3} \|S(t) \Phi \|_{\gamma(L^2_\sigma,L^2_\sigma)}\le \|S(t)\|_{\gamma(H^{\frac d2},L^2_\sigma)} \|\Phi \|_{\mathcal L(L^2_\sigma,H^{\frac d2})} \le \frac C{t^a} \end{equation} for any $a>0$ (here the constant depends also on $a$). Therefore, when $q<\frac d2$ we choose $\lambda=\frac d4-\frac q2$, $\alpha=d(\frac 12-\frac 1p)$ and condition $\lambda+\frac \alpha 2<{\mathcal H}$ becomes \eqref{cond-dpq}; when $q=\frac d2$ we choose $\lambda=a$, $\alpha=d(\frac 12-\frac 1p)$ and since $a$ is arbitrarily small we get again \eqref{cond-dpq}. Otherwise, when $q>\frac d2$ we have that $\Phi$ is a Hilbert-Schmidt operator in $L^2_\sigma$ (since $ \|\Phi \|_{\gamma(L^2_\sigma,L^2_\sigma)}\le \|A^{-\frac q2} \|_{\gamma(L^2_\sigma,L^2_\sigma)} \|A^{\frac q2}\Phi \|_{\mathcal L(L^2_\sigma,L^2_\sigma)}$) and we estimate \begin{equation}\label{norm-operators} \|S(t) \Phi \|_{\gamma(L^2_\sigma,L^2_\sigma)}\le \|S(t)\|_{\mathcal L(L^2_\sigma,L^2_\sigma)} \|\Phi \|_{\gamma(L^2_\sigma,L^2_\sigma)}\le C \end{equation} for all $t\ge 0$. Actually we can prove something more; we write $A^{\frac 12(q-\frac d2)}=A^{\varepsilon} A^{-\frac d4 -\varepsilon} A^{\frac q2}$ and for any $\varepsilon>0$ we have \[\begin{split} \|S(t) A^{\frac 12(q-\frac d2)}\Phi\|_{\gamma(L^2_\sigma,L^2_\sigma)} &\le \|A^\varepsilon S(t)\|_{\mathcal L(L^2_\sigma,L^2_\sigma)} \|A^{-\frac d4 -\varepsilon}\|_{\gamma(L^2_\sigma,L^2_\sigma)} \|A^{\frac q2}\|_{\mathcal L(H^q,L^2_\sigma)} \|\Phi\|_{\mathcal L(L^2_\sigma,H^q)} \\& \le \frac M{t^\varepsilon}. \end{split} \] According to Proposition \ref{pro-gen}, choosing $\lambda=\varepsilon$ and $\alpha=d(\frac 12-\frac 1p)-(q-\frac d2)$ we obtain that the process \[ \int_0^t S(t-s) A^{\frac 12(q-\frac d2)} \Phi \ dW^{{\mathcal H}}(s), \quad t \in [0,T] \] has a $C([0,T];H^{d(\frac 12-\frac 1p)-(q-\frac d2)})$-valued version if \[ \varepsilon+\frac 12[ d(\frac 12-\frac 1p)-(q-\frac d2)] <{\mathcal H}<1 \] i.e. choosing $\varepsilon$ very small, if \[ \frac d2 (1-\frac 1p)-\frac q2<{\mathcal H}<1. \] Since $S(t)$ and $A^{\frac 12(q-\frac d2)}$ commute, we get as usual that the result holds for the process $A^{\frac 12(q-\frac d2)}z$. Therefore $z$ has a $C([0,T];H^{d(\frac 12-\frac 1p)})$-version. Actually this holds when $\alpha=d(\frac 12-\frac 1p)-(q-\frac d2)\ge 0$, that is when $q\le d(1-\frac 1p)$. For larger values of $q$ the regularising effect of the operator $\Phi$ is even better and the result holds true for any $0<{\mathcal H}<1$. \end{proof} \begin{remark} Instead of appealing to the Sobolev embedding $H^{d(\frac 12-\frac 1p)}\subset L^p_\sigma$, we could look directly for an $L^p$-mild solution $z$, that is a process with $\mathbb P$-a.e. path in $C([0,T];L^p_\sigma)$. Let us check whether this approach would be better. There are results providing the regularity in Banach spaces; see e.g. Corollary 4.4 in the paper \cite{Co} by \v{C}oupek, Maslowski, and Ondrej\'at. They involve the $\gamma$-radonifying norm instead of the Hilbert-Schmidt norm (see, e.g., \cite{vN} for the definition of these norms). However, the estimate of the $\gamma$-radonifying norm of $S(t) \Phi$ is not trivial, and the estimates involved lead in any case back to a Hilbert space setting.
Let us provide some details about this fact. According to \cite{Co}, assuming $\frac 12 <{\mathcal H}<1$ and $1\le p{\mathcal H}<\infty$ one should verify that there exists $\lambda \in [0,{\mathcal H})$ such that \[ \|S(t) \Phi \|_{\gamma(L^2_\sigma,L^p_\sigma)}\le \frac C{t^\lambda} \qquad\forall t>0. \] Given $\Phi \in \mathcal L(L^2_\sigma,H^q)$ we just have to estimate the $\gamma(H^q,L^p_\sigma)$-norm of $S(t)$, since \[ \|S(t) \Phi \|_{\gamma(L^2_\sigma,L^p_\sigma)}\le \|S(t)\|_{\gamma(H^q,L^p_\sigma)} \|\Phi \|_{\mathcal L(L^2_\sigma,H^q)}. \] The $\gamma(H^q,L^p_\sigma)$-norm of $S(t)$ is equivalent to \[ \left[\int_D \Big(\sum_{j=1}^\infty |S(t) \frac {e_j(x)}{\lambda_j^{q/2}}|^2\Big)^{\frac p2}dx \right]^{1/p} \] since $\{\frac {e_j}{\lambda_j^{q/2}}\}_j$ is an orthonormal basis of $H^q$. Therefore, we estimate the integral. Let us do it for $p \in 2\mathbb N$. We have \[\begin{split} \int_D \Big(\sum_{j=1}^\infty |S(t) \frac {e_j(x)}{\lambda_j^{q/2}}|^2\Big)^{\frac p2}dx &= \int_D \Big(\sum_{j=1}^\infty \lambda_j^{-q} e^{-2\lambda_j t} |e_j(x)|^2\Big)^{\frac p2}dx \\&= \int_D \prod_{n=1}^{p/2}\Big(\sum_{j_n=1}^\infty \lambda_{j_n}^{-q} e^{-2\lambda_{j_n} t} |e_{j_n}(x)|^2\Big) dx \end{split}\] Using the H\"older inequality, we get \[ \int_D |e_{j_1}(x)|^2 |e_{j_2}(x)|^2 \cdots |e_{j_{p/2}}(x)|^2 dx \le \|e_{j_1}\|_{L^p}^2 \|e_{j_2}\|_{L^p}^2 \cdots \|e_{j_{p/2}}\|_{L^p}^2 \] Hence \[ \int_D \Big(\sum_{j=1}^\infty |S(t) \frac {e_j(x)}{\lambda_j^{q/2}}|^2\Big)^{\frac p2}dx \le \left(\sum_{j=1}^\infty \lambda_j^{-q} e^{-2\lambda_j t} \|e_{j}\|_{L^p}^2\right)^{p/2} \] How can one estimate $\|e_{j}\|_{L^p}$? Again by the Sobolev embedding $H^{d(\frac 12-\frac 1p)}\subset L^p$. Thus we are back to Hilbert spaces, and we obtain nothing different from our procedure, which was set in Hilbert spaces from the beginning. We leave the details to the reader. Finally, let us point out that for $0<{\mathcal H}<\frac 12$, an $L^p$-mild solution $z$ can be obtained in the Banach setting by means of Theorem 5.5 in \cite{BvNS}; this requires the operator $\Phi$ to be a $\gamma$-radonifying operator from $L^2_\sigma$ to $L^p_\sigma$, which is a quite strong assumption. Our method exploits the properties of the semigroup $S(t)$ so as to allow weaker assumptions on the operator $\Phi$. \end{remark} \section{Existence and uniqueness results}\label{sec-ex} In this section we study the Navier-Stokes initial problem \eqref{eq-abst} in the space $L_{\sigma}^{p}$. We first prove the local existence result and then pathwise uniqueness. \subsection{Local existence} Following \cite{F}, we set $v=u-z$, where $z$ is the mild solution of the linear equation \eqref{eq-z}. Therefore \begin{equation}\label{eq-v} \begin{cases} \dfrac{dv}{dt}(t)+Av(t)=B(v(t)+z(t)) ,&\quad t>0\\ v(0)=u_0 \end{cases} \end{equation} and we get an existence result for $u$ by looking for an existence result for $v$. This is given in the following theorem. \begin{theorem} \label{th-ex} Let $0<{\mathcal H}<1$, $d<p<\infty$ and $\Phi \in \mathcal L(L^2_\sigma,H^q)$ for some $q \in \mathbb R$.\\ Given $u_{0}\in L_{\sigma}^{p}$, if the parameters fulfil \begin{equation}\label{cond-dpq2} \frac d2(1-\frac 1p)-\frac q2<{\mathcal H} \end{equation} then there exists a local mild $L^p$-solution to equation \eqref{eq-abst}. \end{theorem} \begin{proof} From Proposition \ref{pro-z} we know that $z$ has a version which belongs to $C([0,T];L^p_\sigma)$.
Now we observe that to find a mild solution \eqref{mild} to equation \eqref{eq-abst} is equivalent to finding a mild solution \[ v(t)= S(t)u_{0} + \ \int_{0}^{t} S(t-s)B(v(s) +z(s))ds \] to equation \eqref{eq-v}. We work pathwise and define a sequence by iteration: first $v^{0}=u_{0}$ and inductively \[ v^{j+1}(t)= S(t)u_{0}+ \ \int_{0}^{t} S(t-s)B(z(s)+ v^{j}(s)) \ ds , \quad t \in [0,T] \] for $j=0,1,2,\ldots$. Let us denote by $K_{0}$ the random constant \[ K_0=\max\left(\| u_{0}\|_{L^p_\sigma}, \sup_{t\in[0,T]} \|z(t) \|_{L^p_\sigma} \right). \] We shall show that there exists a random time $\tau>0$ such that $\displaystyle\sup_{t\in [0,\tau]}\|v^{j}(t)\|_{L^p_\sigma}\le 2K_{0}$ for all $j\ge 1$. We have \[ \|v^{j+1}(t)\|_{L^p_\sigma} \le \|S(t)u_{0}\|_{L^p_\sigma} + \int_{0}^{t} \|S(t-s)B(v^{j}(s)+z(s))\|_{L^p_\sigma} \ ds \] We observe that from \eqref{stimaSpr} and \eqref{stimaASp} we get \begin{equation}\label{one} \|S(t)u_{0}\|_{L^p_\sigma} \le \|u_{0}\|_{L^p_\sigma} \end{equation} and \begin{equation}\label{two} \begin{split} \int_0^t &\|S(t-s)B(v^{j}(s)+z(s))\|_{L^p_\sigma} ds \\&\le \int_{0}^{t} \| A^{\frac{1}{2}} S(t-s) A^{-\frac{1}{2}} P \text{ div } ((v^{j}(s)+z(s))\otimes (v^{j}(s)+z(s))) \|_{L^p_\sigma} \ ds \\& \le \int_{0}^{t} \frac{1}{(t-s)^{\frac 12}} \ \|S(t-s) A^{-\frac{1}{2}} P \text{ div }((v^{j}(s)+z(s))\otimes (v^{j}(s)+z(s)))\|_{L^p_\sigma}\ ds \\& \le \int_{0}^{t}\frac{M}{(t-s)^{\frac 12 + \frac{d}{2p}}}\ \|A^{-\frac{1}{2}} P \text{ div } ((v^{j}(s)+z(s))\otimes (v^{j}(s)+z(s)))\|_{L^{p/2}_\sigma} \ ds \\& \le \int_{0}^{t} \frac{M}{(t-s)^{\frac 12 + \frac{d}{2p}}} \ \| (v^{j}(s)+z(s))\otimes (v^{j}(s)+z(s)) \|_{L^{p/2}_\sigma} \ ds \\& \le \int_{0}^{t} \frac{M}{(t-s)^{\frac 12 + \frac{d}{2p}}} \ \|v^{j}(s)+z(s)\|_{L^p_\sigma}^{2} \ ds \end{split} \end{equation} From (\ref{one}) and (\ref{two}) we deduce that \[ \begin{split} \|v^{j+1}(t)\|_{L^p_\sigma} &\le K_{0} +\int_{0}^{t} \frac{M}{(t-s)^{\frac 12+ \frac d{2p} }}\ \|v^{j}(s)+z(s)\|_{L^p_\sigma}^{2} \ ds \\&\le K_{0} +\int_{0}^{t} \frac{2M}{(t-s)^{\frac 12+ \frac d{2p} }}\ \|z(s)\|_{L^p_\sigma}^{2} \ ds +\int_{0}^{t} \frac{2M}{(t-s)^{\frac 12+ \frac d{2p} }}\ \|v^{j}(s)\|_{L^p_\sigma}^{2} \ ds \end{split} \] Thus, when $\frac 12 +\frac d{2p}<1$ (i.e. $p>d$) we get \[\begin{split} \sup_{t\in [0,T]}\|v^{j+1}(t)\|_{L^p_\sigma} &\le K_{0} + 2 M\ \frac{T^{\frac 12- \frac d{2p}}}{\frac 12- \frac d{2p}} \ \sup_{t\in [0,T]} \|z(t)\|_{L^p_\sigma}^{2} + 2 M\ \frac{T^{\frac 12- \frac d{2p}}}{\frac12- \frac d{2p}} \ (\sup_{t\in [0,T]}\|v^{j}(t)\|_{L^p_\sigma})^{2} \\ &\le K_{0} +\frac{4pM}{p-d} T^{\frac 12- \frac d{2p}} K_0^2 + \frac{4pM}{p-d} \ T^{\frac 12- \frac d{2p}}\ (\sup_{t\in [0,T]}\|v^{j}(t)\|_{L^p_\sigma})^{2} \end{split} \] Now we show that if $\displaystyle\sup_{t\in [0,T]}\|v^{j}(t)\|_{L^p_\sigma}\le 2K_0$, then $\displaystyle\sup_{t\in [0,T]}\|v^{j+1}(t)\|_{L^p_\sigma}\le 2K_0$ on a suitable time interval. Indeed, from the latter relationship we get \[\begin{split} \sup_{t\in [0,T]}\|v^{j+1}(t)\|_{L^p_\sigma} &\le K_0+ \frac{4pM}{p-d} T^{\frac 12- \frac d{2p}}K_0^2 +\frac{4pM}{p-d} T^{\frac 12- \frac d{2p}} 4K_0^2 \\&=2K_0\left(\frac 12 +\frac 12 \frac{20pM}{p-d} T^{\frac 12- \frac d{2p}} K_0\right). \end{split}\] Hence, when $T$ is such that \begin{equation*} \frac{20pM}{p-d} T^{\frac 12- \frac d{2p}} K_0 \le 1 \end{equation*} we obtain the required bound.
Therefore we define the stopping time \begin{equation}\label{cond-T} \tau = \min\Big\{T,\left(\frac{p-d}{20pMK_0} \right)^{\frac {2p}{p-d}}\Big\} \end{equation} so that \begin{equation}\label{cond-tau} \frac{20pM}{p-d} \tau^{\frac 12- \frac d{2p}} K_0 \le 1 \end{equation} and obtain that \begin{equation}\label{stima-unif} \sup_{t\in [0,\tau]}\|v^{j}(t)\|_{L^p_\sigma}\le 2K_0 \qquad \forall j . \end{equation} Now, we shall show the convergence of the sequence $v^{j}$. First, notice that \begin{multline*} B(v^{j+1}+z)-B(v^j+z) \\=-P\text{div}\ \big((v^{j+1}-v^j)\otimes v^{j+1}+v^j\otimes (v^{j+1}-v^j) +(v^{j+1}-v^j)\otimes z+z\otimes (v^{j+1}-v^j)\big). \end{multline*} We proceed as in \eqref{two} and get \[ \begin{split} \|v^{j+2}(t)-&v^{j+1}(t)\|_{L^p_\sigma} \\&\le \int_0^t \|S(t-s)\big(B(v^{j+1}(s)+z(s))-B(v^{j}(s)+z(s))\big)\|_{L^p_\sigma} ds \\&\le \int_{0}^{t} \frac{M}{(t-s)^{\frac 12+\frac d{2p}}} \big( \| v^{j}(s)\|_{L^p_\sigma} + \| v^{j+1}(s)\|_{L^p_\sigma}+2\|z(s)\|_{L^p_\sigma} \big) \ \|v^{j+1}(s)-v^{j}(s) \|_{L^p_\sigma} ds \end{split}\] Hence, using \eqref{stima-unif} we get \[\begin{split} \sup_{t \in [0,\tau]}\|v^{j+2}(t)-v^{j+1}(t)\|_{L^p_\sigma} &\le \int_{0}^{t} \frac{6MK_0}{(t-s)^{\frac 12+\frac d{2p}}} ds \ \left(\sup_{s \in [0,\tau]} \|v^{j+1}(s)-v^{j}(s) \|_{L^p_\sigma} \right) \\ &\le \frac{12pM K_{0}}{p-d} \ \tau ^{\frac 12- \frac d{2p}} \ \left( \sup_{t\in [0,\tau]}\|v^{j+1}(t)-v^{j}(t) \|_{L^p_\sigma} \right) \end{split}\] Setting $C_0= \frac{12pM K_{0}}{p-d} \ \tau ^{\frac 12- \frac d{2p}}$, from \eqref{cond-T}-\eqref{cond-tau} we obtain that $C_0<1$. Moreover \[\begin{split} \sup_{t \in [0,\tau]}\|v^{j+2}(t)-v^{j+1}(t)\|_{L^p_\sigma} &\le C_0 \sup_{t\in [0,\tau]}\|v^{j+1}(t)-v^{j}(t) \|_{L^p_\sigma} \\& \le C_0^{j+1} \sup_{t\in [0,\tau]}\|v^{1}(t)-v^{0}(t) \|_{L^p_\sigma} \end{split}\] Therefore $\{v^j\}_j$ is a Cauchy sequence; hence it converges, that is, there exists $v \in C([0,\tau]; L^{p}_\sigma)$ such that $v^{j}\rightarrow v$ in $C([0,\tau]; L^{p}_\sigma)$. This proves the existence of a unique local mild $L^p$-solution $v$ for equation \eqref{eq-v}. Since $u=v+z$, we obtain a local mild $L^p$-solution $u$ for equation \eqref{eq-abst}. \end{proof} \begin{remark} We briefly discuss the case of cylindrical noise, i.e. $\Phi= Id$. Bearing in mind Theorem \ref{th-ex} with $q=0$, the parameters must fulfil \begin{equation}\label{q0} \frac d2(1-\frac 1p)<{\mathcal H}<1. \end{equation} When $2=d<p$, this means that $p$ and ${\mathcal H}$ must be chosen in such a way that \begin{equation} 1-\frac 1p<{\mathcal H}<1 \end{equation} In particular, ${\mathcal H}$ must be larger than $\frac 12$. On the other hand, when $3=d<p$ we cannot apply our procedure, since $\frac d2(1-\frac 1p)>1$ and therefore the set of conditions \eqref{q0} is void. \end{remark} \subsection{Uniqueness} Now we show pathwise uniqueness of the solution given in Theorem \ref{th-ex}. \begin{theorem} Let $d<p<\infty$ and $\Phi \in \mathcal L(L^2_\sigma,H^q)$ for some $q \in \mathbb R$.\\ Given $u_{0}\in L_{\sigma}^{p}$, if \[ \frac d2 (1-\frac 1p)-\frac q2<{\mathcal H}<1 \] then the local mild $L^p$-solution to equation \eqref{eq-abst} given in Theorem \ref{th-ex} is pathwise unique. \end{theorem} \begin{proof} Let $u$ and $\tilde u$ be two mild solutions of equation \eqref{eq-abst} with the same fBm and the same initial velocity. Their difference satisfies an equation from which the noise has disappeared; hence we can work pathwise.
We get \[ u(t)-\tilde u(t)= \ \int_{0}^{t} S(t-s) \big(B(u(s))-B(\tilde u(s)) \big) \ ds . \] Writing $B(u)-B(\tilde u)=B(u-\tilde u,u)+B(\tilde u, u-\tilde u)$, by classical estimates as before we have \[ \begin{split} \| u(t)-\tilde u(t)\|_{L^p_\sigma} &\le \ \int_{0}^{t} \|S(t-s) \big(B(u(s))-B(\tilde u(s)) \big)\|_{L^p_\sigma} \ ds \\& \le \int_{0}^{t} \frac{M}{(t-s)^{\frac{1}{2}+ \frac{d}{2p}}} (\| u(s)\|_{L^p_\sigma}+ \| \tilde u(s)\|_{L^p_\sigma}) \|u(s)-\tilde u(s)\|_{L^p_\sigma} \ ds \end{split} \] Thus \[ \sup_{[0,\tau]}\|u(t)-\tilde u(t)\|_{L^p_\sigma} \le 4K_0 M \frac{ \tau^{\frac{1}{2}- \frac{d}{2p}}}{\frac{1}{2}- \frac{d}{2p}} \ \sup_{t\in [0,\tau]}\|u(t)-\tilde u(t)\|_{L^p_\sigma} . \] Keeping in mind the definition \eqref{cond-T} of $\tau$ and \eqref{cond-tau} we get \[ \sup_{[0,\tau]}\|u(t)-\tilde u(t)\|_{L^p_\sigma} \le \frac 25 \sup_{[0,\tau]}\|u(t)-\tilde u(t)\|_{L^p_\sigma} \] which implies $u(t)=\tilde u(t)$ for any $t \in [0,\tau]$. \end{proof} \subsection{Global existence} Let us recall that \cite{Fang} proved global existence and uniqueness of an $L^4((0,T)\times D)$-valued solution. A similar result of global existence for a less regular (in time) solution holds in our setting. Let us begin with the case $d=2$ and consider a process solving equation \eqref{eq-abst} whose paths are in $L^{\frac {2p}{p-2}}(0,T;L^p_\sigma)$. Its local existence comes from the previous results. However, we can prove an a priori bound leading to global existence. Let us multiply equation \eqref{eq-v} by $v$ in $L^2_\sigma$; we obtain by classical techniques (see Lemma 4.1 of \cite{Fla}) \[\begin{split} \frac 12 \frac d{dt}\|v(t)\|_{L^2_\sigma}^2+\|\nabla v(t)\|^2_{L^2}&=\langle B(v(t)+z(t),z(t)),v(t)\rangle \\& \le\|v(t)+z(t)\|_{L^4_\sigma} \|z(t)\|_{L^4_\sigma} \|\nabla v(t)\|_{L^2} \\& \le\frac 12 \|\nabla v(t)\|^2_{L^2} +\frac C2 \|z(t)\|_{L^4_\sigma}^4 \|v(t)\|_{L^2_\sigma}^2+\frac C2 \|z(t)\|_{L^4_\sigma}^4 \end{split} \] Hence \[ \frac d{dt}\|v(t)\|_{L^2_\sigma}^2\le C \|z(t)\|_{L^4_\sigma}^4 \|v(t)\|_{L^2_\sigma}^2+C \|z(t)\|_{L^4_\sigma}^4 . \] As soon as $z$ is a $C([0,T];L^4_\sigma)$-valued process, we get by means of the Gronwall lemma that $v \in L^\infty(0,T;L^2_\sigma)$. Integrating in time the first inequality, we also obtain that $v\in L^2(0,T;H^1)$. By interpolation $L^\infty(0,T;L^2_\sigma)\cap L^2(0,T;H^1)\subset L^{\frac {2p}{p-2}}(0,T;H^{1-\frac 2p})$ for $2<p<\infty$. Using the Sobolev embedding $H^{1-\frac 2p}\subset L^{p}_\sigma$, we have the a priori estimate for $v$ in the $L^{\frac {2p}{p-2}}(0,T;L^p_\sigma)$ norm, which provides the global existence of $v$ and hence of $u$. This holds for $d=2$ and $4\le p<\infty$, since the global estimate holds when $z$ is at least $C([0,T];L^4_\sigma)$-valued. \\Notice that for $d=2$ and $p=4$ we obtain the same result as Fang, Sundar and Viens (see Corollary 4.3 in \cite{Fang}). Similarly one proceeds when $d=3$. The change is in the Sobolev embedding, which depends on the spatial dimension. Thus from $v \in L^\infty(0,T;L^2_\sigma)\cap L^2(0,T;H^1)$ we get by interpolation that $v \in L^{\frac{4p}{3(p-2)}}(0,T;H^{3\frac{p-2}{2p}})$ for $2<p\le 6$. Using the Sobolev embedding $H^{3\frac{p-2}{2p}}\subset L^{p}_\sigma$ we conclude that the $L^{\frac{4p}{3(p-2)}}(0,T;L^{p}_\sigma)$-norm of $v$ is bounded. Hence we obtain the global existence of a solution $v \in L^{\frac{4p}{3(p-2)}}(0,T;L^{p}_\sigma)$ for $4\le p\le 6$, as well as of a solution $u \in L^{\frac{4p}{3(p-2)}}(0,T;L^{p}_\sigma)$. \section*{Acknowledgements} C.
Olivera is partially supported by FAPESP through grants 2017/17670-0 and 2015/07278-0. B. Ferrario is partially supported by INdAM-GNAMPA, by PRIN 2015 ``Deterministic and stochastic evolution equations'' and by the MIUR Dipartimenti di Eccellenza Program (2018-2022), Dept. of Mathematics ``F. Casorati'', University of Pavia.
\section{Introduction} XSTAR is ``a computer program for calculating the physical conditions and emission spectra of photoionized gases'' (Kallman \& Bautista 2001); the science it facilitates may be described most concisely by paraphrasing the documentation: {\em a spherical gas shell surrounding a central source of ionizing radiation absorbs some of this radiation and reradiates it in other portions of the spectrum. XSTAR computes the effects on the gas of absorbing this energy, and the spectrum of reradiated light, while allowing for consideration of other sources (or sinks) of heat, such as mechanical compression \& expansion, or cosmic ray scattering.} Coded in Fortran 77, XSTAR may be used as either a standalone executable or in the form of analytic models like {\tt warmabs}, with the latter being compiled into shared objects and dynamically loaded into spectral modeling tools such as ISIS (Houck, 2002). We are presently using XSTAR in ISIS to model active galactic nuclei and non-equilibrium ionization of photoionized plasmas. Relative to classic spectral modeling conducted with interactive analysis tools, the scales of these efforts are large: analytic models with 20 or more components \& roughly 300 parameters, scores of which may vary during fitting, or batch XSTAR runs on thousands of individual sets of parameters. The compute time required in both use cases, on the order of a week to a month for single end-to-end runs, precludes traditional use of XSTAR, which is coded for serial execution on one CPU. Compounding the problem is the fact that most research efforts require multiple end-to-end runs, e.g. to experiment with different model components or parameter values, which can extend analysis timeframes into several months. \section{Batch Execution of XSTAR} Part of our non-equilibrium ionization modeling includes large-scale simulations, wherein the XSTAR application is repeatedly invoked over sets of unique input parameter tuples; one spectrum is generated per XSTAR run and saved as a FITS file, and these are collated into a single FITS table model that can be incorporated into an analytic model for fitting. Historically, this process has been driven by the serial {\tt xstar2xspec} script bundled with XSTAR and outlined in Fig. \ref{flow}. A representative simulation of 600 XSTAR jobs, generating power spectra of Hercules X1, consumed 26.4 hours of wallclock time on a single 2.6GHz AMD Opteron processor with 2GB RAM; a linear scaling to 4200 jobs would consume 7.5 days on the same machine. In contrast, a similar physical simulation of 4200 XSTAR jobs completed in 110 minutes when executed via {\tt pvm\_xstar}\ on our Beowulf cluster of 52 2.4GHz Opteron (4GB RAM) processors. As shown in Fig. \ref{flow}, {\tt pvm\_xstar}\ consists of 4 scripts: two of these ({\tt pvm\_xstar}\ proper and {\tt pvm\_xstar\_wrap}) are coded in Bourne shell, while the master/slave scripts are coded in S-Lang using the S-Lang PVM module (Davis et al 2005, Noble et al 2006) to interface with the Parallel Virtual Machine toolkit (Geist et al 1994). \begin{figure}[t] \plotone{C.6_1.eps} \caption{The flow diagrams of classic {\tt xstar2xspec} and its parallelized cousin, {\tt pvm\_xstar}, are identical (left): both run {\tt xstinitable} at the outset and {\tt xstar2table} at completion. The only conceptual difference is that in {\tt pvm\_xstar}\ the N jobs are distributed to multiple CPUs via PVM (right), and executed in N unique directories to avoid FITS i/o \& parameter file clashes.
} \label{flow} \end{figure} \section{XSTAR Analytic Modeling} As noted earlier, XSTAR is also used in the form of dynamically loaded analytic models, as in this sequence of commands at the ISIS prompt: {\small \begin{verbatim} isis> load_data("my_data.pha") isis> model("warmabs(1) + warmabs(2) + hotabs(1)") isis> set_params(...) isis> fit Parameters [Variable] = 48[21] Data bins = 3 Chi-square = 1.1118061 \end{verbatim} } The second step defines a 3-component model, consisting of two XSTAR {\tt warmabs} components and one XSTAR {\tt hotabs} component.\footnote{Note that in {\tt warmabs(1)} and {\tt warmabs(2)} the numbers within parentheses are not parameters to the model, but rather are tags which uniquely identify {\em instances} of a given model type, so that each instance may be evaluated with its own set of parameter values.} The performance bottleneck here is that each component may take 15 or more seconds to evaluate just once on a modern CPU, or 45 seconds to compute the entire model expression for every iteration of the fit loop initiated by step 4. A typical fit loop may contain hundreds of such iterations, with tens of thousands to millions of component evaluations often needed to conduct thorough walks through parameter space while generating error bars. In short, days or weeks of compute time can be needed for essential analysis when expensive models are involved. \subsubsection{Latent Parallelism} These lengthy runtimes may be shrunk by observing that there are two sources of parallelism inherent to model evaluation. First, whenever model components are mathematically independent of one another they may be evaluated concurrently. In the above model, for example, each component may be evaluated simultaneously, potentially reducing the runtime of each fit loop iteration from 45 to 15 seconds (the theoretical maximum of linear speedup on 3 CPUs). This component independence is common in model expressions, which are evaluated from left to right under the associativity and precedence rules of classic algebra. The second form of parallelism arises from bin independence within models: when evaluating the model on the {\em i}-th bin---\verb=model(lo[i], hi[i], params)=---requires no knowledge of bins {\em i-1} or {\em i+1}, then wavelength/energy grids of size {\tt nbins} may be trivially decomposed {\small \begin{verbatim} lo[1, nbins] = [ lo[1,N], lo[N+1, 2N] ... lo[nbins-N+1, nbins] ] hi[1, nbins] = [ hi[1,N], hi[N+1, 2N] ... hi[nbins-N+1, nbins] ] \end{verbatim} } \noindent into {\tt nbins/N} subgrids and each {\small\verb=model(lo_subgrid[j],hi_subgrid[j],params)=} evaluated concurrently. This is relatively common in models of X-ray spectra. The {\tt PModel}\ plugin for ISIS was written to exploit these latent sources of parallelism. Loaded at runtime by a simple \verb=require("pmodel")= command, the package adds 4 primary functions to ISIS: {\tt pm\_add(), pm\_mult(), pm\_func(), \& pm\_subgrid(N)}. The first three are stub models, in that they contribute nothing to the physics being modeled, but can be used in a model expression to identify which portions to evaluate concurrently. The fourth function is not a stub model, but rather overrides the default model evaluation mechanisms in ISIS with routines that decompose the model grid into N independent subgrids. In this case the entire model is independently evaluated over pieces of the grid, while the first group of functions evaluates pieces of the model independently over the entire grid. 
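Both decompositions amount to a simple map-reduce. The sketch below is a conceptual Python mirror of the idea only (the actual package is implemented in S-Lang on top of PVM), with a hypothetical \verb=expensive_model= function standing in for an XSTAR component: {\small \begin{verbatim}
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def expensive_model(args):
    # Hypothetical stand-in for one XSTAR component evaluated on a grid;
    # the real components (warmabs, hotabs, ...) are far more costly.
    lo, hi, params = args
    return params * np.exp(-(lo + hi) / 2.0)

def pm_add_sketch(param_sets, lo, hi):
    # Component-level parallelism: evaluate independent components
    # concurrently on the full grid, then reduce additively (cf. pm_add).
    with ProcessPoolExecutor() as pool:
        parts = pool.map(expensive_model, [(lo, hi, p) for p in param_sets])
        return sum(parts)

def pm_subgrid_sketch(lo, hi, params, N):
    # Bin-level parallelism: evaluate one model on N independent
    # subgrids and concatenate the pieces (cf. pm_subgrid(N)).
    jobs = [(l, h, params) for l, h in
            zip(np.array_split(lo, N), np.array_split(hi, N))]
    with ProcessPoolExecutor() as pool:
        return np.concatenate(list(pool.map(expensive_model, jobs)))
\end{verbatim}} Multiplicative or arbitrary functional reductions differ only in the final combining step.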
Using {\tt PModel}\ is easy: in the context of our XSTAR example only step 2 would need to change, to {\small \begin{verbatim} model("pm_add(warmabs(1), warmabs(2), hotabs(1))") \end{verbatim} } For every iteration of the ISIS fit loop this revised model expression would cause the dispatch of each component evaluation to a distinct processor, with the results from each combined by a simple additive {\em reduction} operation. Although {\tt PModel}\ may be used to distribute virtually any expensive model components, the same ease of use would apply: the parallel use case bears an overwhelming resemblance to the serial one, with the differences being simple to identify and implement. This means that end-users need not learn to program for parallelism in order to use multiple processors in their models, a classic barrier to the adoption of parallel methods by non-specialists. The {\tt PModel}\ functions will decompose the model or grid and combine results with either additive, multiplicative, or arbitrary functional reduction operations, all transparent to the top-level user interface. Moreover, ISIS did not need to be recoded for parallelism, and in fact it does not even know the model is computed in parallel; this knowledge is completely encoded within {\tt PModel}, whose functions ISIS simply calls in the same serial manner it would for any other physical model component. We have used these techniques to reduce the compute time of models with 20+ components, containing 10 or more XSTAR components and hundreds of parameters, from 4+ weeks when run serially to \verb=~=22 hours on the aforementioned Beowulf cluster. \section{Conclusion} Together, {\tt pvm\_xstar}\ and {\tt PModel}\ enable scientists to incorporate multiple processors in their XSTAR modeling without becoming experts in parallelism. Amortizing the evaluation of expensive XSTAR components over many CPUs allows larger and more physically realistic models to be computed, permitting us to probe thousands of physical scenarios in the time it has previously taken to compute only a handful of such models. Insofar as analytic modeling of observational data is among the most common scientific activities in astronomy, {\tt PModel}\ has a broad scope of applicability, particularly because it can in principle distribute the evaluation of {\em any} expensive model, not merely the XSTAR components shown here. Both {\tt pvm\_xstar}\ and {\tt PModel}\ are small open source packages, and have been employed at several institutes, on multicore desktops, workstation clusters, and high-performance parallel computers. They may be obtained by download from \url{http://space.mit.edu/cxc/pvm\_xstar/} or by contacting the lead author. \acknowledgments {\footnotesize This work was supported by NASA through the Hydra AISRP grant NNG06GE58G, and by contract SV-1-61010 from the Smithsonian Institution.} \vspace*{-4mm}
\section*{Introduction} The emergence of long-range order, or collective behavior (CB), in non-equilibrium systems such as granular materials and living organisms is a matter of great interest for fundamental physics and applications~\cite{narayan2007long,kumar2014flocking}. Examples, recently observed in experiments and numerical simulations, are motility-induced phase transitions in bacteria \cite{fily2012athermal,redner2013structure,cates2015motility,CapriniPRL2020}, collective migration in epithelial cells~\cite{alert2020physical}, and persistent collective rotations in granular systems \cite{Scalliet2015,Plati2019,Plati2020slow}. An important class of CB instances includes flocking and swarming in animals, systematically studied by physicists in the last 25 years \cite{vicsek1995novel,toner1998flocks,CavagnaPNAS2010}. The great variety of systems in which CB has been observed makes the formulation of a rigorous and unifying definition a difficult task. Generally speaking, we can say that CB occurs when a many-body system \emph{acts as a whole}. Indeed, a common property of the previous examples is the interplay between different length scales: the interactions act on microscopic distances while correlations extend to macroscopic scales, comparable with the system size. In the study of CB it is common, in fact, to look at spatial correlation functions of the relevant fields: if such a function has a typical decay length $\xi$ then we can divide the system into almost independent subsystems of size $\sim \xi$. If the correlation function decays without a typical length it is said to be \emph{scale-free}: in this case the dynamics of every particle is correlated with the whole system. We underline that \emph{scale-free} spatial correlations appear naturally in critical phase transitions at equilibrium~\cite{ma2018modern}, but a general and well established theoretical framework to understand the appearance of long-range ordering in non-equilibrium systems is still lacking: sometimes equilibrium-like approaches are successful (effective Hamiltonians/temperatures)~\cite{cavagna2019dynamical,Gradenigo2015} while in other cases fully non-equilibrium tools have to be developed \cite{garrido1990long,grinstein1990,bertini2015macroscopic}. In this paper, we provide analytical results about the occurrence of \emph{scale-free} (more precisely, power-law decaying) correlations in a velocity field defined on a one dimensional lattice with interactions mediated by viscous friction. We will show that this behavior is observed in the non-equilibrium stationary state (NESS) obtained by coupling only the boundaries of the system with a thermal bath. We call this phase the Non-Homogeneously Heated Phase (NHHP). If the particles in the bulk are also put in contact with a bath, a different regime is found, the Homogeneously Heated Phase (HHP), where the spatial correlation is exponential with a characteristic length scale that goes to infinity when the contact between the bulk and the bath vanishes. The NHHP is also characterized by slow relaxation times that scale with the square of the system size. Lattices (particularly in 1D) bring two main advantages: (i) analytical calculations are often possible, (ii) they help to isolate minimal ingredients for the occurrence of the phenomenon under study.
Considering just the non-equilibrium context, 1D models have been used to study thermal conduction \cite{Rieder67,lepri2003thermal,Falasco2015}, non-equilibrium fluctuations~\cite{derrida1998exactly,prados2011large}, correlations and response with non-symmetric couplings \cite{Ishiwata2020}, velocity alignment in active matter \cite{Caprini1Darxiv}, systems with Vicsek-like interactions~\cite{manacorda2017lattice,butta2019}, and velocity fields in granular materials \cite{Baldassa2002,lasanta2015fluctuating,Puglisi1D2018}. In the following we will consider only linear interactions between variables, and this allows us to work in the framework of multivariate linear stochastic processes. Despite their simplicity, this class of models continues to be a powerful tool when dealing with dynamics driven out of equilibrium, as in biological systems \cite{Battle2016,Mura2018}. As discussed in the next section, our model can be thought of as an extreme simplification of a vibrated granular system at strong compression. The search for emergent collective motion in it is then motivated also by the recent experimental/numerical evidence of slow collective behavior in vibro-fluidized granular materials \cite{Scalliet2015,Plati2019}. This phenomenon is not yet fully understood and our study tackles this problem, revealing that non-homogeneous heating and frictional interactions (i.e. standard features of vibrated granular matter) are minimal ingredients to develop a slow collective dynamics. The manuscript is organized as follows: In section "Model" we present our model, discussing its phenomenology and its relation with real granular systems and previously studied non-equilibrium 1D models. Section "Results" contains the key steps for the calculation of the spatial correlation function in the NHHP and in the HHP, shedding light on the limit for which diverging correlation lengths and times are obtained. We also show the validity of our results beyond the assumptions used to perform analytical calculations. Finally, in "Discussion" we draw conclusions and sketch some perspectives. In the Supplemental Material (SM) details of the calculations are provided, in addition to some insights about the cooling state and the active equivalent of our model. \section*{Model} \subsection*{Definition and phenomenology} We consider a velocity field on a one dimensional lattice of size $L$. The $i$th particle interacts with its nearest neighbors $j$ through a viscous force with coefficient $\gamma$: $F_i=-\sum_{j} \gamma(v_i-v_j)$. The boundary (bulk) sites are coupled with an external bath defined by a drag coefficient $\gamma_b$ ($\gamma_a$) and corresponding temperatures, which can differ between the boundaries and the bulk. Considering particles with unit mass, the equations of the model are: \begin{subequations} \label{eq::ModelEq} \begin{align} \dot{v}_i=-(2\gamma + \gamma_a)v_i +\gamma (v_{i+1}+v_{i-1}) +\sqrt{2\gamma_a T_a}\eta_i(t) \label{eq::ModelEqA} \\ \dot{v}_1=-(\gamma+\gamma_b) v_1 +\gamma v_2 +\sqrt{2\gamma_b T_1}\eta_1(t)\\ \dot{v}_L=-(\gamma+\gamma_b) v_L +\gamma v_{L-1} +\sqrt{2\gamma_b T_L}\eta_L(t) \end{align} \end{subequations} Here the first equation holds for $1<i<L$ and the $\eta_i(t)$s are Gaussian white noises with unit variance: $\langle \eta_i(t)\eta_j(t') \rangle=\delta_{ij} \delta(t-t')$. In this model, the way in which energy is supplied to the system is consistent with the fluctuation-dissipation theorem.
Indeed, for each viscous force ($\gamma_{a(b)}$) there is a stochastic counterpart at finite temperature ($T_{a(b)}$). This is actually not true for the interaction force defined by $\gamma$, because it is related to the viscosity of the material that forms the grains. Thus, the associated temperature (typical of the thermal agitation at the molecular scale) can be reasonably neglected in a granular context. We refer to the NHHP when $\gamma_a=0$, so that only the first and the $L$th sites are heated, while in the HHP we consider a general $\gamma_a\neq 0$. We note that the HHP is not strictly spatially homogeneous because viscous coefficients and temperatures depend on the position: we refer to it as {\em homogeneously heated} meaning that in this phase \emph{all} the particles are coupled with a bath. \begin{figure} \centering \includegraphics[width=0.245\textwidth]{Figure1a.png} \includegraphics[width=0.245\textwidth]{Figure1b.png} \includegraphics[width=0.245\textwidth]{Figure1c.png} \includegraphics[width=0.245\textwidth]{Figure1d.png} \caption{a,b,c) Snapshots of the velocity field in the stationary state of the two phases. We exclude the first five (really hot) sites near the boundaries to have a clearer view of the field. Each panel shows the vectors in linear scale and the moduli in log scale in order to better appreciate the phenomenology of the system. Orange and blue bars distinguish the two directions. We note that a large cluster of particles with the same direction and similar moduli is found in the NHHP only, signaling that in terms of correlations the key parameter is $\gamma_a$ rather than $T_a$. d) Autocorrelation times for each site, defined as the time $\tau_i$ for which $\Gamma_i(\tau_i)=0.4$. The autocorrelation function is defined as $\Gamma_i(t')=\lim_{t\to \infty} \langle v_i(t)v_i(t+t') \rangle/\langle v_i^2 (t)\rangle$ where the brackets refer to a time average over the stationary state. We note that in the NHHP the dynamics is far slower than in the HHP also when $T_a=0$. The snapshots are obtained by numerical integration of Eqs. \eqref{eq::ModelEq} with $L=50$, $\gamma=5$, $\gamma_b=10$, $\gamma_a=\{3,0\}$, $T_1=T_L=0.002$, $T_a=\{0.002,0\}$ after a time $t_M=10^8/\gamma$ and with a temporal step $dt=0.05/\gamma$. } \label{fig:snapshots} \end{figure} As we discuss in the next paragraphs, this is a linear model and a full solution can be found in the context of multivariate stochastic processes. Nevertheless, a numerical integration of Eqs. \eqref{eq::ModelEq} can be useful to gain physical insight into the phenomenology at play; a minimal integration scheme is sketched below. In Fig. \ref{fig:snapshots} we show some instantaneous snapshots of the system in the stationary state for three different conditions: HHP with $T_a \neq 0$, HHP with $T_a = 0$ and NHHP. We note that in the NHHP (panel c) almost all the velocities are aligned with similar moduli, while in the HHP we have smaller aligned domains, with moduli that decay sharply moving away from the boundaries when $T_a = 0$ (panel b) and a random configuration when $T_a\neq 0$ (panel a). This comparison makes clear that, in terms of correlations, the key parameter is $\gamma_a$ rather than $T_a$: indeed, a situation where the sites experience a collective behavior (in the intuitive sense that they \emph{act as a whole}) is only found in the NHHP. In Fig. \ref{fig:snapshots}d the typical correlation time for each site is shown, and we can see that in the NHHP the dynamics is far slower than in the other two conditions.
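A minimal Euler-Maruyama integration of Eqs. \eqref{eq::ModelEq} can be written in a few lines; the sketch below (Python, with illustrative values close to those quoted in the caption of Fig. \ref{fig:snapshots}, but a much shorter run) reproduces the NHHP setting for $\gamma_a=0$: {\small \begin{verbatim}
import numpy as np

def simulate(L=50, gamma=5.0, gamma_b=10.0, gamma_a=0.0,
             T1=0.002, TL=0.002, Ta=0.0, dt=0.01, steps=10**6, seed=0):
    # Euler-Maruyama integration of Eqs. (1); gamma_a = 0 gives the NHHP.
    rng = np.random.default_rng(seed)
    v = np.zeros(L)
    g_bath = np.full(L, gamma_a)           # site-dependent bath coupling
    g_bath[[0, -1]] = gamma_b
    T = np.full(L, Ta)
    T[0], T[-1] = T1, TL
    amp = np.sqrt(2.0 * g_bath * T * dt)   # noise amplitude per site
    for _ in range(steps):
        lap = np.roll(v, 1) + np.roll(v, -1) - 2.0 * v
        lap[0] = v[1] - v[0]               # open (non-periodic) boundaries
        lap[-1] = v[-2] - v[-1]
        v += dt * (gamma * lap - g_bath * v) + amp * rng.standard_normal(L)
    return v

v_nhhp = simulate()   # one NHHP configuration, to be compared with Fig. 1c
\end{verbatim}}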
It is worth noting that this model does not present any directional asymmetry, so the true mean value of the velocity field (i.e. obtained by an average over long times or, equivalently, over all the realizations of the noises) is zero also in the NHHP, even if the single-time configurations clearly show an explicit global alignment. The phenomenology of the NHHP can then be described as the occurrence of slow and collective fluctuations around the expected mean value. \subsection*{Relation with real granular systems and other models} We note that the kind of interaction used in Eqs. \eqref{eq::ModelEq} is typical of contact models for granular materials \cite{Luding98,Brilliantov1996}. In these models, the grains (disks or spheres depending on the geometry) interact when a distance smaller than the sum of their radii is reached. In this condition, the particles penetrate each other and the dynamics is ruled by contact forces that are split into a normal and a tangential component with respect to the vector connecting the centers of the grains. Both of these contributions contain a (linear or non-linear) elastic term that depends on the normal/tangential displacement and a dissipative one that depends on the normal/tangential relative velocity. The latter has, in many cases, exactly the form of the viscous interaction we use in our model \cite{footCoulomb}. In view of this we can say that if we fix the centers of $L$ grains on the lattice sites so that they are partially overlapped, then the dynamics of the particles' velocities would be given by Eqs. \eqref{eq::ModelEq}. Neglecting the dynamics of positions (they do not appear at all in Eqs. \eqref{eq::ModelEq}) is surely the most relevant approximation of our approach: in the SM (S5) we briefly discuss how to go beyond it. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{Figure2.png} \caption{Sketch of the model and relation with higher-dimensional systems. On the left we suggest a hypothetical 2D dense granular system where particles are roughly located on the vertices of a regular lattice. A possible mapping from the 2D to the 1D system involves taking the mean horizontal velocity of the $i$th layer of the 2D system and replacing it with the $v_i$ of the 1D system. The dynamics in the vertical direction is neglected, an approximation which is justified by the presence of the vertical confinement, while the periodic boundary conditions (indicated by the dotted lines) are representative of a `free' direction in which the grains can flow without obstacles. This can be realized experimentally, for instance, in a 3D cylindrical geometry, where the velocity of grains in the tangential direction (with respect to the central axis of the cylinder) constitutes the horizontal velocities in the putative 2D system sketched here, see for instance~\cite{Scalliet2015,Plati2019}. Red grains are in direct contact with the external source of energy coming from the boundaries ($\gamma_b$,$T_{1(L)}$) while the green ones are in contact with the bulk bath, which is switched off in the NHHP. } \label{fig:cartoon} \end{figure} Nevertheless, the physics described by our model can realistically represent the condition of permanent contacts in which dense granular matter is found in vertically-vibrated setups. Such systems are widely studied experimentally; they consist of assemblies of grains confined in a box vibrated with a noisy or sinusoidal signal along the $z$ direction.
For low driving energies, the particles are always arranged in a dense packing where they vibrate in permanent contact with each other, experiencing very rare and slow rearrangements. This implies, if the geometry is narrow enough, that just the external layers of the system are in direct contact with the vibrating walls while the others never touch them. This last fact tells us that, in addition to the specific form of the viscous forces and the permanent interactions, also the way in which the external energy injection is modeled in the NHHP resembles the conditions of a vibrated granular system in a dense state. Moreover, if layers of particles are mapped into lattice sites, a 1D chain can also be representative of higher-dimensional systems (see Fig. \ref{fig:cartoon}). On the other hand, the HHP can be related to a setup where all the particles interact with the vibrating walls, as happens for instance in vibrated monolayers~\cite{puglisi2012structure}. The idea of considering velocity fields defined on lattices, i.e. neglecting the evolution of the positions and density fluctuations in the dynamics, has been widely exploited in the granular literature~\cite{Baldassa2002,lasanta2015fluctuating,Puglisi1D2018}, especially for dilute systems. In these previous works, however, there is no continuous interaction, but only instantaneous collisions occurring between pairs of neighboring grains picked up at random, at every time step. Many results have been obtained by solving (analytically or numerically) the corresponding master equation or performing its hydrodynamic limit, revealing that these models are a powerful tool to investigate complex phenomena observed in experiments and simulations of realistic granular systems, such as shock waves, anomalous transport and current fluctuations~\cite{plata2016lattice,manacorda2016lattice}. To summarize motivations and background, our model reflects three main characteristics of dense granular materials in vertically-vibrated setups, i.e. viscous forces, permanent contacts and energy injection localized at the boundaries. It can then be considered as the high-density variant of a well established family of models previously investigated. It is important to note that the dilute models can also exhibit long-range correlations~\cite{plata2016lattice,manacorda2016lattice}. Nevertheless, those are finite-size effects found in the homogeneous cooling state~\cite{puglisi2014transport}, i.e. without external driving and with conserved total momentum. As we briefly discuss in the next paragraph, and more clearly in the SM (S4), our model makes clear that there is a sharp difference between the correlations of the cooling state and the NESS ones. \subsection*{Compact SDE formulation of the model} Defining the vectors $\bm{V} =(v_1 , \dots , v_L)$ , $\bm{\eta}(t)=(\eta_1(t), \dots , \eta_L(t))$ and the dimensionless parameters $\beta=\gamma_b/\gamma$, $\alpha=\gamma_a/\gamma$, we can rewrite Eqs.
\ref{eq::ModelEq} as a multivariate Ornstein-Uhlenbeck process, obtaining the following stochastic differential equation (SDE): \begin{equation} \label{eq::ModelSDE} \dot{\bm{V}}= -\hat{A}\bm{V} + \hat{B}\bm{\eta}(t) \end{equation} where $\hat{B}=\text{diag}(\sqrt{2\gamma_b T_1},\sqrt{2\gamma_a T_a},\dots,\sqrt{2\gamma_a T_a},\sqrt{2\gamma_b T_L})$ and: \begin{equation} \label{Eq::Matricione} \hat{A}= \gamma \begin{pmatrix} 1+\beta & -1 & & & \bm{0} \\ -1 & 2+\alpha & -1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 2+\alpha & -1 \\ \bm{0} & & &-1& 1+\beta \end{pmatrix} \end{equation} is an $L\times L$ tridiagonal symmetric matrix. The information about the space-time correlations of the system is encoded in the two-time correlation matrix $\hat{\sigma}(t,s)$, whose entries are defined as $\sigma_{jm}(t,s)=\langle v_j(t)v_m(s) \rangle \equiv \langle \left[v_j(t)-\langle v_j(t)\rangle\right] \left[v_m(s)-\langle v_m(s)\rangle\right] \rangle$. We now define the quantity of principal interest in this paper, i.e. the static spatial correlation function of the velocity field: \begin{equation} \label{eq:SpCorr:defSpCorr} \zeta_{jm}=\frac{\sigma_{jm}}{\sqrt{\sigma_{jj}\sigma_{mm}}} \quad \text{where} \quad \sigma_{jm}=\left\langle v_j v_m \right\rangle. \end{equation} With this definition we have $\zeta_{jm}=1$ if $j=m$ or $v_j=v_m$, and $\zeta_{jm}=0$ if $\langle v_j v_m \rangle=0$. It is then clear that our goal is to solve Eq. \eqref{eq::ModelSDE} and find the stationary correlation matrix $\hat{\sigma}=\lim_{t\to\infty}\hat{\sigma} (t,t)$, which exists if $\hat{A}$ is positive definite. Under these conditions, regardless of the symmetry of $\hat{A}$, the correlation matrix can be found by inverting the relation \cite{G90}: \begin{equation} \label{Eq::MAtricialSigma} \hat{A}\hat{\sigma}+\hat{\sigma}\hat{A}^T=\hat{B}\hat{B}^T. \end{equation} Nevertheless, a more direct way to obtain an analytic expression of $\hat{\sigma}$ can be followed exploiting the fact that $\hat{A}$ is symmetric. In this case there exists a unitary matrix $\hat{S}$ such that $\hat{S}\hat{S}^+=\hat{I}$ and $\hat{S}^+ \hat{A} \hat{S}$=$\hat{S}^+ \hat{A}^T \hat{S}=\hat{\lambda}=\text{diag}(\lambda_1, \lambda_2, \dots , \lambda_L)$, where $\hat{I}$ is the identity matrix, the $\lambda_j$s are the eigenvalues of $\hat{A}$, and $S_{ji}$ is the $j$th component of the $i$th eigenvector of $\hat{A}$. With these hypotheses, and in the case of $\hat{B}=\text{diag}(b_1,\dots,b_L)$, we can write the covariance matrix in the two-time (with $t \ge s$) and non-stationary case: \begin{equation} \label{Eq::DiagoSigmaGeneral} \hat{\sigma}(t,s)=\hat{S} \left(\hat{C}(t,s) + \hat{G}(t,s)\right) \hat{S}^+ \end{equation} where: \begin{subequations} \label{Eq::CeGALL} \begin{align} \hat{C}(t,s)=\exp(-\hat{\lambda}t) \hat{S}^+ \langle \bm{V}(0),\bm{V}^T(0)\rangle \hat{S} \exp(-\hat{\lambda}s) \label{Eq::CeGa} \\ G_{jm}(t,s)=\frac{\left(e^{-\lambda_j(t-s)}-e^{-(\lambda_j+\lambda_m)s}\right)\sum_{n} S_{jn}^+S_{nm}b_n^2}{\lambda_j+\lambda_m}. \label{Eq::CeGb} \end{align} \end{subequations} The first matrix represents the transient, and the brackets refer to the average over initial conditions, while the NESS is described by $\lim_{s \to \infty} G(t,s)$. Without noise, Eq. \eqref{Eq::CeGa} would be the solution of Eq. \eqref{eq::ModelSDE}, representing the correlations in the cooling state. We note that the two correlation matrices have different mathematical structures.
The consequences of this, together with some properties of the cooling state, are discussed in the SM (S4); in the next paragraphs we will neglect $\hat{C}$, concentrating on the NESS. Defining $\hat{\sigma}(t')=\lim_{t\to \infty} \hat{\sigma}(t+t',t)$, through Eqs. \eqref{Eq::DiagoSigmaGeneral} and \eqref{Eq::CeGb} it is also possible to evaluate the single-particle autocorrelation function $\Gamma_{j}(t')\equiv \sigma_{jj}(t')/\sigma_{jj}(0)$: \begin{equation} \label{eq:autoCorrTemp} \Gamma_j(t')=\frac{1}{\sigma_{jj}}\sum_k q_{jk}S^+_{kj}e^{-\lambda_k t'}, \quad q_{jk}=\sum_{ls}\frac{S_{jl}S^+_{ks}S_{sl}b^2_s}{\lambda_l+\lambda_k} \end{equation} from which it is clear that, as expected for a linear system, the autocorrelation function is a sum of exponential terms with different characteristic times, given by the inverses of the eigenvalues, $\tau_k=1/\lambda_k$. We will derive $\sigma_{jm}$ in a specific case where the diagonalisation of $\hat{A}$ can be done analytically, and then follow a numerical technique of diagonalisation \cite{Vaia2013} to show the robustness of our main results, i.e. the power-law decay of spatial correlations. Before doing that, we briefly review the techniques that have been used to solve similar problems, highlighting the differences with the present case. These kinds of lattice models, and also more complex ones (with higher dimension and second-order dynamics), when translational invariance holds, can be mapped into a system of independent equations for the modes in the Bravais lattice, allowing a full solution \cite{CapriniPRL2020}. However, our model (both NHHP and HHP) does not have periodic boundary conditions, and the bath parameters depend on the particular site position. Assuming translational invariance would mean giving up some crucial aspects of our investigation. To keep a reasonable connection with dense granular matter it is important to have a source of energy that acts differently at the boundary and in the bulk of the system. Nevertheless, in the next section we will discuss some common aspects between the HHP and translationally invariant systems. We also point out that the continuous limit of Eq. \eqref{eq::ModelEqA} leads to the following equation for the velocity field: $\partial_t v(x,t)=-\gamma_a v(x,t) +\partial_{xx}v(x,t)+\sqrt{2T_a\gamma_a}\xi(x,t)$ with $\langle \xi(x,t)\xi(x',t')\rangle=\delta(x-x')\delta(t-t')$. Equations of this form, applied to a density field, describe a diffusion process with traps and noise. The variation of the field at the point $x$ is indeed given by a noise, a diffusive term and a loss term ($-\gamma_a v(x,t)$) that represents the possibility for the particles to be permanently trapped. These processes can be used to describe the dynamics of mobile defects in crystals, where translational invariance is assumed and the problem can be easily solved in Fourier space \cite{Schroeder76}. Our case, where external thermostats are necessary to keep the system stationary and translational invariance is broken, is different. In the general case with space-dependent parameters, correlations can be studied by diagonalising the matrix $\hat{A}$ or by exploiting Eq. \eqref{Eq::MAtricialSigma} combined with physical constraints on $\hat{\sigma}$. The former strategy, used by us and recently applied in \cite{Ishiwata2020,Falasco2015}, is more convenient when possible, because it gives access also to time-dependent properties. The latter has been used to study temperature profiles in non-equilibrium harmonic chains \cite{Rieder67}.
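For a finite chain, Eq. \eqref{Eq::MAtricialSigma} can also be handed directly to a standard Lyapunov solver once $\hat{A}$ and $\hat{B}\hat{B}^T$ are built. A minimal numerical sketch (Python/SciPy, with illustrative parameter values of our own choosing) reads: {\small \begin{verbatim}
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_covariance(L, gamma, gamma_b, gamma_a, T1, TL, Ta):
    # Solve A sigma + sigma A^T = B B^T for the chain of Eqs. (1)-(3).
    A = gamma * ((2.0 + gamma_a / gamma) * np.eye(L)
                 - np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1))
    A[0, 0] = A[-1, -1] = gamma + gamma_b   # boundary sites
    b2 = np.full(L, 2.0 * gamma_a * Ta)     # diagonal of B B^T
    b2[0], b2[-1] = 2.0 * gamma_b * T1, 2.0 * gamma_b * TL
    return solve_continuous_lyapunov(A, np.diag(b2))

# NHHP with the Toeplitz choice gamma_b = gamma, gamma_a = 0:
sigma = stationary_covariance(L=200, gamma=5.0, gamma_b=5.0,
                              gamma_a=0.0, T1=0.002, TL=0.002, Ta=0.0)
zeta = sigma / np.sqrt(np.outer(np.diag(sigma), np.diag(sigma)))
\end{verbatim}}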
It is important to stress that a crucial difference between the present work and the aforementioned ones is that we deal with interactions acting on relative velocities and not (only) on displacements. Indeed, we have a direct competition between the baths $\gamma_{a(b)}$ and the interaction $\gamma$ in $\hat{A}$, while in heated harmonic chains only the coupling constants appear in the interaction matrix. \subsubsection*{Toeplitz condition} In order to obtain an explicit form of Eq. \eqref{Eq::DiagoSigmaGeneral} we consider the case $\gamma_b=\gamma+\gamma_a$, so that $\beta=1+\alpha$, making $\hat{A}$ a Toeplitz matrix whose eigenvalues and eigenvectors are respectively: \begin{equation} \lambda_j=\gamma(2+\alpha- 2\cos(j\Pi) ) , \quad S_{jm}=\sqrt{\frac{2\Pi}{\pi}}\sin\left( jm\Pi \right) \label{eq::ActiveEigenVal} \end{equation} where $\Pi=\pi/(L+1)$. Substituting these into Eq. \eqref{Eq::CeGb} and taking $t=s\to \infty$, Eq. \eqref{Eq::DiagoSigmaGeneral} becomes: \begin{equation} \sigma_{jm}(\alpha)= \frac{2\Pi^2}{\gamma \pi^2}\sum_{lk} \frac{\sin\left( jl\Pi \right)\sin\left( mk\Pi \right)\left[\sum_n b^2_n\sin(ln\Pi)\sin(kn\Pi)\right]}{\Delta(\alpha)-\cos\left( k\Pi \right)- \cos\left( l\Pi \right)} , \label{eq:SpCorr:Sigma2} \end{equation} where $\Delta(\alpha)=2+\alpha$. The sums run from 1 to $L$ and: \begin{equation} \label{eq:consitionsBn} b_n^2=\begin{cases} 2(\gamma+\gamma_a) T_1, \quad n=1 \\ 2\gamma_a T_a, \quad 1<n<L \\ 2(\gamma+\gamma_a) T_L, \quad n=L .\\ \end{cases} \end{equation} We point out that Eq. \eqref{eq:SpCorr:Sigma2} is symmetric with respect to the center of the lattice (i.e. $\sigma_{1m}=\sigma_{L(L+1-m)}$) if the coefficients $b_n$ are too. \section*{Results} \subsection*{Power-Law correlations and slow time scales in the NHHP} We first study the NHHP, so we put $\gamma_a=0$ and use the Toeplitz condition, which now reads $\gamma_b=\gamma$, so that $\beta=1$. Exploiting the limit of large systems ($L\gg 1$), we can exchange sums with integrals as $\Pi \sum_{k=1}^{k=L} f(k\Pi) \rightarrow \int_0^\pi dz f(z)$. We note that in Eq. \eqref{eq:SpCorr:Sigma2}, when $\gamma_a=0$, the sum over $n$ is actually made of two terms. The one multiplied by $\gamma_b T_L$ has a sign that depends on the parity of $l$ and $k$, and this gives a subleading contribution if one considers $L \gg 1$ and $j,m \ll L$ (see S1 in the SM). Neglecting it and defining \begin{equation} \Sigma_{jm}(\alpha)=\int_0^\pi dzds \frac{\sin(jz)\sin(ms)\sin(z)\sin(s)}{\Delta(\alpha)-\cos\left( z \right)-\cos\left(s \right)} \end{equation} we obtain the covariance matrix for the NHHP: \begin{equation} \sigma^{\text{NHHP}}_{jm}=\frac{4T_1}{\pi^2}\Sigma_{jm}(0).\label{eq:SpCorr:Sigma3} \end{equation} The integral contained in $\Sigma_{jm}(0)$ is difficult to evaluate explicitly, but the following asymptotic behaviors can be derived in the limit $L \gg m \gg 1$: \begin{subequations} \label{eq:SpCorr:allPredictions} \begin{align} \sigma^{\text{NHHP}}_{mm} \sim \frac{1}{m^2} \label{Eq::sigmaijA} \\ \sigma^{\text{NHHP}}_{1m} \sim \frac{8T_1}{\pi m^3} \label{Eq::sigma1jB}\\ \zeta^{\text{NHHP}}_{1m} \sim \frac{1}{m^2} \label{Eq::zeta1jC} \end{align} \end{subequations} As explained in the SM (S2), these results are obtained by expressing $\sigma^{\text{NHHP}}_{jm}$ as a power series in $(jm)^{-1}$ via repeated integration by parts and by estimating suitable upper bounds.
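The asymptotic predictions \eqref{eq:SpCorr:allPredictions} can be checked at finite $L$ with the Lyapunov solver sketched in the previous subsection; for instance, the log-log slope of $\zeta_{1m}$ should approach $-2$ for $1\ll m\ll L$, cf. Eq. \eqref{Eq::zeta1jC} (again with illustrative parameters): {\small \begin{verbatim}
import numpy as np
# relies on stationary_covariance() from the previous sketch, NHHP values
sigma = stationary_covariance(L=400, gamma=5.0, gamma_b=5.0,
                              gamma_a=0.0, T1=0.002, TL=0.002, Ta=0.0)
m = np.arange(10, 80)          # site labels with 1 << m << L
idx = m - 1                    # zero-based array indices
zeta_1m = sigma[0, idx] / np.sqrt(sigma[0, 0] * sigma[idx, idx])
slope = np.polyfit(np.log(m), np.log(zeta_1m), 1)[0]
print(slope)                   # should be close to -2, cf. Eq. (14c)
\end{verbatim}}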
The limit $L \gg m \gg 1$ is important because we want to study the asymptotic behavior of the correlations in the range where they are not affected by the opposite boundary of the system. This is the reason why we predict just a decay for the variance $\sigma_{mm}$, even though it must grow approaching the $L$th site if $T_L \neq 0$. This growth at large $m$ is given by the term proportional to $\gamma_b T_L$ that we neglected going from Eq. \eqref{eq:SpCorr:Sigma2} to Eq. \eqref{eq:SpCorr:Sigma3}. Eq. \eqref{Eq::zeta1jC} clearly states that the bulk sites are correlated with the first (heated) one through a power law decay with exponent 2. The correlations between particles in the bulk show a decay even slower than a power law; we discuss them in the last paragraph of this section. Regarding time scales, looking at Eq. \eqref{eq:autoCorrTemp} and at the specific form of the eigenvalues of $\hat{A}$ in Eq. \eqref{eq::ActiveEigenVal} for $\alpha=0$, we see that, when $j/L \ll 1$, the slowest time scales in the single particle autocorrelation function behave as: \begin{equation} \tau_j^{\text{NHHP}}=1/\lambda_j \sim \tau L^2 \end{equation} where $\tau=1/\gamma$. We note that the emergence of characteristic times that scale with the system size, together with \emph{scale free} correlations, is fully consistent: the information that influences the dynamics of every particle comes from all across the system, and so the time needed to receive it must increase with the system size. \\ \subsection*{Finite Correlation Length and Times in the HHP} The emergence of \emph{scale free} correlations is often considered a remarkable fact in physical systems. Nevertheless, we are dealing with a model, so it is important to understand whether this result is just an algebraic coincidence or whether it is consistent with the usual framework in which \emph{scale free} correlations are understood, i.e. a particular limit in which a finite correlation length diverges. The study of the HHP comes into play to provide evidence for this latter scenario. We point out that by studying the HHP with periodic boundary conditions, and therefore assuming translational invariance (i.e. extending Eq. \ref{eq::ModelEqA} to all the particles in the system), it is quite easy to derive an exponential decay for the stationary spatial correlation function. This can be done by expressing Eq. \ref{eq::ModelEqA} in the Bravais lattice or by studying the continuum limit of $\dot{\sigma}_{jm}=\langle v_j\dot{v}_m+v_m\dot{v}_j \rangle=0$. Nevertheless, we want to study the passage from the HHP to the NHHP when $\gamma_a \to 0$, so we proceed with space-dependent parameters from Eq. \eqref{eq:SpCorr:Sigma2}. This expression, in the HHP, contains all the contributions given by Eq. \eqref{eq:consitionsBn}. Performing the large system limit and keeping only the leading terms, we arrive at the following expression for the covariance matrix in the HHP (see S3 in the SM for details): \begin{equation} \begin{split} \sigma_{jm}^{\text{HHP}}(\alpha)= \frac{2\alpha T_a}{\pi}\int_0^{\pi} dz \frac{\sin (jz)\sin(mz)}{\Delta(\alpha)-2\cos(z)} +\frac{4T_1}{\pi^2}\left[1+ \alpha \left(1-\frac{T_a}{T_1} \right) \right] \Sigma_{jm}(\alpha) \end{split} \label{eq::sigmaijHHPfull} \end{equation} where we see that for $\alpha=0$ Eq. \eqref{eq:SpCorr:Sigma3} is recovered.
It is important to note that, trying to express the above equation as a power series in $m^{-1}$, one finds that all the coefficients are zero, signaling a decay faster than any power law. In order to go straight to the result we consider homogeneous noise amplitudes, i.e. $T_1=T_L=T_a \gamma_a/(\gamma+\gamma_a)$, so that the second term of Eq. \eqref{eq::sigmaijHHPfull} vanishes. In this condition the matrix $\hat{B}$ is proportional to the identity, so the system can reach thermodynamic equilibrium. We then take the Fourier transform $\tilde{\sigma}_{j\omega} (\alpha)=\int dm \exp(i\omega m) \sigma^{\text{HHP}}_{jm}(\alpha)$ and study the limit $\omega \ll 1$ ($m \gg 1$): \begin{equation} \tilde{\sigma}_{j\omega} (\alpha)\propto \int_{0}^{\pi}dz\frac{\delta(\omega-z)\sin(jz)}{\Delta(\alpha)-2\cos(z)}\sim \frac{\sin(j\omega)}{\alpha+\omega^2} \end{equation} whose inverse Fourier transform for $m>j$ is proportional to an exponential with characteristic length $\alpha^{-1/2}$, so we have that $\sigma^{\text{HHP}}_{jm}(\alpha)\sim\exp(-\sqrt{\alpha}m)$. This last result is valid for generic $j\ll L$, so it holds also for particles in the bulk. We note that $\alpha \to 0$ is a singular limit because the pole of the last term of the above equation approaches the real axis. Regarding the variances needed to calculate $\zeta_{jm}$, we can write: \begin{equation} \label{eq::asyTemp} \sigma^{\text{HHP}}_{mm}(\alpha) = \frac{2\alpha T_a}{\pi}\int_{0}^{\pi}dz\frac{\sin^2(mz)}{\Delta(\alpha)-2 \cos(z)} = T_a\sqrt{\frac{\alpha}{4+\alpha}} + o(m^{-1}),\quad m\gg 1 \end{equation} As we expect, in the HHP the asymptotic temperature is a constant, which we calculate explicitly in the SM (S3). We point out that this variance has two reasonable limiting cases: for $\alpha = 0$ it is $o(m^{-1})$, consistent with the NHHP, while $\lim_{\alpha\to \infty}\sigma^{\text{HHP}}_{mm}(\alpha)=T_a$, representing the condition in which the external bath overcomes the interaction so that the variables are in equilibrium with their thermostats. From this and from the definition in Eq. \ref{eq:SpCorr:defSpCorr} we can conclude that spatial correlations in the HHP follow an exponential decay with a finite characteristic length scale $\xi$: \begin{equation} \label{eq::ExpDecay} \zeta^{\text{HHP}}_{jm}\sim e^{-m/\xi} \quad m\gg 1, \quad \xi=\alpha^{-1/2}. \end{equation} In the SM (S3) we show that this trend holds also without equal noise amplitudes, so it is not strictly related to the equilibrium condition. We note that, looking at this result in the framework of critical phenomena, we would have a critical point at $\alpha_c=0$ and a correlation length that diverges as $\xi \sim (\alpha-\alpha_c)^{-\nu}$ with critical exponent $\nu=1/2$. This critical point would then coincide with the NHHP: indeed, in this phase the system behaves as in a critical regime where spatial correlations exhibit a power law decay. Nevertheless, we make clear that this is just an analogy and we do not interpret our results as a phase transition. Moreover, it is important to recall that an equivalent equilibrium phase transition governed by temperature could not occur because we are considering a 1D system. In equilibrium cases there is actually a transition at zero temperature, but it coincides with a physical state with no dynamics. In other words, the model described by Eqs. \ref{eq::ModelEq} cannot be mapped onto an Ising or Heisenberg-like Hamiltonian system while maintaining the same properties.
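For completeness we note (this is an addition of ours, based on the tabulated integral $\int_0^{\pi} dz\, \cos(nz)/\left(\Delta-2\cos z\right)=\pi r^{n}/\sqrt{\Delta^2-4}$, valid for $\Delta>2$, with $r=(\Delta-\sqrt{\Delta^2-4})/2$) that the first term of Eq. \eqref{eq::sigmaijHHPfull} admits a closed form that makes both results above explicit: \begin{equation} \frac{2\alpha T_a}{\pi}\int_0^{\pi} dz\, \frac{\sin (jz)\sin(mz)}{\Delta(\alpha)-2\cos(z)} = \frac{\alpha T_a}{\sqrt{\Delta^2(\alpha)-4}}\left( r^{|m-j|}-r^{m+j}\right), \qquad r=\frac{\Delta(\alpha)-\sqrt{\Delta^2(\alpha)-4}}{2} , \end{equation} where $\sqrt{\Delta^2(\alpha)-4}=\sqrt{\alpha(4+\alpha)}$. Setting $m=j\gg1$ recovers the asymptotic temperature $T_a\sqrt{\alpha/(4+\alpha)}$ of Eq. \eqref{eq::asyTemp}, while the decay length $\xi=-1/\ln r$ reduces to $\alpha^{-1/2}$ for $\alpha\ll1$, consistent with Eq. \eqref{eq::ExpDecay}.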
We also note that the same scaling relation between the correlation length and the characteristic time of the bath has been found in dilute granular systems with a hydrodynamic approach \cite{Gradenigo2011} and in dense active systems \cite{Caprini1Darxiv}. Nevertheless, in these two translationally invariant systems the equivalent of the limit $\alpha \to 0$ is meaningless, because in the first case it removes the driving, while in the second one it implies a deterministic constant self-propulsion. In Fig. \ref{fig:CorrFunc_Scaling_FiniteSize} we show that the exponential to power law crossover and the scaling for $\xi$ derived in the large system limit are clearly visible also for finite size lattices. \begin{figure} \centering \includegraphics[width=0.4\textwidth]{Figure3a.png} \includegraphics[width=0.4\textwidth]{Figure3b.png} \caption{a) Spatial correlation function calculated via Eq. \eqref{eq:SpCorr:defSpCorr}. The entries of $\hat{\sigma}$ are obtained from Eq. \eqref{Eq::DiagoSigmaGeneral} with $t=s \gg 1$ and by diagonalising $\hat{A}$. The parameters of the system are: $L=500$, $\gamma=5$, $\beta=1+\alpha$ (i.e. Toeplitz condition) and $\alpha \in [0.002,5]$. We observe an exponential decay with a growing correlation length that turns into a power law when $\alpha=0$. b) Scaling of the correlation length obtained from an exponential fit of $\zeta^{\text{HHP}}_{1m}$ for different combinations of parameters. We can see that the relation $\xi=\alpha^{-1/2}$ does not depend on the microscopic details of the system. Quasi-Toeplitz cases are discussed in the next paragraph. In both panels we used $T_1=T_a=0.001$ and $T_L=0$. \label{fig:CorrFunc_Scaling_FiniteSize}} \end{figure} To discuss the characteristic time scales in the HHP as well, we note from Eq. \eqref{eq::ActiveEigenVal} that $\lambda_j > \gamma_a$ $\forall$ $j$, and so for finite $\alpha$ and $j/L \ll 1$ we have that: \begin{equation} \tau^{\text{HHP}}_j \sim 1/\gamma_a=\tau_a. \end{equation} This result is consistent with the fact that being correlated with a finite fraction of the system implies a finite time to receive the information that effectively determines the dynamics. To conclude the comparison between HHP and NHHP, we stress that the difference between the two phases originates in the structure of the eigenvalues of $\hat{A}$. In particular, for both space and time correlations, the crucial ingredient is that the spectrum of $\hat{A}$ accumulates at $\gamma_a$ for $L \gg 1$ (Eq. \eqref{eq::ActiveEigenVal}): it therefore accumulates at a finite value in the HHP and at zero in the NHHP. The crossover between the two phases is then governed by the limit $\alpha \to 0$, which leads to diverging correlation lengths and times. \subsection*{Beyond the Toeplitz case} Up to now we have considered the special case $\beta=1+\alpha$, for which $\hat{A}$ is a uniform Toeplitz matrix. Now we want to study the system with a general viscous constant $\gamma_b\neq \gamma+\gamma_a$ at the boundaries. Are the results obtained in the previous paragraphs still valid in this more general case? In order to answer this question, we follow a procedure, systematically explained in \cite{Vaia2013}, to diagonalise quasi-uniform Toeplitz matrices, i.e. matrices that deviate from the Toeplitz form only in a few outer rows and columns. It does not give analytical expressions for the eigenvalues and eigenvectors, but it constrains their form and allows one to find their values by numerically solving a set of transcendental equations.
To make our notation uniform with \cite{Vaia2013}, we note that $\hat{A}=\gamma(2+\alpha)\hat{I}-\gamma\hat{A}'$ where: \begin{equation} \label{Eq::MatricionePrime} \hat{A}'= \begin{pmatrix} x & 1 & & & \bm{0} \\ 1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 0 & 1 \\ \bm{0} & & &1& x \end{pmatrix} \end{equation} and $x=1-\beta+\alpha$, so that for $\beta=1+\alpha$ we recover the Toeplitz case. Defining $\lambda'_j$ ($S'_{jm}$) as the eigenvalues (eigenvectors) of $\hat{A}'$, we have $\lambda_j=\gamma(2+\alpha)-\gamma\lambda'_j$ and $S_{jm}=S'_{jm}$. If the eigenvalues are parametrized as $\lambda'_j=2\cos(k_j)$, then we can find them by solving: \begin{equation} \label{Eq::VaiaEigenVal} k_j=\frac{\pi j+2 \phi(k_j)}{L+1}, \quad \phi(k)=k-\tan^{-1}\left( \frac{\sin(k)}{\cos(k)-x}\right) \end{equation} which determines the allowed values of $k_j$. The entries of the eigenvector matrix $\hat{S}$ can then be obtained directly from the numerical solution of Eq. \eqref{Eq::VaiaEigenVal} \cite{Vaia2013}. \begin{figure} \centering \includegraphics[width=0.4\linewidth]{Figure4a.png} \label{fig:robaVaia_a} \includegraphics[width=0.4\linewidth]{Figure4b.png}\label{fig:robaVaia_b} \caption{a) Spatial correlation function for different quasi-Toeplitz cases in both the HHP and the NHHP. We can see that the two phases are stable also for large negative values of $x$. The entries of $\hat{\sigma}$ are obtained from Eq. \eqref{Eq::DiagoSigmaGeneral} for $t=s \gg 1$ and by diagonalising $\hat{A}$. b) Spectra of $\hat{A}$ for different values of $x$ and $\alpha=2.1$. The spectra always accumulate at the boundary of the band $[\gamma_a,4\gamma+\gamma_a]$, and out-of-band eigenvalues can occur only for $|x|>1$. We also note that in the range of interest for the NHHP ($x \in (-\infty,1)$) the spectra are always positive, assuring the stability of the system. In both panels we used $L=500$, $\gamma=5$, $T_1=T_a=0.001$ and $T_L=0$.} \label{fig:robaVaia} \end{figure} Having calculated all the $\lambda_j$ and $S_{jm}$, we can use Eq. \eqref{Eq::CeGb} in the stationary case to obtain the covariance matrix and consequently the correlation functions. In Fig. \ref{fig:robaVaia} we show the correlation function for some quasi-Toeplitz cases in both the HHP and the NHHP, finding the same asymptotic behavior obtained for the Toeplitz one in Fig. \ref{fig:CorrFunc_Scaling_FiniteSize}a. The scaling for $\xi$ in the HHP does not change either (see Fig. \ref{fig:CorrFunc_Scaling_FiniteSize}b). We note that the difference in terms of parameters between the Toeplitz and quasi-Toeplitz cases is that in the former we have just one dimensionless ratio of viscous constants, i.e. $\alpha=\gamma_a/\gamma$, while in the latter we can fix $\beta=\gamma_b/\gamma$ and $\alpha$ independently. Given the form in which the eigenvalues are parametrized, they can take values only in the band $\lambda'_j \in [-2,2]$, and equivalently $\lambda_j \in [\gamma_a,4\gamma +\gamma_a]$. Nevertheless, for large enough absolute values of $x$, out-of-band eigenvalues can occur \cite{Vaia2013}. This fact could compromise the existence of a stationary state in the NHHP, because $\hat{A}$ could cease to be positive semi-definite; a more refined inspection of the spectral properties is then needed. Since $\beta>0$ by definition, we are sure that $x\in(-\infty,1)$ in the NHHP.
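As an illustration, Eq. \eqref{Eq::VaiaEigenVal} can be solved by simple fixed-point iteration; the sketch below is ours, with the arctangent branch chosen so that $\phi=0$ in the Toeplitz case $x=0$, and the map is a contraction since its slope is $\mathcal{O}(1/L)$. The result can be cross-checked against a direct numerical diagonalisation of $\hat{A}'$:
\begin{verbatim}
import numpy as np

def vaia_k(L, x, n_iter=200):
    # Fixed point of k_j = (pi*j + 2*phi(k_j))/(L+1), with
    # phi(k) = k - atan2(sin k, cos k - x) (phi = 0 for x = 0).
    j = np.arange(1, L + 1, dtype=float)
    k = np.pi*j/(L + 1)                  # Toeplitz values as initial guess
    for _ in range(n_iter):
        phi = k - np.arctan2(np.sin(k), np.cos(k) - x)
        k = (np.pi*j + 2.0*phi)/(L + 1)
    return k

# Cross-check against direct diagonalisation of A' (in-band case |x| < 1)
L, x = 500, 0.4
Ap = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
Ap[0, 0] = Ap[-1, -1] = x
err = np.max(np.abs(np.sort(np.linalg.eigvalsh(Ap))
                    - np.sort(2.0*np.cos(vaia_k(L, x)))))
print(err)   # small when the whole spectrum lies inside the band
\end{verbatim}
For $|x|>1$ the two out-of-band eigenvalues discussed next are not captured by real $k_j$, so the comparison above applies to the in-band part of the spectrum only.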
For $L \gg 1$ and $|x|>1$, two out-of-band eigenvalues $\lambda^{\text{out}}_{1,2}$ emerge, converging to a common value given by $\lambda^{\text{out}}_{1,2}=\gamma(2+\alpha-x-x^{-1})$ that, in our case, is strictly positive, preventing any stability problem (see Fig. \ref{fig:robaVaia}b). Moreover, as shown in the same panel, the spectrum of $\hat{A}$ always accumulates at the boundary of the band independently of the value of $x$. This is also clear by taking $j/L \ll 1$ or $\sim 1$ in Eq. \eqref{Eq::VaiaEigenVal} and verifying that $k_j$ tends respectively to $0$ or $\pi$. Consequently, the $\lambda'_j$ always accumulate at $2$ and the $\lambda_j$ at $\gamma_a$. This generalizes our result about the power law decay in the NHHP (i.e. with $\gamma_a=0$) to any $\gamma_b>0$ because, as explained in the previous paragraphs, its origin lies in the accumulation of the $\lambda_j$ spectrum at zero (see also Fig. \ref{fig:robaVaia}). \subsection*{Correlations in the bulk and finite size effects} In the previous paragraphs we focused on the correlation function with respect to the first site, $\zeta_{1m}$, in the limit $L \gg m \gg 1$. These conditions, particularly in the NHHP, were crucial ingredients for the calculations. Moreover, in Fig. \ref{fig:CorrFunc_Scaling_FiniteSize}a and Fig. \ref{fig:robaVaia}a we have always shown the correlation function in the case $T_L=0$, in order to treat cases more compatible with our calculations, where the terms proportional to $T_L\sim \mathcal{O}(1/L)$ are neglected. In this condition the only source of stochasticity is the bath on the first site, so finite size effects do not substantially affect the shape of $\zeta_{1m}$, and the power law regime in the NHHP spans almost the whole system size. Here we want to discuss the behavior of spatial correlations between particles in the bulk (i.e. $\zeta_{jm}$ with $1\ll j,m \ll L$) and the finite size effects for $T_L\neq0$. In Fig. \ref{fig:bulkAndFinite} we show $\zeta_{j(j+m-1)}$ with $j=1,L/2$ for different values of $L$ and $\alpha$. In all the cases we have $T_1=T_a=T_L\neq0$. The correlation function with respect to $L/2$ is representative of the bulk, and we can see from Fig. \ref{fig:bulkAndFinite} that in the HHP it presents an exponential decay with a correlation length independent of $L$, while in the NHHP it decays more slowly than a power law: $\zeta^{\text{NHHP}}_{L/2(L/2+m-1)}$ remains essentially constant up to a sharp cutoff that increases with $L$. Regarding $\zeta^{\text{NHHP}}_{1m}$ for $T_L \neq 0$, we can still observe the power law decay $\sim m^{-2}$ predicted in the previous paragraphs, but with a sharp cutoff that occurs at large enough $m$ and depends on $L$. In Fig. \ref{fig:bulkAndFinite}b we show the same curves as a function of $m/(L/2)$, and we note that the cutoffs of the correlation functions in the NHHP collapse, signaling that their size scales linearly with $L$. In other words, this confirms that, even when boundary effects affect the shape of $\zeta_{jm}$, the NHHP presents \emph{scale-free} correlations: the only typical correlation length that one can define grows with the system size. As we expect, the correlation functions in the HHP separate when plotted as a function of $m/(L/2)$, because their decay is strictly set by $\alpha$ regardless of $L$.
\begin{figure} \centering \includegraphics[width=0.4\linewidth]{Figure5a.png} \includegraphics[width=0.4\linewidth]{Figure5b.png} \caption{a) Spatial correlation function with respect to the site $j=1,\frac{L}{2}$ for $\beta=2$, $T_1=T_a=T_L=0.001$ and different values of $\alpha$ and $L$. The entries of $\hat{\sigma}$ are obtained from Eq. \eqref{Eq::DiagoSigmaGeneral} for $t=s \gg 1$ and by diagonalising $\hat{A}$. b) Same curves shown in the left panel but as a function of the rescaled distance $m/(L/2)$. The collapse of the cutoffs is a signature of \emph{scale-free} correlations \cite{CavagnaPNAS2010}.}\label{fig:bulkAndFinite} \end{figure} \section*{Discussion} We studied spatial and temporal correlations in the NESS reached by a velocity field with viscous interactions defined on a lattice and coupled with Brownian baths. The model reproduces three main characteristics of vibrated granular matter at high density, i.e. dissipative forces, permanent contacts and non-homogeneous energy injection. The typical correlation lengths and times have a finite characteristic scale when the bulk particles are coupled to an external bath (HHP regime); however, such a scale diverges with the system size, as in a \emph{scale-free} scenario, when the thermal bath is removed from the bulk particles and kept acting on the boundary sites only (NHHP regime). Solving this model as a diagonalisable multivariate Ornstein-Uhlenbeck process, we unveiled the role of non-homogeneous heating in the development of slow and collective dynamics. We conclude that keeping the bath only at the boundaries allows for a driven NESS in which the internal (deterministic) dynamics, and the corresponding propagation of information and fluctuations, is not hindered by external disturbances. From a mathematical point of view, this is reflected in the spectral properties of the interaction matrix, whose spectrum accumulates at zero even in the presence of noise at the boundaries of the lattice. Our findings provide an example of a mechanism by which power law decays of correlations can occur out of equilibrium, shedding light on the emergence of collective behavior in dense granular matter. Further investigations of this model, considering both harmonic and viscous interactions, are promising steps towards the understanding of more general non-equilibrium systems such as active matter and biological assemblies. \section*{Supplemental Materials: Details of calculations} \subsection*{S1: Subleading terms in the large system limit} Here we show how, in the large system limit ($L \gg 1$), subleading terms $\sim 1/L$ arise. Starting from Eq. \eqref{eq:SpCorr:Sigma2} we consider the contribution proportional to $b_L^2$: \begin{equation} \label{eq:blTerm} b_L^2\Pi^2 \sum_{lk} \frac{\sin\left( jl\Pi \right)\sin\left( mk\Pi \right)\sin(lL\Pi)\sin(kL\Pi)}{\Delta(\alpha)-\cos\left( k\Pi \right)- \cos\left( l\Pi \right)} \end{equation} where $\Pi=\pi/(L+1)$ and we note that: $\sin(lL\Pi)\sin(kL\Pi)=\left( -1 \right)^{k+l+2}\sin(l\Pi)\sin(k\Pi)$.
Considering a generic function $f$ we can write \begin{multline} \Pi^2\sum_{lk}(-1)^{k+l+2} f(jl\Pi,mk\Pi)= \Pi^2\sum_{nh} \big[f(2jn\Pi,2mh\Pi)-f(2jn\Pi+j\Pi,2mh\Pi) + f(2jn\Pi+j\Pi,2mh\Pi+m\Pi) \\ -f(2jn\Pi,2mh\Pi+m\Pi) \big] \end{multline} which, taking the large system limit $L \gg 1$ and replacing sums with integrals as $\Pi \sum_{m=0}^{m=L/2} f(2m\Pi) \rightarrow \frac{1}{2}\int_0^\pi dx f(x)$, becomes: \begin{equation} \frac{1}{4} \int_0^\pi dz ds \left[ f(jz,ms)-f(jz+j\Pi,ms)+f(jz+j\Pi,ms+m\Pi)-f(jz,ms+m\Pi)\right] \sim \mathcal{O}(1/L), \quad L \gg 1, \quad m \vee j \ll L \end{equation} because all the terms in the integrand cancel at zeroth order. This explains why it is possible to neglect the term proportional to $b_L^2$ in Eq. \eqref{eq:SpCorr:Sigma2} once the large system limit is taken and for $j \vee m$ small enough. This is consistent with the idea that the effect of the bath acting on the $L$th site can be neglected only if $\sigma_{jm}$ is evaluated for sites far away from $L$. \subsection*{S2: Covariance matrix in the NHHP} Here we give some details of the calculations needed to derive the asymptotic predictions of Eqs. \eqref{eq:SpCorr:allPredictions} from Eq. \eqref{eq:SpCorr:Sigma3}. To do so we start from the latter equation in a form more suitable for the following calculations: \begin{equation} \sigma^{\text{NHHP}}_{jm}= \lim_{L\to \infty}\frac{4T_1}{\pi^2}\int_{\frac{\pi}{L+1}}^\frac{\pi L}{L+1} dz \int_{\frac{\pi}{L+1}}^\frac{\pi L}{L+1} ds\sin(jz)\sin(ms) g(z,s)\quad \text{where} \quad g(z,s)=\frac{\sin(z)\sin(s)}{2-\cos(z)-\cos(s)} \label{eq:SpCorr:SigmaLimit}. \end{equation} In this expression we have kept the large $L$ limit explicit because $g(z,s)$ is singular at the point $(0,0)$: its correct value at the origin comes from the large $L$ limit of the integration domain $[\frac{\pi}{L+1},\frac{\pi L}{L+1}]\times [\frac{\pi}{L+1},\frac{\pi L}{L+1}]$ in the $zs$ plane. More specifically, we have that $ 0\le g(z,s) \le 1$ $\forall z,s \in [0,\pi]$ and that $\lim_{z \to 0}g(z^a,z^b) \sim z^{a-b}$ if $a\ge b$. In the remainder, we take the integration intervals as $[\frac{\pi}{L+1},\pi]$ because the singularity is only at the origin. Integrating twice by parts and noting that $ g(\pi,s)=g(z,\pi)=0$ $ \forall $ $z,s$, we have: \begin{multline} \label{eq:sigmaByParts} \sigma^{\text{NHHP}}_{jm} = \lim_{L\to \infty}\frac{4T_1}{\pi^2jm} \Biggl[ \cos\left(\frac{j\pi}{L+1}\right) \cos\left(\frac{m\pi}{L+1}\right)g\left(\frac{\pi}{L+1},\frac{\pi}{L+1}\right) + \cos\left(\frac{m\pi}{L+1}\right)\int_{\frac{\pi}{L+1}}^\pi dz \cos\left(jz\right) \partial_zg\left(z,\frac{\pi}{L+1}\right) \\ + \cos\left(\frac{j\pi}{L+1}\right)\int_{\frac{\pi}{L+1}}^\pi ds \cos\left(ms\right) \partial_sg\left(\frac{\pi}{L+1},s\right) + \int_{\frac{\pi}{L+1}}^\pi ds dz \cos\left(jz\right) \cos\left(ms\right) \partial_{zs} g(z,s) \Biggr] . \end{multline} We want to show that $\sigma^{\text{NHHP}}_{jm}\sim (jm)^{-1}$, so we have to demonstrate that the sum of the terms in the square brackets is $\mathcal{O}(1)$ for $m,j \gg 1$ in the large $L$ limit. The first term clearly tends to $1$ when $L \to \infty$ regardless of the values of $j$ and $m$ (recall that $j,m \ll L$). Reintroducing $\Pi=\pi/(L+1)$ we can express Eq.
\eqref{eq:sigmaByParts} as: \begin{equation} \label{eq:sigmaConC} \sigma^{\text{NHHP}}_{jm} \sim \frac{4T_1}{\pi^2jm}\left[1+C_{jm} \right] \quad \text{where} \quad C_{jm}=\lim_{L\to \infty}\left[\cos(m\Pi)I_j + \cos(j\Pi)I_m + I_{jm} \right] \end{equation} and where $I_j$, $I_m$ and $I_{jm}$ are respectively the integrals of the second, third and fourth term in the square brackets of Eq. \eqref{eq:sigmaByParts}. Estimating the asymptotic behavior of these integrals is not trivial because of the presence of the derivatives of $g(z,s)$, which diverge at the origin. We then proceed by estimating upper bounds. It is important to note that, in order to demonstrate $\sigma^{\text{NHHP}}_{jm} \sim (jm)^{-1}$, requiring $C_{jm} \sim \mathcal{O}(1)$ or $|C_{jm}| \le 1$ is not enough, because it would allow contributions such as $-1 \pm o(1/j)$, which would imply a faster decay. We must instead show that $|C_{jm}| \le c$ with $c<1$: in this way we can be sure that $C_{jm}$ cannot cancel the 1 in Eq. \eqref{eq:sigmaConC}. Starting with $I_j$, we define $u(z)=\partial_{z} g(z,\frac{\pi}{L+1})$ and rewrite it as: \begin{equation} I_j=\int_{\frac{\pi}{L+1}}^{\pi+\frac{\pi}{L+1}} dz \cos\left(jz\right) u(z) + \mathcal{O}(1/L) \end{equation} Now we note that the interval of integration is much larger than the period $T_j=\frac{2\pi}{j}$ of the cosine, so we can split it into a sum of contributions over consecutive periods. Without loss of generality we can assume $j$ even and exploit the periodicity of the cosine, obtaining: \begin{equation} \label{eq::I1firstPass} I_j=\sum_{k=1}^{k=j/2}\int_{(k-1)T_j+\Pi}^{k T_j+ \Pi} dz \cos(jz)u(z)=\frac{1}{j}\int_{\Pi}^{2\pi+ \Pi} dx \cos(x)\sum_{k=1}^{k=j/2} u\left( \frac{x}{j}+(k-1)T_j \right) \end{equation} where we have changed variables as $x=jz+2\pi(k-1)$ and reintroduced the symbol $\Pi=\frac{\pi}{L+1}$. Now we use the fact that $T_j \ll 1$ to exchange the sum over $k$ with an integral as $\sum_k f\left((k-1)T_j\right) \to T_j^{-1}\int d\phi_j f(\phi_j)$ and return to an expression with $g$: \begin{equation} \label{eq:I1} I_j=\frac{1}{2\pi}\int_{\Pi}^{2\pi+ \Pi} dx \cos(x)\int_0^{\pi-\frac{2\pi}{j}}d\phi_j u\left( \frac{x}{j}+\phi_j \right)=\frac{1}{2\pi}\int_{\Pi}^{2\pi+ \Pi} dx \cos(x) \left[ g\left( \frac{x}{j} + \pi - \frac{2\pi}{j} ,\Pi \right)-g\left( \frac{x}{j} , \Pi \right) \right] . \end{equation} The function $g$ can be regularly expanded in series around the point $(\pi,0)$. Doing this, it is easy to verify that the integral of the first term in the brackets gives $\mathcal{O}(1/j)$ contributions. We cannot perform such an estimate for $g(x/j,\Pi)$ because the derivatives near the origin are not well defined. Nevertheless, we know that $g(x/j,\Pi) \in [0,1]$ $\forall$ $x \in [\Pi,2\pi+\Pi]$ if $j$ is sufficiently large, so we can estimate an upper bound for $I_j$ (and $I_{m}$) as $\lim_{L\to \infty}|I_{j(m)}| \le 1/\pi$ for $j \gg 1$. This happens because, given a $2\pi$-wide interval $T$ with $T_{+(-)}$ the sub-interval where the cosine is positive (negative), and $g(x) \in [0,1]$ for $x \in T$, we can write: \begin{equation} \label{eq:ineq} \Bigg|\int_T \cos(x) g(x)\Bigg|=\Bigg|\bigg|\int_{T_+} \cos(x)g(x)\bigg|-\bigg|\int_{T_-} \cos(x)g(x)\bigg|\Bigg| \le \frac{1}{2}\int_T \big|\cos(x)\big|=2 \end{equation} With the same kind of calculations leading to Eq.
\eqref{eq:I1} we obtain: \begin{equation}\label{eq:I12} I_{jm}=\frac{1}{4\pi^2}\int_{\Pi}^{2\pi+\Pi} dx dy \cos(x)\cos(y)g\left(\frac{x}{j},\frac{y}{m}\right) +\mathcal{O}((mj)^{-1}). \end{equation} Using inequalities similar to those of Eq. \eqref{eq:ineq} but for 2D integrals, we estimate the upper bound of Eq. \eqref{eq:I12} as $\lim_{L\to \infty}|I_{jm}| \le 2/\pi^2$ for $j,m \gg 1$. Putting these results together in the definition of $C_{jm}$ of Eq. \eqref{eq:sigmaConC}, we are sure that in the large $L$ limit: \begin{equation} |C_{jm}|\le \lim_{L\to \infty} \left[|I_j| + |I_m| + |I_{jm}| \right]= \frac{2}{\pi}\left(1+\frac{1}{\pi}\right) \simeq 0.83926 < 1 \quad \text{for} \quad j,m \gg 1 \end{equation} We conclude that $\sigma^{\text{NHHP}}_{jm} \sim (jm)^{-1}$, from which Eq. \eqref{Eq::sigmaijA} is straightforward. It is important to note that, in order to obtain Eqs. \eqref{eq::I1firstPass} and \eqref{eq:I1}, we need both $j$ and $m \gg 1$, so we have to estimate the asymptotic behavior of $\sigma^{\text{NHHP}}_{1m}$ in another way. It can be rewritten as \begin{equation} \sigma^{\text{NHHP}}_{1m} = \frac{4T_1}{\pi^2} \int_{0}^{\pi} dzds \sin(ms) g_1(z,s) \quad \text{where} \quad g_1(z,s)=\frac{\sin^2(z)\sin(s)}{2-\cos(z)-\cos(s)} \end{equation} and $g_1$ is regular at the origin because $\lim_{z \to 0}g_1(z^a,z^b)=0$ $\forall$ $a,b > 0$. We can perform the integral over $z$, obtaining $\int_0^{\pi} dz\, g_1(z,s)=\pi\left[ 2 -\cos(s) -\sqrt{6-2\cos(s)}\sin(s/2) \right]\sin(s)$, where the first two terms in the brackets give no contribution once the integral over $s$ is also performed ($m$ is an integer). We now have $\sigma^{\text{NHHP}}_{1m} = -\frac{4T_1}{\pi} \int_{0}^\pi ds \sin(ms)f(s)$, where $f(s)=\sin(s)\sqrt{6-2\cos(s)}\sin(s/2)$. Integrating by parts four times and noting that $f(0)=f(\pi)=f''(\pi)=0$ while $f''(0)=2$, we obtain: \begin{equation} \sigma^{\text{NHHP}}_{1m} = \frac{8T_1}{\pi m^3} + R_m \sim \frac{8T_1}{\pi m^3} + \mathcal{O}(m^{-5}) \quad m \gg 1 \end{equation} where $R_m=-\frac{4T_1}{\pi m^{4}}\int_0^\pi ds \sin(ms)f^{(4)}(s)$; a further integration by parts shows that this last integral is $\mathcal{O}(m^{-1})$, so that $|R_m|=\mathcal{O}(m^{-5})$. The last quantity needed for Eqs. \eqref{eq:SpCorr:allPredictions} is $\sigma^{\text{NHHP}}_{11}=\frac{4T_1}{\pi^2}\int_0^{\pi} dzds \sin(z)\sin(s)g(z,s)=\frac{4T_1}{\pi^2}\left(\pi^2-\frac{8\pi}{3}\right)$, which is finite and does not depend on $m$, so the asymptotic behavior of $\zeta_{1m}$ follows directly from the ones derived for Eqs. \eqref{Eq::sigmaijA} and \eqref{Eq::sigma1jB}. \subsection*{S3: Covariance matrix in the HHP} In order to derive Eq. \eqref{eq::sigmaijHHPfull} from Eq. \eqref{eq:SpCorr:Sigma2} we have to discuss the contributions coming from the sum $\sum_n b_n^2 \sin(ln\Pi)\sin(kn\Pi)$ that appears in the latter. As explained in S1, the term proportional to $b^2_L$ gives a subleading term $\mathcal{O}(1/L)$ in the large system limit, while the one proportional to $b_1^2$ gives $4T_1(1+\alpha)(\pi)^{-2}\Sigma_{jm}(\alpha)$. Regarding the other contributions, we exploit orthogonality to express the remaining sum as: \begin{equation} \sum_{n=2}^{n=L-1}\sin(ln\Pi)\sin(kn\Pi)= \frac{L+1}{2}\delta_{kl}-\sin(l\Pi)\sin(k\Pi)-\sin(lL\Pi)\sin(kL\Pi) \end{equation} where again the last term gives $\mathcal{O}(1/L)$ for $L\gg1$. Thus, using this equation and neglecting subleading terms, Eq.
\eqref{eq:SpCorr:Sigma2} becomes: \begin{equation} \sigma_{jm}(\alpha)= \Pi^2\sum_{lk} \frac{\sin\left( jl\Pi \right)\sin\left( mk\Pi \right)}{\Delta(\alpha)-\cos\left( k\Pi \right)- \cos\left( l\Pi \right)}\left[ \frac{2\alpha T_a (L+1)}{\pi^2}\delta_{kl}+\frac{4T_1}{ \pi^2}\left(1+\alpha\left(1-\frac{T_a}{T_1}\right)\right)\sin(l\Pi)\sin(k\Pi)\right] \end{equation} which in the large system limit gives Eq. \eqref{eq::sigmaijHHPfull}. In the main text we proceed from Eq. \eqref{eq::sigmaijHHPfull} by considering constant noise amplitudes, i.e. $T_1=T_a \gamma_a/(\gamma+\gamma_a)$. In this way the term proportional to $\Sigma_{jm}(\alpha)$ vanishes and one can shorten the calculations, concentrating just on the integral over $z$. To verify that the asymptotic behavior of Eq. \eqref{eq::ExpDecay} holds also without constant noise amplitudes, we have to show that $\Sigma_{jm}(\alpha)$ does not decay more slowly than $\exp({-\sqrt{\alpha}m})$. We then consider the Fourier transform $\tilde{\Sigma}_{j\omega} (\alpha)=\int dm \exp(i\omega m) \Sigma_{jm}(\alpha)$ for small $\omega$: \begin{equation} \tilde{\Sigma}_{j\omega}\sim \int_0^\pi dz \frac{\sin(jz)\sin(z)\omega}{1+\alpha-\cos(z)+\frac{\omega^2}{2}} \quad \text{so} \quad \Sigma_{jm} \sim \int_0^\pi dz \frac{\sin(jz)\sin(z)}{1+\alpha-\cos(z)}\exp(-m\sqrt{2(1+\alpha-\cos(z))}) \end{equation} and from this last expression it is simple to show that $|\Sigma_{jm}| \le \frac{\pi}{\alpha} \exp(-\sqrt{2\alpha}m)$. We are then sure that its behavior for large $m$ is subleading with respect to $\exp(-\sqrt{\alpha}m)$. To complete the discussion of the exponential decay in the HHP, we need to evaluate the integral of Eq. \eqref{eq::asyTemp}. Writing $\sin^2(mz)=\frac{1}{2}\left[1-\cos(2mz)\right]$, noting that the $\cos(2mz)$ term gives a contribution that vanishes for large $m$, and using $\int_0^{\pi}dz/\left(\Delta-2\cos(z)\right)=\pi/\sqrt{\Delta^2-4}$, we obtain: \begin{equation} \frac{2\alpha T_a}{\pi}\int_{0}^{\pi}dz\frac{\sin^2(mz)}{\Delta(\alpha)-2 \cos(z)}=\frac{\alpha T_a}{\sqrt{\Delta^2(\alpha)-4}}+o(m^{-1})=T_a\sqrt{\frac{\alpha}{4+\alpha}}+o(m^{-1}) \end{equation} since $\sqrt{\Delta^2(\alpha)-4}=\sqrt{\alpha(4+\alpha)}$, which is the constant asymptotic temperature quoted in Eq. \eqref{eq::asyTemp}. \subsection*{S4: Spatial correlation in the cooling state} An important question that often arises in granular systems regards the relation between the properties of the cooling dynamics and those of the NESS obtained with the injection of energy. In our case we obtain the cooling state by switching off all the temperatures in the lattice (the matrix $\hat{B}$ then has all zero entries). In this situation the covariance matrix is simply given by Eq. \eqref{Eq::CeGa}, where the brackets $\langle \rangle$ refer to an average over the initial conditions. Exploiting the symmetry of $\hat{A}$ we can rewrite it as: \begin{equation} \sigma_{jm}(t,s)= \sum_{nhkl} S_{jn} e^{-\lambda_n t} S^+_{nk} \langle v_k(0) v_l(0) \rangle S^+_{hl} e^{-\lambda_h s} S_{mh} \end{equation} Taking initial conditions independently and identically distributed around $0$ with unit variance, so that $\langle v_{k}(0)v_l(0) \rangle=\delta_{kl}$, and exploiting the orthogonality of the eigenvectors, we have: \begin{equation} \sigma_{jm}(t,s)= \sum_n S_{jn} e^{-\lambda_n(t+s)}S^+_{nm} \end{equation} In the Toeplitz case for $t=s$ this becomes: \begin{equation}\label{eq:sigmaCool} \sigma_{jm}(t)= \frac{\exp(-2(2\gamma+\gamma_a)t)\Pi}{\pi}\sum_n \sin\left( jn\Pi \right)\sin\left( nm\Pi \right)\exp\left( 4\gamma t \cos \left( n\Pi \right) \right).
\end{equation} where we note that for $t=0$, $\sigma_{jm}(0)=\delta_{jm}$, as imposed by the initial state. The same uncorrelated condition, expected for non-interacting systems, is also obtained with $\gamma=0$. Another important property of $\sigma_{jm}(t)$ is that the dependence on $\gamma_a$ factors out of the sum, so, when calculating $\zeta_{jm}=\sigma_{jm}/\sqrt{\sigma_{jj}\sigma_{mm}}$, it cancels. Moreover, the dependence on $\gamma$ can also be removed simply by using the dimensionless time $\tilde{t}=\gamma t$. To conclude, during the cooling the behavior of spatial correlations is crucially different from the one observed in the two heated phases studied in the main text. In particular, the parameter $\alpha$ does not play a crucial role as in the NESS. This is an intriguing result, because it shows that an external source of energy does something more than just keep alive the dynamics that characterizes the system when it cools down. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{coolingProp.png} \includegraphics[width=0.4\textwidth]{coolingPropResc.png} \caption{Spatial correlation function in the cooling state after different times $\tilde{t}$. We observe a collapse when rescaling the horizontal axis by $\sqrt{\tilde{t}}$.} \label{fig:cooling} \end{figure} In Fig. \ref{fig:cooling} we show $\zeta_{1x}(\tilde{t})$ for different times $\tilde{t}$, and we clearly observe that it presents a finite cutoff that grows with the time $\tilde{t}$. We can understand this by noting that information propagates through the system in time. In Fig. \ref{fig:cooling}b we show how, rescaling distance by $\sqrt{\tilde{t}}$, all the curves collapse; the information thus propagates as $\xi(t)\propto \sqrt{\gamma t}$. This result is fully consistent with the diffusion-like coarsening dynamics of vortices found in other models for granular velocity fields \cite{van1997mesoscopic,Baldassa2002,baldassarri2015coarsening}. In those models, however, the cooling state is closer to "dilute" situations where interactions are sequences of separate binary collisions.\\ \subsection*{S5: Reintroduction of space and connection with active matter} Although it is reasonably justified by empirical observations, neglecting the positional dynamics remains the main approximation of our model. A way to reintroduce it in our description is to consider a harmonic potential between nearest neighbors in the lattice. The equations of motion for each particle would then take the form \begin{subequations} \label{eq::conSpace} \begin{align} \dot{x}_i=v_i \\ \dot{v}_i=-(\gamma_{a(b)}+2\gamma)v_i-2kx_{i}+k(x_{i+1}+x_{i-1})+\gamma(v_{i+1}+v_{i-1})+\sqrt{2T_{a(i)}\gamma_{a(b)}}\xi_i(t) \end{align} \end{subequations} where we consider again a bath at the boundaries characterized by ($\gamma_b$, $T_{1(L)}$) and a bath in the bulk ($\gamma_a$, $T_{a}$). It is interesting to note that we can obtain equations of the same form when considering a 1D chain of (overdamped) active particles with harmonic interactions, where self-propulsion is modeled using a colored noise $\eta$ (AOUP): \begin{subequations} \begin{align} \dot{x}_i=-k(x_{i}-x_{i+1})-k(x_{i}-x_{i-1})+\eta_i(t) \\ \dot{\eta}_i=-\gamma_a\eta_i+\sqrt{2T_a\gamma_a}\xi_i(t) \end{align} \end{subequations} where $\xi_i$ are Gaussian white noises with unit variance.
Taking the time derivative of the first of these equations and using $\eta_i=\dot{x}_i+k(2x_{i}-x_{i+1}-x_{i-1})$ to eliminate the colored noise, we get~\cite{maggi2015multidimensional}: \begin{subequations}\label{eq::activeChainPassaggioCarino} \begin{align} \dot{x}_i=v_i \\ \dot{v}_i=-2k\gamma_a x_{i} -(\gamma_a +2k)v_i +k\gamma_a (x_{i+1}+x_{i-1}) +k(v_{i+1}+v_{i-1}) +\sqrt{2T_a\gamma_a}\xi_i(t) \end{align} \end{subequations} which are formally equivalent to Eqs. \eqref{eq::conSpace}. If we consider the particles fixed on the lattice and neglect the positional dynamics, we find the analog of the granular case studied in the main text, with a transition at $\gamma_a=0$. While in the granular chain removing the bath from the bulk corresponds to a specific and realistic physical condition (granular materials are often driven only through the boundaries), in the active case it seems meaningless. A self-propelled harmonic chain modeled by Eqs. \eqref{eq::activeChainPassaggioCarino} has been studied taking into account the positional dynamics and assuming spatially homogeneous self-propulsion \cite{Caprini1Darxiv}. The authors perform calculations based on translational invariance (they solve the system in the Bravais reciprocal lattice). This assumption is crucial, and it is also the main difference from our approach, in which we are interested in the effect of non-homogeneous heating. The interesting connection with our investigation is that they found a correlation length that scales as $\xi\sim\sqrt{1/\gamma_a}$, as in our case \cite{Caprini1Darxiv}. The study of correlations in this kind of 1D system with both positional dynamics and non-homogeneous heating is, to our knowledge, still lacking. We are currently working in this direction.
\section{Introduction} Emission of light fragments (LF) from nuclear reactions is an open question. Different reaction mechanisms contribute to their production; the relative roles of each, and how they change with incident energy, mass number of the target, and the type and emission energy of the fragments, are not completely understood. None of the available models are able to accurately predict emission of LF from arbitrary reactions. However, the ability to describe production of LF (especially at energies $\gtrsim 30$~MeV) from many reactions is important for different applications, such as cosmic-ray-induced Single Event Upsets (SEUs), radiation protection, and cancer therapy with proton and heavy-ion beams, to name just a few. The Cascade-Exciton Model (CEM) \cite{CEMModel} version 03.03 and the Los Alamos version of the Quark-Gluon String Model (LAQGSM) \cite{LAQGSM, ICTP-IAEAWorkshop} version 03.03 event generators in the Monte Carlo N-Particle Transport Code version 6 (MCNP6) \cite{MCNP6} describe quite well the spectra of fragments with sizes up to $^{4}$He across a broad range of target masses and incident energies (up to $\sim 5$~GeV for CEM and up to $\sim 1$~TeV/A for LAQGSM). However, they do not predict well the high-energy tails of the spectra of LF heavier than $^4$He. Most LF with energies above several tens of MeV are emitted during the precompound stage of a reaction. The current versions of the CEM and LAQGSM event generators do not account for precompound emission of LF larger than $^{4}$He. The aim of our work is to extend the precompound model in them to include such processes, leading to an increase of the predictive power of LF production in MCNP6. This entails upgrading the Modified Exciton Model currently used at the preequilibrium stage in CEM and LAQGSM. It will also include expansion and examination of the coalescence and Fermi break-up models used in the precompound stages of spallation reactions within CEM and LAQGSM. Extending our models to include emission of fragments heavier than $^4$He at the precompound stage has already provided preliminary results in much better agreement with experimental data. \section{Why This Research Is Needed} In October 2008, an Airbus en route from Perth to Singapore was struck by a cosmic ray; one of its inertial reference computer units failed, and the plane sharply lost altitude \cite{NeciaGrantCooper}. It did land safely, but as seen in Figure~\ref{fig:Airbus}, the event caused significant injury to the occupants and damage to the plane. \begin{figure} [htp] \begin{center} \includegraphics[height=2.0in]{AirbusSEU1} \includegraphics[height=2.0in]{AirbusSEU2} \caption[]{Photographs of the damaged Airbus after the SEU \cite{NeciaGrantCooper}.} \label{fig:Airbus} \end{center} \end{figure} These SEUs are not rare, and can wreak significant havoc. For example, in a typical 14-day space mission the shuttle's five computers receive 400-500 SEUs \cite{Singleterry}. In addition, even though the plane accident was serious, much more serious incidents have occurred: during the Cold War a U. S. satellite was hit by a cosmic ray and reported that there had been a nuclear missile launch heading toward the U. S. \cite{CountdownToZero}. The U. S. went on high alert and readied its nuclear weapons. Thankfully they were never launched. Understanding how high-energy fragments interact with matter is critical to preventing these malfunctions.
Accurate simulation of LF spectra is also important in the field of radiation shielding, especially for applications in space. Modern computers cannot be used in space because the electronics are too small and delicate and cannot, at present, be shielded well enough. An even larger problem is radiation shielding for the human astronauts exposed to Galactic Cosmic Rays (GCRs) \cite{Singleterry}. This research is also important to several medical fields, such as cancer treatment with proton or heavy-ion beams. Proton and heavy-ion therapy has been shown to be more effective than x-ray therapy and to have much fewer side effects \cite{Protons}. Another indication of the importance of this research is the recommendation of an international evaluation and comparison, the 2008-2010 IAEA (International Atomic Energy Agency) Benchmark of Spallation Models, that we make this change in our code \cite{SecondAdvancedWorkshop,IAEABenchmark}. While no other spallation model can generally predict high-energy light fragment emission from arbitrary reactions, it is an accomplishment several model development groups are working to achieve. Furthermore, MCNP6's GENXS option at present does not produce tallies for particles larger than $^{4}$He. This limitation is serious for some of our interest groups. For example, NASA recently contacted one of us (SGM) to inquire whether our codes could produce LF spectra in the intermediate- and high-energy regimes. At present they cannot. Last, but not least, this research helps us better understand the mechanisms of nuclear reactions. \section{Current Capabilities of CEM03.03} \subsection{Overview of the CEM Model} \begin{figure}[htp] \centering \includegraphics[width=4.5in]{CEMModelOverview.jpg} \caption[]{Flowchart of nuclear-reaction calculations by CEM03.03 \cite{ICTP-IAEAWorkshop}.} \label{fig:Flowchart} \end{figure} As a rule, a reaction begins with the IntraNuclear Cascade, referred to as either the INC or the Cascade (see Fig.~\ref{fig:Flowchart}). The incident particle or nucleus (in the case of using LAQGSM) enters the target nucleus and begins interacting with nucleons, scattering off them and also often creating new particles in the process. The incident particle and all newly created particles are followed until they either escape from the nucleus or reach a threshold energy (roughly 10-30 MeV per nucleon) and are then considered ``absorbed" by the nucleus. The preequilibrium stage uses the Modified Exciton Model (MEM) to determine emission of protons, neutrons, and fragments up to $^4$He from the residual nucleus. We discuss the MEM in more detail below. This stage can have a highly excited residual nucleus undergoing dozens of exciton transitions and particle emissions. The preequilibrium stage ends when the residual nucleus is just as likely to have a $\Delta n = +2$ exciton transition as a $\Delta n = -2$ exciton transition. In the evaporation stage, neutrons and protons in the outer shells of the residual nucleus can ``evaporate" off, either singly or as fragments. The CEM evaporation stage is modeled after Furihata's Generalized Evaporation Model (GEM2) \cite{GEM2}, and can emit light fragments up to $^{28}$Mg. During and after evaporation, the code checks whether the residual isotope has $Z \geq 65$ and is fissionable. If it is, and fission occurs, then the code follows the evaporation stage for the fission fragments.
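To orient the reader, the staging just described — together with the coalescence and Fermi break-up models discussed in the next paragraph — can be summarized by the following schematic Python sketch. It is purely illustrative: every stage function is a hypothetical stand-in for the corresponding CEM routine, the returned particles are dummies, and in the actual code the Fermi break-up condition is checked wherever a light residual appears, not only once:
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Nucleus:
    Z: int        # charge number
    A: int        # mass number
    E: float      # excitation energy (MeV)

# Hypothetical stand-ins for the physics of each stage:
def intranuclear_cascade(proj, target):
    return ["n", "p"], Nucleus(target.Z - 1, target.A - 2, 50.0)

def coalescence(ejectiles):
    return ejectiles            # would merge INC nucleons with close momenta

def fermi_breakup(residual):
    return ["4He", "t"]         # used for light residuals (A <= 13)

def preequilibrium_MEM(residual):
    return ["p"], Nucleus(residual.Z - 1, residual.A - 1, 10.0)

def evaporation_and_fission(residual):
    return ["n", "n"]           # GEM2-like; fission checked for Z >= 65

def simulate_event(proj, target):
    ejectiles, residual = intranuclear_cascade(proj, target)
    ejectiles = coalescence(ejectiles)
    if residual.A <= 13:
        return ejectiles + fermi_breakup(residual)
    emitted, residual = preequilibrium_MEM(residual)
    return ejectiles + emitted + evaporation_and_fission(residual)

print(simulate_event("p", Nucleus(13, 27, 0.0)))
\end{verbatim}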
There are two models that are not directly part of this linear progression: coalescence and Fermi break-up (see Fig.~\ref{fig:Flowchart}). The Cascade stage only emits neutrons, protons, and pions (and other particles, in the case of using LAQGSM at high energies), so the coalescence model ``coalesces" some of the neutrons and protons produced during the INC into larger fragments, by comparing their momenta. If their momenta are similar enough, then they coalesce. The current coalescence model can only coalesce up to a $^4$He fragment, the same as the preequilibrium stage. The Fermi break-up is an oversimplified multifragmentation model that is fast and accurate for small atomic numbers, so we use it when the residual mass number is less than or equal to 13. \subsection{Comparison with Experimental Data by Machner et al.} Figure~\ref{fig:p200AlCompOld} shows the double-differential cross section of the reaction 200 MeV p + $^{27}$Al $\rightarrow$ $^{6}$Li, comparing the experimental data of Machner et al. \cite{Machner} (open points) and unmodified CEM03.03 (solid red lines). \begin{figure}[htp] \centering \includegraphics[trim = 0.5in 3.5in 1.0in 3.5in, width=6.0in]{p200AlCompOld.pdf} \caption[]{Comparison of CEM03.03 (solid red lines) and experimental data by Machner et al. \cite{Machner} (open points).} \label{fig:p200AlCompOld} \end{figure} The vertical axis presents the double-differential cross sections. The horizontal axis shows the kinetic energy of the emitted particles ($^6$Li in this case) in MeV. The different data bands represent $^6$Li detected (or simulated) at different angles, and are separated out by multiplying each band by a different factor of 10. As can be seen, the current version of CEM does not predict the high-energy tails of the $^6$Li spectra well. This is true across other reaction energies and target mass numbers for all fragments heavier than $^4$He at higher emission energies. At lower energies ($\lesssim 25$~MeV) CEM matches the data well, but as we enter intermediate energies ($\gtrsim 25$~MeV) CEM falls off sharply. This is because the peak occurring at lower energy is a result of the evaporative stage, which does consider emission of LF (up to $^{28}$Mg). The intermediate section of the fragment spectral tail (up to $\sim 150$~MeV) is largely produced by the MEM within the preequilibrium stage. The higher-energy tail of the fragment spectra is largely produced by coalescence, also a precompound-stage process. Neither the MEM nor the coalescence model presently considers emission of light fragments heavier than $^4$He. \section{Emission of High-Energy LF in Other Models} This paper focuses on the emission of high-energy LF at the preequilibrium stage of nuclear reactions. However, high-energy LF can be produced at other stages of reactions. For example, Cugnon {\it et al.} have modified their Li\`{e}ge IntraNuclear Cascade (INCL) code to consider emission of light fragments heavier than $^4$He during the cascade stage of reactions via coalescence of several nucleons at the nuclear periphery \cite{Cugnon}. These modifications have not yet been generalized across all types of reactions. In addition, the INCL+ABLA model is limited to relatively light incident projectiles (particles and light ions, typically up to oxygen). Several previous papers by the same group discuss the production of light fragments up to $A=10$ (see, e.g., \cite{Cugnon2010, Cugnon2011}).
A recent 2013 paper by the same authors presents satisfactory results for emission spectra of $^6$He, $^6$Li, $^7$Li, and $^7$Be in the reaction $p + ^{197}Au \rightarrow ...$ and discusses emission of clusters up to $A = 12$ \cite{INCL4.6}. Emission of $^7$Be at the preequilibrium stage (described by a hybrid exciton model and coalescence pick-up model) was studied by A. Yu. Konobeyev and Yu. A. Korovin more than a decade ago \cite{Konobeyev}. Additionally, preequilibrium emission of helium and lithium ions and the necessary adjustments to the Kalbach systematics were discussed in Ref. \cite{Uozumi}. Preequilibrium emission of light fragments was also studied within the CEM in 2002 \cite{CEM2k2f}, but that project was never completed. Finally, energetic fragments can be produced via Fermi break-up \cite{Fermi} and multifragmentation processes, as described, e.g., by the Statistical Multifragmentation Model (SMM) \cite{SMM} (see a comparison of the Fermi break-up model with SMM in the recent paper by Souza {\it et al.} \cite{Souza2013}). Light fragments can also be emitted during the compound stage of reactions. GEM2, the evaporation model used in CEM, emits light fragments up to $^{28}$Mg \cite{GEM2}. In addition, light fragments can be produced via very asymmetric binary fission, as described, e.g., by the fission-like binary decay code GEMINI by Charity et al. \cite{Charity01}, and also via ternary fission. For more information, see the recent Ref. \cite{Ronen}, wherein Y. Ronen discusses the physics of light-fragment production in ternary fission. However, neither evaporation nor fission processes can produce the high-energy fragments of interest to our current study. \section{The Modified Exciton Model (MEM)} \subsection{MEM Code} Let us present below an in-depth description of the MEM calculations. The flowchart in Figure~\ref{fig:MEMFlowchart} describes the calculations and processes performed in the MEM. \begin{figure}[htp] \centering \includegraphics[width=5.5in]{MEMFlowchart.pdf} \caption[]{Flowchart for emission of light fragments in the MEM code.} \label{fig:MEMFlowchart} \end{figure} \subsection{MEM Physics} The probability of finding the system at the time moment $t$ in the $E\alpha$ state, $P(E,\alpha,t)$, is given by the following differential equation: \begin{equation} \frac{\delta P(E,\alpha,t)}{\delta t} = \sum_{\alpha \neq \alpha '}[\lambda(E\alpha,E\alpha')P(E,\alpha',t) - \lambda(E\alpha',E\alpha)P(E,\alpha,t)] . \label{Master} \end{equation} Here $\lambda(E\alpha,E\alpha')$ is the energy-conserving probability rate, defined to first order in time-dependent perturbation theory as \begin{equation} \lambda (E\alpha,E\alpha') = \frac{2\pi}{h} |<E\alpha|V|E\alpha'>|^2 \omega_{\alpha}(E) . \label{LambdaGeneral} \end{equation} The matrix element $<E\alpha|V|E\alpha'>$ is believed to be a smooth function of energy, and $\omega_\alpha(E)$ is the density of the final state of the system. One should note that Eq.~(\ref{Master}) is derived provided that the ``memory" time $\tau_{mem}$ of the system is small compared to the characteristic time for an intranuclear transition, $\sim\frac{\hbar}{\lambda(E\alpha,E\alpha')}$; on the other hand, Eq.~(\ref{Master}) itself is applicable for time moments $t \gg \frac{\hbar}{\lambda(E\alpha,E\alpha')}$. Due to the condition $\tau_{mem} \ll \frac{\hbar}{\lambda(E\alpha,E\alpha')}$, the random process described by Eq.~(\ref{Master}) is Markovian.
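Because the waiting times of such a Markovian jump process are exponentially distributed, the process is naturally simulated event by event. A minimal sketch of this standard step (our illustration, in units where $\hbar=1$; the rates array stands for the set of $\lambda(E\alpha',E\alpha)$ out of the current state) anticipates the Monte Carlo treatment described next:
\begin{verbatim}
import numpy as np

def next_jump(rates, rng=np.random.default_rng()):
    # Total rate = sum of the partial rates out of the current state;
    # draw the exponential waiting time, then pick the destination state.
    total = rates.sum()
    dt = rng.exponential(1.0/total)
    idx = rng.choice(len(rates), p=rates/total)
    return dt, idx
\end{verbatim}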
The Modified Exciton Model (MEM) \cite{CEMModel, Gudima, MODEX} utilized by CEM and LAQGSM effectively uses the relationship of the master equation (\ref{Master}) with Markovian random processes. Indeed, the attainment of the statistical equilibration described by Eq.~(\ref{Master}) is an example of a discontinuous Markovian process: the temporal variable changes continuously, and at a random moment the state of the system changes by a discontinuous jump, the behavior of the system at the next moment being completely defined by its present state. As long as the transition probabilities $\lambda(E\alpha,E\alpha')$ are time independent, the waiting time for the system in the $E\alpha$ state has the exponential distribution (the Poisson flow) with the average lifetime $\frac{\hbar}{\Lambda(\alpha,E)} = \frac{\hbar}{\sum_{\alpha'}\lambda(E\alpha,E\alpha')}$. This fact prompts a simple method of solving the related system of Eq.~(\ref{Master}): simulation of the random process by the Monte Carlo technique. In this treatment it is possible to generalize the exciton model to all nuclear transitions with $\Delta n = 0, \pm 2$, to the multiple emission of particles, and to the depletion of nuclear states due to particle emission. In this case the system (\ref{Master}) is as follows:~\cite{Mashnik1994} \begin{equation} \begin{split} \frac{\delta P(E,\alpha,t)}{\delta t} = & -\Lambda(n,E)P(E,n,t) + \lambda_+(n-2,E)P(E,n-2,t) + \\ & + \lambda_0(n,E)P(E,n,t) + \lambda_-(n+2,E)P(E,n+2,t) + \\ & + \sum_j \int dT \int dE' \lambda_c^j (n,E,T)P(E',n+n_j,t)\delta(E' -E-B_j-T) . \end{split} \label{Probability} \end{equation} Now we solve our master equation, Eq.~(\ref{Probability}), by finding the particle emission rates $\lambda_c^j$ and the exciton transition rates $\lambda_+$, $\lambda_0$, and $\lambda_-$. \vspace*{0.25cm} {\noindent \bf \em Particle Emission} \\ According to the detailed balance principle, the emission width $\Gamma _{j}$ (or probability of emitting fragment $j$) is estimated as \begin{equation} \Gamma_{j}(p,h,E) = \int_{V_j^c}^{E-B_j} \lambda_c^j (p,h,E,T)dT , \end{equation} where the partial transmission probabilities, $\lambda_c^j$, are equal to \begin{equation} \lambda_c^j (p,h,E,T) = \frac{2s_j + 1}{\pi^2\hbar^3} \mu_j \Re (p,h) \frac{\omega (p-1,h,E-B_j-T)}{\omega (p,h,E)} T \sigma_{inv} (T) . \label{LambdaTransmission} \end{equation} \begin{itemize}[noitemsep] \item[] $s_j$: spin of the emitted particle $j$ \item[] $\mu_j$: reduced mass of the emitted particle $j$ \item[] $\omega$: level density of the $n$-exciton state \item[] $B_j$: binding energy \item[] $V_j^c$: Coulomb barrier \item[] $T$: kinetic energy of the emitted particle $j$ \item[] $\sigma_{inv}$: inverse cross section \item[] $\Re$: yields zero probability of emission if the number of particle excitons is less than the number of nucleons of particle $j$ \end{itemize} Equation~(\ref{LambdaTransmission}) describes the emission of neutrons and protons. For complex particles, the level density formula $\omega$ becomes more complicated and an extra factor $\gamma_j$ must be introduced: \begin{equation} \gamma_j \approx p_j^3 \left(\frac{p_j}{A}\right)^{p_j - 1} . \label{GammaBeta} \end{equation} In reality, Equation~(\ref{GammaBeta}) for $\gamma_j$ is a preliminary rough estimate that is refined by parameterizing over a mesh of residual-nucleus energies and mass numbers \cite{CEMUserManual}. Adding the possibility of LF emission alters the previous parameterization, effectively requiring a new parameterization.
This work of parameterizing $\gamma_j$ still needs to be done in order to generalize our results to all energies and target masses. In addition, we would like to add better modeling of $\gamma_j$: investigating the use of physical models and/or adding extrapolation to the mesh. Assuming an equidistant level scheme with the single-particle density $g$, we have the level density of the $n$-exciton state as~\cite{Ericson} \begin{equation} \omega(p,h,E) = \frac{g (gE)^{p+h-1}}{p! h! (p+h-1)!} \mbox{ .} \label{OmegaGeneral} \end{equation} This expression should be substituted into Eq.~(\ref{LambdaTransmission}) to obtain the transmission rates $\lambda_c^j$. \vspace*{0.5cm} {\noindent \bf \em Exciton Transitions} \\ According to Equation~(\ref{LambdaGeneral}), for a preequilibrium nucleus with excitation energy $E$ and number of excitons $n=p+h$, the partial transition probabilities changing the exciton number by $\Delta n$ are \begin{equation} \lambda_{\Delta n} (p,h,E) = \frac{2\pi}{\hbar}|M_{\Delta n}|^2 \omega_{\Delta n} (p,h,E) \mbox{ .} \label{LambdaTransitionGeneral} \end{equation} For these transition rates, one needs the number of states, $\omega$, taking into account the selection rules for intranuclear exciton-exciton scattering. The appropriate formulae have been derived by Williams~\cite{Williams} and later corrected for the exclusion principle and indistinguishability of identical excitons in Refs.~\cite{Williams2,Ribansky}: \begin{eqnarray} \omega _+ (p,h,E) & = & \frac{1}{2} g \frac{[gE-{\cal A}(p+1,h+1)]^2} {n+1} \biggl[ \frac{gE - {\cal A}(p+1,h+1)}{gE - {\cal A}(p,h)} \biggr] ^{n-1} \mbox{ ,} \nonumber \\ \omega _0 (p,h,E) & = & \frac{1}{2} g \frac{[gE-{\cal A}(p,h)]}{n} [p(p-1)+4ph+h(h-1)] \mbox{ ,} \nonumber \\ \omega _- (p,h,E) & = & \frac{1}{2} gph(n-2) \mbox{ ,} \label{OmegaTransition} \end{eqnarray} where ${\cal A}(p,h) = (p^2 +h^2 +p-h)/4 - h/2$. By neglecting the difference of matrix elements with different $\Delta n$, $M_+ = M_- = M_0 = M$, we estimate the value of $M$ for a given nuclear state by associating the $\lambda_+ (p,h,E)$ transition with the probability for quasi-free scattering of a nucleon above the Fermi level on a nucleon of the target nucleus. Therefore, we have \begin{equation} \frac{ < \sigma (v_{rel}) v_{rel} >}{V_{int}} = \frac{\pi}{\hbar} |M|^2 \frac{g [ gE-{\cal A}(p+1,h+1)]}{n+1} \biggl[ \frac{gE - {\cal A}(p+1,h+1)}{gE - {\cal A}(p,h)} \biggr] ^{n-1} \mbox{ .} \label{SigmaAverage} \end{equation} Here, $V_{int}$ is the interaction volume estimated as $V_{int} = {4 \over 3} \pi (2 r_c + \lambda / 2 \pi)^3$, with the de Broglie wavelength $\lambda / 2 \pi$ corresponding to the relative velocity $v_{rel} = \sqrt{2 T_{rel} /m_N}$. A value of the order of the nucleon radius is used for $r_c$ in the CEM: $r_c = 0.6$ fm. The averaging on the left-hand side of Eq.~(\ref{SigmaAverage}) is carried out over all excited states, taking into account the exclusion principle.
Combining~(\ref{LambdaTransitionGeneral}), (\ref{OmegaTransition}), and (\ref{SigmaAverage}), we finally get for the transition rates: \begin{eqnarray} \lambda _+ (p,h,E) & = & \frac{ < \sigma (v_{rel}) v_{rel} >}{V_{int}} \mbox{ ,} \nonumber \\ \lambda _0 (p,h,E) & = & \frac{ < \sigma (v_{rel}) v_{rel} >}{V_{int}} \frac{n+1}{n} \biggl[ \frac{gE - {\cal A}(p,h)}{gE - {\cal A}(p+1,h+1)} \biggr] ^{n+1} \frac{p(p-1)+4ph+h(h-1)}{gE-{\cal A}(p,h)} \mbox{ ,} \nonumber \\ \lambda _- (p,h,E) & = & \frac{ < \sigma (v_{rel}) v_{rel} >}{V_{int}} \biggl[ \frac{gE - {\cal A}(p,h)}{gE - {\cal A}(p+1,h+1)} \biggr] ^{n+1} \frac{ph(n+1)(n-2)}{[gE-{\cal A}(p,h)]^2} \mbox{ .} \label{LambdaTransition} \end{eqnarray} \vspace*{0.25cm} {\noindent \bf \em Angular Distributions} \\ The CEM predicts forward-peaked (in the laboratory system) angular distributions for preequilibrium particles. To this end, CEM03.03 assumes that a nuclear state with a given excitation energy $E^*$ should be specified not only by the exciton number $n$ but also by the momentum direction $\Omega$. Following Ref.~\cite{Mantzouranis}, the master equation (Eq.~(\ref{Probability})) can be generalized for this case provided that the angular dependence of the transition rates $\lambda _+$, $\lambda _0$, and $\lambda _-$ (Eq.~(\ref{LambdaTransition})) is factorized. In accordance with Eq.~(\ref{SigmaAverage}), in the CEM it is assumed that \begin{equation} <\sigma> \to <\sigma> F(\Omega) \mbox{ ,} \label{SigmaFactor} \end{equation} where \begin{equation} F(\Omega) = {d \sigma^{free}/ d \Omega \over \int d \Omega ' d \sigma^{free} / d \Omega '} \mbox{ .} \label{Factor} \end{equation} The scattering cross section $ d \sigma^{free}/ d \Omega$ is assumed to be isotropic in the reference frame of the interacting excitons, thus resulting in an asymmetry in both the nucleus center-of-mass and laboratory frames. The angular distributions of preequilibrium complex particles are assumed to be similar to those of the nucleons in each nuclear state \cite{CEMModel}. This calculational scheme is easily realized with the Monte Carlo technique. It provides a good description of double-differential spectra of preequilibrium nucleons and a not-so-good but still satisfactory description of complex-particle spectra from different types of nuclear reactions at incident energies from tens of MeV to several GeV. For incident energies below about 200 MeV, Kalbach \cite{Kalbach88} has developed phenomenological systematics for preequilibrium-particle angular distributions by fitting available measured spectra of nucleons and complex particles. As the Kalbach systematics are based on measured spectra, they describe the double-differential spectra of preequilibrium particles very well and generally provide better agreement of calculated preequilibrium complex-particle spectra with data than does the CEM approach based on Eqs.~(\ref{SigmaFactor},\ref{Factor}). This is why we have incorporated the Kalbach systematics \cite{Kalbach88} into CEM03.03 to describe the angular distributions of both preequilibrium nucleons and complex particles at incident energies up to 210 MeV. At higher energies, CEM03.03 uses the CEM approach based on Eqs.~(\ref{SigmaFactor},\ref{Factor}).
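To make the Monte Carlo treatment of the master equation (\ref{Probability}) concrete, the following Python sketch evaluates the state densities of Eqs.~(\ref{OmegaGeneral}) and (\ref{OmegaTransition}) and performs a single exciton-jump step with the exponentially distributed waiting time discussed above. It is a minimal illustration only, not part of the CEM code itself; the values of $g$, $E$, $p$, $h$, and the squared matrix element $|M|^2$ are hypothetical placeholders, and the particle-emission channels are omitted.
\begin{verbatim}
import math, random

def A(p, h):
    # Pauli correction term used in Eqs. (OmegaTransition)
    return (p**2 + h**2 + p - h)/4.0 - h/2.0

def omega(p, h, E, g):
    # Ericson level density of the n-exciton state, Eq. (OmegaGeneral)
    n = p + h
    return g*(g*E)**(n - 1)/(math.factorial(p)*math.factorial(h)
                             *math.factorial(n - 1))

def omega_dn(p, h, E, g):
    # state densities for Delta n = +2, 0, -2, Eqs. (OmegaTransition)
    n = p + h
    w_plus = 0.5*g*(g*E - A(p+1, h+1))**2/(n + 1) \
             * ((g*E - A(p+1, h+1))/(g*E - A(p, h)))**(n - 1)
    w_zero = 0.5*g*(g*E - A(p, h))/n*(p*(p-1) + 4*p*h + h*(h-1))
    w_minus = 0.5*g*p*h*(n - 2)
    return {+2: w_plus, 0: w_zero, -2: w_minus}

def exciton_jump(p, h, E, g, M2=1.0, hbar=1.0):
    # one step of the discontinuous Markovian process: an exponentially
    # distributed waiting time, then a branch chosen by the partial rates
    lam = {dn: 2.0*math.pi/hbar*M2*w for dn, w in omega_dn(p, h, E, g).items()}
    dt = random.expovariate(sum(lam.values()))   # Poisson-flow waiting time
    dn = random.choices(list(lam), weights=list(lam.values()))[0]
    return dt, dn

print(omega(2, 1, 50.0, 13.0))       # illustrative g and E only
print(exciton_jump(2, 1, 50.0, 13.0))
\end{verbatim}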
\subsection{Precompound Particles Considered} \begin{table}[h] \caption{The emitted particles considered by the modified MEM} \begin{ruledtabular} \begin{tabular}{rlllllll} \hline\hline $Z_j$\hspace{2mm} & \multicolumn{7}{l} {Ejectiles} \\ \hline 0\hspace{2mm} & n & & & & & & \\ 1\hspace{2mm} & p &\hspace{1mm} d &\hspace{1mm} t & & & & \\ 2\hspace{2mm} &$^{3 }$He&\hspace{1mm}$^{4 }$He&\hspace{1mm}$^{6 }$He&\hspace{1mm}$^{8 }$He& & & \\ 3\hspace{2mm} &$^{6 }$Li&\hspace{1mm}$^{7 }$Li&\hspace{1mm}$^{8 }$Li&\hspace{1mm}$^{9 }$Li& & & \\ 4\hspace{2mm} &$^{7 }$Be&\hspace{1mm}$^{9 }$Be&\hspace{1mm}$^{10}$Be&\hspace{1mm}$^{11}$Be&\hspace{1mm}$^{12}$Be& & \\ 5\hspace{2mm} &$^{8 }$B &\hspace{1mm}$^{10}$B &\hspace{1mm}$^{11}$B &\hspace{1mm}$^{12}$B &$\hspace{1mm}^{13}$B & & \\ 6\hspace{2mm} &$^{10}$C &\hspace{1mm}$^{11}$C &\hspace{1mm}$^{12}$C &\hspace{1mm}$^{13}$C &\hspace{1mm}$^{14}$C &\hspace{1mm}$^{15}$C &\hspace{1mm}$^{16}$C \\ 7\hspace{2mm} &$^{12}$N &\hspace{1mm}$^{13}$N &\hspace{1mm}$^{14}$N &\hspace{1mm}$^{15}$N &\hspace{1mm}$^{16}$N &\hspace{1mm}$^{17}$N & \\ 8\hspace{2mm} &$^{14}$O &\hspace{1mm}$^{15}$O &\hspace{1mm}$^{16}$O &\hspace{1mm}$^{17}$O &\hspace{1mm}$^{18}$O &\hspace{1mm}$^{19}$O &\hspace{1mm}$^{20}$O \\ 9\hspace{2mm} &$^{17}$F &\hspace{1mm}$^{18}$F &\hspace{1mm}$^{19}$F &\hspace{1mm}$^{20}$F &\hspace{1mm}$^{21}$F & & \\ 10\hspace{2mm} &$^{18}$Ne&\hspace{1mm}$^{19}$Ne&\hspace{1mm}$^{20}$Ne&\hspace{1mm}$^{21}$Ne&\hspace{1mm}$^{22}$Ne&\hspace{1mm}$^{23}$Ne&\hspace{1mm}$^{24}$Ne\\ 11\hspace{2mm} &$^{21}$Na&\hspace{1mm}$^{22}$Na&\hspace{1mm}$^{23}$Na&\hspace{1mm}$^{24}$Na&\hspace{1mm}$^{25}$Na& & \\ 12\hspace{2mm} &$^{22}$Mg&\hspace{1mm}$^{23}$Mg&\hspace{1mm}$^{24}$Mg& \hspace{1mm}$^{25}$Mg&\hspace{1mm}$^{26}$Mg&\hspace{1mm}$^{27}$Mg& \hspace{1mm}$^{28}$Mg\\ \hline\hline \end{tabular} \end{ruledtabular} \label{Particles} \end{table} Table~\ref{Particles} displays the particles our expanded MEM is designed to emit. Our model has been expanded to emit all 66 of these isotopes (through $^{28}$Mg). \section{Results} \subsection{Code Crash Protection} Bugs used to be fixed on an as-encountered basis. However, after encountering one bug that could not feasibly be fixed in this manner, we decided to implement CEM-wide code crash protection. The entirety of the CEM code was modified to check, by means of if statements, for divide-by-zero errors and, if one is encountered, to output an error statement revealing where in the code the error occurred (while fixing the divide-by-zero error to allow the simulation to complete); a schematic example of such a guard is sketched at the end of this subsection. Square-root calculations were also protected to ensure no errors occurred. Logarithmic and inverse trigonometric functions were not error protected. \begin{figure}[] \centering \includegraphics[trim = 0.4in 1.5in 0.5in 1.5in, width=6.5in]{p200AlCCP.pdf} \caption[]{Comparison of experimental data by Machner {\it et al.} \cite{Machner} (green points) with results from the Before-CCP CEM03.03 (blue dotted lines) and the After-CCP CEM03.03 (red solid lines) for 200 MeV p + $^{27}$Al $\rightarrow$ $...$ The before and after code crash protection results are equivalent.} \label{fig:p200AlCCPComp} \end{figure} This was a large project, as it involved slight modifications throughout the CEM code. However, as it will provide crash protection for future applications of CEM, including crash protection within future versions of MCNP, we determined it was worth the effort.
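The flavor of these guards is sketched below in Python for illustration (CEM itself is not written in Python, and the actual error messages and substitute values are those documented in \cite{CEMUserManual}; the floor value used here is a hypothetical placeholder):
\begin{verbatim}
import math

TINY = 1.0e-30  # illustrative floor; the real fix-up values are CEM's own

def safe_divide(num, den, where=""):
    # Guard a division: report the location, then substitute a safe
    # denominator so the simulation can run to completion.
    if den == 0.0:
        print(f"divide-by-zero caught in {where}; substituting {TINY}")
        den = TINY
    return num / den

def safe_sqrt(x, where=""):
    # Guard a square root against (numerically) negative arguments.
    if x < 0.0:
        print(f"negative sqrt argument caught in {where}; clipping to 0")
        x = 0.0
    return math.sqrt(x)

print(safe_divide(1.0, 0.0, where="lambda_c evaluation"))
print(safe_sqrt(-1e-12, where="pair velocity"))
\end{verbatim}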
As this crash protection involved the addition of numerous if statements to the code, we investigated its impact on computation time. The influence on CPU runtime was not significant and could not be detected above the normal variations in runtime that occur due to time-of-day CPU speed fluctuations, or due to having a month between runs (with the LANL servers perhaps getting faster in the meantime). In addition, we validated the crash-protected code by rerunning many reactions to ensure we obtained the same results as with the unprotected code. Fig.~\ref{fig:p200AlCCPComp} is an example of the before and after results. As can be seen, they are identical: the ``before'' blue dotted line is not even visible underneath the ``after'' solid red line. \subsection{Recalibration of MEM Parameters} With the expansion of the preequilibrium model to allow emission of light fragments up to $^{28}$Mg complete, we then turned our attention to recalibrating the factor $\gamma_j$ of Eq.~(\ref{GammaBeta}) (denoted $\gamma_\beta$ in the tables below). This process is long and involves re-fitting all available reliable experimental data. We are in the middle of this process, but include several results below. The preliminary results are very encouraging. \subsubsection{200 MeV p + $^{27}$Al} Figure~\ref{fig:p200Al} demonstrates the potential of the modified precompound code we built, for the same reaction and data as shown in Figure~\ref{fig:p200AlCompOld}, 200 MeV p + $^{27}$Al. The red solid lines show results from the new precompound code we designed in FY2013; the blue dotted lines present calculations from the old code; and the green points are experimental data from Machner {\it et al.} \cite{Machner}. The upgraded MEM provides a dramatically improved description of the cross sections at intermediate to high energies. Figure~\ref{fig:PreeqCompAl200Int} presents energy spectra of nucleons, d, t, $^3$He, and $^4$He, as well as energy spectra of the heavier fragments $^6$Li, $^7$Be, $^{10}$B, and $^{12}$C. It demonstrates that the modified-MEM code predicts the high-energy tails of the light-fragment spectra without destroying the spectra of the established particles and fragments. Table~\ref{p200AlTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table} \caption{$\gamma_\beta$ values for 200 MeV p + $^{27}$Al} \begin{ruledtabular} \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 2.0 & 4.0 & 20.0 & 30.0 & 1.0 & 1.0& 5.0 & 2.0 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 1.7 & 10.0 & 0.3 & 0.2 & 0.2 & 0.2 & 0.2 & 0.2 & 0.2 & 0.1 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.1 & 0.1 & 0.1 & 0.1 & 0.1 & 0.1 \\ \hline\hline \multicolumn{4}{l} {E* = 35.0 $\pm$ 33.5 MeV} & \multicolumn{3}{l} {Z = 12.5 $\pm$ 0.8} & \multicolumn{3}{l} {A = 25.9 $\pm$ 0.9} \\ \hline\hline \end{tabular} \end{ruledtabular} \label{p200AlTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.4in 1.5in 0.5in 1.5in, width=6.5in]{p200Al.pdf} \caption[]{Comparison of experimental data by Machner {\it et al.} \cite{Machner} (green points) with results from the unmodified CEM03.03 (blue dotted lines) and the modified-MEM CEM03.03 (red solid lines) for 200 MeV p + $^{27}$Al $\rightarrow$ $...$} \label{fig:p200Al} \end{figure} \begin{figure}[] \centering \includegraphics[trim = 0.5in 0.5in 1.0in 1.0in, width=6.5in]{PreeqCompAl200Int.pdf} \vspace*{-2.0in} \caption[]{Angle-integrated cross sections calculated with the modified MEM code for the reaction 200 MeV p + $^{27}$Al $\rightarrow$ ...} \label{fig:PreeqCompAl200Int} \end{figure} \subsubsection{190 MeV p + $^{nat}$Ag} Figure~\ref{fig:p190Ag} demonstrates the potential of the modified precompound code we built for 190 MeV p + $^{nat}$Ag. The red solid lines show results from the new precompound code we designed in FY2013; the blue dotted lines present calculations from the old code; and the green points are experimental data from Green {\it et al.} \cite{Green}. The upgraded MEM provides a dramatically improved description of the cross sections at intermediate to high energies. Table~\ref{p190AgTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table}[] \caption{$\gamma_\beta$ values for 190 MeV p + $^{nat}$Ag} \centering \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 1.0 & 1.0 & 0.8 & 4.0 & 0.035 & 0.01 & 0.08 & 0.2 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 0.03 & 0.02 & 0.035 & 0.04 & 0.04 & 0.0015 & 0.0015 & 0.0015 & 0.00001 & 0.00001 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline\hline \multicolumn{4}{l} {E* = 68.6 $\pm$ 49.1 MeV} & \multicolumn{3}{l} {Z = 47.1 $\pm$ 0.7} & \multicolumn{3}{l} {A = 106.2 $\pm$ 0.5} \\ \hline\hline \end{tabular} \label{p190AgTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.4in 1.5in 0.5in 1.5in, width=6.5in]{p190Ag500M.pdf} \caption[]{Comparison of experimental data by Green {\it et al.} \cite{Green} (green points) with results from the unmodified CEM03.03 (blue dotted lines) and the modified-MEM CEM03.03 (red solid lines) for 190 MeV p + $^{nat}$Ag $\rightarrow$ $...$} \label{fig:p190Ag} \end{figure} \subsubsection{300 MeV p + $^{nat}$Ag} Figure~\ref{fig:p300Ag} demonstrates the potential of the modified precompound code we built for 300 MeV p + $^{nat}$Ag. The red solid lines show results from the new precompound code we designed in FY2013; the blue dotted lines present calculations from the old code; and the green points are experimental data from Green {\it et al.} \cite{Green}. The upgraded MEM provides a dramatically improved description of the cross sections at intermediate to high energies. Table~\ref{p300AgTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table}[] \caption{$\gamma_\beta$ values for 300 MeV p + $^{nat}$Ag} \centering \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 6.0 & 3.0 & 3.0 & 2.0 & 0.01 & 0.01 & 0.012 & 0.01 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 0.0025 & 0.0022 & 0.002 & 0.002 & 0.0015 & 0.0015 & 0.0015 & 0.0015 & 0.00001 & 0.00001 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline\hline \multicolumn{4}{l} {E* = 83.4 $\pm$ 63.4 MeV} & \multicolumn{3}{l} {Z = 46.8 $\pm$ 0.8} & \multicolumn{3}{l} {A = 105.3 $\pm$ 1.2} \\ \hline\hline \end{tabular} \label{p300AgTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.4in 1.5in 0.5in 1.5in, width=6.5in]{p300Ag500M.pdf} \caption[]{Comparison of experimental data by Green {\it et al.} \cite{Green} (green points) with results from the unmodified CEM03.03 (blue dotted lines) and the modified-MEM CEM03.03 (red solid lines) for 300 MeV p + $^{nat}$Ag $\rightarrow$ $...$} \label{fig:p300Ag} \end{figure} \subsubsection{480 MeV p + $^{nat}$Ag} Figure~\ref{fig:p480Ag} demonstrates the potential of the modified precompound code we built for 480 MeV p + $^{nat}$Ag. The red solid lines show results from the new precompound code we designed in FY2013; the blue dotted lines present calculations from the old code; and the green points are experimental data from Green {\it et al.} \cite{Green480}. The upgraded MEM provides a dramatically improved description of the cross sections at intermediate to high energies. Table~\ref{p480AgTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table}[] \caption{$\gamma_\beta$ values for 480 MeV p + $^{nat}$Ag} \centering \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 0.01 & 0.01 & 0.002 & 0.001 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 0.0022 & 0.0022 & 0.0008 & 0.0015 & 0.0015 & 0.0015 & 0.0015 & 0.0015 & 0.00001 & 0.00001 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline\hline \multicolumn{4}{l} {E* = 113.4 $\pm$ 87.6 MeV} & \multicolumn{3}{l} {Z = 46.6 $\pm$ 1.0} & \multicolumn{3}{l} {A = 104.6 $\pm$ 1.7} \\ \hline\hline \end{tabular} \label{p480AgTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.4in 1.5in 0.5in 1.5in, width=6.5in]{p480Ag500M.pdf} \caption[]{Comparison of experimental data by Green {\it et al.} \cite{Green480} (green points) with results from the unmodified CEM03.03 (blue dotted lines) and the modified-MEM CEM03.03 (red solid lines) for 480 MeV p + $^{nat}$Ag $\rightarrow$ $...$} \label{fig:p480Ag} \end{figure} \subsubsection{1200 MeV p + $^{197}$Au} We also have good preliminary results for a reaction in which a higher-energy incident particle strikes a heavier target. Figure~\ref{p1200Au} compares experimental data by Budzanowski {\it et al.} \cite{Budzanowski} with results from the unmodified CEM03.03 and the modified-MEM CEM03.03 for the reaction 1200 MeV p + $^{197}$Au $\rightarrow$ $...$. Table~\ref{p1200AuTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table}[] \caption{$\gamma_\beta$ values for 1200 MeV p + $^{197}$Au} \centering \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 2.0 & 4.0 & 1.5 & 2.0 & 0.0004 & 0.0012 & 0.001 & 0.00007 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 0.00007 & 0.00001 & 0.00015 & 0.00001 & 0.0002 & 0.0002 & 0.0002 & 0.0002 & 0.00001 & 0.00001 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 & 0.00001 \\ \hline\hline \multicolumn{4}{l} {E* = 320.6 $\pm$ 208.0 MeV} & \multicolumn{3}{l} {Z = 78.2 $\pm$ 1.3} & \multicolumn{3}{l} {A = 190.9 $\pm$ 3.7} \\ \hline\hline \end{tabular} \label{p1200AuTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.5in 1.5in 0.5in 1.5in, width=6.5in]{p1200Au.pdf} \caption[]{Comparison of experimental data by Budzanowski {\it et al.} \cite{Budzanowski} (green points) with results from the unmodified CEM03.03 (blue dashed lines) and the modified-MEM CEM03.03 (red solid lines) for 1200 MeV p + $^{197}$Au $\rightarrow$ $...$} \label{p1200Au} \end{figure} The low-energy peaks of the modified-MEM CEM03.03 spectra exceed the peaks of the experimental data. However, this is an issue with the evaporative stage, and our work has thus far focused on the precompound stages; the high-energy tails do match the experimental data well with this new modified-MEM code. The evaporation model can be fixed, and we would like to do so in the future. \subsubsection{1200 MeV p + $^{nat}$Ni} Figure~\ref{p1200Ni} compares experimental data by Budzanowski {\it et al.} \cite{BudzanowskiNi} with results from the unmodified CEM03.03 and the modified-MEM CEM03.03 for the reaction 1200 MeV p + $^{nat}$Ni $\rightarrow$ $...$. Table~\ref{p1200NiTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table}[] \caption{$\gamma_\beta$ values for 1200 MeV p + $^{61}$Ni} \centering \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 1.0 & 1.0 & 1.0 & 4.0 & 0.008 & 0.01 & 0.007 & 0.003 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 0.001 & 0.0004 & 0.002 & 0.0002 & 0.0002 & 0.0002 & 0.0002 & 0.0002 & 0.0001 & 0.0001 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline\hline \multicolumn{4}{l} {E* = 177.2 $\pm$ 140.3 MeV} & \multicolumn{3}{l} {Z = 26.5 $\pm$ 1.6} & \multicolumn{3}{l} {A = 56.4 $\pm$ 3.3} \\ \hline\hline \end{tabular} \label{p1200NiTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.5in 1.5in 0.5in 1.5in, width=6.5in]{p1200Ni.pdf} \caption[]{Comparison of experimental data by Budzanowski {\it et al.} \cite{BudzanowskiNi} (green points) with results from the unmodified CEM03.03 (blue dashed lines) and the modified-MEM CEM03.03 (red solid lines) for 1200 MeV p + $^{nat}$Ni $\rightarrow$ $...$} \label{p1200Ni} \end{figure} \subsubsection{1900 MeV p + $^{nat}$Ni} Figure~\ref{p1900Ni} compares experimental data by Budzanowski {\it et al.} \cite{BudzanowskiNi} with results from the unmodified CEM03.03 and the modified-MEM CEM03.03 for the reaction 1900 MeV p + $^{nat}$Ni $\rightarrow$ $...$. Table~\ref{p1900NiTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table}[] \caption{$\gamma_\beta$ values for 1900 MeV p + $^{61}$Ni} \centering \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 1.5 & 2.0 & 8.0 & 4.0 & 0.004 & 0.01 & 0.007 & 0.002 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 0.0001 & 0.00005 & 0.001 & 0.00005 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline\hline \multicolumn{4}{l} {E* = 242.7 $\pm$ 194.9 MeV} & \multicolumn{3}{l} {Z = 25.8 $\pm$ 2.1} & \multicolumn{3}{l} {A = 54.7 $\pm$ 4.6} \\ \hline\hline \end{tabular} \label{p1900NiTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.5in 1.5in 0.5in 1.5in, width=6.5in]{p1900Ni.pdf} \caption[]{Comparison of experimental data by Budzanowski {\it et al.} \cite{BudzanowskiNi} (green points) with results from the unmodified CEM03.03 (blue dashed lines) and the modified-MEM CEM03.03 (red solid lines) for 1900 MeV p + $^{nat}$Ni $\rightarrow$ $...$} \label{p1900Ni} \end{figure} \subsubsection{2500 MeV p + $^{nat}$Ni} Figure~\ref{p2500Ni} compares experimental data by Budzanowski {\it et al.} \cite{BudzanowskiNi} with results from the unmodified CEM03.03 and the modified-MEM CEM03.03 for the reaction 2500 MeV p + $^{nat}$Ni $\rightarrow$ $...$. Table~\ref{p2500NiTable} details the $\gamma_\beta$ values used in the expanded MEM. At the bottom of the table are the mean values of the residual-nucleus excitation energy E*, atomic number Z, and mass number A.
\begin{table}[] \caption{$\gamma_\beta$ values for 2500 MeV p + $^{61}$Ni} \centering \begin{tabular}{llllllllll} \hline\hline n & p & d & t & $^3$He & $^4$He & $^6$He & $^8$He & $^6$Li & $^7$Li \\ 1.0 & 1.0 & 1.5 & 2.0 & 7.0 & 4.0 & 0.004 & 0.01 & 0.007 & 0.0007 \\ \hline $^8$Li & $^{9 }$Li & $^{7 }$Be & $^{9 }$Be &$^{10}$Be & $^{11}$Be & $^{12}$Be & $^{8 }$B & $^{10}$B & $^{11}$B \\ 0.00007 & 0.00003 & 0.001 & 0.000015 & 0.00001 & 0.00003 & 0.00003 & 0.00003 & 0.00001 & 0.000004 \\ \hline $^{12}$B &$^{13}$B &$^{10}$C &$^{11}$C &$^{12}$C &$^{13}$C &$^{14}$C &$^{15}$C &$^{16}$C &$^{12}$N \\ 0.0000001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{13}$N &$^{14}$N &$^{15}$N &$^{16}$N &$^{17}$N &$^{14}$O &$^{15}$O &$^{16}$O &$^{17}$O &$^{18}$O \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{19}$O &$^{20}$O &$^{17}$F &$^{18}$F &$^{19}$F &$^{20}$F &$^{21}$F &$^{18}$Ne &$^{19}$Ne &$^{20}$Ne \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{21}$Ne &$^{22}$Ne &$^{23}$Ne &$^{24}$Ne &$^{21}$Na &$^{22}$Na &$^{23}$Na &$^{24}$Na &$^{25}$Na &$^{22}$Mg \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline $^{23}$Mg &$^{24}$Mg &$^{25}$Mg &$^{26}$Mg &$^{27}$Mg &$^{28}$Mg \\ 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\ \hline\hline \multicolumn{4}{l} {E* = 296.8 $\pm$ 236.6 MeV} & \multicolumn{3}{l} {Z = 25.3 $\pm$ 2.4} & \multicolumn{3}{l} {A = 53.3 $\pm$ 5.7} \\ \hline\hline \end{tabular} \label{p2500NiTable} \end{table} \begin{figure}[] \centering \includegraphics[trim = 0.5in 1.5in 0.5in 1.5in, width=6.5in]{p2500Ni.pdf} \caption[]{Comparison of experimental data by Budzanowski {\it et al.} \cite{BudzanowskiNi} (green points) with results from the unmodified CEM03.03 (blue dashed lines) and the modified-MEM CEM03.03 (red solid lines) for 2500 MeV p + $^{nat}$Ni $\rightarrow$ $...$} \label{p2500Ni} \end{figure} \section{Future Work} The preliminary results are very encouraging, but more work remains to generalize our new MEM across all reactions and a greater spectrum of possible light fragments, and we would like to investigate other modes of precompound emission of high-energy light fragments, such as coalescence and Fermi break-up. Future work we plan to undertake: \begin{enumerate} \item{Complete the parameterization of $\gamma_j$ in Eq.~(\ref{GammaBeta});} \item{Compare the modified-MEM CEM03.03 with Hagiwara's \cite{Hagiwara} data for heavier silicon and aluminum targets, as well as with other reactions for which we find reliable data;} \item{Implement the MEM upgrades in LAQGSM;} \item{Upgrade the GENXS particle tallies within MCNP6 so that output can be obtained for LF spectra beyond $^4$He;} \item{Expand the coalescence model to also emit light fragments heavier than $^4$He. This will enable us to describe light-fragment cross sections at even higher energies, beyond what the new MEM can now reach;} \item{Upgrade our Fermi break-up model to emit light fragments more physically.} \end{enumerate} \section{Conclusion} We successfully finished our expansion of the MEM to include 66 particles (light fragments up to $^{28}$Mg). Our previous work in the summer of 2012 had expanded the MEM to only 26 particles (up to $^{14}$C). We also implemented code crash protection throughout the entirety of CEM.
Lastly, we have begun the tedious work of recalibrating the MEM parameters across a broad range of reactions. Our results demonstrate that modifying CEM to simulate precompound emission of light fragments yields better cross sections for intermediate- and high-energy light fragments. Comparisons with several reactions, including experimental results obtained by Machner {\it et al.} \cite{Machner}, Bubak {\it et al.} \cite{Bubak}, and Budzanowski {\it et al.} \cite{Budzanowski}, demonstrate the potential of the new MEM we built to correctly predict high-energy spectra of light fragments. Our preliminary results indicate that our new MEM works well across different energy regimes, for both light and heavy targets. However, more work is necessary to generalize the new MEM to arbitrary reactions. \clearpage \section{Acknowledgements} One of us (LMK) is grateful to \begin{enumerate}[label=\emph{\alph*})] \item{Dr. Stepan Mashnik, for his continued mentoring and ample technical and scientific support and encouragement;} \item{Dr. Tim Goorley and Los Alamos National Laboratory for the opportunity to study with some of the world's greatest experts in nuclear physics, particularly high-energy physics;} \item{Dr. Akira Tokuhiro, for his continued support and expertise in serving as my thesis advisor.} \end{enumerate} This study was carried out under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. \bibliographystyle{report}
\section{Introduction} The intense emission of 511 keV gamma-rays from the Galactic center \cite{sieg} and within active terrestrial thunderstorms \cite{abc} has been observed. It seems quite natural to attribute both observations to the formation of clouds of parapositronia, each decaying into two gamma-quanta with energy 511 keV. Moreover, one is tempted to relate these facts to the excess, well known in experimental particle physics \cite{skl}, of soft dileptons and photons over their calculated supply from all well-established sources. Characteristic gamma-ray emission from the Galaxy's interstellar medium with a line at 511 keV has been observed with the spectrometer SPI on ESA's INTEGRAL observatory \cite{sieg}. It registered bright emission from an extended bulge-like region, while emission from the disk is faint. Terrestrial gamma-ray flashes are usually observed by spacecraft in low-Earth orbit, but have also been observed by aircraft \cite{abc} and on the ground \cite{chub1, chub2, chil} during active thunderstorms. Their energy spectra are consistent with a source mostly composed of positron annihilation gamma-rays, with a prominent 511 keV line clearly visible in the data. It is reasonable to assume that electromagnetic fields are responsible for this effect. The outcome of a collision of two charged objects (two nuclei, in particular) is especially soft if they do not come very close to one another but pass at large distances (impact parameters), so that only their electromagnetic fields interact. These processes are named ultraperipheral because of the large spatial extension of electromagnetic forces compared to the much shorter range of strong nuclear interactions. It is shown here that very soft dielectrons and photons are produced in these grazing collisions \cite{dr1, dr2}. \section{Sources of positrons and positronia} The main problem with the hypothesis that clouds of parapositronia are responsible for the 511 keV photon peak is the rate of parapositronium production. Electrons are very abundant as main ingredients of atoms, while positrons must be produced in some interactions. Electromagnetic and weak forces must be responsible for their production. Parapositronia and unbound electron-positron pairs can be created by the electromagnetic fields of any moving charged objects with energy exceeding the production threshold. Weak forces are active in radioactive nuclear decays and in decays of unstable particles created in strong interactions. Astrophysicists intensively analyze the objects in the Universe which could be responsible for these effects. A long list of candidates has been proposed. It includes radioactive decay of unstable isotopes produced in nucleosynthesis sources throughout the Galaxy \cite{dieh}, accreting binary systems (microquasars) \cite{takh}, old neutron stars (former pulsars) \cite{isto}, various sources inside the supermassive black hole in our Galaxy's center \cite{dogi, cai, takh}, dark matter decay or annihilation \cite{farz, cai} (as dark matter would be gravitationally concentrated in the inner Galaxy), etc. Quantitative estimates of their contributions leave considerable uncertainties. The puzzle still remains, and no conclusive candidate has emerged. Other complexities arise because, once ejected from their sources, the positrons may travel far from their origins into the surrounding interstellar gas.
Kinetic equations are often used to show that Coulomb collisions are effective enough to cool down these relativistic positrons and to thermalize them before their annihilation. The positrons slow down to energies of a few eV, and annihilation may occur. Here, we do not enter into a detailed discussion of these problems but propose a mechanism of direct electromagnetic creation of parapositronia and low-mass electron-positron pairs in ultraperipheral collisions of heavy ions. The results can be confronted with forthcoming experimental data from the NICA collider, and their relevance to the above findings established. \section{Ultraperipheral nuclear collisions at NICA collider} Production of $e^+e^-$-pairs in the electromagnetic fields of colliding heavy ions was first considered by Landau and Lifshitz in 1934 \cite{lali}. It was shown that the total cross section of this process increases rapidly with energy $E$, as $\ln ^3E$ in the asymptotics. This is still the strongest energy dependence in particle physics: the cross sections of strong nuclear interactions cannot increase faster than $\ln ^2E$, according to the Froissart theorem \cite{froi}. Moreover, the numerical factor $Z^4\alpha ^4$ in the total cross section compensates the smallness of the electromagnetic coupling $\alpha $ for heavy ions with large charge $Ze$. Therefore, the ultraperipheral production of $e^+e^-$-pairs (as well as $\mu ^+\mu ^-$ etc.) in ion collisions can become the dominant mechanism at very high energies \cite{dr3}. It is already widely studied at colliders. At the same time, it is desirable to study these processes at conditions closest to the astrophysical requirements. In particular, the energies of the NICA collider are above the threshold of dielectron production but below the threshold for dimuons \cite{ijmp}. That is why we demonstrate here the properties of positronia (and 511 keV photons) and dielectrons in precisely this energy interval, from 5 GeV to 11 GeV per nucleon in the center-of-mass system. We show that heuristic knowledge of these processes is helpful in understanding some astrophysical phenomena as well. Abundant creation of pairs with rather low masses is the typical feature of ultraperipheral interactions \cite{dr1}. Unbound lepton pairs and parapositronia are produced in grazing collisions of interacting ions, where two photons from their electromagnetic clouds interact. Two-photon fusion production of lepton pairs has been calculated both with the equivalent photon approximation proposed in \cite{wei, wil} and via full lowest-order QED calculations \cite{rac, bgms}, reviewed recently in \cite{drufn}. According to the equivalent photon approximation, the spectra of dileptons created in ultraperipheral collisions can be obtained from the general expression for the total cross section \begin{equation} \sigma _{up}(X)= \int dx_1dx_2\frac {dn}{dx_1}\frac {dn}{dx_2}\sigma _{\gamma \gamma }(X). \label{e2} \end{equation} Feynman diagrams of ultraperipheral processes contain the subgraphs of two-photon interactions leading to the production of some final state $X$ (e.g., an $e^+e^-$ pair). These blobs can be represented by the cross sections of the corresponding processes. Therefore, $\sigma _{\gamma \gamma }(X)$ in (\ref{e2}) denotes the total cross section of production of the state $X$ by two photons from the electromagnetic clouds surrounding the colliding ions, and $dn/dx_i$ describe the densities of photons carrying the fraction $x_i$ of the ion energy.
The distribution of equivalent photons carrying a fraction $x$ of the nucleon energy, generated by a moving nucleus with charge $Ze$, can be written as \begin{equation} \frac {dn}{dx}=\frac {2Z^2\alpha }{\pi x}\ln \frac {u(Z)}{x} \label{flux} \end{equation} when integrated over the transverse momentum up to some value (see, e.g., \cite{blp}). The physical meaning of the ultraperipherality parameter $u(Z)$ is the ratio of the maximum adoptable transverse momentum to the nucleon mass (the only mass parameter of the problem). Its value is determined by the form factors of the colliding ions (see, e.g., \cite{vyzh}). It is clearly seen from Eq.~(\ref{flux}) that soft photons with small fractions $x$ of the nucleon energy dominate these fluxes. The cross section $\sigma _{\gamma \gamma }(X)$ usually inserted in (\ref{e2}) for the creation of the unbound dielectrons $X=e^+e^-$ is calculated in the lowest-order perturbative approach and reads \cite{blp,brwh} \begin{equation} \sigma _{\gamma \gamma }(X)=\frac {2\pi \alpha ^2}{M^2} [(3-v^4)\ln \frac {1+v}{1-v}-2v(2-v^2)], \label{mM} \end{equation} where $v=\sqrt {1-\frac {4m^2}{M^2}}$ is the velocity of the pair components in the pair rest frame, and $m$ and $M$ are the electron and dielectron masses, correspondingly. The cross section tends to 0 at the pair-production threshold $M=2m$ and decreases as $\frac {1}{M^2}\ln M$ at very large $M$. The distribution of the dielectron masses $M$ is obtained by inserting Eqs.~(\ref{flux}) and (\ref{mM}) into (\ref{e2}) and leaving one integration free. One gets \cite{dr1} \begin{equation} \frac {d\sigma }{dM}=\frac {128 (Z\alpha )^4}{3\pi M^3} [(1+\frac {4m^2}{M^2}-\frac {8m^4}{M^4}) \ln \frac {1+\sqrt {1-\frac {4m^2}{M^2}}}{1-\sqrt {1-\frac {4m^2}{M^2}}}- (1+\frac {4m^2}{M^2})\sqrt {1-\frac {4m^2}{M^2}}] \ln ^3\frac {u\sqrt {s_{nn}}}{M}, \label{sM} \end{equation} where $\sqrt {s_{nn}}$ is the c.m.s. energy per nucleon pair. The dielectron distribution (\ref{sM}) is shown in Fig. 1 for the three NICA energies 11 GeV, 8 GeV, and 6.45 GeV per nucleon. The parameter $u$=0.02 has been chosen in accordance with its value obtained in Ref. \cite{vyzh}, where a careful treatment of the nuclear form factors is performed. The total cross section of ultraperipheral production of unbound pairs is \begin{equation} \sigma (ZZ(\gamma \gamma )\rightarrow ZZe^+e^-)=\frac {28}{27} \frac {Z^4\alpha ^4}{\pi m^2}\ln^3\frac {u^2s_{nn}}{4m^2}. \label{vz} \end{equation} \begin{figure} \centerline{\includegraphics[width=16cm, height=14cm]{terz1++.pdf}} Fig. 1. The distribution of masses of dielectrons produced in ultraperipheral collisions at NICA energies $\sqrt {s_{nn}}$=11 GeV (blue, upper), 8 GeV (red, middle), 6.45 GeV (green, lower). \end{figure} The sharp peak at low masses $M$ in Fig. 1 demonstrates the most important feature of ultraperipheral processes: the abundant production of soft dielectrons with masses of the order of several electron masses $m=0.511$ MeV (effectively, less than 5 MeV).
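For orientation, the mass spectrum (\ref{sM}) is easy to evaluate numerically. The short Python sketch below is an illustration only: the overall normalization and the unit-conversion ($\hbar c$) factors are omitted, and $Z=79$, $u=0.02$, and $\sqrt{s_{nn}}=11$ GeV are taken as in the text. It reproduces the qualitative shape of Fig. 1, with the peak at a few electron masses.
\begin{verbatim}
import numpy as np

alpha, Z = 1/137.036, 79                 # Au
m, u, sqrt_snn = 0.511e-3, 0.02, 11.0    # GeV units

def dsigma_dM(M):
    # dielectron mass spectrum of Eq. (sM), up to overall unit factors
    v = np.sqrt(1 - 4*m**2/M**2)
    bracket = ((1 + 4*m**2/M**2 - 8*m**4/M**4)*np.log((1 + v)/(1 - v))
               - (1 + 4*m**2/M**2)*v)
    return 128*(Z*alpha)**4/(3*np.pi*M**3)*bracket*np.log(u*sqrt_snn/M)**3

M = np.linspace(2.001*m, 50*m, 2000)
spec = dsigma_dM(M)
print("peak near M =", M[np.argmax(spec)]/m, "electron masses")
\end{verbatim}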
Moreover, non-perturbative effects drastically enlarge the production of unbound pairs with extremely low masses. The perturbative expression (\ref{mM}) for the cross section $\sigma _{\gamma \gamma }(X)$ can be generalized to include the non-perturbative effects that are crucial near the pair-production threshold $M=2m$. This is possible because the Coulomb interaction governs the behavior of the pair components. At the production point, the components of pairs with low masses close to 2$m$ move very slowly relative to one another. They are strongly influenced by the attractive Coulomb forces. In the non-relativistic limit, these states are transformed by the mutual interaction of the components into an effectively composite state whose wave function is a solution of the relevant Schroedinger equation. The normalization of the Coulomb wave functions plays an especially important role at low velocities. It differs from the normalization of the free-motion wave functions used in the perturbative derivation of Eq. (\ref{mM}). The amplitude $R_C$ of the process $\gamma\gamma\to e^+e^-$ with account of the interaction between the leptons is connected to the amplitude $R_0$ without the final-state interaction by the relation \begin{equation} R_C=\int \Psi_f(r)R_0(r)d^3r \label{rc} \end{equation} where $\Psi_f(r)$ is the wave function of the bound (parapositronium) or unbound lepton pair in the coordinate representation. For lepton pairs in the $S$-state (orbital momentum $l$=0), the characteristic distances of the pair production are $\sim 1/m$, whereas the Coulomb interaction between the leptons acts over the much larger distances $\sim 1/(m\alpha)$ in the bound-state production and $\sim 1/k$ for the unbound states, where $k$ is the relative momentum. Therefore, the wave function can be considered as constant in (\ref{rc}), and one gets \begin{equation} R_C= \Psi_{kS}(r=0)\int R_0(r)d^3r= \Psi_{kS}(r=0) R_0(p=0). \label{rc0} \end{equation} This relation is valid not only for bound states, but also for the creation of unbound lepton pairs if $kr_s\ll 1$. Such a factorization of matrix elements has been widely used for the production of dimesoatoms. It is useful for any process where the characteristic distances of pair production and of final-state interaction are substantially different. The normalization of the unbound-pair wave function reads \cite{llqm} \begin{equation} |\psi_{kS}(\vec r=0)|^2 =\frac{\pi\xi}{\sinh(\pi\xi)}e^{\pi\xi} =\frac{2\pi\xi}{1-e^{-2\pi\xi}};~~~\xi=\frac{2\pi\alpha m}{k}. \label{psi} \end{equation} This is the widely used Sommerfeld-Gamow-Sakharov (SGS) factor \cite{som, gam, somm, sakh}, which unites the non-perturbative and perturbative matrix elements. It results in the so-called ``$\frac {1}{v}$-law'' of the enhanced outcome of reactions with extremely low-mass pairs. This factor is described in standard textbooks on non-relativistic quantum mechanics (see, e.g., \cite{llqm}) and used in various publications (e.g., \cite{baier, ieng, cass, arko}). The Sakharov recipe for its account in the production of $e^+e^-$-pairs, described in \cite{sakh}, consists in the direct multiplication of the differential distribution of Eq. (\ref{sM}) by the SGS factor written as \begin{equation} T=\frac {2\pi \alpha}{v(1-\exp (-2\pi \alpha/v))}. \label{sgs} \end{equation} It enhances the contribution of the low-mass (low-$v$) pairs.
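The strength of this enhancement is easy to quantify numerically; a minimal Python sketch of Eq. (\ref{sgs}) (the velocities are illustrative only):
\begin{verbatim}
import math

alpha = 1/137.036

def sgs(v):
    # Sommerfeld-Gamow-Sakharov factor, Eq. (sgs)
    x = 2*math.pi*alpha/v
    return x/(1 - math.exp(-x))

for v in (0.5, 0.1, 0.01, 0.001):
    print(f"v = {v}: T = {sgs(v):.2f}")
\end{verbatim}
At $v \ll 2\pi\alpha \approx 0.046$ the factor grows as $2\pi\alpha/v$, while for $v$ close to 1 it stays near unity.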
Thus the proper distribution of dielectron masses in ultraperipheral processes is \begin{eqnarray} \frac {d\sigma }{dM^2}&=&\frac {128 (Z\alpha )^4}{3M^4} \frac {\alpha }{\sqrt {1-\frac {4m^2}{M^2}} (1-\exp (-2\pi \alpha/\sqrt {1-\frac {4m^2}{M^2}}))}\times\nonumber \\ & &\left[(1+\frac {4m^2}{M^2}-\frac {8m^4}{M^4}) \ln \frac {1+\sqrt {1-\frac {4m^2}{M^2}}}{1-\sqrt {1-\frac {4m^2}{M^2}}}- (1+\frac {4m^2}{M^2})\sqrt {1-\frac {4m^2}{M^2}}\right] \ln ^3\frac {u\sqrt {s_{nn}}}{M}. \nonumber \\ \label{sM2} \end{eqnarray} The corresponding distribution of the relative velocity $v$ (in the $e^+e^-$ rest frame) is \begin{equation} \frac {d\sigma }{dv^2}=\frac {16(Z\alpha )^4}{3m^2} [(3-v^4)\ln \frac {1+v}{1-v}-2v(2-v^2)] \frac {\alpha}{v(1-\exp (\frac {-2\pi \alpha}{v}))} \ln^3 \frac {u\sqrt {s_{nn}(1-v^2)}}{2m}. \label{sv2} \end{equation} Let us recall that the velocity $v$ is related to the velocity $v_+$ of the positron in the electron rest frame as \begin{equation} v^2=\frac {1-\sqrt {1-v_+^2}}{1+\sqrt {1-v_+^2}}, \label{vv} \end{equation} so that $v_+=2v$ at $v\rightarrow 0$, and both $v$ and $v_+$ tend to 1 in the ultrarelativistic limit. The relative velocities $v$ and $v_+$ are relativistic invariants, represented here through the Lorentz-invariant masses $m$ and $M$. As seen in Fig. 1, the cross section of creation of unbound $e^+e^-$-pairs tends to zero at the threshold $M=2m$ if the perturbative expression Eq. (\ref{mM}) is used. Account of the non-perturbative SGS factor (\ref{sgs}) in Eqs. (\ref{sM2}) and (\ref{sv2}) changes the situation, drastically increasing the yield of pairs at low masses $M$, i.e., with small velocities $v$. In Figs 2 and 3, we compare the yields of pairs with (curves a, Eqs. (\ref{sM2}), (\ref{sv2})) and without (curves b, Eq. (\ref{sM})) account of the SGS factor at the NICA energy 11 GeV, as functions of the mass $M$ and the velocity $v$. \begin{figure} \includegraphics[width=\textwidth]{dSdM2.pdf} Fig. 2. The distribution of masses of dielectrons produced in ultraperipheral collisions at NICA energy $\sqrt {s_{nn}}$=11 GeV with (a) and without (b) account of the SGS-factor. Their difference (a-b) is shown by the dashed line. The region of small masses is shown in the right-hand side at the enlarged scale. Note the factor $10^{-6}$ at the abscissa scale which reduces it to MeVs. \end{figure} \begin{figure} \centerline{\includegraphics[width=\textwidth]{dSdV2.pdf}} Fig. 3. The distribution of the relative velocities in dielectrons produced in ultraperipheral collisions at NICA energy $\sqrt {s_{nn}}$=11 GeV with (a) and without (b) account of the SGS-factor. Their difference (a-b) is shown by the dashed line. The velocities for the region of small masses are shown in the right-hand side. Note the factor $10^{-3}$ at the abscissa scale. \end{figure} The cross sections of ultraperipheral production of unbound $e^+e^-$-pairs are especially strongly enhanced at low masses $M$ (at low relative velocities $v$) compared to their perturbative values (marked by b). This is clearly seen in the right-hand sides of Figs 2 and 3, which demonstrate the region near the threshold $M=2m$. Surely, the cross section would tend to zero exactly at the threshold $M=2m$ due to the energy-momentum conservation laws, which are not fully respected by the simplified SGS recipe. However, this happens in a tiny region near the threshold and can be neglected in integral estimates. At the same time, the overall contribution of the correction is not large.
It amounts to about 4.6 percent at the peak of the $M^2$ distribution and 2.5 percent at the peak of the $v^2$ distribution. The integral contributions differ by only 3.4 percent. Besides unbound pairs, parapositronia can be directly produced in two-photon interactions \cite{dr4}. The total cross section reads \begin{equation} \sigma_{Ps}=\frac {16Z^4\alpha ^2\Gamma }{3m^3}\ln ^3\frac {u\sqrt {s_{nn}}}{m}. \label{e4} \end{equation} It is much lower than the cross section (\ref{vz}) for the creation of unbound pairs. The energy distribution of gamma-quanta from decays of parapositronia produced in ultraperipheral collisions at NICA energies is shown in Fig. 4. \begin{figure} \centerline{\includegraphics[width=16cm, height=14cm]{terz2++.pdf}} Fig. 4. The energy distribution of gamma-quanta from decays of parapositronia produced in ultraperipheral collisions at NICA energies $\sqrt {s_{nn}}$=11 GeV (blue, upper), 8 GeV (red, middle), 6.45 GeV (green, lower). \end{figure} It is obtained from Eq. (\ref{e2}) by omitting one of the integrations there and inserting the resonance cross section for $\sigma _{\gamma \gamma }(X)$ (see, e.g., \cite{drufn}). For Au-Au collisions at NICA one gets \begin{equation} \omega \frac {d\sigma }{d\omega }=\frac {4Z^4\alpha ^2\Gamma }{m^3} (\ln ^2\frac {u\sqrt {s_{nn}}}{m}-\ln^2\frac {\omega }{m}), \label{gam} \end{equation} where $\Gamma \approx 5.2\cdot 10^{-15}$ GeV is the decay width of the parapositronium. This distribution (\ref{gam}) is shown in Fig. 4 as $\omega d\sigma /d\omega $ for the three NICA energies 11 GeV, 8 GeV, and 6.45 GeV per nucleon. Again, the parameter $u$=0.02 is chosen in accordance with its value obtained in Ref. \cite{vyzh}. As expected, the photon spectra are concentrated near the electron mass 511 keV, but they are rather wide. The motion of the parapositronia produced in ultraperipheral collisions at the rather high energies of NICA is responsible for the broadened spectra in Fig. 4. In general, the direct ultraperipheral production of parapositronia is about a million times less effective than the creation of dielectrons, as estimated from Eqs. (\ref{vz}) and (\ref{e4}). Positrons move non-relativistically relative to electrons in pairs with low masses. They can annihilate with high probability. Thus the pairs with masses near 2$m$ in Figs 2 and 3 can create additional gamma-quanta with energies near 511 keV, especially in astrophysical surroundings. \section{From colliders to astrophysics} Studies at particle accelerators demonstrate that dileptons are abundantly produced in ultraperipheral collisions. However, the spectra of gamma-quanta shown in Fig. 4 are much wider than those observed from the Galaxy and within thunderstorms. This implies that the kinetic energies of parapositronia are not small in collider studies. At the same time, the small measured width of the galactic 511 keV line indicates that parapositronia must be almost at rest there. Therefore, collider data cannot be carried over to astrophysical and terrestrial events directly. One must point out reliable sources of positrons and describe the mechanism of their cooling down to thermal energies suitable for the formation of positronia at rest. An attempt of this kind was published in Ref. \cite{dogi}. It was stated: ``We assume the black hole is a source of high energy protons generated by star accretion. The galactic black hole could be a powerful source of relativistic protons.
Secondary positrons produced by pp collisions at energies 30 MeV are cooled down to thermal energies by Coulomb collisions, and annihilate in the warm neutral and ionized phases of the interstellar medium with temperatures about several eV, because the annihilation cross-section reaches its maximum at these temperatures. From kinetic equations we shall show that processes of Coulomb collisions are effective enough to cool down these relativistic positrons and to thermalize them before their annihilation, which can explain the origin of the annihilation emission from the Galactic center.'' Here, it is demonstrated that heavy nuclei are $Z^4$ times more effective than protons in the production of positrons and direct positronia. The existence of the Fe component in cosmic rays indicates the presence of heavy nuclei in the Galaxy. Moreover, the spectra of unbound pairs are much softer than those assumed in \cite{dogi}, especially if the Sommerfeld-Gamow-Sakharov factor is properly accounted for. Therefore, positrons will thermalize more easily and create parapositronia decaying into two 511 keV gamma-rays. \section{Conclusions} Electromagnetic fields created by fast-moving charged particles can be responsible for the emission of the 511 keV gamma-rays from the Galactic center and within terrestrial thunderstorms. Studies at the NICA collider can help in understanding the main parameters of ultraperipheral collisions of protons and heavy ions. {\bf Acknowledgments} This work was supported by the RFBR project 18-02-40131. \vspace{6pt} The author declares no conflicts of interest.
\section{Introduction} Iterative Learning Control (ILC) enables significant performance improvements for batch-to-batch control applications by generating a command signal that compensates for repetitive disturbances through learning from previous iterations, also called batches or trials. Theoretical and implementation aspects, including convergence, causality, and robustness, have been addressed in, e.g., \cite{BristowThaAll2006}, \cite{AhnMooChe2007}, \cite{RogersGalOwe2007}, \cite{Owens2016}, \cite{PipeleersMoo2014}. Furthermore, successful applications have been reported in, e.g., robotics \cite{WallenDreGunRob2014}, mechatronics \cite{BolderZunKoeOom2017}, manufacturing \cite{HoelzleBar2016}, building control \cite{PengSunZhaTom2016}, nuclear fusion \cite{FeliciOom2015}, and rehabilitation \cite{FreemanHugBurChaLewRog2009}. However, several disadvantages of present ILC frameworks that limit further applications include \begin{inparaenum}[i)] \item high implementation cost due to highly unstructured command signals, which are expensive to implement;\label{item:1} \item amplification of trial-varying disturbances, including measurement noise;\label{item:2} \item inflexibility to changing reference trajectories.\label{item:3} \end{inparaenum} The aim of the present paper is to develop an ILC framework that addresses these aspects \ref{item:1})-\ref{item:3}) by enforcing sparsity. Regarding \ref{item:1}), ILC typically generates signals that require a large number of command signal updates, thus leading to an expensive implementation. ILC learns directly from measured signals, which are contaminated by trial-varying disturbances such as measurement noise. These trial-varying disturbances are often modeled as realizations of a stochastic process \cite{Ljung1999}. As a result, the ILC command signals have infinite support. In sharp contrast, command signals obtained through traditional feedforward designs, including \cite{LambrechtsBoeSte2005}, have finite support and are highly sparse. Command signals with a high number of non-zero elements, or lacking another appropriate structural property, may lead to a prohibitively expensive implementation, e.g., in wireless sensor networks, wireless control applications, or embedded platforms with shared resources \cite{GoossensAzeChaDevGooKoeLiMirMolBeyNelSin2013}. Note that this is a different aspect from the actual computation of the command signal itself, which can be done in between subsequent tasks; see \cite{ZundertBolKoeOom2016b} for results in this direction. Regarding \ref{item:2}), ILC typically amplifies trial-varying disturbances. In fact, typical ILC approaches amplify these disturbances by a factor of two, as is shown in the present paper. Approaches to attenuate trial-varying disturbances include norm-optimal ILC with appropriate input weighting \cite{BristowThaAll2006}, higher-order ILC for addressing disturbances with trial-domain dynamics \cite{GunnarssonNor2006}, and stochastic approximation-based ILC \cite{ButcherKar2011}. Also, a wavelet filtering-based approach is presented in \cite{MerryMolSte2008}, where a certain noise attenuation is achieved by setting certain wavelet coefficients to zero. In the present paper, a different approach to disturbance attenuation is pursued, in which wavelets fit immediately into the formulation, yet sparsity can be enforced in an optimal way.
Regarding \ref{item:3}), changing reference signals typically lead to performance degradation of ILC algorithms \cite{BoerenBarKokOom2016}, since these essentially constitute trial-varying disturbances. This is in sharp contrast to traditional feedforward designs \cite{LambrechtsBoeSte2005} and is widely recognized in ILC designs. A basis task approach is proposed in \cite{HoelzleAllWag2011}, where the command input is segmented. A basis function framework is developed and applied in \cite{WijdevenBos2010}, \cite{MeulenTouBos2008}, \cite{BolderOomKoeSte2014c} using polynomial basis functions, which is further extended to rational basis functions in \cite{ZundertBolOom2015}. These basis functions are typically selected based on prior information, e.g., based on the approach in \cite{LambrechtsBoeSte2005}, and trial-and-error. In model estimation and signal processing, the use of measured signals has comparable consequences, which has led to new regularization-based approaches that enforce sparsity. Early approaches include the non-negative garrote \cite{Breiman1995} and Least Absolute Shrinkage and Selection Operator (LASSO) \cite{Tibshirani1996}. These are further generalized in \cite{TibshiraniTay2011}, \cite{HastieTibWai2015}, \cite{BuhlmannGee2011}, \cite{BachJenMaiObo2011}. Related applications in system identification include \cite{RojasHja2011}, \cite{OhlssonLjuBoy2010}. Although important developments have been made in ILC and several successful applications have been reported, present approaches do not yet exploit the potential of enforcing additional structure and sparsity. The aim of the present paper is to develop a unified optimization-based approach to ILC that allows for explicitly enforcing structure and sparsity, enabling improved resource efficiency, disturbance attenuation, and flexibility to varying reference signals. The approach employs convex relaxations, enabling the use of standard optimization routines. The main contribution of the present paper is a unified framework for sparse ILC. As subcontributions, trial-varying disturbances are analyzed in detail for explicit ILC algorithms (Sec.~\ref{sec:analyzeexplicitILC}). Subsequently, a general optimization-based framework for sparse ILC is developed (Sec.~\ref{sec:spilc}), including many specific cases that are relevant to ILC applications. The results are confirmed through an application to a wafer stage system (Sec.~\ref{sec:examples}). Related developments to the results in the present paper include the use of sparsity in control, where the main results have been related to Model Predictive Control (MPC), see \cite{AnnergrenHanWah2012}, \cite{KhoshfetratOhlLju2013}, \cite{Gallieri2015}. \emph{Notation:} Throughout, $\|x\|_{\ell_p}$ denotes the usual $\ell_p$ norm, $p \in \mathbb{Z}_{> 0}$. Also, $\|x\|_0 = \sum_{i} \mathbf{1} (x_i \neq 0)$, i.e., the number of non-zero entries of $x$. Note that $\|x\|_0$ is not a norm, since it does not satisfy the homogeneity property. It relates to the general $p$-norm through the limit $\|x\|_0 = \lim_{p\rightarrow 0} \|x\|_p^p$. In addition, $\|\tf[X]\|_{\mathcal{L}_\infty}$ and $\|\tf[X]\|_{\mathcal{H}_\infty}$ denote the usual $\mathcal{L}_\infty$ and $\mathcal{H}_\infty$ norms of discrete time systems, respectively. Throughout, $J$ denotes a system that maps an input space to an output space, operating either over finite or infinite time, which follows from the context. In certain cases, the system is assumed linear, time invariant, and scalar, with transfer function representation $\tf[J]$. 
The spectrum of a signal $x$ is denoted $\phi_{x}$. \section{Problem formulation}\label{sec:probform} \begin{figure}% \centering \includegraphics[width=.9\linewidth]{closedloop2}% \caption{Parallel ILC structure \eqref{eq:parallelILC} as an example of \eqref{eq:generalILCsystem}.} \label{fig:parallelILC} \end{figure} Consider the ILC system \begin{equation}\label{eq:generalILCsystem} e_{j} = r -J f_j -v_j \end{equation} where $e_j\in \ell_2$ denotes the error signal to be minimized, $r\in \ell_2$ is the reference signal, $f_j \in \ell_2$ denotes the command signal, and $v_j \in \ell_2$ represents trial-varying disturbances, including measurement noise. Here and in the sequel, all signals are tacitly assumed to have appropriate dimensions. Furthermore, $J$ represents the true system, either open-loop or closed-loop, with causal and stable transfer function $\tf[J] \in \ensuremath{\mathcal{RH}_\infty}$. The index $j \in \ensuremath{\mathbb{Z}}_{\ge 0}$ refers to the trial number. Throughout, the command signal $f_{j+1}$ is generated by an ILC algorithm \begin{equation}\label{eq:generalILCupdate} f_{j+1} = F(f_j, e_j), \end{equation} where the ILC update $F$ is defined in more detail later on. The general setup \eqref{eq:generalILCsystem} encompasses the parallel ILC setup in Figure~\ref{fig:parallelILC}, where \begin{equation}\label{eq:parallelILC} e_j = S\tilde r - SGf_j - S \tilde v_j \end{equation} in which $S$ follows from its transfer function $\tf[S] = \frac{1}{1+\tf[G]\tf[C]}$, $r = S\tilde r$, $J = SG$, $v_j = S \tilde v_j$, and $\tf[C]$, $\tf[G]$ are assumed to be linear. From \eqref{eq:generalILCupdate} and \eqref{eq:generalILCsystem}, it is immediate that the trial-varying disturbance $v_j$ directly affects the ILC command signal. In view of this observation, the problem investigated in this paper is to develop an ILC algorithm \eqref{eq:generalILCupdate} that satisfies the following requirements: \begin{compactenum}[{R}1)] \item the iteration \eqref{eq:generalILCsystem}-\eqref{eq:generalILCupdate} is convergent over $j$;\label{item:converge} \item the iteration \eqref{eq:generalILCsystem}-\eqref{eq:generalILCupdate} leads to a small error $e_j$ in the presence of trial-invariant disturbances $r$ and trial-varying disturbances $v_j$;\label{item:disturbanceattenuation} \item the resulting command signal $f_j$ has a certain structure, including\label{item:resourceefficient} \begin{compactenum} \item \label{eq:fjspare} a small $\|f_j\|_0$, and/or, \item \label{eq:dfjspare} a piecewise constant $f_j$ with a small number of jumps. \end{compactenum} \end{compactenum} Here, R\ref{item:converge} is a basic requirement for any ILC algorithm and ensures stability in the trial domain, in addition to the assumed stability in the time domain that is guaranteed by stability of $J$ in \eqref{eq:generalILCsystem}, see also \cite{RogersGalOwe2007} for the stability of such two-dimensional systems. Requirement~R\ref{item:disturbanceattenuation} essentially states that the ILC algorithm should effectively compensate for $r$, while avoiding amplification of trial-varying disturbances $v_j$. Requirement~R\ref{item:resourceefficient} is imposed to enable resource-efficient implementations in terms of sampling or communication requirements, depending on the particular application requirements. \section{Analysis of Trial-Varying Disturbances in Explicit ILC}\label{sec:analyzeexplicitILC} In this section, trial-varying disturbances in ILC algorithms are analyzed. 
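Before specializing, it is convenient to have a concrete reference implementation of the iteration \eqref{eq:generalILCsystem}-\eqref{eq:generalILCupdate}. The following minimal sketch simulates a lifted (finite-time) version with a linear update of the form considered below; the system, learning matrices, and noise level are illustrative assumptions, not the settings of Sec.~\ref{sec:examples}.
\begin{verbatim}
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
N = 50                                    # task length in samples (assumed)
h = 0.5 ** np.arange(N)                   # assumed FIR impulse response
J = toeplitz(h, np.zeros(N))              # lifted (lower-triangular) system
r = np.sin(2 * np.pi * np.arange(N) / N)  # assumed reference signal
Q = np.eye(N)                             # no robustness filter
L = 0.5 * np.linalg.inv(J)                # inverse-model learning, gain 0.5

f = np.zeros(N)                           # f_0 = 0
for j in range(20):
    v = 1e-3 * rng.standard_normal(N)     # trial-varying disturbance v_j
    e = r - J @ f - v                     # error, cf. the ILC system
    f = Q @ (f + L @ e)                   # update, cf. the ILC update law
print(np.linalg.norm(e))                  # converged error level
\end{verbatim}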
In particular, explicit linear ILC algorithms of the general form \begin{equation}\label{eq:freqilcupdate} f_{j+1} = Q (f_j + L e_j) \end{equation} are considered. The infinite-time scalar case is treated, where $Q: \ell_2 \mapsto \ell_2$ and $L: \ell_2 \mapsto \ell_2$. Here, $Q$ and $L$ have associated transfer functions $\tf[Q] \in \ensuremath{\mathcal{RL}_\infty}$ and $\tf[L] \in \ensuremath{\mathcal{RL}_\infty}$. Note that $\tf[J] \in \ensuremath{\mathcal{RH}_\infty}$ reflects causality and stability of the system. The fact that $\tf[Q] \in \ensuremath{\mathcal{RL}_\infty}$ and $\tf[L] \in \ensuremath{\mathcal{RL}_\infty}$ reflects that ILC algorithms are typically non-causal and are usually implemented such that bounded solutions are obtained through finite-time preview or via stable inversion through a bilateral $Z$-transform \cite{ZundertBolKoeOom2016b}. % The trial-varying disturbance $v_j$ in \eqref{eq:generalILCsystem} will propagate over the iterations through the iteration-domain update \eqref{eq:freqilcupdate}. The following assumption is widely adopted \cite{Ljung1999}. \begin{assum}\label{assum:noise} Let $v_j = H n_j$, where $n_j$ is i.i.d.\ zero-mean white noise with variance $\lambda_e$ and $\tf[H]$ is monic and bistable. \end{assum} Clearly, $v_j$ typically does not have compact support. As a result, $f_{j+1}$ will not have compact support in general due to the ILC algorithm \eqref{eq:freqilcupdate}. To enable a more detailed analysis, the following auxiliary result provides a suitable condition to guarantee that the iteration defined by \eqref{eq:generalILCsystem} and \eqref{eq:freqilcupdate} converges. \begin{thm}\label{thm:contractionmap} The iteration defined by \eqref{eq:generalILCsystem} - \eqref{eq:freqilcupdate} converges monotonically in the $\ell_2$ norm to a fixed point $f_\infty$ and resulting $e_\infty$ iff \begin{equation} \|\tf[Q](1-\tf[L]\tf[J])\|_{\mathcal{L}_\infty} < 1. \end{equation} \end{thm} \begin{proof} Substituting \eqref{eq:generalILCsystem} into \eqref{eq:freqilcupdate} leads to \begin{math} f_{j+1} = Q(I-LJ)f_j + QLr - QLv_j. \end{math} Using transfer function representations and subsequent application of the Banach fixed-point theorem in conjunction with \cite[Theorem 4.4]{ZhouDoyGlo1996} yields the desired result. \end{proof} Note that Theorem~\ref{thm:contractionmap} allows for non-causal ILC algorithms, i.e., $Q, L \in \ensuremath{\mathcal{RL}_\infty}$. This is more general than related analyses, including \cite[Chapter 3]{Moore1993}, which only allow for causal ILC algorithms by restricting to the $\mathcal{H}_\infty$ norm. The following result is the main result of this section and reveals the propagation of noise in the iteration defined by \eqref{eq:generalILCsystem} and \eqref{eq:freqilcupdate}. \begin{thm}\label{thm:noiseanalysis} Given the system \eqref{eq:generalILCsystem} and ILC update \eqref{eq:freqilcupdate} with $f_0 = 0$, Assumption~\ref{assum:noise}, and that the iteration is stable in the sense of Theorem~\ref{thm:contractionmap}, then, \begin{equation}\label{eq:limiterrorspectrum} \textstyle \phi_{e_\infty} = \left| \frac{1- \tf[Q]}{1-\tf[Q](1-\tf[L]\tf[J])} \right|^2 \phi_r + \left( 1 + \frac{\left| \tf[J]\tf[Q]\tf[L] \right|^2}{1-|\tf[Q](1-\tf[L]\tf[J])|^2} \right)\phi_v. \end{equation} \end{thm} Theorem~\ref{thm:noiseanalysis} provides a detailed analysis of the propagation of noise for the general ILC algorithm \eqref{eq:freqilcupdate}. In special cases, the result can be further simplified. 
For instance, in inverse-model ILC, $\tf[Q]= 1$ and $\tf[L] = \tf[J]^{-1} \in \ensuremath{\mathcal{RH}_\infty}$, in which case Theorem~\ref{thm:noiseanalysis} reveals that \begin{equation}\label{eq:noiseamplification} \phi_{e_\infty} = 2 \phi_v. \end{equation} The result \eqref{eq:noiseamplification} reveals that the limit error spectrum involves an amplification of the noise spectrum by a factor of two. Inclusion of a learning gain $\alpha \in (0,1]$ in inverse-model ILC, i.e., replacing \eqref{eq:freqilcupdate} by $f_{j+1} = Q (f_j + \alpha L e_j)$, mitigates the amplification of trial-varying disturbances, i.e., \begin{equation} \phi_{e_\infty} = \left( 1 + \frac{\alpha^2}{2\alpha - \alpha^2} \right)\phi_v. \end{equation} For $\alpha \rightarrow 0$, a first-order Taylor series approximation yields \begin{equation}\label{eq:rolealpha} \phi_{e_\infty} \approx \left( 1 + \frac{1}{2} \alpha \right)\phi_v. \end{equation} Hence, choosing $\alpha$ small leads to a limit error approaching $\phi_{e_\infty} = \phi_v$, which intuitively corresponds to the optimal result, since the iteration-domain feedback \eqref{eq:freqilcupdate} cannot attenuate $v_j$ in iteration $j$. An alternative to attenuate $\phi_v$ is to re-design the controller $C$ in \eqref{eq:parallelILC}, which, from a disturbance attenuation perspective, should be designed such that $\tf[S] \approx \tf[H]^{-1}$, as is advocated in \cite{BoerenBruOom2017}. Note that this affects $J$ in \eqref{eq:generalILCsystem}.% \begin{remark} The results in this section rely on infinite time signals and LTI systems. Alternative ILC designs based on finite-time optimization \cite{BristowThaAll2006}, see also the forthcoming section, explicitly address the boundary effects, typically leading to an LTV ILC update \eqref{eq:generalILCupdate}, even if $J$ is LTI. In~\cite{ZundertBolKoeOom2016b}, it is shown that these optimization-based designs are equivalent to a certain linear-quadratic-tracking problem. As a result, the solution reaches a certain stationary value for sufficiently long task lengths, in which case an LTI $L$ and $Q$ can be derived for which the results of Theorem~\ref{thm:noiseanalysis} apply. This also implies that the design of weighting filters for such optimization-based designs can be further investigated, as is briefly summarized in the next section.% \end{remark} \section{Sparse ILC}\label{sec:spilc} In this section, a general optimization-based ILC framework is presented that allows for enforcing additional structure compared to alternative ILC approaches. In fact, traditional norm-optimal ILC algorithms \cite{BristowThaAll2006} are recovered as a special case. In the next subsection, the general framework is presented and motivated, followed by specific design choices in the subsequent sections. \subsection{General approach}\label{sec:generalapproach} Throughout, the criterion \begin{equation}\label{eq:gencrit} \begin{split} \crit(f_{j+1}) = & \frac{1}{2} \|W_e e_{j+1} \|_2^2 + \frac{1}{2} \|W_f f_{j+1} \|_2^2 \\&+ \frac{1}{2} \|W_{{\ensuremath{{\Delta f}}}} \left( f_{j+1} - f_j \right) \|_2^2 + \lambda \|D f_{j+1} \|_1 \end{split} \end{equation} is considered. Here, finite-time signals of length $N$ are used to obtain an optimization problem with a finite number of decision variables, i.e., $e_j, f_j \in \mathbb{R}^N$. The matrices are defined in the sequel and are assumed to have compatible dimensions. 
In addition, existence of a unique solution is typically assumed, which can be directly enforced through appropriate positive (semi-)definiteness assumptions on the design variables $W_e$, $W_f$, $W_{\ensuremath{{\Delta f}}}$, $D$, and $\lambda$. Also, $e_{j+1}$ in \eqref{eq:gencrit} is considered to be the noise-free prediction $e_{j+1} = r-Jf_{j+1}$. Since $r$ is also unknown, the main idea in ILC is to eliminate it using the measured error of trial $j$, leading to \begin{equation}\label{eq:iterativeej} e_{j+1} = e_j - J(f_{j+1} - f_j), \end{equation} where $e_j$ is the measured error signal during trial $j$. Thus, substituting \eqref{eq:iterativeej} into \eqref{eq:gencrit} renders the optimization problem as a function of the known variables $e_j, f_j$, user-defined variables, and the decision variable $f_{j+1}$. The motivation for considering \eqref{eq:gencrit} is as follows. First, if $\lambda = 0$, then standard norm-optimal ILC is recovered, e.g., as in \cite{GunnarssonNor2001}. In this case, an analytic solution of the form \eqref{eq:freqilcupdate} is directly obtained with \begin{align} L &= (J^T \bar W_e J + \bar W_{{\ensuremath{{\Delta f}}}})^{-1}J^T \bar W_e \label{eq:NOILC1} \\ Q &= (J^T \bar W_e J + \bar W_f + \bar W_{{\ensuremath{{\Delta f}}}})^{-1}(J^T \bar W_e J + \bar W_{{\ensuremath{{\Delta f}}}}),\label{eq:NOILC2} \end{align} where $\bar W_e = W_e^T W_e$, $\bar W_f = W_f^TW_f$, and $\bar W_{\ensuremath{{\Delta f}}} = W_{\ensuremath{{\Delta f}}}^TW_{\ensuremath{{\Delta f}}}$. The second motivation stems from the observation that the terms $\frac{1}{2} \|W_f f_{j+1} \|_2^2$ and $\frac{1}{2} \|W_{{\ensuremath{{\Delta f}}}} \left( f_{j+1} - f_j \right) \|_2^2$ essentially involve a ridge regression or Tikhonov regularization. If $f_j = 0$, then the two terms coincide. If $f_j \neq 0$, i.e., during the ILC iterations, then $W_f$ typically leads to $Q \neq I$ in \eqref{eq:freqilcupdate}, providing robustness with respect to modeling errors \cite{Bristow2008}. Increasing $W_{\ensuremath{{\Delta f}}}$ attenuates trial-varying disturbances, which is similar to reducing $\alpha$ in \eqref{eq:rolealpha}. Note that $W_f$ also plays a small role in attenuating trial-varying disturbances, since it essentially leads to a smaller mean-square error. However, it leads to a non-zero limit error $e_\infty$ even in the absence of $v_j$, due to the weight on the command signal, which coincides with $\tf[Q] \neq 1$ in Theorem~\ref{thm:noiseanalysis}. The third and main motivation for considering the extended criterion \eqref{eq:gencrit} is the additional term $\lambda \|D f_{j+1} \|_1$ that is used to enforce sparsity and structure. Note that sparsity is measured directly through the $\ell_0$ norm. However, inclusion of an $\ell_0$ penalty in the criterion \eqref{eq:gencrit} leads to a non-convex optimization problem, which in fact is NP-hard, see \cite{Natarajan1995}. The $\ell_1$ norm is a convex relaxation of the $\ell_0$ norm. To see this, note that \eqref{eq:gencrit} is essentially in Lagrangian form. For the purpose of explanation, consider the simplified form by selecting $W_e = I$, $j = 1$, $f_0 = 0$, $W_f = 0$, $W_{\ensuremath{{\Delta f}}} = 0$, and $D = I$. Using \eqref{eq:iterativeej} \begin{equation}\label{eq:l1simplified} \crit(f_1) = \frac{1}{2} \|e_0 - J f_1 \|_2^2 + \lambda \|f_1\|_1, \end{equation} which is equivalent to the primal optimization problem \begin{equation} \begin{aligned} & \min_{f_1} & & \frac{1}{2}\|e_0 - J f_1 \|_2^2 \\ & \text{subject to} & & \|f_1\|_1 \leq t 
\label{eq:l1primal} \end{aligned} \end{equation} for the range of $t$ where the constraint in \eqref{eq:l1primal} is active. This implies that for a given value of $\lambda$, there exists a value of $t$ for which \eqref{eq:l1simplified} and \eqref{eq:l1primal} have identical minima. In this simplified case, the interpretation in \cite{Tibshirani1996} applies to the ILC problem. In particular, the constraint in \eqref{eq:l1primal} is plotted in Fig.~\ref{fig:motivationellone} in addition to several elliptical contour lines of the objective function in \eqref{eq:l1primal}. The solution to \eqref{eq:l1primal} corresponds to the smallest ellipsoid that touches the rhombus of the constraint. If this happens at a corner, as is common and as in this case, then one of the coefficients is zero and a sparse solution is obtained. In contrast, traditional norm-optimal ILC, i.e., corresponding to the solution \eqref{eq:NOILC1} - \eqref{eq:NOILC2}, typically does not lead to a sparse solution with zero entries in $f_1$. To see this, consider a similar simplified case as in \eqref{eq:l1simplified} \begin{equation}\label{eq:l2simplified} \crit(f_1) = \frac{1}{2} \|e_0- J f_1 \|_2^2 + \tau \|f_1\|_2, \end{equation} which is again in Lagrangian form. Here, $\tau$ directly relates to the weights in \eqref{eq:gencrit} if $W_f$ and $W_{\ensuremath{{\Delta f}}}$ are selected diagonal, as is common, with initialization $f_0 = 0$. The primal optimization problem corresponding to \eqref{eq:l2simplified} is given by \begin{equation} \begin{aligned} & \min_{f_1} & & \frac{1}{2}\|e_0 - J f_1 \|_2^2 \\ & \text{subject to} & & \|f_1\|_2 \leq t. \label{eq:l2primal} \end{aligned} \end{equation} In Fig.~\ref{fig:motivationellone}, the constraint is again shown together with the contour lines of the objective function. Due to the lack of corners of the constraint, the presence of zeros in the solution of \eqref{eq:l2primal} is very unlikely in general. Hence, the $\ell_1$ norm promotes sparse solutions, whereas the $\ell_2$ norm in general does not. \begin{figure}% \centering \includegraphics[width=.75\linewidth]{lassoV2}% \caption{Enforcing sparsity in ILC. Assuming $N = 2$, $f_1$ contains two elements. The constraint set, i.e., the $\ell_1$ ball, is plotted in green. In addition, ellipsoidal contour lines corresponding to the objective in \eqref{eq:l1primal} are plotted. The optimal solution is found at the point where the contour line first touches the constraint set, which in this case implies $f_1(1) = 0$, hence $f_1$ is sparse. In contrast, in the ridge regression case of \eqref{eq:l2primal} (whose constraint is shown in red), the solution is not sparse. In particular, this solution is obtained when the contour lines of the objective function in \eqref{eq:l2primal} first touch the constraint set corresponding to the $\ell_2$ ball. In addition, $f_1^{\star}$ denotes the unconstrained solution to the objective function in \eqref{eq:l1primal} and \eqref{eq:l2primal}.} \label{fig:motivationellone} \end{figure} Finally, it is remarked that if $\lambda > 0$, then the solution to \eqref{eq:gencrit} typically cannot be obtained in closed form as in \eqref{eq:NOILC1}-\eqref{eq:NOILC2}. Interestingly, a unique solution to \eqref{eq:gencrit} exists due to convexity. The optimization problem \eqref{eq:gencrit} can be readily solved using general convex optimizers. In addition, several efficient algorithms have been developed, see, e.g., \cite[Chapter 5]{HastieTibWai2015} for an overview. 
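As an illustration of the general-purpose route, the criterion \eqref{eq:gencrit}, after substitution of \eqref{eq:iterativeej}, can be posed directly in a modeling language such as CVXPY. The sketch below uses illustrative matrices and weights; it is not the tuning used in Sec.~\ref{sec:examples}.
\begin{verbatim}
import numpy as np
import cvxpy as cp

N = 50
rng = np.random.default_rng(1)
J = np.tril(rng.standard_normal((N, N))) / N  # assumed lifted model
e_j = rng.standard_normal(N)                  # measured error, trial j
f_j = np.zeros(N)                             # previous command signal
W_e = np.eye(N)                               # error weight
W_df = 1e-3 * np.eye(N)                       # weight on f_{j+1} - f_j
D = np.eye(N)                                 # identity D: plain lasso
lam = 1e-2                                    # sparsity weight (assumed)

f = cp.Variable(N)
e_next = e_j - J @ (f - f_j)                  # predicted next error
cost = (0.5 * cp.sum_squares(W_e @ e_next)
        + 0.5 * cp.sum_squares(W_df @ (f - f_j))
        + lam * cp.norm1(D @ f))
cp.Problem(cp.Minimize(cost)).solve()
f_next = f.value                              # command for trial j+1
\end{verbatim}
Tailored algorithms are typically much more efficient than such general-purpose solvers.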
Several such algorithms provide the entire solution path as a function of $\lambda$. The particular algorithm depends on the choice of $D$; several relevant choices are outlined below. \subsection{Sparse command signals via lasso}\label{sec:lassoilc} In view of Requirement R\ref{eq:fjspare} in Sec.~\ref{sec:probform}, in certain applications it is required to have a sparse command signal $f_j$. To this end, $D$ in \eqref{eq:gencrit} can be selected as $D = I$. As a result, the value of $\lambda > 0$ will dictate the sparsity of the solution. In addition, in this classical lasso approach, $W_f$ and $W_{\ensuremath{{\Delta f}}}$ may be selected as $W_f = 0$ and $W_{\ensuremath{{\Delta f}}} = 0$, i.e., traditional design guidelines for norm-optimal ILC regarding positive definiteness of these matrices, as in \cite{GunnarssonNor2001}, need not be considered, even for the situation where $J$ is singular. The resulting criterion becomes \begin{equation}\label{eq:lassocrit} \begin{split} \crit(f_{j+1}) = & \frac{1}{2} \|W_e (e_j - J f_{j+1}) \|_2^2 + \lambda \|f_{j+1}\|_1, \end{split} \end{equation} which closely reflects the original lasso approach in \cite{Tibshirani1996}. \subsection{Elastic net lasso}\label{sec:elasticnet} In the lasso ILC approach in Sec.~\ref{sec:lassoilc}, the commonly used weighting matrices $W_f$ and $W_{\ensuremath{{\Delta f}}}$ are set to zero. Interestingly, by selecting either $W_f$ or $W_{{\ensuremath{{\Delta f}}}}$ nonzero, an ILC algorithm that relates to the elastic net is obtained, see \cite{ZouHas2005}, which combines lasso and ridge regression. An important advantage is that the elastic net improves group sparsity, where several components become zero simultaneously. Notice that a drawback of the so-called naive elastic net, which coincides with $W_f \neq 0$, $W_{{\ensuremath{{\Delta f}}}} = 0$, is that it leads to a double shrinkage, and it benefits from a correction step \cite{ZouHas2005}. In contrast, in ILC the alternative choice $W_f = 0$, $W_{{\ensuremath{{\Delta f}}}} \neq 0$ can be made, which enforces sparsity in addition to attenuating trial-varying disturbances, see Sec.~\ref{sec:generalapproach}. \subsection{Sparse updates via fused lasso}\label{sec:fusedlasso} In view of Requirement R\ref{eq:dfjspare}, it may be required that the signal $f_j$ is not necessarily sparse but piecewise constant, i.e., its value only changes occasionally in time. This requires a certain structure of the signal, which is different from sparsity as measured by $\|f_j\|_0$. The main idea is to select $D$ as \begin{equation}\label{eq:fusedlassoD} D_{f} = \begin{bmatrix} -1 & 1 & \\ & -1 & 1 \\ & & \ddots & \ddots\\ & & & -1 & 1 \end{bmatrix}, \end{equation} a choice which is also known as the fused lasso, see, e.g., \cite{TibshiraniSauRosZhuKni2005}, and leads to the criterion \begin{equation}\label{eq:fusedlassocrit} \begin{split} \crit(f_{j+1}) = & \frac{1}{2} \|W_e (e_j - J f_{j+1}) \|_2^2 + \lambda \|D_ff_{j+1} \|_1. \end{split} \end{equation} Interestingly, the fused lasso \eqref{eq:fusedlassocrit} can be recast as a traditional lasso of the form \eqref{eq:lassocrit}, yet with an increment-input-output system description. To establish the connection, let $\tf[J^i] = \tf[J] \frac{1}{1-z^{-1}}$ be the increment-input-output system. Also, extend $D_f$ in \eqref{eq:fusedlassoD} to the invertible matrix \begin{equation}\label{eq:fusedlassoD2} D_{f}^i = \begin{bmatrix} 1 \\ -1 & 1 & \\ & -1 & 1 \\ & & \ddots & \ddots\\ & & & -1 & 1 \end{bmatrix}. 
\end{equation} Then, a change of variables \begin{equation} f^i_{j+1} = D_f^i f_{j+1}, \end{equation} where $f^i_{j+1} $ denotes the incremental input, leads to \begin{equation}\label{eq:fusedlassocrittransformed} \begin{split} \crit(f_{j+1}) = & \frac{1}{2} \|W_e (e_j - J^i f_{j+1}^i) \|_2^2 + \lambda \|f_{j+1}^i \|_1, \end{split} \end{equation} with $J^i = J (D_f^i)^{-1}$ corresponding to $\tf[J^i]$. \subsection{Sparse fused lasso}\label{sec:sparsefusedlasso} Up to this point, Requirement R\ref{eq:fjspare} and Requirement R\ref{eq:dfjspare} have been addressed separately in Sec.~\ref{sec:lassoilc} and Sec.~\ref{sec:fusedlasso}, respectively. In certain applications, it may be desired to impose both Requirement R\ref{eq:fjspare} and Requirement R\ref{eq:dfjspare}. Interestingly, Requirement R\ref{eq:fjspare} and Requirement R\ref{eq:dfjspare} can both be enforced by selecting \begin{align}\label{eq:sparsefusedlasso} D = \begin{bmatrix} \alpha D_f \\ I \end{bmatrix}, \end{align} in \eqref{eq:gencrit}. Here, the parameter $\lambda$ can still be chosen to enforce sparsity, i.e., Requirement R\ref{eq:fjspare}, whereas the additional tuning parameter $\alpha \in \mathbb{R}_{\geq 0}$ enforces Requirement R\ref{eq:dfjspare}. This leads to the so-called sparse fused lasso \cite{TibshiraniTay2011}. Note that additional requirements can easily be incorporated using a similar construction as \eqref{eq:sparsefusedlasso}. \subsection{Basis function ILC} In recent extensions to ILC, several basis functions are employed. On the one hand, wavelet basis functions are used in, e.g., \cite{MerryMolSte2008}. These immediately fit in the formulation \eqref{eq:gencrit}, see also \cite[Sec.\ 2.1.3]{TibshiraniTay2011}, enabling a systematic way for thresholding while explicitly addressing the performance criterion. On the other hand, flexibility to varying reference signals is achieved by employing basis functions that depend on the reference. In particular, the command signal is parameterized as $f_{j+1} = \Psi(r) \theta_{j+1}$, see, e.g., \cite{WijdevenBos2010}, \cite{MeulenTouBos2008}, \cite{BolderOomKoeSte2014c}, \cite{ZundertBolOom2015}. The proposed framework can be employed to minimize the number of required basis functions. For instance, a large set can be postulated, e.g., following the guidelines in \cite{LambrechtsBoeSte2005}. Next, an alternative formulation of \eqref{eq:gencrit} can be considered, e.g., \begin{equation}\label{eq:gencritbasis} \begin{aligned} & \min_{\theta_{j+1}} & & \|\theta_{j+1}\|_1 \\ & \text{subject to} & & \frac{1}{2} \|W_e e_{j+1} \|_2^2 + \frac{1}{2} \|W_f \Psi(r) \theta_{j+1} \|_2^2 \\ & & &\quad + \frac{1}{2} \|W_{{\ensuremath{{\Delta f}}}} \Psi(r) \left( \theta_{j+1} - \theta_{j} \right) \|_2^2 \leq t, \end{aligned} \end{equation} where a suitable value of $t$ can be obtained by solving the standard norm-optimal ILC in \eqref{eq:NOILC1}-\eqref{eq:NOILC2}. \subsection{Extensions, analysis, and discussion} \label{sec:extensions} A general framework for enforcing sparsity and structure in iterative learning control has been proposed, and several specific choices have been outlined. Further extensions that are beyond the scope of the present paper but can be directly incorporated include group lasso \cite{YuanLin2006}, adaptive lasso \cite{Zou2006}, reweighted $\ell_1$ \cite{CandesWakBoy2008}, and the use of non-convex penalties \cite{BertsimasKinMaz2016}. 
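To make the structure matrices above concrete, the following sketch constructs the first-order difference matrix $D_f$ of \eqref{eq:fusedlassoD} and the stacked matrix $D$ of \eqref{eq:sparsefusedlasso}; the signal length and trade-off parameter are illustrative assumptions.
\begin{verbatim}
import numpy as np

N = 6            # signal length, kept small for display (assumed)
alpha = 0.5      # trade-off between the two penalties (assumed)

# (N-1) x N first-order difference matrix D_f
D_f = np.zeros((N - 1, N))
for i in range(N - 1):
    D_f[i, i], D_f[i, i + 1] = -1.0, 1.0

# stacked matrix of the sparse fused lasso:
# ||D f||_1 = alpha * ||D_f f||_1 + ||f||_1
D = np.vstack([alpha * D_f, np.eye(N)])
print(D.shape)   # (2N - 1, N)
\end{verbatim}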
\subsubsection{Reestimation for debiasing} Note that the lasso shrinks the estimate compared to the least-squares terms in \eqref{eq:gencrit}. Through a reestimation step of the nonzero coefficients, debiasing is obtained. Note that in certain cases, the bias helps to obtain a smaller overall error, i.e., including both bias and variance aspects, which closely relates to the well-known Stein estimator \cite{Stein1956}. However, for ILC such a bias is undesired, since it is automatically eliminated by performing iterations, see Theorem~\ref{thm:contractionmap}. Thus, it is expected that as the ILC iterations increase, the advantages of reestimating for debiasing become more important. Similar reestimation steps are proposed in \cite{RojasHja2011}, \cite{RojasTotHja2014}, \cite[Page 439]{Murphy2012}, \cite[Sec.\ 7.1]{KimKohBoyGor2009}. Interestingly, in the context of ILC, the idea of enforcing sparsity followed by a reestimation step essentially has the same role as a $Q$-filter in traditional ILC, see \cite{BoerenBarKokOom2016} for details. \subsubsection{Sparse signal recovery} The main motivation for using the $\ell_1$ norm in \eqref{eq:gencrit} essentially is to provide a convex relaxation of the $\ell_0$ norm. In case the optimal command input is sparse, i.e., the signal $f_{\infty}$ that minimizes $\crit(f_\infty)$ for $j \rightarrow \infty$ and $v_j = 0$, a relevant question is whether this optimal sparse vector can be recovered using the formulation \eqref{eq:gencrit}. The answer depends on the sparsity of the underlying optimal command input $f_j$, as well as on the matrix $J$. In \cite{CandesTao2005}, a sufficient condition that relies on the restricted isometry property is provided. However, these conditions are violated for many practical cases. Nonetheless, the formulation \eqref{eq:gencrit} provides an effective way to enforce sparsity. \subsubsection{Monotonic convergence} Monotonic convergence is a commonly used requirement for practical applications. Indeed, it is well-known that poorly designed ILC algorithms can lead to a significant learning transient. Traditional norm-optimal ILC, i.e., setting $\lambda = 0$ in \eqref{eq:gencrit}, is monotonically convergent in $f_j$, see, e.g., \cite{Bristow2008}, where $v_j = 0$ is tacitly assumed, as usual, to analyze monotonic convergence. However, if $\lambda > 0$, the criterion \eqref{eq:gencrit} involves multiple norms, i.e., both the $\ell_1$ and the $\ell_2$ norm. As a result, monotonic convergence requires a more detailed analysis. To proceed, consider for instance the elastic net lasso of Sec.~\ref{sec:elasticnet} with $D = I$, $W_f = 0$, $W_{\ensuremath{{\Delta f}}} \succ 0$. In this case, monotonic convergence of the ILC cannot be guaranteed in general if $\lambda > 0$. Interestingly, in this case the criterion \eqref{eq:gencrit} can be recast as \begin{equation}\label{eq:gencritmonconv} \begin{split} \crit(f_{j+1}) = & \frac{1}{2} \left\| \left( \begin{bmatrix} W_e e_j \\ 0 \end{bmatrix} + \begin{bmatrix} W_e J \\ W_{\ensuremath{{\Delta f}}} \end{bmatrix}f_j \right) - \begin{bmatrix} W_e J \\ W_{\ensuremath{{\Delta f}}} \end{bmatrix} f_{j+1} \right\|_2^2 \\ &+ \lambda \| f_{j+1} \|_1. 
\end{split} \end{equation} Next, there exists a value of $\tau$ such that the optimization problem \begin{equation}\label{eq:critformonconv} \begin{aligned} & \min_{f_{j+1}} & & \| f_{j+1} \|_1 \\ & \text{subject to} & &\frac{1}{2} \left\| \begin{bmatrix} W_e e_j \\ 0 \end{bmatrix} - \begin{bmatrix} W_e J \\ W_{\ensuremath{{\Delta f}}} \end{bmatrix} (f_{j+1}-f_j) \right\|_2^2 \leq \tau \end{aligned} \end{equation} has a solution identical to that of \eqref{eq:gencritmonconv} at a certain iteration $j$. If $\tau$ is fixed, then the criterion \eqref{eq:critformonconv} can be directly used to enforce monotonic convergence of $f_j$ in the $\ell_1$-norm. \section{Application to a Wafer Stage}\label{sec:examples} \subsection{Setup} \begin{figure}% \centering \fbox{\includegraphics[width=.9\linewidth]{A7}}% \caption{Considered wafer stage application.} \label{fig:systemdef} \end{figure} The considered system is a wafer stage, see Fig.~\ref{fig:systemdef}. Wafer stages are positioning systems that are used in the production of integrated circuits (ICs) through a photolithographic process. The considered wafer stage is controlled in all six motion degrees-of-freedom, i.e., three translations and three rotations. The system is a dual-stage system, where the long-stroke stage enables a stroke of $1 \ \mathrm{m}$ in the horizontal plane, whereas the short-stroke stage enables a positioning accuracy of $1 \ \mathrm{nm}$. Further details on the system and the considered actuation and sensor system are provided in \cite{OomenHerQuiWalBosSte2014}. Throughout, a sampling frequency of $1 \ \mathrm{kHz}$ is adopted, as in \cite{Oomen2014}. To enable a detailed comparison between the various approaches in Sec.~\ref{sec:spilc}, the identified model in \cite{Oomen2014} is considered as the true system, i.e., the result as described in \cite{Oomen2014} is denoted $G_o$. In addition, the feedback controller designed in \cite{Oomen2014} is adopted to stabilize the system. In Fig.~\ref{fig:system}, the open-loop $G_o$ and closed-loop $S_o G_o$ are depicted. In addition, a closed-loop model is constructed, where a model error is introduced by selecting $J = 0.7 S_oG_o$. This model error is introduced to investigate robust convergence properties of ILC. The resulting model $J$ is also depicted in Fig.~\ref{fig:system}. The additive noise $\tilde v_j$ is zero-mean white noise with a normal distribution and variance $\lambda_e = 1.5 \cdot 10^{-7}$. As a result, $H$ in Assumption~\ref{assum:noise} has transfer function $\tf[H] = \frac{1}{1+\tf[G_o]\tf[C]}$. \begin{figure}% \centering \includegraphics[width=.9\linewidth]{system}% \caption{Open-loop true system $G_o$ in (\Colorone), closed-loop true system $S_oG_o$ (\Colortwo), closed-loop model $J$ (\Colorthree).} \label{fig:system} \end{figure} \begin{figure}% \centering \includegraphics[width=.9\linewidth]{task}% \caption{Reference $r$ in \eqref{eq:generalILCsystem} (\Colorone), scaled acceleration profile (\Colortwo).} \label{fig:task} \end{figure} The task $r$, which is a position signal, is shown in Fig.~\ref{fig:task}. In addition, the corresponding scaled acceleration profile is depicted, which is expected to constitute the main contribution to $f_j$ \cite{LambrechtsBoeSte2005}, \cite{MeulenTouBos2008}. For the considered wafer stage application in Fig.~\ref{fig:systemdef}, the constant velocity phase, which takes place between $0.03\ \mathrm{s}$ and $0.24 \ \mathrm{s}$, is most important for performance, see \cite[Fig.\ 16 and Fig.\ 20]{Butler2011}. 
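Each trial in the experiments below is perturbed by a fresh disturbance realization in accordance with Assumption~\ref{assum:noise}. A minimal sketch of generating such a realization with the stated variance is given below, where the first-order filter is an illustrative stand-in for the closed-loop dynamics, not the identified wafer-stage model.
\begin{verbatim}
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(3)
N, lam_e = 500, 1.5e-7                         # task length (assumed); variance
n_j = np.sqrt(lam_e) * rng.standard_normal(N)  # white noise n_j
# assumed monic, bistable H(z) = 1 / (1 - 0.9 z^{-1})
v_j = lfilter([1.0], [1.0, -0.9], n_j)         # v_j = H n_j
print(v_j.var())
\end{verbatim}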
The goal of this section is to illustrate and compare the proposed approaches in Sec.~\ref{sec:spilc}. The reference situation, i.e., feedback only with $f_0 = 0$ in Fig.~\ref{fig:parallelILC} is shown in Fig.~\ref{fig:ilctikresults} (\Colorone), Fig.~\ref{fig:ilctikresults2}, and Fig.~\ref{fig:spectra}. In particular, the approaches in Sec.~\ref{sec:spilc} are applied in this section. % \subsection{Traditional Norm-Optimal ILC}\label{sec:exampleNOILC} First, the traditional norm-optimal ILC solution is implemented with $\lambda = 0$ in \eqref{eq:gencrit} with the analytic solution \eqref{eq:NOILC1}-\eqref{eq:NOILC2}. Here, $W_e = I$, $W_f = 0$, and $W_{{\ensuremath{{\Delta f}}}} = 10^{-10} I$. Notice that $W_{{\ensuremath{{\Delta f}}}}$ is relatively small but nonzero, since a nonzero $W_{{\ensuremath{{\Delta f}}}}$ or $W_f$ is required to enforce a unique optimal solution. The results after $40$ iterations are depicted in Fig.~\ref{fig:ilctikresults}. Clearly, the error is reduced to a very small value. As is expected, the feedforward is nonzero at every time instant and very noisy. To further analyze these results, the $2$-norm of the stochastic, i.e., trial-varying, part of the error is computed as $\sqrt{\sum_{t = 1}^N (e_j(t) - \hat e_\infty(t))^2}$, see Fig.~\ref{fig:ilctikresults2}. Here, $\hat e_\infty $ is computed as follows. After a sufficient number of iterations $n_{\text{conv}}$, the ILC algorithm is assumed to have converged, after which $n_{\text{iter}}$ additional iterations are used to compute \begin{math} \hat e_\infty = \frac{1}{n_{\text{iter}}} \sum_{j = n_{\text{conv}}}^{n_{\text{conv}}+n_{\text{iter}}-1}e_j. \end{math} Clearly, Fig.~\ref{fig:ilctikresults2} reveals that the trial-varying part of the error is amplified by a factor $2$, which corroborates the result of Theorem~\ref{thm:noiseanalysis}, where $Q \approx 1$ due to the specific selection of weighting filters. To further investigate the amplification of trial-varying disturbances, the spectrum of the trial-varying part of the errors in Fig.~\ref{fig:ilctikresults2} is estimated, see Fig.~\ref{fig:spectra}. In addition, the spectrum $\phi_v = \left| \frac{1}{1+\tf[G_o]\tf[C]}\right|^2 \lambda_e$ is computed, as well as $2\phi_v$. Again, this clearly confirms the result of Theorem~\ref{thm:noiseanalysis}. In particular, the presented ILC approach with $\lambda = 0$ and small $W_f$ and $W_{\ensuremath{{\Delta f}}}$ leads to a perfect attenuation of trial-invariant disturbances. However, it amplifies trial-varying disturbances by a factor two, and leads to an $f_j$ with large $\|f_j\|_0$, violating Requirement R\ref{eq:fjspare}, as well as R\ref{eq:dfjspare}. Summarizing, the results in Fig.~\ref{fig:ilctikresults}, Fig.~\ref{fig:ilctikresults2}, and Fig.~\ref{fig:spectra} confirm that norm-optimal ILC amplifies trial-varying disturbances, and leads to a non-sparse solution in view of Requirement R\ref{eq:fjspare} and Requirement R\ref{eq:dfjspare}. \begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilctikresults}% \caption{Top: error $e_{j}$. Bottom: command signal $f_{j}$. 
Shown are iteration $j=0$ (\Colorone), iteration $j=40$ for traditional norm-optimal ILC of Sec.~\ref{sec:exampleNOILC} (\Colortwo).} \label{fig:ilctikresults} \end{figure} \begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilctikresults2}% \caption{Trial-varying part of the error at iteration $j=0$ (\Colorone) and at iteration $j=40$ for traditional norm-optimal ILC of Sec.~\ref{sec:exampleNOILC} (\Colortwo).} \label{fig:ilctikresults2} \end{figure} \begin{figure}% \centering \includegraphics[width=.9\linewidth]{spectra}% \caption{Estimated spectrum of trial-varying part of the error without ILC (solid blue) and for traditional norm-optimal ILC of Sec.~\ref{sec:exampleNOILC} (solid red). Also shown are the spectra $2\phi_v$ (dashed blue) and $\phi_v$ (dashed red).} \label{fig:spectra} \end{figure} \subsection{Lasso ILC}\label{sec:exampleLassoILC} To address Requirement R\ref{eq:fjspare}, the approach in Sec.~\ref{sec:lassoilc} is applied. In particular, $W_e = I$, $W_f = 0$, $W_{{\ensuremath{{\Delta f}}}} = 0$, $D = I$, and $\lambda = 5 \cdot 10^{-9}$. Next, the ILC iteration is started, and after $40$ iterations it leads to $e_{40}$ and $f_{40}$ in Fig.~\ref{fig:ilclasresults}. Interestingly, $\|f_{40}\|_0$ is much smaller for the lasso ILC approach compared to the results of Sec.~\ref{sec:exampleNOILC}, as is confirmed in Fig.~\ref{fig:ilclasresults3}, thereby addressing Requirement R\ref{eq:fjspare}. Also, the $2$-norm of the error signal is computed, see Fig.~\ref{fig:ilclasresults2}. Clearly, the error reduces significantly over the iterations. Finally, the re-estimated lasso, as explained in Sec.~\ref{sec:extensions}, is also implemented. The results are also depicted in Fig.~\ref{fig:ilclasresults2}. Interestingly, it can be observed that re-estimating leads to a smaller limit error, as is expected. However, note that during the iterations, the approach of Sec.~\ref{sec:lassoilc} leads to a smaller error compared to the re-estimated version in several of the initial iterations. An explanation for this aspect is that the biased estimate leads to a smaller overall error, an effect similar to the Stein estimator. Hence, it is concluded that for non-iterative approaches, the biased estimate can be useful in terms of a bias/variance trade-off, but in the iterative schemes the benefit of re-estimation is clearly confirmed in Fig.~\ref{fig:ilclasresults}. \begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilclasresults3}% \caption{$\ell_0$-norm of the command signal $f_j$ for norm-optimal ILC of Sec.~\ref{sec:exampleNOILC} (\Colortwo) and lasso ILC of Sec.~\ref{sec:exampleLassoILC} (\Colorthree), which leads to a much sparser command signal.} \label{fig:ilclasresults3} \end{figure} \begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilclasENresults1}% \caption{Computed 2-norm of the error for various ILC algorithms. Traditional norm-optimal ILC in Sec.~\ref{sec:exampleNOILC} (\Colortwo) leads to a significant error reduction in the initial iterations, and then remains at a certain level due to the amplification of trial-varying disturbances. The lasso approach of Sec.~\ref{sec:exampleLassoILC} with re-estimation (\Colorfour) leads to a significant reduction in the initial iterations, in addition to a reduced limit error, since it reduces amplification of trial-varying disturbances. 
Also note that the lasso approach without re-estimation (\Colorthree) leads to an improved estimate in the first iteration, yet remains at a large error after convergence, which is due to the bias error in the solution. Finally, the elastic-net lasso approach of Sec.~\ref{sec:exampleENLassoILC} is shown (\Colorfive), which leads to a converged performance comparable to the lasso ILC (\Colorthree), since both do not include re-estimation in this case.} \label{fig:ilclasresults2} \end{figure} \begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilclasresults}% \caption{Top: error $e_{40}$ at iteration $j=40$. Bottom: command signal $f_{40}$ at iteration $j=40$. Shown are lasso ILC of Sec.~\ref{sec:exampleLassoILC} (\Colorthree) and re-estimated lasso ILC of Sec.~\ref{sec:exampleLassoILC} (\Colorfour).} \label{fig:ilclasresults} \end{figure} \subsection{Elastic net lasso ILC}\label{sec:exampleENLassoILC} In this section, the approach of Sec.~\ref{sec:elasticnet} is pursued, where the lasso regularization is extended with a ridge regression term. In particular, $W_f = 0$, while $W_{{\ensuremath{{\Delta f}}}} = 1\cdot 10^{-6} I$. The resulting error $e_{40}$ and command input $f_{40}$ are depicted in Fig.~\ref{fig:ilclasENresults}. The error is of a magnitude comparable to that of the lasso ILC in Sec.~\ref{sec:exampleLassoILC}, while the command input is substantially smoother. The error has in fact been slightly reduced compared to lasso ILC, as is shown in Fig.~\ref{fig:ilclasresults2}, which comes at the price of a slower convergence rate due to an increased $W_{{\ensuremath{{\Delta f}}}}$. Notice that the elastic net lasso can also be improved by re-estimation, which is not done here to facilitate the presentation. \begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilclasENresults}% \caption{Top: error $e_{40}$ at iteration $j=40$. Bottom: command signal $f_{40}$ at iteration $j=40$. Shown is the elastic net lasso ILC of Sec.~\ref{sec:exampleENLassoILC} (\Colorfive), leading to a smooth command input $f_j$.} \label{fig:ilclasENresults} \end{figure} \subsection{Fused lasso ILC}\label{sec:examplefusedLassoILC} The results in the previous sections have addressed Requirement R\ref{eq:fjspare}. In certain situations, e.g., wireless sensors or embedded implementations, it may be required to minimize the number of times the command input is updated, i.e., Requirement R\ref{eq:dfjspare}. This is a different form of structure compared to sparsity. To address this, the fused lasso of Sec.~\ref{sec:fusedlasso} is employed. In particular, the general criterion \eqref{eq:gencrit} is considered, where the weighting filters are selected as $W_f = W_{\ensuremath{{\Delta f}}} = 0$, $D = D_f$ in \eqref{eq:fusedlassoD}, and $\lambda = 3 \cdot 10^{-12}$. Next, the ILC iteration is invoked. The results are shown in Fig.~\ref{fig:ilclasfusedresults}. Compared to the results of Fig.~\ref{fig:ilclasresults} in Sec.~\ref{sec:exampleLassoILC}, the error has been reduced significantly. However, this comes at the price of sparsity. Indeed, only the first samples are zero, since the algorithm is initialized with $f_1(0) = 0$. Interestingly, only a limited number of command signal updates are required to achieve a small error signal. This will also attenuate the effect of trial-varying disturbances. Note that the error can be further reduced by including a re-estimation step, which is not shown here to facilitate the presentation. 
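For completeness, the re-estimation (debiasing) step used above admits a compact implementation: after the lasso solution is obtained, the quadratic part of the criterion is re-minimized over the nonzero support only. A minimal sketch, assuming a lifted model \texttt{J}, a measured error \texttt{e0}, and a lasso solution \texttt{f\_lasso} are available, is:
\begin{verbatim}
import numpy as np

def reestimate(J, e0, f_lasso, tol=1e-12):
    """Debias a lasso ILC solution by least squares on its support."""
    support = np.abs(f_lasso) > tol         # nonzero entries
    f = np.zeros_like(f_lasso)
    if support.any():
        # unregularized least squares restricted to the support
        sol = np.linalg.lstsq(J[:, support], e0, rcond=None)[0]
        f[support] = sol
    return f
\end{verbatim}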
\begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilclassparseresultsNEW2}% \caption{Top: error $e_{40}$ at iteration $j=40$. Bottom: command signal $f_{40}$ at iteration $j=40$. Shown is the fused lasso ILC of Sec.~\ref{sec:examplefusedLassoILC} (\Colorfive), leading to a command input $f_j$ that addresses Requirement R\ref{eq:dfjspare}. In particular, the command signal $f_{40}$ aims to minimize the error signal in addition to the updates, i.e., instants where $f_{40}$ changes as a function of time. This does not explicitly address sparsity of $f_{40}$ itself, as can be clearly observed from the zoom plot.} \label{fig:ilclasfusedresults} \end{figure} \subsection{Sparse fused lasso ILC}\label{sec:examplesparsefusedLassoILC} In the previous sections, Requirement R\ref{eq:fjspare} and Requirement R\ref{eq:dfjspare} are achieved separately in Sec.~\ref{sec:exampleLassoILC} and Sec.~\ref{sec:examplefusedLassoILC}, respectively. To address both requirements simultaneously, the sparse fused lasso approach of Sec.~\ref{sec:sparsefusedlasso} is adopted. The regularization penalties in \eqref{eq:sparsefusedlasso} are selected such that these essentially combine the two penalties in Sec.~\ref{sec:exampleLassoILC} and Sec.~\ref{sec:examplefusedLassoILC}. The results are depicted in Fig.~\ref{fig:ilclassparsefusedresults}. It can directly be observed that it achieves the sparsity of Sec.~\ref{sec:exampleLassoILC} while at the same time reducing the number of command signal updates as in Sec.~\ref{sec:examplefusedLassoILC}. As such, it is concluded that the sparse fused lasso addresses Requirement R\ref{eq:fjspare} and Requirement R\ref{eq:dfjspare} simultaneously. The relative penalties can be further tuned to balance the importance of both penalties, as well as the resulting error signal. In addition, the resulting error signal can be further enhanced through a re-estimation step. \begin{figure}% \centering \includegraphics[width=.9\linewidth]{ilclassparsefusedresultsNEW2}% \caption{Top: error $e_{40}$ at iteration $j=40$. Bottom: command signal $f_{40}$ at iteration $j=40$. Shown is the sparse fused lasso ILC of Sec.~\ref{sec:examplesparsefusedLassoILC} (\Colorfive), leading to a command input $f_j$ that addresses Requirements R\ref{eq:fjspare} and R\ref{eq:dfjspare}. In comparison to the fused lasso approach in Sec.~\ref{sec:examplefusedLassoILC}, here additional regularization parameters enforce a zero input signal, which can clearly be seen in the zoom plot.} \label{fig:ilclassparsefusedresults} \end{figure} \section{Conclusion} A general framework is presented that extends optimization-based iterative learning control to include additional structure, including sparsity. The approach is demonstrated on a mechatronic system, where it is shown to have significant benefits, including \begin{inparaenum}[i)] \item resource-efficiency in terms of sparse command signals, e.g., facilitating embedded controller implementations; \item resource-efficiency in terms of limiting the number of changes in the command signal, e.g., facilitating implementation in limited-capacity communication networks; \item automated basis function selection in flexible iterative learning control employing basis functions; and \item attenuation of trial-varying disturbances, which for the considered wafer scanner example leads to a significant performance increase. 
\end{inparaenum} Regarding the latter, a detailed analysis of trial-varying disturbances in ILC reveals that typical ILC algorithms amplify such trial-varying exogenous signals, typically by up to a factor of two. The proposed framework enables a significant reduction of this amplification. The proposed framework enables many user-specific choices, and can be easily extended. For instance, for third-order or higher-order setpoints, it may be useful to impose regularization parameters of equally high polynomial orders, known as polynomial trend filtering \cite[Sec.\ 2.1.2]{TibshiraniTay2011}, which is a special case of the general criterion \eqref{eq:gencrit}. Ongoing research focuses on specialized algorithms for the considered scenarios, enabling faster computation. In addition, the correlation between variables is a subject of further investigation. Finally, various aspects of monotonic convergence, which has here been analyzed in terms of the $\ell_1$-norm, are being investigated, including robust monotonic convergence conditions \cite{WijdevenDonBos2009}, \cite{DuysonPipSwe2016} and data-driven ILC frameworks \cite{JanssensPipSwe2013}, \cite{AAABolderKleOom}.% \section*{Appendix} In this section, a proof of Theorem~\ref{thm:noiseanalysis} is provided. Several auxiliary results are presented. In particular, note that at iteration $j$, the error is a function of all previous signals affecting the loop due to the iteration-domain integrator in \eqref{eq:freqilcupdate}. In the following lemma, the summation of $j$ terms of the trial-invariant disturbance $r$ in \eqref{eq:generalILCsystem} is eliminated. \begin{lemma}\label{lemma:firststepproof} Consider the system \eqref{eq:generalILCsystem} and ILC update \eqref{eq:freqilcupdate} with $f_0 = 0$ and assume that the iteration is stable in the sense of Theorem~\ref{thm:contractionmap}. Then, \begin{align}\label{eq:eliminater} e_j =& \left( 1 - \tf[J] \frac{1-(\tf[Q](1-\tf[L]\tf[J]))^{j}}{1-\tf[Q](1-\tf[L]\tf[J])}\tf[Q]\tf[L]\right)r\\ &- v_j - \tf[J] \sum_{n=0}^{j-1}(\tf[Q](1-\tf[L]\tf[J]))^n\tf[Q]\tf[L]v_{j-n-1}. \end{align} \end{lemma} \begin{proof} Substituting \eqref{eq:generalILCsystem} into \eqref{eq:freqilcupdate} yields \begin{equation} f_{j+1} = \tf[Q]((1-\tf[L]\tf[J])f_j + \tf[L]r - \tf[L] v_j). \end{equation} Given $f_0 = 0$, successive substitution yields \begin{math} f_1 = \tf[Q](\tf[L]r - \tf[L]v_0) \end{math}, \begin{math} f_2 = (\tf[Q](1-\tf[L]\tf[J])+1)\tf[Q]\tf[L]r - \tf[Q](1-\tf[L]\tf[J])\tf[Q]\tf[L]v_0 -\tf[Q]\tf[L]v_1, \end{math} and hence \begin{equation} f_j = \sum_{i=0}^{j-1}(\tf[Q](1-\tf[L]\tf[J]))^{i} \tf[Q]\tf[L]r - \sum_{n=0}^{j-1}(\tf[Q](1-\tf[L]\tf[J]))^{n}\tf[Q]\tf[L]v_{j-1-n}. \end{equation} Next, using the geometric series \begin{equation}\label{eq:geometricseries} \sum_{l = 0}^{j-1} x^l = \frac{1 - x^j}{1-x}, \end{equation} this leads to \begin{equation}\label{eq:almostfinalstep} f_j = \frac{1-(\tf[Q](1-\tf[L]\tf[J]))^{j}}{1-\tf[Q](1-\tf[L]\tf[J])}\tf[Q]\tf[L]r - \sum_{n=0}^{j-1}(\tf[Q](1-\tf[L]\tf[J]))^{n}\tf[Q]\tf[L]v_{j-1-n}. \end{equation} Finally, substitution of \eqref{eq:almostfinalstep} into \eqref{eq:generalILCsystem} yields the desired result \eqref{eq:eliminater}. \end{proof} The result~\eqref{eq:eliminater} reveals that the error contains a summation over $j$ trial-varying disturbance terms $v_j$, whereas the influence of the trial-invariant disturbances is captured in a single term through the use of a geometric series. 
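The expressions above are straightforward to evaluate numerically. As a sanity check, the following sketch computes the limit spectrum \eqref{eq:limiterrorspectrum} on a frequency grid for assumed scalar $\tf[Q]$, $\tf[L]$, $\tf[J]$ and flat spectra $\phi_r$, $\phi_v$; the transfer functions are illustrative, not the wafer-stage model.
\begin{verbatim}
import numpy as np

w = np.linspace(1e-3, np.pi, 512)      # frequency grid
z = np.exp(1j * w)

# assumed scalar transfer functions (illustrative)
Jw = 0.8 / (1.0 - 0.5 / z)             # stable J(z)
Lw = 1.0 / Jw                          # inverse-model learning filter
Qw = np.full_like(z, 0.95)             # constant robustness filter
phi_r, phi_v = 1.0, 1e-2               # assumed flat spectra

T = Qw * (1.0 - Lw * Jw)               # contraction factor, |T| < 1
phi_e = (np.abs((1.0 - Qw) / (1.0 - T)) ** 2 * phi_r
         + (1.0 + np.abs(Jw * Qw * Lw) ** 2
            / (1.0 - np.abs(T) ** 2)) * phi_v)
print(phi_e.max())
\end{verbatim}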
Although the trial-varying disturbance varies on each experiment, a closed-form expression can be obtained by exploiting Assumption~\ref{assum:noise}. \begin{lemma}\label{lemma:secondstepproof} Let Assumption~\ref{assum:noise} hold. Then, under the assumptions of Lemma~\ref{lemma:firststepproof}, \begin{align}\label{eq:eliminatev} \phi_{e_j} =& \left| 1 - \tf[J] \frac{1-(\tf[Q](1-\tf[L]\tf[J]))^{j}}{1-\tf[Q](1-\tf[L]\tf[J])}\tf[Q]\tf[L] \right|^2 \phi_r \\ &+ \left( 1 + \left| \tf[J]\tf[Q]\tf[L] \right|^2 \frac{1-|\tf[Q](1-\tf[L]\tf[J])|^{2j}}{1-|\tf[Q](1-\tf[L]\tf[J])|^2} \right)\phi_v. \end{align} \end{lemma} \begin{proof} Taking spectra yields \begin{align} \phi_{e_j} =& \left| 1 - \tf[J] \frac{1-(\tf[Q](1-\tf[L]\tf[J]))^{j}}{1-\tf[Q](1-\tf[L]\tf[J])}\tf[Q]\tf[L] \right|^2 \phi_r \\ &+ \left( 1 + \left| \tf[J]\tf[Q]\tf[L] \right|^2 \sum_{n=0}^{j-1} \left| (\tf[Q](1-\tf[L]\tf[J]))^n \right|^2 \right)\phi_v. \end{align} Next, using \eqref{eq:geometricseries} yields the desired result \eqref{eq:eliminatev}. \end{proof} The closed-form solution~\eqref{eq:eliminatev} enables a direct proof of Theorem~\ref{thm:noiseanalysis}. \begin{proof}(of Theorem~\ref{thm:noiseanalysis}) Taking the limit $j \rightarrow \infty$ implies that $|\tf[Q](1-\tf[L]\tf[J])|^{j} \rightarrow 0$, directly leading to the desired result \eqref{eq:limiterrorspectrum}. \end{proof} \section*{Acknowledgements} This paper is the result of several research visits by both authors, supported in part by the research programme VENI with project number 13073, which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO). In addition, the authors gratefully acknowledge the fruitful discussions with Jurgen van Zundert, Maurice Heemels, Dip Goswami, and Martijn Koedam on resource-efficient control, as part of the Robust Cyber-Physical Systems (RCPS) project (no. 12694). \bibliographystyle{abbrv}
\section{Introduction} The macroscopic properties of a quantum system in equilibrium can be understood from the appropriate thermodynamic potential. Studies of Lee-Yang zeros of the grand-canonical potential as a function of a complex fugacity or of Fisher zeros of the canonical potential as a function of complex temperature, in particular, have significantly contributed to our understanding of equilibrium phase transitions.\cite{Yang1952,Lee1952,Fisher1965,Bena2005} In recent years, there have been attempts to follow a similar approach to non-equilibrium dynamics. For quench dynamics in closed quantum systems it has been suggested that {\it dynamical phase transitions} (DPT's) can be defined based on the Loschmidt echo\cite{Heyl2013} \begin{equation} \label{Lo} {\cal L}_0(t)=\langle\Psi_0|\e^{-iH_1t}|\Psi_0\rangle\,. \end{equation} Here $|\Psi_0\rangle$ is the pure quantum state before the quench and $H_1$ the time-independent Hamiltonian responsible for the unitary time evolution. The Loschmidt echo has the form of a partition function with boundaries fixed by the initial state. In analogy to the Fisher zeros in equilibrium one can thus study the zeros of the Loschmidt echo for complex time $t$. In Ref.~\onlinecite{Heyl2013} it has been shown that for the specific case of the transverse Ising model these zeros form lines in the complex plane which cross the real axis only for a quench across the equilibrium critical point. In a many-body system one expects that the overlap between the time-evolved and the initial state is in general exponentially small in system size in analogy to the {\it Anderson orthogonality catastrophe} in equilibrium.\cite{Anderson1967} To obtain a non-zero and well-defined quantity in the thermodynamic limit it is thus useful to consider the return rate \begin{equation} \label{return} l_0(t)=-\lim_{L\to\infty}\frac{1}{L}\ln|{\cal L}_0(t)| \,, \end{equation} where $L$ is the system size. Zeros in ${\cal L}_0(t)$ at critical times $t_c$ then correspond to non-analyticities (cusps or divergences) in $l_0(t)$.\cite{Heyl2013,Karrasch2013,Andraschko2014,Halimeh2017,Homrighausen2017,Jafari2017a} It is, however, important to stress that in contrast to the particularly simple case of the transverse Ising model there is in general no one-to-one correspondence between dynamical and equilibrium phase transitions.\cite{Andraschko2014,Vajna2014} It is possible to find non-analytical behavior of the return rate without crossing an equilibrium critical point in the quench, and one can cross a critical line without non-analyticities in $l_0(t)$ being present. For one-dimensional topological systems it has been shown, in particular, that crossing a topological phase transition in the quench always leads to a DPT but the opposite does not have to be true.\cite{Vajna2015} Thus there are still some issues about the appropriateness of the Loschmidt echo as a useful indicator. Nevertheless the notion of a dynamical phase transition is an exciting concept extending key elements of many-body physics to non-equilibrium. Lately, DPT's have also been studied experimentally. In Ref.~\onlinecite{Flaschner2016} vortices in a gas of ultracold fermions in an optical lattice were studied and their number interpreted as a dynamical order parameter which changes at a DPT. Even more closely related to the described formalism to classify DPT's is an experiment where a long-range transverse Ising model was realized with trapped ions. 
In this case the time-evolved state was projected onto the initial state, which allowed access to the Loschmidt echo \eqref{Lo} directly.\cite{Jurcevic2017} While these experiments are an exciting first step to test these far-from-equilibrium theoretical concepts, they also lead to a number of new questions. Chief among them is the question of how experimental imperfections affect the Loschmidt echo and DPT's. On the one hand, the initial state is typically not a pure but rather a mixed state at a certain temperature $T$. This raises the question of how the Loschmidt echo can be generalized to thermal states. On the other hand, the dynamics is also typically not purely unitary. Decoherence and particle loss processes affect the dynamics as well, requiring a generalization of \eqref{Lo} to density matrices. Finally, dynamical processes and phase transitions can be induced entirely by coupling to reservoirs, in which case no pure-state or $T=0$ limit exists.\cite{HoeningMoos} In this paper we will address these questions. In Sec.~\ref{Sec_Lo} we discuss various different ways to generalize the Loschmidt echo to finite temperatures. We concentrate, in particular, on projective measurements of time-evolved density matrices---relevant, for example, for trapped-ion experiments---as well as on a proper distance measure between the initial and the time-evolved density matrix following Refs.~\onlinecite{Zanardi2007a,CamposVenuti2011}. We study both of these generalized Loschmidt echoes for the case of unitary dynamics of Gaussian fermionic models in Sec.~\ref{Gaussian}. As examples, we present results for the transverse Ising and for the Su-Schrieffer-Heeger (SSH) model. In Sec.~\ref{Open} we consider the generalized Loschmidt echo for open-system dynamics of Gaussian fermionic models described by a Lindblad master equation (LME). A short summary and conclusions are presented in Sec.~\ref{Concl}. \section{The Loschmidt echo} \label{Sec_Lo} We will first review some properties of the standard Loschmidt echo for unitary dynamics of pure states in Sec.~\ref{zero_Lo} before discussing several possible generalizations to mixed states in Sec.~\ref{DM_Lo}. \subsection{Pure states} \label{zero_Lo} The Loschmidt echo for unitary dynamics of a pure state is defined by Eq.~\eqref{Lo}. Its absolute value can be used to define a metric in Hilbert space, $\phi=\arccos|{\cal L}_0(t)|$ with $0\leq |{\cal L}_0(t)| \leq 1$, which characterizes the distance between the initial state $|\Psi_0\rangle$ and the time-evolved state $|\Psi(t)\rangle =\e^{-iH_1t}|\Psi_0\rangle $.\cite{Nielsen_book} From this point of view the Loschmidt echo is a time-dependent version of the {\it fidelity} $F=|\langle\Psi_0|\Psi_1\rangle|$ which has been widely used to study equilibrium phase transitions.\cite{Vidal2003a,Venuti2007,Schwandt2009,Zhou2008,Zanardi2006,Zanardi2007a,Zanardi2007b,Chen2007,You2007,Yang2007,Dillenschneider2008,Sarandy2009,Sirker2010,Sirker2014a,Koenig2016} Because of the Anderson orthogonality catastrophe one has to consider a fidelity density $f=-\lim_{L\to\infty}\ln|F|/L$ for a many-body system in the thermodynamic limit $L\to \infty$, in analogy to the Loschmidt return rate defined in Eq.~\eqref{return}. If $|\Psi_0\rangle$ and $|\Psi_1\rangle$ are both ground states of a Hamiltonian $H(\lambda)$ for different parameters $\lambda$ then the fidelity susceptibility $\chi_f=\partial^2 f/\partial\lambda^2|_{\lambda=\lambda_c}$ will typically diverge at an equilibrium phase transition.
Similarly, one might expect that a quench can lead to states $|\Psi(t_c)\rangle$ at critical times $t_c$ which are orthogonal to the initial state, implying ${\cal L}_0(t_c)=0$ and resulting in a non-analyticity in the return rate $l_0(t_c)$. A peculiarity of the return rate is that its non-analyticities depend not only on the properties of the initial and final Hamiltonian before and after the quench but also on time. For a quench from $H_0$ to $H_1$, in particular, the critical time $t_c$ will in general depend on whether one starts with the ground state of the initial Hamiltonian or with some excited eigenstate. \subsection{Mixed states} \label{DM_Lo} \subsubsection{Loschmidt echo as a metric} If the Loschmidt echo is primarily seen as defining a metric in Hilbert space, then it is natural to ask if a similar metric can also be defined for density matrices $\rho(t)$. In order for the generalized Loschmidt echo $|{\cal L}_\rho(\rho(0),\rho(t))|$ to give rise to a proper measure of distance in the space of density matrices we want the following relations to hold: \begin{itemize} \item[1)] $0\leq |{\cal L}_\rho(\rho(0),\rho(t))|\leq1$ and $|{\cal L}_\rho(\rho(0),\rho(0))|=1$, \item[2)] $|{\cal L}_\rho(\rho(0),\rho(t))|=1$ iff $\rho(0)=\rho(t)$, and \item[3)] $|{\cal L}_\rho(\rho(0),\rho(t))|=|{\cal L}_\rho(\rho(t),\rho(0))|$. \end{itemize} Without time dependence, this problem reduces again to the definition of a fidelity for density matrices.\cite{Bures1969,Uhlmann1976,Jozsa1994} A direct generalization of this fidelity leads to\cite{Zanardi2007a,CamposVenuti2011} \begin{equation} \label{LoT} {\cal L}_\rho(t)\equiv |{\cal L}_\rho(\rho(0),\rho(t))|=\Tr\sqrt{\sqrt{\rho(0)}\rho(t)\sqrt{\rho(0)}}\,. \end{equation} Note that this definition satisfies $\lim_{\beta\to\infty}{\cal L}_\rho(t)=|{\cal L}_0(t)|$ if $\rho(0)$ is a thermal density matrix and the time evolution is unitary. Here $\beta=T^{-1}$ is the inverse temperature with $k_B=1$. ${\cal L}_\rho(t)$ is symmetric between $\rho(0)$ and $\rho(t)$ and also satisfies the other conditions above. The induced metric $\phi=\arccos[{\cal L}_\rho(t)]$ also fulfills the triangle inequality.\cite{Nielsen_book} From this point of view, Eq.~\eqref{LoT} is thus the proper generalization of the Loschmidt echo to density matrices. Despite its relatively complicated appearance, $|{\cal L}_\rho(\rho_1,\rho_2)|$ has a straightforward physical meaning.\cite{Jozsa1994} If we understand $\rho_1$ and $\rho_2$ as reduced density matrices obtained by a partial trace over a larger system which is in a pure state $|\phi_1\rangle$ or $|\phi_2\rangle$, respectively, then $|{\cal L}_\rho(\rho_1,\rho_2)|=\max |\langle\phi_1|\phi_2\rangle|$ where the maximum is taken over all purifications of $\rho_1$ and $\rho_2$, respectively. I.e., ${\cal L}_\rho$ picks out the purifications in the enlarged Hilbert space which are as parallel as possible while being consistent with the mixed states of the subsystem. A seemingly simpler and more straightforward generalization such as \begin{equation} \label{Ltilde} |\tilde {\cal L}_\rho(t)|=\sqrt{\frac{\Tr\{\rho(0)\rho(t)\}}{\Tr\rho^2(0)}} \end{equation} does not, in general, fulfill the conditions above. If we start, for example, in a completely mixed state $\rho(0)=\sum_n \frac{1}{N}|\Psi_n\rangle\langle \Psi_n|$ and evolve under dissipative dynamics to a pure state $\rho(t\to\infty)=|\Psi_0\rangle\langle \Psi_0|$ then $|\tilde {\cal L}_\rho(0)|=|\tilde {\cal L}_\rho(\infty)|=1$, which is clearly not a desirable property.
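For small systems the fidelity \eqref{LoT} is also easy to evaluate numerically, which is useful for testing properties 1)--3) directly. The following minimal Python sketch (ours, not part of the original analysis; the helper names are ours) assumes NumPy and SciPy:

\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def uhlmann_fidelity(rho1, rho2):
    # Tr sqrt( sqrt(rho1) rho2 sqrt(rho1) ), cf. Eq. (LoT)
    s = sqrtm(rho1)
    return np.real(np.trace(sqrtm(s @ rho2 @ s)))

def random_rho(n=2, rng=np.random.default_rng(0)):
    # random density matrix: positive semi-definite with unit trace
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

r1, r2 = random_rho(), random_rho()
print(uhlmann_fidelity(r1, r1))              # = 1, property 1)
print(uhlmann_fidelity(r1, r2),
      uhlmann_fidelity(r2, r1))              # symmetric, property 3)
\end{verbatim}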
Using a spectral representation in a basis where $\rho(0)=\sum_n p_n |\Psi^0_n\rangle\langle \Psi^0_n|$ is diagonal, Eq.~\eqref{Ltilde} for the special case of unitary time evolution can be represented as \begin{equation} \label{Ltilde_spec} |\tilde {\cal L}_\rho(t)|^2={\frac{\sum_{m,n} p_mp_n |\langle \Psi^0_m|\e^{-iHt}|\Psi_n^0\rangle|^2}{\sum_n p_n^2}} \, , \end{equation} where $p_n$ are weights with $\sum_n p_n=1$. In Sec.~\ref{Gaussian} we will investigate ${\cal L}_\rho(t)$ for unitary dynamics in Gaussian models with $\rho(0)$ being a canonical density matrix at a given finite temperature $T$. At the same time, we will also briefly discuss the result for $\tilde {\cal L}_\rho(t)$ which---for unitary dynamics---does fulfill $0\leq |\tilde {\cal L}_\rho(t)|\leq 1$ in this specific case. This is no longer the case for open-system dynamics described by an LME, and we will therefore exclusively discuss ${\cal L}_\rho(t)$ in Sec.~\ref{Open}. \subsubsection{Projection onto a pure state} While \eqref{LoT} allows us to generalize the properties of the Loschmidt echo as a metric to density matrices, ${\cal L}_\rho(t)$ might not necessarily be the quantity measured experimentally. In Ref.~\onlinecite{Jurcevic2017}, for example, DPT's in the transverse Ising model have been investigated using a system of trapped ions. In this experiment the system is prepared in an initial configuration and then time evolved, and the Loschmidt echo is measured by a projection. If the system is prepared in a pure state and the projection is onto the same pure state then the Loschmidt echo \eqref{Lo} is measured. Here we want to consider the case that the preparation of the system is not ideal---leading to a mixed instead of a pure state---while the projection is still onto the ground state of the initial Hamiltonian. I.e., we consider the case that only one of the states is mixed. In this case we can define a generalized Loschmidt echo by replacing $\rho(0)\to |\Psi_0^0\rangle\langle \Psi_0^0|$ in Eq.~\eqref{LoT}, leading to \begin{eqnarray} \label{Lproj} |{\cal L}_p(t)|^2 &=& {\langle \Psi_0^0|\rho(t)|\Psi_0^0\rangle}/{\langle\Psi_0^0|\rho(0)|\Psi_0^0\rangle} \\ &=& \sum_n \frac{p_n}{p_0} |\langle \Psi_0^0 |\e^{-iHt}|\Psi_n^0\rangle|^2 \nonumber \, . \end{eqnarray} The second line is a spectral representation in the eigenbasis of $\rho(0)$ and we have introduced a normalization factor such that ${\cal L}_p(0)=1$. Note that for a thermal initial density matrix $\lim_{\beta\to\infty}|{\cal L}_p(t)|^2 = |{\cal L}_0(t)|^2$. In Sec.~\ref{Gaussian} we will also investigate this generalization of the Loschmidt echo for unitary dynamics and present results for experimentally relevant cases such as the transverse Ising and the SSH model. \subsubsection{Alternative generalizations} The definition of a generalized Loschmidt echo for mixed states is not unique and several other possible generalizations have been discussed previously in the literature. In Refs.~\onlinecite{Dutta} and \onlinecite{Heyl2017} the quantity \begin{eqnarray} \label{Lav} {\cal L}_{\textrm{av}} &=& \Tr\left\{\rho(0)U(t)\right\} \\ &=& \sum_n p_n \langle \Psi^0_n|\e^{-iH_1 t}|\Psi^0_n\rangle \nonumber \end{eqnarray} is considered, where $U(t)$ is the time-evolution operator.
From the spectral representation for unitary time evolution with a time-independent Hamiltonian shown in the second line of Eq.~\eqref{Lav} it is clear that this generalization measures an average over pure-state Loschmidt echoes rather than the `overlap' between mixed states as defined in Eq.~\eqref{LoT}. Also, in contrast to \eqref{Ltilde_spec} only diagonal terms enter; Eq.~\eqref{Lav} cannot be used to define a measure of distance between {\it two} density matrices. For a generic Gibbs ensemble one expects, in general, that ${\cal L}_{\textrm{av}}=0$ is only possible if $p_0=1$: even if the Loschmidt echoes of different states $\vert \Psi_n^0\rangle$ vanish at certain times, the corresponding critical times will in general be different. For a Gaussian model in a {\it generalized Gibbs ensemble}, where the occupation of each $k$-mode is individually conserved, zeros are however also possible at finite temperatures.\cite{Heyl2017} A similar approach---motivated by the characteristic function of work\cite{Talkner2007}---was also used in Ref.~\onlinecite{Abeling2016} where the specific case of a canonical density matrix as initial condition was considered and a generalized Loschmidt echo defined by \begin{eqnarray} \label{tildeLav} \tilde {\cal L}_{\textrm{av}} &=& \frac{1}{Z}\Tr\left\{\e^{iH_1t}\e^{-iH_0t}\e^{-\beta H_0}\right\} \\ &=& \frac{1}{Z}\sum_n \e^{-(\beta+it)E_n^0} \langle \Psi^0_n|\e^{iH_1t}|\Psi^0_n\rangle \nonumber \, . \end{eqnarray} The result is a thermal average over the Loschmidt echoes of pure states and thus very different from the overlap between density matrices defined in Eq.~\eqref{LoT}. For all generalized Loschmidt echoes discussed here an appropriate return rate \eqref{return} can be defined. It is the return rate in the thermodynamic limit which we want to study in the following. \section{Unitary dynamics in Gaussian models} \label{Gaussian} We consider free fermion models described by the Hamiltonian \begin{equation} \label{Gmodel} H=\sum_{k\ge 0} \Psi_k^\dagger \mathcal{H}_k \Psi_k \end{equation} with $\Psi_k=(c_k,c_{-k}^\dagger)^T$. Here $c_k$ is an annihilation operator of spinless fermions with momentum $k$. This Hamiltonian describes models with a single-site unit cell which are bilinear in the creation and annihilation operators and can contain pairing terms, as in the transverse Ising and Kitaev chains, see Sec.~\ref{Sec_Ising}. If we identify $d_k\equiv c_{-k}^\dagger$ then the Hamiltonian \eqref{Gmodel} can also describe models with a two-site unit cell that contain only hopping and no pairing terms, such as the SSH and Rice-Mele models, see Sec.~\ref{Sec_SSH}. The momentum summation in both cases runs over the first Brillouin zone. It is often convenient to write the $2\times 2$ matrix $\mathcal{H}_k$ as $\mathcal{H}_k =\mathbf{d}_k\cdot\mathbf{\sigma}$ where $\mathbf{d}_k$ is a three-component parameter vector and $\mathbf{\sigma}$ the vector of Pauli matrices. During the quench the parameter vector $\mathbf{d}_k$ is changed, leading to an initial Hamiltonian $H_0$ and a final Hamiltonian $H_1$. In the two different bases in which the Hamiltonians are diagonal we have \begin{equation} \label{Gmodel2} H_{i}=\sum_{k\ge 0} \varepsilon_k^{i}\left( c_{ki}^\dagger c_{ki} +c_{-ki}^\dagger c_{-ki}-1 \right) \end{equation} with energies $\varepsilon_k^i>0$ and $i=0,1$.
The operators in which the two Hamiltonians are diagonal are related by a Bogoliubov transformation \begin{equation} \label{Bogo} c_{k0} = u_k c_{k1} + v_k c_{-k1}^\dagger \; ;\; c_{k1} = u_k c_{k0} -v_k c_{-k0}^\dagger. \end{equation} The Bogoliubov variables can be parametrized by an angle $\theta_k$ as $u_k=\cos\theta_k$ and $v_k=\sin\theta_k$. For each $k$-mode there are $4$ basis states. We can either work in the eigenbasis $|\Psi_j^0\rangle$ of $H_0$ or the eigenbasis $|\Psi_j^1\rangle$ of $H_1$ which can be expressed as \begin{eqnarray} \label{trafo} |\Psi_0^0\rangle &=& |0\rangle_0 =(u_k - v_k c_{k1}^\dagger c_{-k1}^\dagger) |0\rangle_1 \nonumber \\ |\Psi_1^0\rangle &=& c_{k0}^\dagger |0\rangle_0 = c_{k1}^\dagger |0\rangle_1 \nonumber \\ |\Psi_2^0\rangle &=& c_{-k0}^\dagger |0\rangle_0 = c_{-k1}^\dagger |0\rangle_1 \\ |\Psi_3^0\rangle &=& c_{k0}^\dagger c_{-k0}^\dagger |0\rangle_0 = (v_k + u_k c_{k1}^\dagger c_{-k1}^\dagger) |0\rangle_1 \nonumber \end{eqnarray} or vice versa \begin{eqnarray} \label{trafo2} |\Psi_0^1\rangle &=& |0\rangle_1 =(u_k + v_k c_{k0}^\dagger c_{-k0}^\dagger) |0\rangle_0 \nonumber \\ |\Psi_1^1\rangle &=& c_{k1}^\dagger |0\rangle_1 = c_{k0}^\dagger |0\rangle_0 \nonumber \\ |\Psi_2^1\rangle &=& c_{-k1}^\dagger |0\rangle_1 = c_{-k0}^\dagger |0\rangle_0 \\ |\Psi_3^1\rangle &=& c_{k1}^\dagger c_{-k1}^\dagger |0\rangle_1 = (-v_k + u_k c_{k0}^\dagger c_{-k0}^\dagger) |0\rangle_0 \nonumber \, . \end{eqnarray} Here $|0\rangle_{0,1}$ are the ground states of $H_{0,1}$. The Loschmidt echo at zero temperature can be easily calculated using the transformation \eqref{trafo}, leading to \begin{eqnarray} \label{LT0} {\cal L}_0(t) &= &\prod_k \left[ u_k^2 \e^{i\varepsilon_k^1 t} + v_k^2 \e^{-i\varepsilon_k^1 t}\right] \\ &=& \prod_k \left[ \cos\left(\varepsilon_k^1 t\right)+ i\cos(2\theta_k)\sin\left(\varepsilon_k^1 t\right)\right] \nonumber \end{eqnarray} and $|{\cal L}_0(t)|^2 = \prod_k |{\cal L}_0^k(t)|^2$ with \begin{equation} \label{LT0p} |{\cal L}_0^k(t)|^2= \left[ 1-\sin^2(2\theta_k)\sin^2\left(\varepsilon_k^1 t\right)\right] \, . \end{equation} Here $\cos(2\theta_k)=\hat{\mathbf{d}}_k^0\cdot\hat{\mathbf{d}}_k^1$ with $\hat{\mathbf{d}}^i_k$ being the normalized parameter vector. Note that the result \eqref{LT0} is also valid for free fermion models with a two-site unit cell but without pairing terms, although the ground state is different. From \eqref{LT0p} it is evident that ${\cal L}_0(t_c)=0$ if a momentum $k_c$ exists with $\hat{\mathbf{d}}_{k_c}^0\cdot\hat{\mathbf{d}}_{k_c}^1=0$, i.e.\ $\sin^2(2\theta_{k_c})=1$. The critical times are then given by \begin{equation} \label{tc} t_c=\frac{\pi}{2\varepsilon_{k_c}^1}(2n+1). \end{equation} For any of the generalized Loschmidt echoes defined before we can write the return rate as \begin{equation} \label{Gaussian_return} l(t) =-\frac{1}{2\pi}\int\ln|{\cal L}^k(t)|\, dk \, . \end{equation} In the following we will explicitly calculate $l(t)$ for the different generalized Loschmidt echoes. \hspace*{0.2cm} \subsection{Projection onto a pure state} We first want to investigate the case where only one of the states is mixed. A natural generalization is then the Loschmidt echo defined in Eq.~\eqref{Lproj}. For the considered Gaussian models \eqref{Gmodel} the Loschmidt echo separates into a product $|{\cal L}_p(t)|^2=\prod_k |{\cal L}_p^{k}(t)|^2$.
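Before evaluating the projected echo at finite temperature, we note that the zero-temperature expressions \eqref{LT0p} and \eqref{Gaussian_return} are easily evaluated numerically by sampling the Brillouin zone. The Python sketch below (ours) does this for a transverse Ising quench, using the parameter vector given in Sec.~\ref{Sec_Ising} below and assuming the normalization $\varepsilon_k^i=|\mathbf{d}_k^i|$ for the dispersion:

\begin{verbatim}
import numpy as np

def d_vec(k, g):
    # Ising parameter vector d_k = (0, 2*Delta*sin k, -2*J*cos k - mu)
    # with mu = -g/2, J = 1/4 = -Delta (see Sec. Sec_Ising)
    J, Delta, mu = 0.25, -0.25, -g / 2.0
    return np.array([0.0 * k, 2 * Delta * np.sin(k),
                     -2 * J * np.cos(k) - mu])

def return_rate(t, g0=0.5, g1=1.5, nk=4001):
    k = np.linspace(-np.pi, np.pi, nk)
    d0, d1 = d_vec(k, g0), d_vec(k, g1)
    e1 = np.linalg.norm(d1, axis=0)          # eps_k^1, assumed |d_k^1|
    cos2t = (d0 * d1).sum(axis=0) / (np.linalg.norm(d0, axis=0) * e1)
    L2 = 1.0 - (1.0 - cos2t**2) * np.sin(e1 * t)**2   # Eq. (LT0p)
    return -np.trapz(0.5 * np.log(L2), k) / (2 * np.pi)

for t in [0.5, 1.0, 2.0, 3.0]:
    print(t, return_rate(t))   # cusps develop at the times of Eq. (tc)
\end{verbatim}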
If we, furthermore, assume that our initial mixed state is described by a canonical ensemble then we obtain \begin{eqnarray} \label{Lproj2} |{\cal L}_p^{k}(t)|^2 &=& {\langle \Psi_0^0|\rho_k(t)|\Psi_0^0\rangle}/{\langle\Psi_0^0|\rho_k(0)|\Psi_0^0\rangle} \\ &=& \sum_{n=0}^3 \e^{-\beta (E_{kn}^0-E_{k0}^0)} |\langle \Psi_0^0|\e^{-iH_1t}|\Psi_n^0\rangle|^2 \nonumber \end{eqnarray} where we have used the spectral representation of the density matrix $\rho_k(t)$ in terms of the eigenstates of $\mathcal{H}_k^0$ and $\beta$ is the inverse temperature. The eigenenergies of the $4$ eigenstates for each $k$-mode are denoted by $E_{kn}^0=\bigl(-\varepsilon_k^0,0,0,\varepsilon_k^0\bigr)$. Using the representation \eqref{trafo} of the eigenstates in terms of the operators of the final Hamiltonian $H_1$ one finds \begin{equation} \label{Lproj3} |{\cal L}_p(t)|^2=\prod_k \left[ 1-\left(1-\e^{-2\beta\varepsilon_k^0}\right)\sin^2(2\theta_k)\sin^2(\varepsilon_k^1 t)\right]. \end{equation} It is obvious that ${\cal L}_p(t)=0$ is only possible at zero temperature, in which case $|{\cal L}_p(t)|\equiv |{\cal L}_0(t)|$, see Eq.~\eqref{LT0p}. If one starts from a mixed state, then the DPT's are washed out even if one projects onto the ground state. With the appropriately chosen ground state and the associated energies $E_{kn}^0$, the result \eqref{Lproj3} also holds for the models with a two-site unit cell such as the SSH and Rice-Mele models. \subsection{Thermal density matrices} \label{Thermal} The calculation of \eqref{LoT} for the case that $\rho(0)$ is a thermal density matrix is instructive for the dissipative case discussed in Sec.~\ref{Open}, so we briefly rederive the known result\cite{Zanardi2007a,CamposVenuti2011} for ${\cal L}_\rho(t)$ here. It is most convenient to perform the calculation in the eigenbasis of the time-evolving Hamiltonian $H_1$ using the transformation \eqref{trafo2}. Because only the states $|\Psi_0^i\rangle$ and $|\Psi_3^i\rangle$ are mixed by the transformation, the initial unnormalized density matrix $\rho_k(0)$ can be rearranged into two $2\times 2$ block matrices $\mathbf{I}_2$ (identity matrix) and $\mathbf{r}_k(0)$ with \begin{widetext} \begin{equation} \label{r_k} {\bm r}_{k}(0)= \begin{pmatrix} \cosh\left(\beta\varepsilon^0_k\right)+\sinh\left(\beta\varepsilon^0_k\right)\cos(2\theta_k) & -\sinh\left(\beta\varepsilon^0_k\right)\sin(2\theta_k)\\ -\sinh\left(\beta\varepsilon^0_k\right)\sin(2\theta_k) & \cosh\left(\beta\varepsilon^0_k\right)-\sinh\left(\beta\varepsilon^0_k\right)\cos(2\theta_k) \end{pmatrix}\,. \end{equation} \end{widetext} $\sqrt{\mathbf{r}_k(0)}$ is obtained from \eqref{r_k} by replacing $\beta\to\beta/2$, and $\mathbf{r}_k(t)$ by replacing $r_k^{(12)}\to \e^{2i\varepsilon_k^1 t}r_k^{(12)}$ and $r_k^{(21)}\to \e^{-2i\varepsilon_k^1 t}r_k^{(21)}$. The partition function is given by $Z_k=\Tr\rho_k=\Tr(\mathbf{I}_2)+\Tr\mathbf{r}_k(0)=2+2\cosh(\beta\varepsilon_k^0)$. We can now simplify the generalized Loschmidt echo \eqref{LoT} in this case to \begin{equation} \label{LoT2} {\cal L}_\rho(t)=\prod_k\frac{2+\lambda_{k1}(t)+\lambda_{k2}(t)}{2+2\cosh(\beta\varepsilon_k^0)} \end{equation} where $\lambda_{ki}^2(t)$ are the eigenvalues of $\sqrt{\mathbf{r}_k(0)}\mathbf{r}_k(t)\sqrt{\mathbf{r}_k(0)}$ which are given by \begin{equation} \label{l12} \lambda_{k1,2}(t)=\sqrt{1+|{\cal L}_0^k(t)|^2\sinh^2[\beta\varepsilon^0_k]}\pm |{\cal L}_0^k(t)|\sinh[\beta\varepsilon^0_k]\,, \end{equation} with ${\cal L}_0^k(t)$ defined in Eq.~\eqref{LT0p}.
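The eigenvalues \eqref{l12} can be checked directly: the sketch below (ours) builds $\mathbf{r}_k(0)$, $\sqrt{\mathbf{r}_k(0)}$ and $\mathbf{r}_k(t)$ as described above for arbitrary test parameters and compares the spectrum of $\sqrt{\mathbf{r}_k(0)}\mathbf{r}_k(t)\sqrt{\mathbf{r}_k(0)}$ with the closed form:

\begin{verbatim}
import numpy as np

beta, eps0, eps1, theta, t = 0.7, 1.3, 0.9, 0.4, 2.1  # test values

def r_matrix(b, phase=1.0):
    # Eq. (r_k); phases exp(+-2i eps1 t) on the off-diagonals give r_k(t)
    ch, sh = np.cosh(b * eps0), np.sinh(b * eps0)
    return np.array(
        [[ch + sh * np.cos(2 * theta), -sh * np.sin(2 * theta) * phase],
         [-sh * np.sin(2 * theta) / phase, ch - sh * np.cos(2 * theta)]],
        dtype=complex)

sq = r_matrix(beta / 2)                           # sqrt(r_k(0))
rt = r_matrix(beta, phase=np.exp(2j * eps1 * t))  # r_k(t)
lam = np.sqrt(np.linalg.eigvalsh(sq @ rt @ sq))   # lambda_{k1,2}(t)

L = np.sqrt(1 - np.sin(2 * theta)**2 * np.sin(eps1 * t)**2)  # |L_0^k(t)|
s = np.sinh(beta * eps0)
print(np.sort(lam))                                           # numerics
print(np.sort(np.sqrt(1 + L**2 * s**2) + np.array([-1, 1]) * L * s))
\end{verbatim}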
As a final result we thus obtain\cite{Zanardi2007a,CamposVenuti2011} \begin{equation} \label{LoT3} {\cal L}_\rho(t)=\prod_k\frac{1+\sqrt{1+|{\cal L}_0^k(t)|^2\sinh^2(\beta\varepsilon^0_k)}}{1+\cosh(\beta\varepsilon^0_k)}\,. \end{equation} For any finite temperature this means that ${\cal L}_\rho(t)>0$ for all times, i.e., there are no DPT's. For $\beta\to\infty$ the result reduces to the zero-temperature result, Eq.~\eqref{LT0p}. The result \eqref{LoT3} also holds for Gaussian models with a two-site unit cell such as the SSH and Rice-Mele models. We now also briefly discuss the possible generalization $\tilde {\cal L}_\rho(t)$ defined in Eq.~\eqref{Ltilde}. While this function, in general, does not fulfill the requirements listed in Sec.~\ref{DM_Lo}, it turns out that for the case considered here at least $0\leq |\tilde {\cal L}_\rho(t)|\leq 1$ is fulfilled. We start again from a thermal density matrix. The spectral representation using the eigenstates of $H_1$ then reads \begin{equation} \label{Ltilde2} |\tilde {\cal L}_\rho(t)|^2=\frac{\sum_{n,m}\e^{i(E_m^1-E_n^1)t}|\langle\Psi_n^1|\e^{-\beta H_0}|\Psi_m^1\rangle|^2}{\sum_n \e^{-2\beta E_n^0}}\, . \end{equation} Only the eigenstates $|\Psi_0^1\rangle$ and $|\Psi_3^1\rangle$ mix and it is easy to check the final result \begin{eqnarray} \label{Ltilde3} &&|\tilde {\cal L}_\rho(t)|^2 = \prod_k\left[ \cosh^{-2}(\beta \varepsilon_k^0) +\tanh^2(\beta\varepsilon_k^0) |{\cal L}_0^k(t)|^2\right] \nonumber \\ && = \prod_k\left[1-\tanh^2(\beta\varepsilon_k^0) \sin^2(2\theta_k)\sin^2(\varepsilon_k^1 t)\right] \, . \end{eqnarray} $\tilde {\cal L}_\rho(t)=0$ is again only possible if $T=0$. \subsubsection{Ising and Kitaev models} \label{Sec_Ising} The finite-temperature results can be directly applied to concrete models. The Kitaev chain, for example, is defined by \begin{equation} \label{kh} H=\sum_{i}\left[\Psi^\dagger_{i}\left( \Delta\textrm{i}{\bm\tau}^y-J{\bm\tau}^z\right)\Psi_{i+1}+\textrm{H.c.}-\Psi^\dagger_{i}\mu{\bm\tau}^z\Psi_{i}\right]\, \end{equation} where $\Psi^\dagger_{i}=(c^\dagger_{i},c_{i})$ and $c_{i}^{(\dagger)}$ annihilates (creates) a spinless particle at site $i$. The Kitaev chain is topologically non-trivial when $|\mu|<2|J|$ and $\Delta\neq0$. Note that $\Delta=0$ is a phase boundary between phases with winding numbers $\pm1$. As a special case the transverse Ising model \begin{equation} \label{Ising} H(g)=-\frac{1}{2}\sum_{i}{\bm\sigma}^z_i{\bm\sigma}^z_{i+1}+\frac{g}{2}\sum_{i=1}^N{\bm\sigma}^x_i \end{equation} is obtained if one sets $\mu=-g/2$ and $J=1/4=-\Delta$ in \eqref{kh}. After a Fourier transform, for a chain with periodic boundary conditions, the Hamiltonian \eqref{kh} is of the form of Eq.~\eqref{Gmodel} with parameter vector \begin{equation} \mathbf{d}_k=\begin{pmatrix} 0,2\Delta\sin k,-2J\cos k -\mu \end{pmatrix}\,, \end{equation} and $\cos(2\theta_k)=\hat{\mathbf{d}}_k^0\cdot\hat{\mathbf{d}}_k^1$. In Fig.~\ref{Fig1} we plot the return rate in the thermodynamic limit, Eq.~\eqref{Gaussian_return}, for a quench from $g=0.5$ to $g=1.5$. \begin{figure} \includegraphics[width=0.99\columnwidth]{Fig1.pdf} \caption{(Color online) The return rate $l(t)$ for the Ising chain in the thermodynamic limit for a quench from $g=0.5$ to $g=1.5$ at different temperatures $T$.
(a) Projection onto the ground state, Eq.~\eqref{Lproj3} (note that the curves for $T=0$ and $T=0.05$ are almost on top of each other), and (b) generalized Loschmidt echo, Eqs.~\eqref{LoT} and \eqref{LoT3}.} \label{Fig1} \end{figure} While the cusp in the return rate at the critical time $t_c$ is only slightly rounded off for temperatures up to $T=0.1$ if we project onto the ground state, Eq.~\eqref{Lproj}, signatures of a DPT are already almost lost at this temperature if we use the generalized Loschmidt echo \eqref{LoT} which measures the distance between the initial and the time-evolved thermal density matrix. \subsubsection{SSH and Rice-Mele models} \label{Sec_SSH} The Rice-Mele and the SSH chains are models with a two-site unit cell and alternating hoppings $1\pm\delta$ and potentials $\pm V$. The Hamiltonian for the Rice-Mele model is given by \begin{eqnarray} \label{RM_model} H&=&\sum_i\Psi^\dagger_i\left[-(1+\delta){\bm \sigma}^x+V{\bm \sigma}^z\right]\Psi_i\\\nonumber&&-(1-\delta)\sum_i\Psi^\dagger_i \begin{pmatrix}0&0\\1&0\end{pmatrix}\Psi_{i+1}+\textrm{H.c.} \end{eqnarray} with $\Psi_i=(c_i,d_i)^T$. After a Fourier transform this model can also be represented by the generic Hamiltonian \eqref{Gmodel} with the identification $d_k\equiv c_{-k}^\dagger$. The parameter vector in this case is given by \begin{equation} \mathbf{d}_k=\begin{pmatrix} -2 \cos k, 2\delta\sin k,V \end{pmatrix}\,. \end{equation} The SSH model is a special case of the Rice-Mele model obtained by setting the alternating potential $V=0$. In Fig.~\ref{Fig2} the return rate for a symmetric quench from $\delta=-0.5$ to $\delta=0.5$ for $V=0$ is shown. \begin{figure} \includegraphics[width=0.99\columnwidth]{Fig2.pdf} \caption{(Color online) The return rate $l(t)$ for the SSH chain in the thermodynamic limit for a quench from $\delta=-0.5$ to $\delta=0.5$ at different temperatures $T$. (a) Projection onto the ground state, Eq.~\eqref{Lproj3}, and (b) generalized Loschmidt echo, Eq.~\eqref{LoT3}. Note that the curves for $T=0$ and $T=0.2$ are almost on top of each other.} \label{Fig2} \end{figure} While the cusp in the return rate at the critical time $t_c$ is washed out in this case as well, a signature of the DPT at zero temperature remains more clearly visible at finite temperatures as compared to the quench in the Ising model shown in Fig.~\ref{Fig1}. \section{Open systems} \label{Open} In systems where the Loschmidt echo has been studied experimentally, such as cold atomic gases and trapped ions,\cite{Flaschner2016,Jurcevic2017} interactions with electromagnetic fields are used to control the particles. These systems are therefore intrinsically open systems, and decoherence and loss processes are unavoidable. Using the Born-Markov approximation, such open systems can be described by a Lindblad master equation \begin{equation} \label{LME} \dot \rho(t) = -i [H,\rho] + \sum_\mu \left(L_\mu\rho L_\mu^\dagger -\frac{1}{2}\left\{L_\mu^\dagger L_\mu,\rho\right\}\right). \end{equation} Here $L_\mu$ are the Lindblad operators describing the dissipative, non-unitary dynamics induced by independent reservoirs labelled by $\mu$, and $\{\cdot,\cdot\}$ is the anti-commutator. In order to have a bilinear LME which can be solved exactly, we continue to consider Hamiltonians as defined in Eq.~\eqref{Gmodel} with periodic boundary conditions which can be diagonalized in Fourier space.
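Before specializing, we note that \eqref{LME} can be integrated numerically by vectorizing the density matrix; this anticipates the superoperator formalism used below. The following minimal Python sketch (ours) does this for a single two-dimensional mode with a loss operator $\sqrt{\gamma}\,c$, in a row-stacking convention in which $A\rho B$ maps to $(A\otimes B^T)$; the toy Hamiltonian and rate are arbitrary test values:

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

c = np.array([[0, 1], [0, 0]], dtype=complex)  # annihilation operator
H = 1.5 * c.conj().T @ c                       # toy Hamiltonian eps c^dag c
I2 = np.eye(2)

def lindbladian(H, Ls):
    # row-stacking convention: vec(A rho B) = (A kron B^T) vec(rho)
    sup = -1j * (np.kron(H, I2) - np.kron(I2, H.T))
    for Lk in Ls:
        LdL = Lk.conj().T @ Lk
        sup += (np.kron(Lk, Lk.conj())
                - 0.5 * (np.kron(LdL, I2) + np.kron(I2, LdL.T)))
    return sup

rho0 = np.array([[0.2, 0.1], [0.1, 0.8]], dtype=complex)
Lsup = lindbladian(H, [np.sqrt(0.3) * c])      # gamma = 0.3
rho_t = (expm(2.0 * Lsup) @ rho0.reshape(-1)).reshape(2, 2)
print(np.trace(rho_t).real)                    # trace preserved (= 1)
\end{verbatim}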
We consider Lindblad operators that are linear in creation and annihilation operators, \begin{equation} \label{Lindblad1} L_\mu= \sqrt{\gamma_{\mu}} c_{\mu} \; \mbox {and} \; L_\mu= \sqrt{\bar\gamma_{\mu}} c^\dagger_{\mu} \end{equation} describing particle loss and creation processes with amplitudes $\gamma_{\mu}>0$ and $\bar\gamma_{\mu}>0$, respectively. This form ensures that the dissipative terms in Eq.~\eqref{LME} are also bilinear. More specifically, we consider reservoirs that each couple to only one $k$-mode, \begin{equation} \label{Lindblad2} L_k= \sqrt{\gamma_{\pm k}} c_{\pm k} \; \mbox {and} \; L_k= \sqrt{\bar\gamma_{\pm k}} c^\dagger_{\pm k} \; . \end{equation} To solve the Lindblad equation we will use the superoperator formalism.\cite{Carmichael1998} The $n\times n$ density matrix $\rho$ is recast into an $n^2$-dimensional vector $||\rho\rangle\rangle$ and the Hamiltonian and Lindblad operators become superoperators acting on this vector. The LME \eqref{LME} and its solution can then be written as \begin{equation} \label{LME2} ||\dot\rho\rangle\rangle =\mathcal{L}\, ||\rho\rangle\rangle \quad ; \quad ||\rho(t)\rangle\rangle =\exp(\mathcal{L}t)\,||\rho(0)\rangle\rangle \, . \end{equation} For the purely unitary time evolution considered in the previous section the Lindbladian $\mathcal{L}$ takes the form \begin{equation} \label{HLind} \mathcal{L} = -i\left(H\otimes\mathbf{I}_n + \mathbf{I}_n\otimes H^\dagger\right) \end{equation} where $\mathbf{I}_n$ is the $n\times n$ identity matrix. Similarly, the individual Lindblad operators \eqref{Lindblad2} can be written as superoperators acting on $||\rho\rangle\rangle$. The solution vector $||\rho(t)\rangle\rangle$ can then be recast into a matrix, allowing one to calculate the generalized Loschmidt echoes also for open systems. \subsection{Particle loss} We consider again free fermionic models of the type \eqref{Gmodel} with the $4$ basis states \eqref{trafo} for each $k$-mode. As a first example, we investigate a simple mixed initial state $\rho_k(0)=\frac{1}{2}\left(|\Psi_1^0\rangle\langle\Psi_1^0| + |\Psi_2^0\rangle\langle\Psi_2^0|\right)$ and a time evolution under the Lindblad operators $L_{1k}=\sqrt{\gamma_k}c_k$ and $L_{2k}=\sqrt{\gamma_{-k}}c_{-k}$. In this case it is straightforward to show that the density matrix takes the form $\rho_k(t)=\frac{1}{2}\text{diag}(2-\e^{-\gamma_kt}-\e^{-\gamma_{-k}t},\e^{-\gamma_kt},\e^{-\gamma_{-k}t},0)$. The non-equilibrium steady state (NESS) is thus the completely empty state for $\gamma_{\pm k}\neq 0$. Since both $\rho(0)$ and $\rho(t)$ are diagonal, it follows immediately that the generalized Loschmidt echo is given by \begin{equation} \label{Lrholoss} {\cal L}_\rho(t)=\prod_k\frac{1}{2}\left(\e^{-\gamma_k t/2} + \e^{-\gamma_{-k}t/2}\right) \, . \end{equation} As one might have expected, ${\cal L}_\rho(t)$ shows an exponential decay in this case. If $\gamma_k=\gamma_{-k}=\gamma=\mbox{const}$, then the return rate in the thermodynamic limit \eqref{Gaussian_return} increases linearly, $l(t)=\gamma t/2$, and thus diverges only at infinite time. \subsection{Quench in Kitaev-type models with particle loss} \label{Kitaevloss} Next, we want to consider a quench for a Kitaev-type model with Hamiltonian \eqref{Gmodel2} with the basis states \eqref{trafo2}. As in Sec.~\ref{Thermal} we start with a thermal density matrix $\rho(0)$ but now also allow for particle loss processes as in the example above.
Crucially, the matrix $\rho_k(t)$ can still be decomposed into two $2\times 2$ block matrices. We can therefore write ${\cal L}_\rho^k(t)=\Tr\sqrt{M_1}+\Tr\sqrt{M_2}$ with $M_i=\sqrt{\rho^i_k(0)}\rho^i_k(t)\sqrt{\rho^i_k(0)}$ and $\rho_k^{1,2}$ being the two block matrices. With $\Tr\sqrt{M_i}=\sqrt{\lambda^i_1}+\sqrt{\lambda^i_2}>0$ we can write $\left(\Tr\sqrt{M_i}\right)^2=\lambda^i_1+\lambda^i_2+2\sqrt{\lambda^i_1\lambda^i_2}=\Tr M_i+2\sqrt{\det M_i}$.\cite{Jozsa1994} For the Loschmidt echo we therefore find \begin{equation} \label{LoTloss} {\cal L}_\rho(t)=\prod_k\sum_{i=1,2}\sqrt{\Tr M_i +2\sqrt{\det M_i}} \, . \end{equation} Using this formula it is straightforward to obtain an explicit result for ${\cal L}_\rho(t)$ which, however, is quite lengthy for finite temperatures. We therefore limit ourselves here to presenting the result for $T=0$ only. In this case one of the block matrices is zero and we obtain the following closed-form expression \begin{eqnarray} \label{LoTloss2} {\cal L}^2_\rho(t)&=&\prod_k \e^{-\Gamma^+_k t}\bigg[ \cos 2\theta_k\sinh\left(\Gamma_k^+ t\right) -\sin^2 2\theta_k \sin^2(\varepsilon_k^1 t)\nonumber \\ &+& \frac{1}{2}\sin^22\theta_k\left(1-\cosh(\Gamma_k^- t)\right)+\cosh(\Gamma_k^+ t) \bigg] \, . \end{eqnarray} Here we have defined $\Gamma_k^\pm =(\gamma_k \pm\gamma_{-k})/2$. It is easy to see that this result reduces to Eq.~\eqref{LT0p} for $\gamma_k=\gamma_{-k}=0$. Furthermore, there are no DPT's for finite loss rates. As an example for the broadening of the cusps in the return rate \eqref{Gaussian_return} we consider the same quench in the transverse Ising model as before. \begin{figure} \includegraphics[width=0.99\columnwidth]{Fig3.pdf} \caption{(Color online) The return rate $l(t)$ for the Ising chain in the thermodynamic limit for a quench from $g=0.5$ to $g=1.5$ at $T=0$ for different particle loss rates $\gamma=\gamma_k=\gamma_{-k}$. Inset: Broadening of the first cusp at $t=t_c$.} \label{Fig3} \end{figure} Fig.~\ref{Fig3} shows that small loss rates already lead to a significant broadening of the first cusp at $t=t_c$ and completely wash out the cusps at longer times. Furthermore, the NESS for a non-zero loss rate is always the empty state, so that the return rate at infinite times becomes {\it independent} of the loss rate and is given by \begin{equation} \label{returntinfty} l(t\to\infty)=-\frac{1}{2\pi}\int_0^\pi\ln\left(\frac{1+\hat{\mathbf{d}}_k^0\cdot\hat{\mathbf{d}}_k^1}{2}\right)\, dk \, . \end{equation} \subsection{Quench in Kitaev-type models with particle creation and loss} \label{Kitaevcreation} So far we have seen that both finite temperatures and particle loss processes destroy DPT's. One can then ask whether it is possible to engineer dissipative processes in an open quantum system in such a way that DPT's persist. By constructing a concrete example we will show that this is indeed possible. We consider the case that particles with momentum $k$ are annihilated with rate $\gamma_k$ while particles with momentum $-k$ are created with rate $\bar\gamma_{-k}$. As in the case with particle loss considered in Sec.~\ref{Kitaevloss}, the density matrix $\rho_k(t)$ still has block structure and a calculation along the same lines is possible.
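The same $2\times 2$ reduction applies here. As an aside, the identity $\left(\Tr\sqrt{M_i}\right)^2=\Tr M_i+2\sqrt{\det M_i}$ underlying \eqref{LoTloss} is easily confirmed numerically, e.g.\ with the short sketch (ours):

\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
M = A @ A.conj().T                 # random positive (semi-)definite matrix
lhs = np.trace(sqrtm(M)).real**2
rhs = np.trace(M).real + 2 * np.sqrt(np.linalg.det(M).real)
print(lhs, rhs)                    # agree to numerical precision
\end{verbatim}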
At $T=0$ we obtain a result which is very similar to Eq.~\eqref{LoTloss2} and reads \begin{eqnarray} \label{LoTcreation} {\cal L}^2_\rho(t)&=&\prod_k \e^{-\tilde\Gamma^+_k t}\bigg[ \cos 2\theta_k\sinh(\tilde\Gamma_k^- t) -\sin^2 2\theta_k \sin^2(\varepsilon_k^1 t) \nonumber \\ &+&\frac{1}{2}\sin^22\theta_k(1-\cosh(\tilde\Gamma_k^- t))+\cosh(\tilde\Gamma_k^- t) \bigg]. \end{eqnarray} The rates are now defined as $\tilde\Gamma_k^\pm =(\gamma_k \pm\bar\gamma_{-k})/2$. The essential difference when comparing Eq.~\eqref{LoTcreation} with the previous result \eqref{LoTloss2} is that inside the bracket only the rate $\tilde\Gamma_k^-$ is present. For $\tilde\Gamma_k^-=0$, i.e.\ $\gamma_k=\bar\gamma_{-k}$, the Loschmidt echo becomes ${\cal L}^2_\rho(t)=\prod_k\exp(-\tilde\Gamma^+_k t)|{\cal L}_0^k(t)|^2$, which is the zero-temperature result \eqref{LT0p} with an additional exponential decay. DPT's are thus still present in this particular case at the same critical times $t_c$ despite the dissipative processes. As an example, we consider again the quench in the transverse Ising chain. In Fig.~\ref{Fig4} we show results for the fine-tuned point $\gamma=\gamma_k=\bar\gamma_{-k}$. \begin{figure}[htp] \includegraphics[width=0.99\columnwidth]{Fig4.pdf} \caption{(Color online) The return rate $l(t)$ for the Ising chain in the thermodynamic limit for a quench from $g=0.5$ to $g=1.5$ at $T=0$ for various equal particle loss and creation rates $\gamma=\gamma_k=\bar\gamma_{-k}$.} \label{Fig4} \end{figure} The cusps remain clearly visible for finite dissipation rates. For a $k$-independent rate $\tilde\Gamma_k^+\equiv \tilde\Gamma^+$ as chosen in Fig.~\ref{Fig4} the result for the return rate is \begin{equation} \label{Ising_return_creation} l(t)=\frac{\tilde\Gamma^+ t}{2}-\frac{1}{\pi}\int_0^\pi \ln |{\cal L}_0^k(t)| \, dk \, . \end{equation} This is simply the zero-temperature return rate in the closed system plus a linear increase with slope $\tilde\Gamma^+/2$. In the NESS at long times all particles will be in the $-k$ states, leading to a vanishing Loschmidt echo and a diverging return rate. \section{Conclusions} \label{Concl} We have studied a generalization of the Loschmidt echo to density matrices which is applicable both to finite temperatures and to open systems. It is based on a direct generalization of the fidelity for mixed states to dynamical problems and provides a measure of the distance between the initial and the time-evolved density matrix. As such it is very different from previous generalizations studied in the context of dynamical phase transitions which are based on thermal averages over the Loschmidt echoes of pure states and are only applicable to unitary dynamics. For bilinear one-dimensional fermionic lattice models with periodic boundary conditions we have shown that finite temperatures always wash out the non-analyticities in the return rate of the generalized Loschmidt echo. Dynamical phase transitions only exist at zero temperature. For open quantum systems described by a Lindblad master equation we similarly find that particle loss processes smooth out cusps in the return rate so that signatures of the dynamical phase transition are hard to detect even if the loss rates are very small. Finally, we showed that it is possible to fine-tune particle loss and creation processes in such a way that dynamical phase transitions can be observed despite the dissipative dynamics.
The generalized Loschmidt echo considered in this paper can be understood as a tool to measure distances between density matrices. As such it might be helpful in engineering and controlling specific states using dissipative dynamics. Zeros of the Loschmidt echo signal, in particular, that a mixed state has been reached such that all purifications to states in an enlarged Hilbert space are orthogonal to purifications of the initial state. \hspace*{0.4cm} Shortly after submitting this paper, Ref.~\onlinecite{Mera2017} became available, which addresses a related topic. \acknowledgments JS acknowledges support by the Natural Sciences and Engineering Research Council (NSERC, Canada) and by the Deutsche Forschungsgemeinschaft (DFG) via Research Unit FOR 2316. MF acknowledges support by the Deutsche Forschungsgemeinschaft (DFG) via the Collaborative Research Center SFB-TR 185.
\section{Introduction} For understanding the AdS/CFT correspondence at the string level it is useful to pursue the construction of classical string configurations moving in AdS spacetime. The theory is classically (and probably also quantum mechanically) integrable and, following the experience with two-dimensional integrable field theories, the construction of its exact solution might be possible. At present particular examples representing classical motions in AdS are known. The first example is given by the rotating folded string of GKP.\cite{GKP} It was established to be dual to twist-two (large spin) operators of Super Yang-Mills (SYM) theory whose anomalous dimensions are of interest for scattering processes of QCD. An extension of the folded string solution is given by the n-spike configuration constructed by Kruczenski.\cite{Kruczenski} These solutions are special in that they represent static configurations in a uniformly rotating reference frame. One therefore expects a potentially much larger set of solutions with nontrivial dynamics. In this contribution we describe our recent effort\cite{JJKV} in developing techniques for generating AdS string solutions. As has been much studied recently, the string in AdS (or more generally in the full $AdS \times S$ spacetime) is integrable both at the classical and the quantum level. In conformal-type gauges the equations are those of nonlinear sigma models defined on the spaces in question. One well known approach to study integrability of sigma models and generate classical solutions is based on a Pohlmeyer-type reduction\cite{Pohlmeyer,de Vega} in which the invariant dynamics is identified with the dynamics of sine-Gordon (or more generally of Toda) type. The other well known method is the dressing method, which was successfully applied to the construction of magnon-type solutions.\cite{SV}\cdash\cite{CDO} In the study of AdS dynamics we follow the inverse scattering method, where one generates string configurations starting with soliton solutions of the reduced field theory. This method was originally applied in Ref. \refcite{de Vega} for the study of string dynamics in de Sitter space. In Anti de Sitter space there is a natural identification of soliton configurations: in Ref. \refcite{JJKV} they were identified with the spikes of the folded strings present in the Gubser-Klebanov-Polyakov-Kruczenski (GKP-K) solutions. We will review this identification in what follows and give a more in-depth discussion of it. The rotating string solutions of GKP-K simplify drastically in the limit $\omega=1$, when the angular momentum becomes maximal. In that limit the spikes extend to the boundary of the AdS space, but in the process some features of the solution get lost. We will describe at the sinh-Gordon level the intricacies of the $\omega=1$ limit and then proceed with the discussion of the $\omega>1$ case. While we concentrate on the two-soliton case, we outline the construction for the (static) n-soliton case. In the Euclidean worldsheet framework Alday and Maldacena have used minimal area classical string configurations to evaluate scattering amplitudes of Yang-Mills gluons\cite{AM} in terms of Wilson loops.\cite{KT} In this case the momenta of individual gluons specify a polygon geometry of the string worldsheet. It is then a challenging problem to construct minimal area string configurations with general n-polygon boundary conditions\cite{JKSV}\cdash\cite{Nastase}. This was accomplished in Ref.
\refcite{AM} for the four-point case through an analytic continuation of the Minkowski worldsheet solutions described above. One therefore expects that knowledge of more general Minkowski space solutions can be of use for the gluon scattering problem as well. \section{AdS string as a $\sigma$-model} We will concentrate in what follows on string dynamics in purely AdS spacetime. The string equations of motion in curved spacetime can be formulated as generalized nonlinear $\sigma$ models provided one uses a conformal-type gauge. Defining the $AdS_d$ space as $q^2=-q_{-1}^2-q_0^2+q_1^2+\cdots+q_{d-1}^2=-1$, the conformal gauge string equations are given by a noncompact $SO(d-1,2)$-symmetric $\sigma$ model with the action \begin{equation} A={\sqrt{\lambda} \over 2\pi}\int d\sigma d\tau \bigl(\partial q \cdot \partial q + \lambda (\sigma ,\tau)(q\cdot q+1)\bigr) \end{equation} where $\sigma,\tau$ are the Minkowski worldsheet coordinates, and the equations of motion are \begin{equation} q_{\xi\eta}-(q_\xi \cdot q_\eta)q=0 \end{equation} with $\xi=(\sigma+\tau)/2,\eta=(\sigma-\tau)/2$. In addition, to guarantee the conformal gauge, we have to impose the Virasoro conditions \begin{equation} q_\xi^2=q_\eta^2=0. \end{equation} It was demonstrated a number of years ago (by Pohlmeyer) that nonlinear sigma models subject to Virasoro-type constraints can be reduced to known, integrable field equations of sine-Gordon (or Toda) type. This reduction is accomplished by concentrating on the $SO(d-1,2)$-invariant sub-dynamics of the sigma model. The steps of the reduction were well described in Refs. \refcite{JJKV}--\refcite{de Vega} and consist of the following. One starts by identifying an appropriate set of basis vectors for the string coordinates. For $AdS_3$, the basis can be chosen as \begin{equation} e_i=(q,q_\xi,q_\eta,b) \label{basis1} \end{equation} where $b$ is a fourth orthonormal vector, satisfying $b \cdot b=1,b \cdot q=b \cdot q_\xi=b \cdot q_\eta=0$. The reduced (invariant) scalar field is introduced through the scalar product $q_\xi \cdot q_\eta \equiv e^{\alpha(\xi,\eta)}$, and one proceeds to derive the equation of motion for $\alpha$, which reads \begin{equation} \alpha_{\xi\eta}-e^\alpha-uve^{-\alpha}=0 \label{sinh} \end{equation} where $u$ and $v$ are two additional (invariant) scalar fields given by $u=b \cdot q_{\xi\xi},v=b \cdot q_{\eta\eta}$. They are found to obey the equations $u_\eta=v_\xi=0$, and the closed set of equations now defines the generalized sinh-Gordon model. One can next work out the equations obeyed by the elements of the basis: the derivatives of the vectors (\ref{basis1}) can be expressed in terms of the basis itself, \begin{equation} {\partial e_i \over \partial \xi}=A_{ij}(\xi,\eta) e_j,~~{\partial e_i \over \partial \eta}=B_{ij}(\xi,\eta) e_j, \end{equation} where \begin{equation} A=\begin{pmatrix} 0 & 1 & 0 & 0 \cr 0 & \alpha_\xi & 0 & u \cr e^\alpha & 0 & 0 & 0 \cr 0 & 0 & -ue^{-\alpha} & 0 \end{pmatrix},~~B=\begin{pmatrix} 0 & 0 & 1 & 0 \cr e^\alpha & 0 & 0 & 0 \cr 0 & 0 & \alpha_\eta & v \cr 0 & -ve^{-\alpha} & 0 & 0 \end{pmatrix}. \end{equation} One therefore finds a linear system of differential equations for the vectors. The associated integrability condition reads \begin{equation} \partial_\eta A-\partial_\xi B+[A,B]=0. \end{equation} The integrability condition is then seen to generate the equations of motion corresponding to a generalized sinh-Gordon theory.
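This statement can be verified symbolically. The short sympy sketch below (ours) builds the above $A$, $B$ with $u_\eta=v_\xi=0$ encoded in the function arguments and checks that the flatness residual vanishes once the equation of motion (\ref{sinh}) is imposed:

\begin{verbatim}
import sympy as sp

xi, eta = sp.symbols('xi eta')
a = sp.Function('alpha')(xi, eta)
u = sp.Function('u')(xi)      # u = u(xi), so u_eta = 0
v = sp.Function('v')(eta)     # v = v(eta), so v_xi = 0

A = sp.Matrix([[0, 1, 0, 0],
               [0, sp.diff(a, xi), 0, u],
               [sp.exp(a), 0, 0, 0],
               [0, 0, -u*sp.exp(-a), 0]])
B = sp.Matrix([[0, 0, 1, 0],
               [sp.exp(a), 0, 0, 0],
               [0, 0, sp.diff(a, eta), v],
               [0, -v*sp.exp(-a), 0, 0]])

C = A.diff(eta) - B.diff(xi) + A*B - B*A
C = C.subs(sp.diff(a, xi, eta), sp.exp(a) + u*v*sp.exp(-a))
print(sp.simplify(C))   # the zero matrix
\end{verbatim}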
The vector equations, on the other hand, define the motion (and coordinates) of the string itself; they have to be solved, which leads to a scattering problem of Dirac type. These equations exhibit $SO(2,2)$ symmetry, and can be further simplified by redefining the orthonormal basis as\cite{Papanicolaou} \begin{equation} e_1=b,~~e_2={q_\xi+q_\eta \over \sqrt{2}e^{\alpha/2}},~~e_3={q_\xi-q_\eta \over \sqrt{2} i e^{\alpha/2}},~~e_4=iq. \end{equation} Then the $A,B$ matrices become \begin{eqnarray} A=\begin{pmatrix} 0 & -{u\over \sqrt{2}}e^{-\alpha/2} & {iu\over \sqrt{2}}e^{-\alpha/2} & 0 \cr {u\over \sqrt{2}}e^{-\alpha/2} & 0 & {i\over 2}\alpha_\xi & -{i\over \sqrt{2}}e^{\alpha/2} \cr -{iu\over \sqrt{2}}e^{-\alpha/2} & -{i\over 2}\alpha_\xi & 0 & {1\over \sqrt{2}}e^{\alpha/2} \cr 0 & {i\over \sqrt{2}}e^{\alpha/2} & -{1\over \sqrt{2}}e^{\alpha/2} & 0 \end{pmatrix}, \\ B=\begin{pmatrix} 0 & -{v\over \sqrt{2}}e^{-\alpha/2} & -{iv\over \sqrt{2}}e^{-\alpha/2} & 0 \cr {v\over \sqrt{2}}e^{-\alpha/2} & 0 & -{i\over 2}\alpha_\eta & -{i\over \sqrt{2}}e^{\alpha/2} \cr {iv\over \sqrt{2}}e^{-\alpha/2} & {i\over 2}\alpha_\eta & 0 & -{1\over \sqrt{2}}e^{\alpha/2} \cr 0 & {i\over \sqrt{2}}e^{\alpha/2} & {1\over \sqrt{2}}e^{\alpha/2} & 0 \end{pmatrix}. \end{eqnarray} One now exploits the fact that $SO(2,2)=SO(2,1) \times SO(2,1)$, expanding the $A,B$ matrices in terms of two commuting sets of $SO(2,1)$ generators \begin{equation} A=w_{1,(+)}^i J_i + w_{1,(-)}^i K_i,~~~B=w_{2,(+)}^i J_i + w_{2,(-)}^i K_i, \end{equation} with $i=1,2,3$. Recalling that $SO(2,1)=SU(1,1)$, we can rewrite this problem in terms of the spinor representation of the $SU(1,1)$ group. We define two spinors $\phi$ and $\psi$ satisfying the differential equations \begin{eqnarray} \phi_\xi&=&w_{1,(+)}^i\sigma_i\phi=A_1\phi,\hspace{.19in}\phi_\eta=w_{2,(+)}^i\sigma_i\phi=A_2\phi, \\ \psi_\xi&=&w_{1,(-)}^i\sigma_i\psi=B_1\psi,\hspace{.15in}\psi_\eta=w_{2,(-)}^i\sigma_i\psi=B_2\psi, \end{eqnarray} where $\sigma_i$ are the anti-Hermitian generators of the $SU(1,1)$ group. The matrices $A_1,A_2,B_1,B_2$ can be found in Ref. \refcite{JJKV}. The string solution is finally given by \begin{eqnarray} q_{-1} &=& {1\over 2}(\phi_1 \psi_1^* - \phi_2 \psi_2^*)+ c. c.~, \hspace{.3in} q_0={i\over 2}(\phi_1 \psi_1^* - \phi_2 \psi_2^*)+ c. c.~, \\ q_1 &=& {1\over 2} (\phi_2 \psi_1 - \phi_1 \psi_2) + c. c.~, \hspace{.32in} q_2={i\over 2} (\phi_2 \psi_1 - \phi_1 \psi_2) + c. c.~. \end{eqnarray} \section{GKP solution as a two-soliton configuration} Gubser, Klebanov and Polyakov\cite{GKP} pointed out the relevance of semiclassical quantization in AdS, giving the example of a rigidly rotating string with large spin angular momentum $S$. One constructs the GKP solution in the conformal gauge with the sigma model action \begin{equation} A={\sqrt{\lambda} \over 4\pi} \int d\tau d\sigma G_{ij} \partial_\alpha X^i \partial^\alpha X^j \end{equation} and the Virasoro constraints \begin{equation} T_{++}=\partial_+ X^i \partial_+ X^j G_{ij}=0,~~~T_{--}=\partial_- X^i \partial_- X^j G_{ij}=0. \end{equation} The classical motion describing a rigid rotation of a folded closed string is given by the ansatz $t=c\tau,~\theta=c\omega\tau$ and $\rho=\rho(\sigma)$. The Virasoro constraints give \begin{equation} \rho'^2=c^2(\cosh^2\rho-\omega^2\sinh^2\rho) \label{GKPsolution} \end{equation} where the scaling constant $c$ is adjusted to define the period of $\sigma$. We can set $c=1$ and denote the position of the fold (spike) as $\sigma_0$.
To demonstrate the stated correspondence with solitons we expand the solution (\ref{GKPsolution}) near the spike. With $\omega=1+2\eta$, where $\eta \ll 1$, one finds \begin{equation} {\rho^\prime}^2 \sim e^{2\rho}(e^{-2\rho}-\eta). \end{equation} Denoting $u=e^{-\rho}$, we have ${u^\prime}^2 \sim u^2-\eta$. Imposing the boundary condition $u_0=e^{-\rho_0}$ at $\sigma=\sigma_0$, one finds \begin{equation} \rho(\sigma)=-\ln \bigl(\sqrt{\eta} \cosh (\sigma-\sigma_0) \bigr), \end{equation} so that \begin{equation} \alpha \equiv \ln(q_\xi \cdot q_\eta)=\ln(2\rho'^2)=\ln(2\tanh^2\sigma). \end{equation} This is exactly the one-soliton solution of the sinh-Gordon equation $\alpha_{\xi\eta}-e^\alpha+4 e^{-\alpha}=0$.\cite{JJKV} The closed GKP solution has two folds (spikes) and therefore corresponds in the sinh-Gordon picture to a two-soliton configuration. We will next describe its construction starting from the solutions of the sinh-Gordon system. \section{AdS string solutions} Consider the generalized sinh-Gordon equation (\ref{sinh}); making a shift of the field $\alpha=\hat{\alpha}+{1 \over 2}\ln(-uv)$, we have \begin{equation} \hat{\alpha}_{\xi \eta}-2\sqrt{-uv}\sinh\hat{\alpha}=0. \end{equation} In the case of $u=2,v=-2$, we consider the periodic solution \begin{equation} \hat{\alpha}_1=\ln[k~{\rm sn}^2({\sigma \over \sqrt{k}}, k)] \end{equation} with periodicity $L=2\sqrt{k}K(k)$, where $K(k)$ is the complete elliptic integral of the first kind and $k$ is a parameter with $0<k<1$ (see Fig. \ref{f1}(a)). The spinors are found to be \begin{eqnarray} \phi_1&=&{1 \over 2}\exp\Bigl[-{i\over2}{1+k \over \sqrt{k}} \tau\Bigr]\Bigl( \sqrt{{1+k{\rm sn}^2+{\rm cn}~{\rm dn} \over (1+k) {\rm sn}}}+\sqrt{{(1+k) {\rm sn} \over 1+k{\rm sn}^2+{\rm cn}~{\rm dn}}}\Bigr) \\ \phi_2&=&{1 \over 2}\exp\Bigl[-{i\over2}{1+k \over \sqrt{k}} \tau\Bigr]\Bigl( \sqrt{{1+k{\rm sn}^2+{\rm cn}~{\rm dn} \over (1+k) {\rm sn}}}-\sqrt{{(1+k) {\rm sn} \over 1+k{\rm sn}^2+{\rm cn}~{\rm dn}}}\Bigr) \\ \psi_1&=&{1 \over 2}\exp\Bigl[-{i\over2}{1-k \over \sqrt{k}} \tau\Bigr]\Bigl( \sqrt{{1-k{\rm sn}^2+{\rm cn}~{\rm dn} \over (1-k) {\rm sn}}}+\sqrt{{(1-k) {\rm sn} \over 1-k{\rm sn}^2+{\rm cn}~{\rm dn}}}\Bigr) \\ \psi_2&=&{1 \over 2}\exp\Bigl[-{i\over2}{1-k \over \sqrt{k}} \tau\Bigr]\Bigl( \sqrt{{1-k{\rm sn}^2+{\rm cn}~{\rm dn} \over (1-k) {\rm sn}}}-\sqrt{{(1-k) {\rm sn} \over 1-k{\rm sn}^2+{\rm cn}~{\rm dn}}}\Bigr) \end{eqnarray} where cn, sn and dn are Jacobi elliptic functions with respect to $({\sigma \over \sqrt{k}}, k)$. The string solution is given by \begin{equation} q_1=\begin{pmatrix} {1 \over \sqrt{1-k^2}}{\rm dn}({\sigma \over \sqrt{k}},k) \cos \sqrt{k}\tau \cr {1 \over \sqrt{1-k^2}}{\rm dn}({\sigma \over \sqrt{k}},k) \sin \sqrt{k}\tau \cr {k \over \sqrt{1-k^2}}{\rm cn}({\sigma \over \sqrt{k}},k) \cos {1\over \sqrt{k}}\tau \cr {k \over \sqrt{1-k^2}}{\rm cn}({\sigma \over \sqrt{k}},k) \sin {1\over \sqrt{k}}\tau \end{pmatrix}. \label{solution1} \end{equation} For $0<k<1$, this solution is well defined and periodic so that the string is closed. The above solution is divergent at $k=1$. This does not mean, however, that there is no regular solution when $k=1$.
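For $\tau$-independent fields one has $\alpha_{\xi\eta}=\alpha_{\sigma\sigma}$, so the one-soliton claim above can be checked symbolically in a few lines of sympy (our sketch):

\begin{verbatim}
import sympy as sp

s = sp.symbols('sigma')
alpha = sp.log(2*sp.tanh(s)**2)
residual = sp.diff(alpha, s, 2) - sp.exp(alpha) + 4*sp.exp(-alpha)
print(sp.simplify(residual))   # 0
\end{verbatim}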
In this case, the sinh-Gordon solution becomes \begin{equation} \hat{\alpha}_{k=1}=\ln[\tanh^2 \sigma], \end{equation} and, choosing different normalization coefficients for the $\psi$ spinor, the string solution is given by\cite{JJKV} \begin{equation} q_{k=1}={1 \over 2\sqrt{2} \cosh \sigma} \begin{pmatrix} 2\tau\cos\tau-\sin\tau(\cosh 2\sigma+2) \cr 2\tau\sin\tau+\cos\tau(\cosh 2\sigma+2)\cr -2\tau\cos\tau+\sin\tau\cosh 2\sigma \cr -2\tau\sin\tau-\cos\tau\cosh 2\sigma \end{pmatrix}. \end{equation} In the limit $k=1$ the string parameter space is decompactified and we obtain an infinite string solution which touches the boundary of AdS. Due to the nonvanishing boundary condition at infinity, there is momentum flow at the boundary of the string and the energy is not conserved. These were the features of the one-soliton string configuration given in Ref. \refcite{JJKV}. It is also interesting to note another limit which leads to the vacuum at $k=1$. First, we shift the sinh-Gordon solution by half the period, $\sigma \rightarrow \sigma+\sqrt{k}K(k)$, and obtain \begin{equation} \hat{\alpha}_2=\ln[k~{\rm cn}^2({\sigma \over \sqrt{k}}, k)~{\rm nd}^2({\sigma \over \sqrt{k}}, k)] \end{equation} (see Fig. \ref{f1}(b)). We note that this solution reduces to the vacuum $\hat{\alpha}=0$ in the limit $k=1$. The corresponding string solution is given by \begin{equation} q_2=\begin{pmatrix} {\rm nd}({\sigma \over \sqrt{k}},k) \cos \sqrt{k}\tau \cr {\rm nd}({\sigma \over \sqrt{k}},k) \sin \sqrt{k}\tau \cr k~{\rm sd}({\sigma \over \sqrt{k}},k) \cos {1 \over \sqrt{k}} \tau \cr k~{\rm sd}({\sigma \over \sqrt{k}},k) \sin {1 \over \sqrt{k}} \tau \end{pmatrix}. \label{solution2} \end{equation} As expected, in the limit $k=1$, this solution reduces to the vacuum string solution of Ref. \refcite{JJKV}. These two limits at $k=1$ correspond essentially to expanding the sinh-Gordon solution around the soliton (region 1) and around the vacuum (region 2), respectively, as shown in Fig. \ref{f1}. \begin{figure}[h] \centerline{\psfig{file=Figure1.eps,width=0.8\textwidth}} \caption{(a) First periodic sinh-Gordon solution $\hat{\alpha}_1$ when $k=0.964$; (b) Second periodic sinh-Gordon solution $\hat{\alpha}_2$ when $k=0.964$. They are related by a translation $\sigma \rightarrow \sigma + \sqrt{k}K(k)$.} \label{f1} \end{figure} Performing the rescaling $\sqrt{k}\tau \rightarrow \tau$, $\sqrt{k}\sigma \rightarrow \sigma$ and writing $k=1/\omega$, the string solutions (\ref{solution1}) and (\ref{solution2}) correspond to the minus and plus solutions of (\ref{GKPsolution}), respectively. Recall that the GKP solution is a two-soliton configuration with period $\sigma \in [0,2L]$, where $L={2 \over \omega}K({1 \over \omega})$ after rescaling. The energy and angular momentum are exactly calculated to be \begin{eqnarray} E&=&{\sqrt{\lambda} \over 2\pi}\int_0^{2L}d\sigma \cosh^2 \rho={2\sqrt{\lambda} \over \pi}\Bigl[{\omega \over \omega^2-1}E\Bigl({1 \over \omega}\Bigr) \Bigr], \\ S&=&{\sqrt{\lambda} \over 2\pi}\int_0^{2L}d\sigma~\omega \sinh^2 \rho={2\sqrt{\lambda} \over \pi}\Bigl[{\omega^2 \over \omega^2-1}E\Bigl({1 \over \omega}\Bigr)-K\Bigl({1 \over \omega}\Bigr)\Bigr], \end{eqnarray} where ${\rm E}({1 \over \omega})$ and ${\rm K}({1 \over \omega})$ are the complete elliptic integrals of the second and first kind, respectively. Therefore, \begin{equation} E-\omega S={2\omega \sqrt{\lambda} \over \pi}\Bigl[K\Bigl({1 \over \omega}\Bigr)-E\Bigl({1 \over \omega}\Bigr)\Bigr].
\end{equation} For long strings, $\omega = 1+2\eta$ with $\eta \ll 1$, we can expand the elliptic integrals and get \begin{equation} E-S={\sqrt{\lambda}\over \pi} \ln S+\cdots \end{equation} which agrees with Ref. \refcite{GKP}. \section{N-soliton construction} Following the discussion of the periodic two-soliton (GKP) solution (see Fig. \ref{f2}(a)), we would like to consider a possible n-soliton generalization. In terms of a naive gluing procedure one would be led to a configuration given in Fig. \ref{f2}(b). At the sinh-Gordon level the analogous construction is simple: one essentially extends the length of the space to accommodate n static solitons. However, it does not follow that this configuration is continuous and nonsingular in the string configuration space. At the center of AdS space, where $\rho=0$, there are only two solutions of (\ref{GKPsolution}): $\rho'(\sigma)=\pm 1$ (see Fig. \ref{f3}(a)). This allows for two angles, 0 and $\pi$, which correspond to the gluing in the case of a GKP configuration; the string extends along a straight line. In the physical gauge $t=\tau,\theta=\omega \tau+\sigma$ Kruczenski\cite{Kruczenski} succeeded in constructing static n-spike configurations (see Fig. \ref{f2}(c)). This indeed regularizes the naive configuration of Fig. \ref{f2}(b). As we will discuss, this solution can be obtained by lifting the minimum value of $\rho$ to be $\rho_0$, where $\rho'(\sigma)=0$ (see Fig. \ref{f3}(b)), and gluing n spikes at that point. It corresponds to sinh-Gordon solutions with non-zero boundary conditions. \begin{figure}[t] \centerline{\psfig{file=Figure2.eps,width=0.95\textwidth}} \caption{(a) GKP two-soliton configuration plotted in the plane $x=\rho\cos\theta,y=\rho\sin\theta$ where $\rho,\theta$ are the global coordinates; (b) An attempt to construct a GKP-type three-soliton solution; (c) Kruczenski's three-spike string solution.} \label{f2} \end{figure} \begin{figure}[h] \centerline{\psfig{file=Figure3.eps,width=0.8\textwidth}} \caption{(a) GKP $\rho$ as a function of $\sigma$ when $k=0.964$; (b) Kruczenski $\rho$ as a function of $\sigma$ when $\rho_1=2,\rho_0=0.2688735$.} \label{f3} \end{figure} We can reproduce Kruczenski's solution in the conformal gauge by making the ansatz $t=\tau+f(\sigma),\theta=\omega \tau+g(\sigma),\rho=\rho(\sigma)$. The equations of motion and Virasoro constraints can be solved by \begin{eqnarray} f'(\sigma)&=&{\omega \sinh 2\rho_0 \over 2\cosh^2 \rho},~~~~~g'(\sigma)={\sinh 2\rho_0 \over 2\sinh^2 \rho}, \cr \rho'^2(\sigma)&=&{(\cosh^2\rho-\omega^2\sinh^2\rho)(\sinh^2 2\rho-\sinh^2 2\rho_0) \over \sinh^2 2\rho}. \label{Krucsolution} \end{eqnarray} Near the spike we have $\rho \sim \rho_1 \equiv {\rm arccoth}\,\omega$; further assuming $\rho_1 \gg \rho_0$, we can recover (\ref{GKPsolution}) from (\ref{Krucsolution}). Therefore, a soliton is located at each spike and the finite n-spike string solution is an n-soliton configuration. The differential equation (\ref{Krucsolution}) has the solution \begin{equation} \rho={1\over2}{\rm arccosh}\bigl(\cosh 2\rho_1 {\rm cn}^2(u,k) + \cosh 2\rho_0 {\rm sn}^2 (u,k)\bigr) \label{first} \end{equation} where \begin{equation} u \equiv \sqrt{\cosh 2\rho_1+\cosh 2\rho_0 \over \cosh 2\rho_1-1} \sigma,~~~k \equiv \sqrt{\cosh 2\rho_1 - \cosh 2\rho_0 \over \cosh 2\rho_1 + \cosh 2\rho_0}, \end{equation} and $\rho_0,\rho_1$ are the minimum and maximum values of $\rho$, respectively.
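The profile \eqref{first} is straightforward to evaluate with standard routines for Jacobi elliptic functions. The following Python sketch (ours; note that SciPy parametrizes elliptic integrals and functions by $m=k^2$) uses the values quoted in Fig.~\ref{f3}(b):

\begin{verbatim}
import numpy as np
from scipy.special import ellipj, ellipk

rho0, rho1 = 0.2688735, 2.0                   # values of Fig. 3(b)
ch0, ch1 = np.cosh(2*rho0), np.cosh(2*rho1)
m = (ch1 - ch0) / (ch1 + ch0)                 # m = k^2
scale = np.sqrt((ch1 + ch0) / (ch1 - 1.0))    # u = scale * sigma

sigma = np.linspace(0.0, 2*ellipk(m)/scale, 201)  # one period of rho
sn, cn, dn, _ = ellipj(scale*sigma, m)
rho = 0.5*np.arccosh(ch1*cn**2 + ch0*sn**2)
print(rho[0], rho.min(), rho.max())  # rho(0)=rho1, oscillates down to rho0
\end{verbatim}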
The gauge transformation functions are found to be \begin{eqnarray} f&=&{\sqrt{2} \omega \sinh 2\rho_0 \sinh \rho_1 \over (\cosh 2\rho_1+1) \sqrt{\cosh 2\rho_1+\cosh 2\rho_0}}\Pi\Bigl({\cosh 2\rho_1-\cosh 2\rho_0 \over \cosh 2\rho_1 +1},x,k\Bigr) \\ g&=&{\sqrt{2} \sinh 2\rho_0 \sinh \rho_1 \over (\cosh 2\rho_1-1) \sqrt{\cosh 2\rho_1+\cosh 2\rho_0}}\Pi\Bigl({\cosh 2\rho_1-\cosh 2\rho_0 \over \cosh 2\rho_1 -1},x,k\Bigr) \end{eqnarray} where $x={\rm am}(u,k)$ and $\Pi(n,x,k)$ is the incomplete elliptic integral of the third kind. The sinh-Gordon solution corresponding to the $n$-spike solution is \begin{equation} \alpha=\ln \bigl[2(\cosh^2 \rho-\omega^2 \sinh^2 \rho)\bigr] \end{equation} with $uv=(1-\omega^2)^2\sinh^2 2\rho_0-4\omega^2$. After the change of variables, we get \begin{equation} \hat{\alpha}=\ln \bigl[k~{\rm sn}^2(u,k)\bigr] \end{equation} which, in the limit $\rho_0=0$, reduces to $\hat{\alpha}_1$ (after rescaling). There is also a shifted solution corresponding to $\hat{\alpha}_2$. In this sense, Kruczenski's solution can be obtained by lifting the minimum value of $\rho$ relative to the GKP solution. A parallel statement holds for the associated sinh-Gordon soliton solutions. For completeness we give the energy and angular momentum of these configurations. They can be exactly computed to read \begin{eqnarray} E&=&{n\sqrt{\lambda} \over \pi}\sqrt{{\rm ch} 2\rho_1 - 1 \over {\rm ch} 2\rho_1+{\rm ch} 2\rho_0} \Bigl[{1 \over 2}({\rm ch} 2\rho_1+{\rm ch} 2\rho_0){\rm E}(k)-{\rm sh}^2 \rho_0 {\rm K}(k)\Bigr], \\ S&=&{n\omega\sqrt{\lambda} \over \pi}\sqrt{{\rm ch} 2\rho_1 - 1 \over {\rm ch} 2\rho_1+{\rm ch} 2\rho_0} \Bigl[{1 \over 2}({\rm ch} 2\rho_1+{\rm ch} 2\rho_0){\rm E}(k)-{\rm ch}^2 \rho_0 {\rm K}(k)\Bigr], \\ E&-&\omega S={n \sqrt{\lambda} \over \pi} \sqrt{{\rm ch} 2\rho_1+{\rm ch} 2\rho_0 \over {\rm ch} 2\rho_1 - 1}\Bigl[{\rm K}(k)-{\rm E}(k)\Bigr]. \end{eqnarray} In the limit $\omega \rightarrow 1$, assuming $\rho_1 \gg \rho_0$, we find \begin{equation} E-S=n {\sqrt{\lambda} \over 2\pi} \ln S +\cdots \end{equation} showing agreement with the result of Ref. \refcite{Kruczenski}. \section{Conclusion} In this contribution we reviewed and generalized some simple classical solutions for strings moving in AdS spacetime. We have studied in depth the so-called spiky string configurations\cite{GKP,Kruczenski} and their properties. We reviewed the approach of Ref. \refcite{JJKV}, which is based on the identification of string spikes with soliton configurations. This explains the usefulness of the (inverse) scattering technique in constructing string configurations of this type. In the review we paid particular attention to the distinction between compact and non-compact string parameter space solutions, elaborating on the limit relating the two. The soliton and (inverse) scattering techniques are expected to be of definite use for studying more general sets of solutions (and their dynamical properties). It is also possible that they will be of use for addressing the very interesting and highly nontrivial `Plateau' problem of Euclidean worldsheet string theory. \section*{Acknowledgments} We are grateful to C. Kalousios and A. Volovich for collaboration on which this work is based. We would also like to thank M. Abbott, I. Aniceto and M. Spradlin for comments and discussions. One of us (AJ) is grateful to Hiroshi Itoyama, Hikaru Kawai and Masao Ninomiya for their kind hospitality. This work is supported by the Department of Energy under contract DE-FG02-91ER40688.
\section{Description by the equivalent ellipsoid} \label{equivalent_ellipsoid} In order to quantify the shape and orientation of randomly structured clusters, we translate them into equivalent ellipsoids having the same principal moments of inertia~\citep{Harshe_2010}. The principal moments of inertia $I_1$, $I_2$, $I_3$ ($I_1 \leq I_2 \leq I_3$) are obtained by diagonalization of the moment-of-inertia tensor. Using them, the lengths of the semi-principal axes of the equivalent ellipsoid ($a \geq b \geq c $) are given as follows: \begin{gather} a = \sqrt{\frac{5}{2} \frac{I_2 + I_3 - I_1}{N}}, \quad b = \sqrt{\frac{5}{2} \frac{I_3 + I_1 - I_2}{N}}, \notag \\ c = \sqrt{\frac{5}{2} \frac{I_1 + I_2 - I_3}{N}}. \end{gather} The anisotropic shape of clusters is described by the aspect ratio $r$ of the equivalent ellipsoid: \begin{equation} r \equiv \frac{2a}{b+c}. \end{equation} In order to estimate the compaction of clusters, the effective volume fraction $ \phi_{\mathrm{eff}} $ is introduced as the ratio between the total volume of particles $V_{\mathrm{p}}=(4\pi/3)N$ and the volume of the ellipsoid $V_{\mathrm{e}} = (4\pi/3) a b c$, that is \begin{equation} \phi_{\mathrm{eff}} \equiv \frac{N}{a b c}. \end{equation} \section{Conclusion} \label{sec_conclusion} The restructuring of colloidal aggregates in shear flows has been investigated by coupling an interparticle contact model with Stokesian dynamics. We have introduced a method to reduce the computational cost for the near-rigid behavior of aggregates in shear flows. This method has made fluid-particle coupling simulations feasible over significant time periods. The simulations with the stepwise increase of shear rate have demonstrated the reinforcement of clusters due to irreversible compaction with increasing hydrodynamic stress. We expect that the observed consolidation behavior induced by the less abrupt application of flows is rather general. The introduced aspect ratio \textit{vs.}~orientation representation has also clarified the two types of compaction behavior, ending in rod-shaped clusters oriented along the rotational axis and in round-shaped clusters. This anisotropic compaction can be regarded as a hydrodynamic effect, \textit{i.e.} the tendency was enhanced in comparison with the free-draining approximation. Thus, the simulations with selected parameters for the contact model showed the characteristic behaviors of colloidal aggregates under flow conditions. \section{Discussion} \label{sec_discussion} \subsection{Consolidation} \label{sec_consolidation} As seen in \secref{sec_compaction}, the shear rates for the compaction of clusters range over multiple orders of magnitude, \textit{i.e.} initial fractal clusters are fragile, while compacted clusters become increasingly robust to imposed flows. In our simulation with Stokesian dynamics (SD), the highest shear rate (resulting in $\phi_{\mathrm{eff}}\approx 0.49$) is about $10^3$ times larger than the critical shear rate $\dot{\Gamma}_{\mathrm{rc}}$, where $\phi_{\mathrm{eff}}\approx 0.12$. This may be explained by the following two non-linear effects: \begin{enumerate} % \item[(i)] The smaller the hydrodynamic radius is, the weaker the hydrodynamic stress acting on the cluster becomes. % \item[(ii)] The higher the number of newly generated loops within the cluster, the more the cluster resists restructuring. % \end{enumerate} In order to see effect (i), the shear-rate dependence of the stresslet acting on a cluster was evaluated. 
The stresslet acting on a cluster is composed as follows~\cite{Harshe_2010,Seto_2011}: \begin{equation} \tens{S}_{\mathrm{cl}} = \sum_{i} \bigg\{ \tens{S}^{(i)}_{\mathrm{H}} + \frac{ \vec{l}^{(i)} \otimes \vec{F}^{(i)} + (\vec{l}^{(i)} \otimes \vec{F}^{(i)})^{\mathrm{T}} }{2} - \frac{ \vec{l}^{(i)} \cdot \vec{F}^{(i)}}{3} \tens{I} \bigg\}, \end{equation} where $\vec{l}^{(i)} \equiv \vec{r}^{(i)}-\vec{r}_0$. The stresslet $\tens{S}^{(i)}_{\mathrm{H}}$ for individual particles $i$ has already been calculated in \eqref{mobility_form}. This stresslet $\tens{S}_{\mathrm{cl}}$ indicates the contribution of a single cluster to the bulk stress of a sheared suspension. For dilute suspensions of rigid spheres, the stresslet is proportional to the shear rate: $(\tens{S}_{\mathrm{sph}})_{xz} =(20/3)\pi \eta_0 a^3 \dot{\gamma}$. Therefore, the effect of restructuring on the hydrodynamic stress appears in the ratio between $(\tens{S}_{\mathrm{cl}})_{xz}$ and $\dot{\Gamma}$. With the shrinkage of clusters, this ratio decreases [\figref{stresslet} (a)]. However, the decrease is not large enough to explain the width of the shear-rate range. The volume-fraction dependence of the stresslet was also plotted to see effect (ii) [\figref{stresslet} (b)]. If the total strain per shear-rate step $\Gamma^{\ast}$ were infinitely large, the compaction of a cluster at each shear rate would be expected to settle after some restructuring, and further compaction would require a higher hydrodynamic stress. Therefore, this plot approximately represents the compactive strength (yield stress) as a function of the volume fraction. Though the hydrodynamic stress given by the stresslet is not the same quantity as the mechanical stress, one may notice some similarity with the volume-fraction-dependent compressive yield stress $P_{\mathrm{y}}(\phi)$ of space-filling colloidal aggregate networks~\cite{Buscall_1987,Buscall_1988}. The power-law behavior seen in the intermediate range of \figref{stresslet} (b) shows a large exponent of $4.5$, which was evaluated by fitting the averaged data within $0.13 < \phi_{\mathrm{eff}} < 0.34$. According to the cited work~\cite{Buscall_1988}, large exponents of power-law relations were explained as a result of network densification due to irreversible restructuring. If the hydrodynamic stresses induce deformation of clusters, the same consolidation effect is expected from the rule of new bond generation given in \secref{method_new_bond}. Thus, it was confirmed that effect (ii) is responsible for the width of the shear-rate range for the compaction. \begin{figure}[htbp] \centering \includegraphics{stresslet_xz.pdf} \caption{ % (a) The shear-rate dependence of the ratio between the stresslet $(\tens{S}_{\mathrm{cl}})_{xz}$ and the shear rate $\dot{\Gamma}$ is shown. % (b) The volume-fraction dependence of the stresslets acting on clusters is plotted. % For both plots, the final values at each shear-rate step were sampled, and the error bars indicate the standard deviations. } \label{stresslet} \end{figure} \subsection{Reorientation and anisotropic compaction} \label{sec_reorientation} Rod-shaped clusters oriented along the rotational axis were observed in the simulations (see \secref{sec_shape_orientation}). Since Brownian motion was not taken into account, the pioneering work by Jeffery on spheroids in shear flows may be referred to~\cite{Jeffery_1922,Happel_1965,Kim_1991}. Spheroids can be considered one of the simplest objects representing elongated shapes. 
Jeffery analytically showed that, in the dilute limit, they follow periodic orbits and have no tendency to set their axes in any particular direction under a simple shear flow. Moreover, he expected that, since the dissipation of energy depends on the orbit, they would tend to adopt the orbit of least energy dissipation when additional elements are present in real suspensions. The trajectories of the principal axis for the smallest principal moment of inertia $\vec{n}_1$ are plotted in \figref{trajectories} for two typical cases, ending up as rod-shaped (a) and round-shaped (b) clusters. Each of them shows the trajectory of one simulation with increasing shear rates. The color scale of the trajectories represents the changes of the shear rates. Though the orbits are not closed, one can find some similarity with the Jeffery orbits in the short-time behavior~(c.f. Figure 5.5 of ref.~\citep{Kim_1991}). For the rod-shaped compaction (a), the orbit tends to converge in a narrow circuit around the north pole. It can be noticed that in the round-shaped compaction (b), the orbit is more easily affected by the irregular structure of the cluster, in particular when it crosses the $x$-$y$ plane. Thus, if the orientation of clusters is changeable, uniform compaction can be expected. \begin{figure}[htb] \centering \includegraphics{trajectories.pdf} \caption{ % (Color online) % The trajectories of the principal axis for the smallest principal moment of inertia $\vec{n}_1$ are plotted on the unit sphere. % Two examples are shown: (a) the rod-shaped compaction, which starts from $(r,\Theta/2\pi)=(2.4,0.59)$ and ends at $(2.7,0.18)$, and (b) the round-shaped compaction from $(2.3,0.42)$ to $(1.4,0.67)$. % The north pole shows the rotational axis ($y$-axis). % The color scale of the trajectories represents the changes of the shear rates. % } \label{trajectories} \end{figure} Though the randomness of the cluster may lead to uniform compaction, the formation of rod-shaped clusters and their reorientation are not yet explained. Another viewpoint is anisotropic compaction in shear flows. For clusters being restructured, the principal axis $\vec{n}_1$ of the cluster is no longer fixed to the cluster, but depends on the structure at each instant. If the compaction is anisotropic, it may appear as a reorientation of the principal axis. In a shear flow, the drag forces acting on particles within rotating clusters increase with the distance from the rotational axis~\citep{Seto_2011}. So, the displacements of particles depend on their positions as well. Since the compaction is caused by the generation of cohesive bonds, one can expect that anisotropic compaction reduces the distances of particles from the rotational axis. Thus, as long as the rotational axis is unchanged, the clusters tend to be compacted into elongated shapes. \subsection{Hydrodynamic effect} \label{sec_hydrodynamic_effect} In order to highlight the hydrodynamic effect, the free-draining approximation (FDA) was compared with SD. First, the hydrodynamic effect is clearly seen in the hydrodynamic stress for tenuous clusters, \textit{i.e.} the critical shear rate with SD was much larger than the one with FDA: $\dot{\Gamma}_{\mathrm{rc}}^{\mathrm{(SD)}}/\dot{\Gamma}_{\mathrm{rc}}^{\mathrm{(FDA)}} \approx 4.2$~(see \secref{sec_compaction}). 
In low-Reynolds-number flows, flow disturbances decay in proportion to the inverse of the distance~\cite{Happel_1965,Kim_1991}, which results in the reduction of the drag forces acting on particles within isolated clusters~\cite{Seto_2011}. Second, the difference between the two methods was also confirmed in the shape and orientation tendencies of the compaction (\secref{sec_shape_orientation}). The spatial distribution of the drag force within clusters has the same symmetry for FDA and SD~\citep{Seto_2011}. Therefore, the qualitative explanation for the formation of rod-shaped clusters is expected to apply to simulations with FDA as well as SD. However, as seen in \figref{FDA}, the result with FDA did not show a clear tendency to form rod-shaped clusters. This result suggests that the hydrodynamic effect works as a kind of positive feedback for the anisotropic compaction in shear flows. \section{Introduction} The mechanical properties of colloidal aggregates are of fundamental interest in science and technology. Classifying particulate gels and understanding their rheological behavior are key issues. When attractive forces act among nano- or microscale particles, they form finite-sized clusters or a space-filling network. The latter shows a solid-like response to external stress, so it is regarded as a gel. In general, particulate gels are classified into two types according to the attraction strength between particles~\cite{Larson_1999}. If the attraction strength is sufficiently large, the particle surfaces are deformed at the bonding point, causing non-central forces~\cite{Johnson_1985}. In this case, Brownian forces cause neither debonding nor tangential displacements between contacting particles, so the branched tenuous structures formed in the aggregation process are maintained~\cite{Lin_1989}. On the other hand, if the attraction strength is weaker, denser and multiply linked local structures, such as tetrahedral connections, are seen~\cite{Lu_2008}. This can be explained by tangential displacements due to Brownian forces. The tangential displacements between contacting particles, \textit{i.e.} sliding, rolling and torsion, play important roles in the structure formation and mechanical properties of colloidal aggregates. However, for nano- or microscale particles, it is not simple to characterize these interparticle interactions. For example, the characterization of rolling resistances requires elaborate experiments such as AFM~\cite{Heim_1999} and optical tweezers~\cite{Pantina_2005}. Though these direct observations have clearly proven the existence of tangential forces, the particles available for such measurements are restricted to certain sizes. Hence, there is still no general method to fully characterize the contact forces in colloidal systems. An alternative approach to investigating colloidal aggregates is to develop simulation methods. In particular, phenomena at the mesoscopic level are expected to retain all necessary particle-scale information, and the comparison between simulations and experimental observations can be used for the characterization of contact forces. This work introduces a simulation method coupling interparticle contact models with hydrodynamic interaction models. 
The contact model used in this work is similar to the one developed in granular physics% ~\citep{Iwashita_1998,Kadau_2002,Dominik_2002,Wada_2007,Luding_2008}, which is able to capture aggregates maintaining their structures under low stress while being restructured under high stress. The bond strength is assumed to be sufficiently large compared to the thermal energy $k_{\mathrm{B}}T$; therefore, Brownian forces are not considered. Instead, hydrodynamic stress induces restructuring of clusters. The hydrodynamic interaction model employed here is Stokesian dynamics (SD)% ~\cite{Durlofsky_1987,Brady_1988,Ichiki_2002}, which provides the relations between the velocities of particles and the forces acting on them in the Stokes regime. The evaluation of the hydrodynamic interactions is the most difficult and time-consuming part of the simulation due to their long-ranged and many-body nature. SD is based on Faxén’s law and multipole expansions to obtain the far-field mobility matrix, which makes it possible to simulate particle-disturbed flows with reasonable computational effort. We apply this simulation method to investigate the restructuring behavior of finite-sized tenuous clusters under flow conditions. The investigation of colloidal aggregates under flow conditions is a traditional problem in colloidal science. The original study of cluster sizes under shear flows, which considered cluster growth due to shear-induced collisions, dates back almost a century~\citep{Smoluchowski_1917}. To estimate equilibrium cluster sizes, one needs to know about the breakup mechanisms due to the hydrodynamic stress as well. Theoretical studies of this problem appeared decades later~% \cite{Bagster_1974,Adler_1979a,Sonntag_1986a}, and simulation studies of aggregate breakup have appeared in recent years~% \cite{Potanin_1993,Higashitani_2001,Harada_2006,Becker_2008, Becker_2009,Eggersdorfer_2010,Harshe_2011a}. Restructuring of clusters is an additional and challenging issue in this context, since it depends on details of the contact forces. In order to focus on restructuring behavior, a special situation is considered: the shear flow is increased in a stepwise, and thus less abrupt, manner than in previous works. In this case, the clusters are hardly broken; instead, they are reinforced by new bonds generated during the restructuring process. The time evolution of clusters is expected to reflect the nature of the contact forces. Some characteristic restructuring behavior was observed in the following simulations. The contents of the paper are as follows: the methods used, i.e., the contact model and SD, are briefly described in \secref{sec_contact_model} and \secref{sec_method_SD}. The coupling for the overdamped motion is formulated in \secref{sec_overdamped_motion}. The optimization for the dilute limit of aggregate suspensions is given in \secref{sec_optimization}. Approaches to study the problem are explained in \secref{sec_shearrate} and \secref{sec_stepwise_shear}. After describing the parameters used for the simulations in \secref{sec_parameters}, the results are shown by considering two main issues: (i) how does the imposed shear flow result in the compaction of aggregates? (\secref{sec_compaction}) (ii) what are the tendencies of shape formation and orientation? (\secref{sec_shape_orientation}) A discussion about the compaction in terms of consolidation is presented in \secref{sec_consolidation}, the observed tendencies in \secref{sec_reorientation}, and the hydrodynamic effect in \secref{sec_hydrodynamic_effect}. 
Finally, the outcome of the work is summarized in \secref{sec_conclusion}. \section{Method} \subsection{The contact model} \label{sec_contact_model} \subsubsection{Model for the elasticity} A simple contact model was employed to simulate cohesively connected spherical particles. The interaction between two particles is described by a cohesive bond involving four types of degrees of freedom: normal (the center-to-center direction), sliding, bending% \footnote{ We use `bending' instead of `rolling', because they are equivalent except for a numerical factor, but `bending' is more intuitive for dealing with deformations of colloidal aggregates. }, and torsional displacements% ~\citep{Johnson_1985,sakaguchi_1993,Dominik_1997,Iwashita_1998,Zhang_1999,% Kadau_2002,Dominik_2002,Delenne_2004,Jiang_2005,Tomas_2007,Wada_2007,% Gilabert_2007,Luding_2008}. These relative displacements are expressed by using position vectors and vectors fixed to the respective particles. Hence, a rotation of the frame of reference does not affect the result, i.e. objectivity is satisfied~\cite{Luding_2008}. In this work, Hookean force-displacement relationships are assumed for these degrees of freedom, characterized by the spring constants $k_{\mathrm{N}}$, $k_{\mathrm{S}}$, $k_{\mathrm{B}}$, and $k_{\mathrm{T}}$. \begin{description}[leftmargin=0pt] \item[\emph{Normal displacement}] Let us consider two spherical particles $i$ and $j$ located at $\vec{r}^{(i)}$ and $\vec{r}^{(j)}$. The center-to-center distance $r^{(i,j)} \equiv |\vec{r}^{(i)} - \vec{r}^{(j)}|$ is changed by the normal element of the acting force. A monodisperse system is considered here, so the particle radius is denoted by $a$. The force-displacement relation is given by \begin{equation} \vec{F}_{\mathrm{N}}^{(i,j)} = k_{\mathrm{N}} (r^{(i,j)} - 2a) \, \vec{n}^{(i,j)}, \end{equation} where $\vec{n}^{(i,j)} \equiv (\vec{r}^{(j)} - \vec{r}^{(i)}) /r^{(i,j)} $ is the normal direction. \item[\emph{Sliding displacement}] Sliding displacement is the tangential element of the relative displacement between particles with fixed orientations. In order to express the sliding displacement vector $\vec{d}^{(i,j)}$, unit vectors fixed to each particle, $\vec{\xi}^{(i;j)}$ and $\vec{\xi}^{(j;i)}$, are introduced, called \emph{contact-point indicators} in this paper (\figref{contact_modes}). Using the indicators, the positions of the original contact points are written as follows: \begin{equation} \vec{r}_{\mathrm{o.c.}}^{(i;j)} = \vec{r}^{(i)} + a \, \vec{\xi}^{(i;j)}, \quad \vec{r}_{\mathrm{o.c.}}^{(j;i)} = \vec{r}^{(j)} + a \, \vec{\xi}^{(j;i)}. \end{equation} When two particles get in contact, \textit{i.e.} at the stress-free state, the contact points coincide, $\vec{r}_{\mathrm{o.c.}}^{(i;j)} = \vec{r}_{\mathrm{o.c.}}^{(j;i)} $, and the contact-point indicators are set to $\vec{\xi}^{(i;j)} = \vec{n}^{(i,j)}$ and $\vec{\xi}^{(j;i)} = -\vec{n}^{(i,j)}$ [(a) in \figref{contact_modes}]. 
The sliding displacement vector is given by the projection of the deviation $\vec{r}_{\mathrm{o.c.}}^{(j;i)} - \vec{r}_{\mathrm{o.c.}}^{(i;j)} $ onto the perpendicular bisector between the two particles: \begin{align} \vec{d}^{(i,j)} &\equiv \vec{r}_{\mathrm{o.c.}}^{(j;i)} - \vec{r}_{\mathrm{o.c.}}^{(i;j)} - \bigl\{(\vec{r}_{\mathrm{o.c.}}^{(j;i)} - \vec{r}_{\mathrm{o.c.}}^{(i;j)} )\cdot \vec{n}^{(i,j)} \bigr\}\vec{n}^{(i,j)} \notag \\& = a \bigl\{ \Delta \vec{\xi}^{(i,j)} - ( \Delta \vec{\xi}^{(i,j)} \cdot \vec{n}^{(i,j)} ) \vec{n}^{(i,j)} \bigr\}, \end{align} where $\Delta \vec{\xi}^{(i,j)} \equiv \vec{\xi}^{(j;i)} - \vec{\xi}^{(i;j)}$. So, the force-displacement relation is given by \begin{equation} \vec{F}_{\mathrm{S}}^{(i,j)} = k_{\mathrm{S}} \vec{d}^{(i,j)}. \end{equation} \item[\emph{Bending displacement}] Bending is a type of tangential displacement involving rotation, with the angle between the contact-point indicators quantifying this displacement. This angle is assumed to be small, so it can be approximated by the norm of the vector product $| \vec{\xi}^{(j;i)} \times ( - \vec{\xi}^{(i;j)}) |$. Since it includes some torsional element, the bending angle vector $\vec{\varphi}^{(i,j)} $ is obtained by subtracting the normal part: \begin{equation} \vec{\varphi}^{(i,j)} \equiv - \vec{\xi}^{(j;i)} \times \vec{\xi}^{(i;j)} + \bigl\{( \vec{\xi}^{(j;i)} \times \vec{\xi}^{(i;j)} ) \cdot \vec{n}^{(i,j)} \bigr\}\vec{n}^{(i,j)}. \end{equation} By using the bending angle vector, the moment-angle relation is given by \begin{equation} \vec{M}^{(i,j)}_{\mathrm{B}} = k_{\mathrm{B}} a^2 \vec{\varphi}^{(i,j)}. \label{moment_bending} \end{equation} \item[\emph{Torsional displacement}] Torsion is the rotational displacement around the normal vector $\vec{n}^{(i,j)}$. In order to express the torsional angle, another set of unit vectors fixed to each particle, $\vec{\eta}^{(i;j)}$ and $\vec{\eta}^{(j;i)}$, is introduced, called \emph{torsion indicators} (\figref{contact_modes}). When two particles get in contact, \textit{i.e.} at the stress-free state, they are set by choosing vectors orthogonal to the normal vector, $\vec{\eta}^{(i;j)} \cdot \vec{n}^{(i,j)} = 0$ and $\vec{\eta}^{(j;i)} \cdot \vec{n}^{(i,j)} =0$, and parallel to each other, $\vec{\eta}^{(i;j)} = \vec{\eta}^{(j;i)}$ [(a) in \figref{contact_modes}]. Since the torsional angle is also assumed to be small, it can be approximated by the norm of the vector product $|\vec{\eta}^{(i;j)} \times \vec{\eta}^{(j;i)}|$. The torsional angle vector $\vec{\theta}^{(i,j)}$ is defined as the normal element of their vector product: \begin{equation} \vec{\theta}^{(i,j)} \equiv \bigl\{(\vec{\eta}^{(i;j)} \times \vec{\eta}^{(j;i)}) \cdot \vec{n}^{(i,j)}\bigr\} \vec{n}^{(i,j)}. \end{equation} By using the torsional angle vector, the moment-angle relation is given by \begin{equation} \vec{M}_{\mathrm{T}}^{(i,j)} = k_{\mathrm{T}} a^2 \vec{\theta}^{(i,j)}. \end{equation} \end{description} \begin{figure}[htb] \centering \includegraphics{xi_and_eta} \caption{The contact-point indicators $\vec{\xi}^{(i;j)}$ and $\vec{\xi}^{(j;i)}$, and the torsion indicators $\vec{\eta}^{(i;j)}$ and $\vec{\eta}^{(j;i)}$, are illustrated for the stress-free state (a) and stressed state (b), respectively. % The normal vector $\vec{n}^{(i,j)}$ always indicates the center-to-center direction. } \label{contact_modes} \end{figure} Thus, the forces and moments on the contact point between particles $i$ and $j$ are related to the corresponding displacements. 
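To make the bond evaluation concrete, a minimal Python sketch of the four Hookean contributions for a single bond is given below. The particle radius and spring constants are placeholders (the ratios adopted later in \secref{sec_parameters} are used as defaults), and the maintenance of the indicator vectors by the rotational update is assumed to happen elsewhere.
\begin{verbatim}
import numpy as np

def bond_forces(r_i, r_j, xi_ij, xi_ji, eta_ij, eta_ji,
                a=1.0, kN=10.0, kS=10.0, kB=1.0, kT=1.0):
    # normal direction n^(i,j) and normal force
    rv = r_j - r_i
    r = np.linalg.norm(rv)
    n = rv / r
    F_N = kN * (r - 2.0 * a) * n
    # sliding displacement d^(i,j) and sliding force
    dxi = xi_ji - xi_ij
    d = a * (dxi - np.dot(dxi, n) * n)
    F_S = kS * d
    # bending angle vector (normal part subtracted) and bending moment
    c = np.cross(xi_ji, xi_ij)
    phi = -c + np.dot(c, n) * n
    M_B = kB * a**2 * phi
    # torsional angle vector (normal part only) and torsional moment
    t = np.cross(eta_ij, eta_ji)
    theta = np.dot(t, n) * n
    M_T = kT * a**2 * theta
    return F_N, F_S, M_B, M_T
\end{verbatim}
Summing these per-bond contributions over the contacting neighbors yields the totals given next.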
The force and torque acting on particle $i$ from the contacting particles $j$ are given by their sums: \begin{equation} \begin{split} \vec{F}_{\mathrm{P}}^{(i)} &= \sum_j \left( \vec{F}_{\mathrm{N}}^{(i,j)} + \vec{F}_{\mathrm{S}}^{(i,j)} \right) , \\ % \vec{T}_{\mathrm{P}}^{(i)} &= \sum_j \left( a \, \vec{n}^{(i,j)} \times \vec{F}_{\mathrm{S}}^{(i,j)} + \vec{M}_{\mathrm{B}}^{(i,j)} + \vec{M}_{\mathrm{T}}^{(i,j)} \right) . \end{split} \label{contact_force_and_torque} \end{equation} The suffix P indicates the particle contact interactions, in contrast to the hydrodynamic interactions. \subsubsection{Model for the plasticity} \label{method_plasticity} In the contact model, potential energy is stored in the introduced bonds as long as the stresses acting on the bonds are small. When the stresses become larger than a certain threshold, the bond breaks and the stored energy is dissipated. If the particles are still in contact, the contact-point indicators and torsion indicators are reset with the current configuration to release the potential energy stored in the tangential springs. The supportable strength of a cohesive bond depends on the direction of the acting forces or moments. So, the breakage behavior is characterized by two critical forces and two critical moments: $F_{\mathrm{Nc}}$, $F_{\mathrm{Sc}}$, $M_{\mathrm{Bc}}$, and $M_{\mathrm{Tc}}$. In general, all components of the bond are stressed simultaneously. Therefore, a breakage criterion can be introduced via a destruction function $\zeta(F_{\mathrm{N}}, F_{\mathrm{S}},M_{\mathrm{B}}, M_{\mathrm{T}})$, whose positive value indicates breakage. Here, a simple energy-like function is used~\citep{Delenne_2004}: \begin{equation} \zeta = \vartheta(F_{\mathrm{N}}) \frac{F_{\mathrm{N}}^2}{F_{\mathrm{Nc}}^2} + \frac{F_{\mathrm{S}}^2}{F_{\mathrm{Sc}}^2} + \frac{M_{\mathrm{B}}^2}{M_{\mathrm{Bc}}^2} + \frac{M_{\mathrm{T}}^2}{M_{\mathrm{Tc}}^2} - 1, \label{destruction_function} \end{equation} where $\vartheta(F_{\mathrm{N}})$ is the Heaviside step function, $\vartheta(F_{\mathrm{N}})= 1$ for $F_{\mathrm{N}} \geq 0$ and $\vartheta(F_{\mathrm{N}})= 0$ for $F_{\mathrm{N}} < 0$. According to the intensive studies by \citet{Dominik_1997}, the critical normal and sliding forces, $F_{\mathrm{Nc}}$ and $F_{\mathrm{Sc}}$, are much larger than the forces corresponding to the critical bending and torsional moments, $M_{\mathrm{Bc}}/a$ and $M_{\mathrm{Tc}}/a$. For a typical case, the ratio can be of the order of $10^2$. Thus, this work focuses on bending and torsional breakups and excludes separation and sliding breakups. Moreover, direct measurements of the critical bending moment have been reported \cite{Heim_1999,Pantina_2005}, while no direct measurement is available for the critical torsional moment. For simplicity, the same strength is assumed here for the bending and torsional moments. In short, the special case of bond breakage given by $F_{\mathrm{Nc}}\to \infty$, $F_{\mathrm{Sc}}\to \infty$, and $M_{\mathrm{Bc}} = M_{\mathrm{Tc}} = M_{\mathrm{c}}$ is considered. The bond strength is then given by the single parameter $M_{\mathrm{c}}$: \begin{equation} \zeta = \frac{ M_{\mathrm{B}}^2 + M_{\mathrm{T}}^2 }{M_{\mathrm{c}}^2} - 1. \label{simplified_destruction_functions} \end{equation} \subsubsection{Model for the new connection} \label{method_new_bond} So far, interactions between contacting particles have been defined, but no assumption has been made about particles that are initially farther apart. 
We consider a short-range cohesive interaction in this work. As a simple case, it is assumed that no interaction acts between remote particles ($r^{(i,j)} > 2a$). If, however, two particles approach each other and get into contact, \textit{i.e.} $r^{(i,j)}= 2a$, they start to interact with each other, which is modeled by the generation of a cohesive bond. \subsection{Hydrodynamic interaction (Stokesian dynamics)} \label{sec_method_SD} Stokesian dynamics (SD) is employed for evaluating the hydrodynamic interactions~% \cite{Durlofsky_1987,Brady_1988,Ichiki_2002}. Here, a simple shear flow $\vec{u}^{\infty}(\vec{r}) = z \dot{\gamma} \vec{e}_{x}$ is considered, where $\dot{\gamma}$ is the shear rate. The force-torque-stresslet (FTS) version of SD is required to handle these flow conditions. By using the translational velocity $\vec{U}^{\infty}$, vorticity $\vec{\Omega}^{\infty}$, and rate-of-strain $\tens{E}^{\infty}$, the flow field $\vec{u}^{\infty} (\vec{r})$ is expressed as follows: \begin{equation} \vec{u}^{\infty} (\vec{r}) = \vec{U}^{\infty} + \vec{\Omega}^{\infty} \times \vec{r} + \tens{E}^{\infty} \vec{r}, \label{equation_linear_flows} \end{equation} with the nonzero elements $\Omega^{\infty}_y = \dot{\gamma}/2$ and $E^{\infty}_{xz} = E^{\infty}_{zx}= \dot{\gamma}/2$. The hydrodynamic interactions acting on a particle $i$, \textit{i.e.} the drag force $\vec{F}^{(i)}_{\mathrm{H}}$, torque $\vec{T}^{(i)}_{\mathrm{H}}$, and stresslet $\vec{S}^{(i)}_{\mathrm{H}}$, are given as linear combinations of the relative velocities with respect to the imposed flow, i.e., the translational and rotational velocities $\vec{U}^{(j)} - \vec{u}^{\infty}(\vec{r}^{(j)})$ and $\vec{\Omega}^{(j)}-\vec{\Omega}^{\infty}$ of all particles ($j = 1, \dotsc,N$) and the rate of strain $-\vec{E}^{\infty}$. The linear combinations for all particles are expressed in matrix form \begin{equation} \begin{pmatrix} \vec{F}_{\mathrm{H}} \\ \vec{T}_{\mathrm{H}} \\ \vec{S}_{\mathrm{H}} \end{pmatrix} = - \tens{R} \begin{pmatrix} \vec{U} - \vec{U}^{\infty}(\vec{r}) \\ \vec{\Omega} - \vec{\Omega}^{\infty} \\ - \vec{E}^{\infty} \end{pmatrix}, \label{resistance_form} \end{equation} where the vectors involve $11 N$ elements for all particles, and the matrix $\tens{R}$ is the so-called grand resistance matrix% \footnote{ Since both the stresslet and rate-of-strain tensors are symmetric and traceless, the five independent elements are denoted as vector forms, such as $\vec{S} \equiv (S_{xx},S_{xy},S_{xz},S_{yz},S_{yy})$. }. It must be noted that the lubrication correction of SD is not applied in this work. For suspensions where interparticle interactions are absent, the lubrication forces play an essential role for nearly contacting particles~\citep{Phung_1996,Foss_2000}. On the other hand, for rigid clusters, i.e. if the relative velocities between particles are zero due to strong cohesive forces, the lubrication correction has no contribution. Thus, the lubrication correction to the mobility matrix can safely be omitted~\cite{Bossis_1991,Harshe_2010,Seto_2011}. Though the relative velocities between particles are not perfectly zero in this work, near-rigid motion of clusters is investigated by only gradually increasing the shear rate. Due to the resulting small relative velocities of the primary particles, the lubrication correction is expected to be less important. The neglect of lubrication forces is also a necessity for the computational approach presented later. 
The investigated simulation times are only accessible with reasonable computational effort if the time scales of the long-range hydrodynamic forces and short-range contact forces can be separated. This separability allows the reuse of the mobility matrix for several time steps, which significantly enhances computational performance. This would unfortunately not be the case if lubrication forces were considered. \subsection{Overdamped motion} \label{sec_overdamped_motion} To simulate the time evolution of particles with contact models, configurations of spherical particles are described not only by their central positions $\vec{r}^{(i)}(t)$ ($i=1,\dotsc,N$), but also by their orientations. The orientation of a particle $i$ is expressed by using a quaternion $\tilde{q}^{(i)}(t)$. If we set $\tilde{q}^{(i)}(0)=1$ at the initial time, the quaternion $\tilde{q}^{(i)}(t)$ represents the rotation from the initial orientation. The rotation of a vector $\vec{\xi}$ fixed to the particle is written as $\vec{\xi}(t) = \tilde{q}^{(i)}(t) \vec{\xi}(0) \{\tilde{q}^{(i)}(t) \}^{-1} $. When the contact forces are strong enough, Brownian forces are negligible, while the hydrodynamic forces depend on the imposed shear flow. Therefore, only the contact and hydrodynamic forces acting on the particles were considered. In general, the particles follow Newton's equations of motion: \begin{equation} m \frac{d \vec{U}}{dt} = \vec{F}_{\mathrm{P}} + \vec{F}_{\mathrm{H}}, \quad I \frac{d \vec{\Omega}}{dt} = \vec{T}_{\mathrm{P}} + \vec{T}_{\mathrm{H}}, \label{full_eq_of_motion} \end{equation} where $m$ and $I$ are the mass and moment of inertia of the particles, respectively. The velocities $\vec{U}$, angular velocities $\vec{\Omega}$, forces $\vec{F}$ and torques $\vec{T}$ include $N$ vectors for all particles. For colloidal systems, the inertial terms are negligibly small compared to the hydrodynamic forces. By neglecting the inertia terms, the equations of motion \eqref{full_eq_of_motion} are approximated by the force- and torque-balance equations: \begin{equation} \vec{F}_{\mathrm{P}} + \vec{F}_{\mathrm{H}} \approx 0, \quad \vec{T}_{\mathrm{P}} + \vec{T}_{\mathrm{H}} \approx 0. \label{balance_equations} \end{equation} Systems following these balance equations are called overdamped. In order to solve the overdamped motion with SD, the mobility form \begin{equation} \begin{pmatrix} \vec{U} - \vec{U}^{\infty} \\ \vec{\Omega} - \vec{\Omega}^{\infty} \\ \vec{S}_{\mathrm{H}} \end{pmatrix} = - \tens{M} \begin{pmatrix} \vec{F}_{\mathrm{H}} \\ \vec{T}_{\mathrm{H}} \\ - \vec{E}^{\infty} \end{pmatrix}, \label{mobility_form} \end{equation} is used instead of the resistance form \eqref{resistance_form}. The numerical library developed by Ichiki~\citep{Ichiki_2006} was used to obtain the mobility matrix $\tens{M}$. By combining \eqref{balance_equations} and \eqref{mobility_form}, the velocities of the particles $(\vec{U}, \vec{\Omega})$ are given as functions of the contact interactions $(\vec{F}_{\mathrm{P}}, \vec{T}_{\mathrm{P}})$: \begin{equation} \vec{U}(t) = \vec{U}( \vec{F}_{\mathrm{P}}, \vec{T}_{\mathrm{P}}), \quad % \vec{\Omega}(t) = \vec{\Omega}( \vec{F}_{\mathrm{P}}, \vec{T}_{\mathrm{P}}). 
\end{equation} Once their velocities are determined, the time evolution of the particles is given by integrating the time derivative relations: \begin{equation} \begin{split} \frac{\mathrm{d} \vec{r}^{(i)}}{\mathrm{d}t} &= \vec{U}^{(i)}, \quad \frac{\mathrm{d} \tilde{q}^{(i)}}{\mathrm{d}t} = \hat{\tens{\Omega}}^{(i)} \tilde{q}^{(i)}, \end{split} \label{eqs_time_derivative_relations} \end{equation} where the matrix $\hat{\tens{\Omega}}^{(i)} $ is constructed from the elements of the angular velocity $\vec{\Omega}^{(i)}$ as follows: \begin{equation} \hat{\tens{\Omega}}^{(i)} \equiv \begin{pmatrix} 0 & -\Omega_x^{(i)} & -\Omega_y^{(i)} & -\Omega_z^{(i)} \\ \Omega_x^{(i)} & 0 & -\Omega_z^{(i)} & \Omega_y^{(i)} \\ \Omega_y^{(i)} & \Omega_z^{(i)} & 0 & -\Omega_x^{(i)} \\ \Omega_z^{(i)} & -\Omega_y^{(i)} & \Omega_x^{(i)} & 0 \end{pmatrix}. \end{equation} Since overdamped motions with the simplified contact model and approximated hydrodynamics are considered, the accuracy of the numerical integration is not of primary importance. Therefore, the explicit Euler method was used to integrate the differential equations \eqref{eqs_time_derivative_relations} with a discretized time step $\delta t$. \subsection{Reusing the mobility matrix for deforming clusters} \label{sec_optimization} The bottleneck in simulating the time evolution is the calculation of the mobility matrix $\tens{M}$ in \eqref{mobility_form} at each time step. Since the contact forces are changed by short displacements of particles, the time step $\delta t$ needs to be set small enough, causing a large computational effort. Therefore, a way to reduce this effort has to be introduced. The mobility matrix $\tens{M}$ depends only on the positions of the particles. If the relative positions of particles within an isolated cluster remain unchanged, the hydrodynamic interactions under any flow written as in \eqref{equation_linear_flows} can be evaluated with a single mobility matrix. Though clusters are not rigid in this work, up to a certain degree the deformed structure can be considered the same as far as the hydrodynamic interactions are concerned. As long as the deformation is negligible in this sense, a mobility matrix may be reused repeatedly. In order to evaluate the motion of an isolated cluster, one can take the center-of-mass of the cluster as the origin of the coordinate system without loss of generality. Let us suppose that the deformation of the structure of the cluster during a time interval $\Delta t$ is negligible. In this case, the time evolution of the particles from $t$ to $t' = t + \Delta t$ can be approximated by \begin{equation} \vec{r}^{(i)} (t') \approx \tens{R}_{t\to t'}\vec{r}^{(i)} (t), \end{equation} where $\tens{R}_{t\to t'}$ is a rotation matrix. 
The hydrodynamic interaction at the time $t'$, \textit{i.e.} the relations between $(\vec{F}^{(i)}_{\mathrm{H}}(t') , \vec{T}^{(i)}_{\mathrm{H}}(t') )$ and $ (\vec{U}^{(i)}(t'), \vec{\Omega}^{(i)}(t') )$, can be obtained by using the mobility matrix at the time $t$ as follows: \begin{equation} \begin{pmatrix} \Delta \bar{\vec{U}} (t') \\ \Delta \bar{\vec{\Omega} }(t') \\ \bar{\vec{S}}_{\mathrm{H}} (t') \end{pmatrix} = - \tens{M}(t) \begin{pmatrix} \bar{\vec{F}}_{\mathrm{H}} (t')\\ \bar{\vec{T}}_{\mathrm{H}} (t')\\ - \bar{\vec{E}}^{\infty} (t') \end{pmatrix} \end{equation} where one has the following relations: \begin{equation} \begin{split} \bar{\vec{F}}^{(i)}_{\mathrm{H}} (t') &= \tens{R}^{-1}_{t \to t'}\vec{F}^{(i)}_{\mathrm{H}}(t') , \\ \bar{\vec{T}}^{(i)}_{\mathrm{H}}(t') &= \tens{R}^{-1}_{t \to t'} \vec{T}^{(i)}_{\mathrm{H}}(t') , \\ \bar{\tens{E}}^{\infty} (t') &= \tens{R}^{-1}_{t \to t'} \tens{E}^{\infty} \tens{R}_{t \to t'}, \end{split} \end{equation} and \begin{equation} \begin{split} \vec{U}^{(i)}(t') &= \tens{R}_{t \to t'} \Delta \bar{\vec{U}}^{(i)} + \vec{U}^{\infty}(\vec{r}^{(i)}(t')),\\ \vec{\Omega}^{(i)}(t') &= \tens{R}_{t \to t'} \Delta \bar{\vec{\Omega}}^{(i)} + \vec{\Omega}^{\infty}. \end{split} \end{equation} Now one needs to determine the rotation matrix $\tens{R}_{t \to t'}$ for the cluster, which is deformed during the actual time evolution. For a trial rotation matrix $\tens{R}$, the positions of particles $\vec{r}^{(i)}(t)$ are transformed to \begin{equation} \vec{s}^{(i)} = \tens{R} \, \vec{r}^{(i)}(t). \end{equation} The optimal rotation matrix $\tens{R}_{\mathrm{opt}}$ should minimize the differences between the actual positions $\vec{r}^{(i)}(t')$ and the transformed positions $\vec{s}^{(i)}$. One can take the following objective function to be minimized: \begin{equation} D(\tens{R}) \equiv \frac{1}{N} \sum_{i} \bigl\{\vec{r}^{(i)}(t') - \vec{s}^{(i)} \bigr\}^2 . \end{equation} The gradient descent method is employed to find the optimal rotation matrix $\tens{R}_{\mathrm{opt}}$. Thus, the rotation matrix $\tens{R}_{t \to t'}$ can be determined: $\tens{R}_{t \to t'} = \tens{R}_{\mathrm{opt}}$. The objective function with the optimal rotation, $D(\tens{R}_{t \to t'})$, represents the degree of deformation. If the deformation of the cluster becomes larger than a threshold, $D(\tens{R}_{t \to t'}) \geq D_{\mathrm{max}}$, the mobility matrix needs to be updated. \subsection{Introduction of a dimensionless shear rate} \label{sec_shearrate} The behavior of a cluster formed by strong cohesion under a strong flow is equivalent to the case of weak cohesion and a weak flow. In order to reduce this redundancy, a dimensionless variable, the ratio between hydrodynamic interactions and contact forces, is introduced. The cohesive force sets the typical force scale of the simulation $F_{0}$; the critical force for bending and torsional breakage, $F_{0} = M_{\mathrm{c}}/a$, is taken because these modes play an important role in the restructuring of tenuous clusters. Since hydrodynamic interactions are proportional to the shear rate $\dot{\gamma}$ in the Stokes regime, the dimensionless shear rate can be defined by $ \dot{\Gamma} \equiv 6 \pi \eta_0 a^2 \dot{\gamma}/ F_{0} = 6 \pi \eta_0 a^3 \dot{\gamma} / M_{\mathrm{c}}$, which indicates the flow strength relative to the contact force. In this work, the shear-rate dependence is discussed in terms of this dimensionless variable $\dot{\Gamma}$. 
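As a side remark on \secref{sec_optimization}: the objective function $D(\tens{R})$ also admits a closed-form minimizer via the singular value decomposition (the Kabsch algorithm), which can replace the gradient descent used above. The following minimal Python sketch, assuming center-of-mass-centered position arrays, illustrates the resulting update criterion; the use of the SVD instead of gradient descent is an assumption of the sketch, not the method of the text.
\begin{verbatim}
import numpy as np

def optimal_rotation(P, Q):
    # P, Q: (N, 3) positions at times t and t', centered on the
    # center of mass; returns R minimizing D(R) (Kabsch algorithm,
    # a closed-form alternative to the gradient descent in the text)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def mobility_update_needed(P, Q, D_max):
    # D_max is the deformation threshold (in units of a^2)
    R = optimal_rotation(P, Q)
    D = np.mean(np.sum((Q - P @ R.T) ** 2, axis=1))  # D(R_opt)
    return D >= D_max
\end{verbatim}
For a cluster undergoing near-rigid rotation, $D$ stays below the threshold for many time steps, which is precisely what makes the reuse of the mobility matrix profitable.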
\subsection{Stepwise increase of shear rates} \label{sec_stepwise_shear} In general, three types of behavior are expected for a cluster in shear flows: \begin{description}[leftmargin=0pt] % \item[\emph{Rigid body rotation}] % When the hydrodynamic stress is sufficiently weak, the cluster rotates without changing its structure. % \item[\emph{Restructuring}] % When the hydrodynamic stress slightly exceeds the strength of the cluster, the cluster is restructured. % Cohesive bonds newly generated during the restructuring may reinforce the cluster. % If the strength of the cluster comes to exceed the hydrodynamic stress, it returns to the `rigid body rotation' regime. % \item[\emph{Breakup}] % When the hydrodynamic stress is much stronger than the strength of the cluster, the cluster is significantly elongated and may be broken up into smaller pieces. \end{description} In other simulation studies \cite{Potanin_1993,Higashitani_2001,Harada_2006, Zeidan_2007,Becker_2009,Becker_2010,Eggersdorfer_2010,Harshe_2011a}, the shear flow is abruptly applied as a step function of time. In that case, the restructuring plays a limited role in a certain range of shear rates. This change of shear rate in a single step is a simple but very special case in terms of shear history. If the flow strength is increased less abruptly, restructuring may reinforce the cluster before higher shear rates are reached. In this work, the focus is placed on such restructuring and consolidation aspects. So, the flow is turned on less abruptly. In order to record the intermediate states of the clusters at each shear rate, the shear rate is increased in a stepwise manner. The $k$-th shear rate is given by \begin{equation} \dot{\Gamma}_k = \dot{\Gamma}_{1} \bigl( \dot{\Gamma}_{\mathrm{max}} / \dot{\Gamma}_{1} \bigr)^{(k-1)/(k_{\mathrm{max}}-1)}, \end{equation} where $\dot{\Gamma}_{\mathrm{1}}$ is the initial shear rate, $\dot{\Gamma}_{\mathrm{max}}$ the final shear rate and $k_{\mathrm{max}}$ the number of steps, and each shear rate is kept for the time period $t^{\ast}_k$ resulting in the same total shear strain $\Gamma^{\ast}$ per step, \textit{i.e.} $t^{\ast}_k = \Gamma^{\ast}/\dot{\Gamma}_{k}$ (see \figref{stepwise-shear-rates}). \begin{figure}[htb] \centering \includegraphics{shearrates.pdf} \caption{% The shear rate $\dot{\Gamma}$ is increased in a stepwise manner. % The horizontal axis shows the total shear strain. } \label{stepwise-shear-rates} \end{figure} \section{Results} \subsection{Parameters for the simulation} \label{sec_parameters} Fractal clusters generated by reaction-limited hierarchical cluster-cluster aggregation (CCA) were used as initial configurations~\cite{Botet_1984,Jullien_1987}. The fractal dimension is $d_{\mathrm{f}} \approx 2$. It is worth noting that clusters generated in this way have no loop structure. In previous works~\cite{Seto_2011,Seto_2012}, the hydrodynamic behavior of various sizes of the same CCA clusters has been examined by assuming rigid structures. Here, the restructuring behavior of small clusters with $N=64$ was investigated. For randomly structured clusters, one needs to evaluate a sufficient number of samples to study any generalizable behavior; therefore, 50 independent clusters were simulated under the same conditions. A random selection of the initial clusters is shown as projections on $x$-$z$ and $x$-$y$ planes in \figref{snapshots_cca} (a) and (b). 
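For illustration, the stepwise protocol of \secref{sec_stepwise_shear} can be generated in a few lines of Python; the numerical values below are the SD entries of \tabref{flow_parameters} and are quoted here only as an example.
\begin{verbatim}
import numpy as np

G1, Gmax, kmax, Gstar = 0.003, 15.9, 28, 20.0     # SD values

k = np.arange(1, kmax + 1)
G_k = G1 * (Gmax / G1) ** ((k - 1) / (kmax - 1))  # shear-rate ladder
t_k = Gstar / G_k        # hold times: equal strain at every step
\end{verbatim}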
\begin{figure*}[htb] \centering \includegraphics{snapshots_CCA64.pdf} \caption{% % A random selection of the initial clusters (CCA clusters, $N=64$) is shown by $x$-$z$ and $y$-$z$ projections in (a) and (b), and the corresponding compacted clusters after $\dot{\Gamma}=15.9$ are shown by $x$-$z$ and $y$-$z$ projections in (c) and (d). % } \label{snapshots_cca} \end{figure*} The required parameters for the contact model (see \secref{sec_contact_model}) are only the ratios between the spring constants of the different modes and the critical moment. For the spring constants, the same value was set for the bending and torsional modes, and a 10 times larger value for the normal and sliding modes: \begin{equation} k_{\mathrm{T}} = k_{\mathrm{B}}, \quad k_{\mathrm{N}} = k_{\mathrm{S}} = 10 k_{\mathrm{B}}. \end{equation} The critical moment $M_{\mathrm{c}}$ for the bending and torsional springs in \eqref{simplified_destruction_functions} was set to the value for which the critical displacements are 1\% of the particle radius. The parameters used for the imposed shear flows (see \secref{sec_stepwise_shear}) are presented in \tabref{flow_parameters}. In order to distinguish the hydrodynamic effect captured by Stokesian dynamics (SD), the free-draining approximation (FDA) was also used as a reference. For both methods, the ranges of the shear-rate changes were chosen to capture the rigid-body rotation regime at the lower shear rates and sufficient compaction at the higher shear rates. For reusing the mobility matrix for deformed clusters (see \secref{sec_optimization}), the threshold $D_{\mathrm{max}} = 0.01a^2$ was used, which is small enough to evaluate the drag forces with acceptable precision for our purpose. The actual numbers of updates of the mobility matrix during one cluster rotation are given in \figref{fig_update_num}. These numbers are much smaller than the number of time steps used to integrate the equations of motion, but the updates sufficiently reflect the long-range hydrodynamic interactions acting on deforming clusters. \begin{table}[tbh] \caption{ Parameters of the imposed flows. } \label{flow_parameters} \newcolumntype{C}{>{\centering\arraybackslash}X} \newcolumntype{R}{>{\raggedright\arraybackslash}X} \newcolumntype{L}{>{\raggedleft\arraybackslash}X} \begin{tabularx}{\columnwidth}{lCCC} \hline & Symbol & SD & FDA \\ \hline Initial shear rate &$\dot{\Gamma}_1$ & 0.003 & 0.001 \\ Final shear rate &$\dot{\Gamma}_{\mathrm{max}}$ & 15.9 & 10\\ Number of steps & $ k_{\mathrm{max}}$ & 28 & 30\\ Total shear strain & \multirow{2}{*}{$\Gamma^{\ast}$} & \multirow{2}{*}{20} & \multirow{2}{*}{20} \\ ~ on time interval & \\ \hline \end{tabularx} \end{table} \begin{figure}[htb] \centering \includegraphics{updatenum_SD.pdf} \caption{ The mobility matrix is updated when the cluster deformation exceeds the threshold. % The average numbers of updates during one cluster rotation are shown. % The error bars show the standard deviations over 50 independent simulations. % } \label{fig_update_num} \end{figure} \subsection{Compaction} \label{sec_compaction} First, the relation between the compaction and the flow strength was considered. The radius of gyration \begin{equation} R_{\mathrm{g}}^2 \equiv \frac{1}{N} \sum_{i=1}^{N} (\vec{r}^{(i)} - \vec{r}_0)^2, \end{equation} where $ \vec{r}_0 $ is the center of mass of the cluster, approximately represents the hydrodynamic radius of the fractal clusters% ~\cite{Wessel_1992,Lattuada_2003a,Seto_2011}. 
Indeed, the radius of gyration has been commonly used to quantify the size of randomly structured colloidal aggregates. \figref{fig_compaction} (a) shows the shear-rate dependence of the radius of gyration, where the final values at each shear-rate step were sampled. The averages and standard deviations were taken over 50 independent simulations. However, this quantity is not optimal for addressing the compaction behavior, because the results for compacted clusters depend on their shapes. \begin{figure}[hbt] \centering \includegraphics{compaction.pdf} \caption{ The compaction behavior is seen in the shear-rate dependence of the radius of gyration $R_{\mathrm{g}}$ (a), and the effective volume fraction $\phi_{\mathrm{eff}}$ (b). % The final values at each shear-rate step were sampled, and the averages and standard deviations were taken over 50 independent simulations. % The results with SD and FDA are shown by circles $(\bigcirc)$ and triangles ($\bigtriangleup$), respectively. } \label{fig_compaction} \end{figure} The volume fraction is an alternative measure to quantify the compaction, as it takes cluster shapes into account. As seen in \figref{snapshots_cca} (c) and (d), some of the compacted clusters exhibit elongated shapes. Though the definition of the volume fraction is not simple for isolated clusters, a rough estimation was used here. An arbitrarily shaped cluster can be translated into an ellipsoid having the same principal moments of inertia. We take the ratio between the total volume of particles and the volume of the equivalent ellipsoid as the effective volume fraction $\phi_{\mathrm{eff}}$ (see \secref{equivalent_ellipsoid}).
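As a concrete illustration of these shape measures, a minimal Python sketch is given below; treating the particles as unit point masses in the moment-of-inertia tensor is an assumption of the sketch (particle radius $a=1$, as in the simulations).
\begin{verbatim}
import numpy as np

def shape_measures(pos):
    # pos: (N, 3) array of particle centers, particle radius = 1
    N = len(pos)
    x = pos - pos.mean(axis=0)          # center of mass at the origin
    Rg = np.sqrt((x**2).sum() / N)      # radius of gyration
    # moment-of-inertia tensor for unit point masses
    J = (x**2).sum() * np.eye(3) - x.T @ x
    I1, I2, I3 = np.linalg.eigvalsh(J)  # I1 <= I2 <= I3
    a = np.sqrt(2.5 * (I2 + I3 - I1) / N)
    b = np.sqrt(2.5 * (I3 + I1 - I2) / N)
    c = np.sqrt(2.5 * (I1 + I2 - I3) / N)
    r = 2 * a / (b + c)                 # aspect ratio
    phi_eff = N / (a * b * c)           # effective volume fraction
    return Rg, r, phi_eff
\end{verbatim}
Note that $a \geq b \geq c$ follows automatically from the ordering $I_1 \leq I_2 \leq I_3$ returned by the eigenvalue solver.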
\section{Model Hamiltonian} \label{Sec:Model} The anisotropic Hubbard model on the two-leg square ladder is defined as\cite{Zhu2016Hubbard} \begin{eqnarray} H_{H}=-\sum_{\langle ij\rangle \sigma} t_{ij} \left(c_{i\sigma}^\dagger c_{j\sigma} + \mathrm{h.c.}\right) + U\sum_i n_{i\uparrow}n_{i\downarrow}, \label{Eq:Hubbard} \end{eqnarray} where $\<ij\>$ indicates nearest-neighbor (NN) bonds with hopping integral $t_{ij}=t$ on the rungs and $t_{ij}=\alpha t$ on the legs, as sketched in Fig. \ref{Fig:Model}(a). $c_{i\sigma}^\dagger$ creates an electron on site $i$ with spin polarization $\sigma$. The electron number operator is $n_i=\sum_\sigma c_{i\sigma}^\dagger c_{i\sigma}$, and $U$ is the on-site repulsion. The $t$-$J$ Hamiltonian on the two-leg ladder is given by (see Fig. \ref{Fig:Model}(b)) \begin{equation} H_{tJ}=-\sum_{\< ij\>\sigma} t_{ij}\left(c_{i\sigma}^\dagger c_{j\sigma} + \mathrm{h.c.}\right) + \sum_{\< ij\>} J_{ij}\left(\mathbf{S}_i\cdot\mathbf{S}_j - \frac{1}{4}n_in_j\right). \label{Eq:TJMODEL} \end{equation} As for the Hubbard ladder, $t_{ij}=t$ labels the hopping integral on the rungs and $t_{ij}=\alpha t$ on the legs. The spin superexchange interactions on the rungs and legs are given by $J_{ij}=J$ and $J_{ij}=\alpha J$, respectively. $\mathbf{S}_i$ is the spin operator on site $i$. Unlike the Hubbard model, the action of the $t$-$J$ Hamiltonian is restricted to the Hilbert space constrained by the no-double-occupancy condition, i.e., the number operator $n_i\leq 1$. The site index is $i=(x,y)$, with $y=1,2$ denoting the two legs and $x$ running from $1$ to $L$. Following the previous studies\cite{Zhu2013TJ,Zhu2015Charge,White2015TJ}, we consider the same range of parameters $0<\alpha\leq 1$ for both models.\footnote{Note that this is not the exact $t$-$J$ correspondence of the Hubbard model we are studying, which should be $J_{ij} = \alpha^2J$ on the legs. However, this does not lead to any qualitative difference.} Specifically, in the following DMRG calculation, we will set $t=1$ as the energy unit for the Hubbard ladder, and $J=1$ as the energy unit for the $t$-$J$ ladder. \begin{figure}[!htbp] \centerline{\includegraphics[width=\linewidth]{phase.pdf}} \caption{(Color online) Ground state phase diagram of the two-leg Hubbard ladder. The blue line is the phase boundary labeled by the critical value $\alpha_c$ as a function of $U/t$. In the left region ($\alpha<\alpha_c$), there is no charge modulation in the hole density profile, while the right region ($\alpha>\alpha_c$) has clear charge modulation.}\label{Fig:HubbardPhase} \end{figure} \section{Single hole in the two-leg Hubbard ladder} \label{Sec:Hubbard} Previous studies\cite{Zhu2013TJ,Zhu2015Charge,White2015TJ} of the two-leg anisotropic $t$-$J$ ladder show that a nontrivial charge modulation appears in the hole density profile in the isotropic limit $\alpha=1$, which is sharply different from the strong-rung limit $\alpha<\alpha_c$. However, whether this still holds true for the Hubbard model is unknown due to the presence of the three-site correlated hopping terms. To answer this question, we have performed an extensive DMRG study of the two-leg Hubbard ladder (see Eq. (\ref{Eq:Hubbard}) and Fig. \ref{Fig:Model}(a)). Our study shows that a single hole doped into the Hubbard ladder behaves similarly to one in the $t$-$J$ ladder; the results are summarized in the phase diagram in Fig. \ref{Fig:HubbardPhase} for $5\leq U/t\leq 20$ and $0<\alpha \leq 1$. 
Specifically, there are two distinct phases: a conventional quasiparticle phase without charge modulation for $\alpha<\alpha_c$ and an interesting charge modulation phase for $\alpha>\alpha_c$. Actually, this charge modulation was shown to appear even on very small clusters, suggesting its robustness.\cite{Zhu2016Hubbard} The phase diagram in Fig. \ref{Fig:HubbardPhase} is determined from standard DMRG simulations, where a sufficiently large number of DMRG states (see below) was kept to limit the truncation error per step to $\leq 10^{-7}$. For each system size, the ground states at half-filling and with a single doped hole were accurately obtained. The phase boundary between the two distinct phases was determined by calculating the single-hole kinetic energy $E_k^h$, relative to half-filling, as \begin{eqnarray} E_k^h = E_k^{\mathrm{one-hole}} - E_k^{\mathrm{half-fill}}.\label{Eq:HoleKinetic} \end{eqnarray} Here $E_k^{\mathrm{one-hole}}$ is the ground state kinetic energy of the system with one hole, and $E_k^{\mathrm{half-fill}}$ is the kinetic energy at half-filling. Therefore, $E_k^h$ solely represents the kinetic energy of the single injected hole. For a fixed $U/t$, the second derivative of the single-hole kinetic energy, i.e., $E_k^{\prime\prime}(h)=\frac{d^2E_k^h}{d\alpha^2}$, shows a sharp peak at the critical value $\alpha=\alpha_c$, labeling the phase boundary between the two distinct phases. As an example, Fig. \ref{Fig:KineticEnergy} shows $E_k^{\prime\prime}(h)$ as a function of $\alpha$ for $U/t=20$ and various system sizes. It is clear that the finite-size effect is negligible and our results for $\alpha_c$ represent the reliable value in the thermodynamic limit, i.e., $L\to \infty$. In the following, we will directly compare the Hubbard ladder with the $t$-$J$ ladder in various aspects and provide evidence that a single hole doped into either model behaves similarly. \begin{figure}[!htbp] \centerline{\includegraphics[width=0.5\textwidth]{ek_second.pdf}} \caption{(Color online) Second derivative of the single-hole kinetic energy $E_k''(h) = d^2E_k^h / d\alpha^2$ vs. $\alpha$ for different system sizes at $U/t=20$. The peak position, labeled by the blue dashed line, determines the phase boundary between the two distinct phases.}\label{Fig:KineticEnergy} \end{figure} \paragraph{Hole density distribution:} % We first calculate the hole density distribution function $\left<n_x^h\right> = \sum_{y} \left(1 - \left<n_{x,y}\right>\right)$ for both the Hubbard and $t$-$J$ ladders, where $x$ is the rung index and $y$ is the leg index. Prior to the insertion of the hole, the density profiles are simply flat, since charge fluctuations are gapped in the Mott regime. By inserting a single hole, i.e., removing one electron (e.g., a down-spin electron), the hole distribution extends over the whole system. Examples for the Hubbard model at $U/t=20$ and the $t$-$J$ model at $t/J=5$ are given in Fig. \ref{Fig:tJHubSim}. We keep up to $m=2048$ block states in the DMRG simulation with a negligible truncation error of less than $10^{-10}$ and perform $100$--$500$ sweeps for decent convergence. Similar to the $t$-$J$ model, in the strong-rung case such as $\alpha=0.5$, the hole density distribution extends smoothly over the whole system, without charge modulation. In sharp contrast, the hole distribution $\left<n_x^h\right>$ develops a clear charge modulation in both systems in the isotropic limit $\alpha=1$. 
\paragraph{Hole density distribution:} % We first calculate the hole density distribution function $\left<n_x^h\right> = \sum_{y} \left(1 - \left<n_{x,y}\right>\right)$ for both the Hubbard and $t$-$J$ ladders, where $x$ is the rung index and $y$ is the leg index. Prior to the insertion of the hole, the density profiles are simply flat, as charge fluctuations are gapped in the Mott regime. When a single hole is inserted by removing one electron (e.g., a down-spin electron), the hole distribution extends over the whole system. Examples for the Hubbard model at $U/t=20$ and the $t$-$J$ model at $t/J=5$ are given in Fig. \ref{Fig:tJHubSim}. We keep up to $m=2048$ block states in the DMRG simulation, with a negligible truncation error of less than $10^{-10}$, and perform $100-500$ sweeps for decent convergence. Similar to the $t$-$J$ model, in the strong-rung case such as $\alpha=0.5$, the hole density distribution is extended over the whole system and is smooth and without charge modulation. In sharp contrast, the hole distribution $\left<n_x^h\right>$ develops a clear charge modulation in both systems in the isotropic limit $\alpha=1$. This clearly demonstrates the similarity of the two models; hence the correlated hopping terms are not crucial in determining the ground state properties of a single doped hole in either model. \begin{figure}[!htbp] \includegraphics[width=\linewidth]{density.pdf} \caption{(Color online) Hole density distribution $\left<n_x^h\right>$ of the Hubbard model at $U/t=20$ and the $t$-$J$ model at $t/J=5$ for (a) $\alpha=0.5$ and (b) $\alpha=1.0$. Here the system size is $100\times 2$ and $x$ is the rung index.}\label{Fig:tJHubSim} \end{figure} \paragraph{Spin-charge correlation:}% Previous studies show that there is no spin-charge separation in the two-leg $t$-$J$ ladder for $0 \leq \alpha \leq 1$.\cite{Zhu2015Quasiparticle,White2015TJ} In this section, we show that this also holds for the two-leg Hubbard ladder. To prove this, we calculate the spin-charge correlation function $\left< n^h(i_0) s^z(i)\right>$ (see Fig. \ref{Fig:SpinCharge}), which measures the spin profile when a dynamic hole is on site $i_0=(50,2)$ and a spin is on site $i=(x,y)$ of a $N=100\times 2$ ladder. With the correlation function shown on a log scale as a function of the distance $d=|x-50|$ along the ladder, the exponential confinement of the spin and charge is apparent in the linear $d$ dependence. A linear fit gives a decay length of $\xi=0.85(5)$ for the Hubbard model at $\alpha=0.5$ and $U/t=20$, showing that the spin and charge degrees of freedom are tightly bound together. A similar fit for $\alpha=1.0$ gives a length scale $\xi=3.380(5)$. For a direct comparison, we have also calculated the spin-charge correlation function for the $t$-$J$ model at $t/J=5$, which corresponds to the Hubbard coupling $U/t=4t/J=20$. Consistent with the Hubbard model and previous studies, the spin-charge correlation function is also short-ranged, with a correlation length $\xi=0.83(5)$ at $\alpha=0.5$ and $\xi=3.225(3)$ at $\alpha=1.0$. As the spin-charge correlation function is always short-ranged, we conclude that there is no spin-charge separation in either system. \begin{figure}[!htbp] \includegraphics[width=\linewidth]{hole-spin-cor.pdf} \caption{(Color online) Hole-spin correlation functions $|\left<n^h_{i_0}s^z_i\right>|$ for (a) the $t$-$J$ model at $t/J=5$ and (b) the Hubbard model at $U/t=20$. Here, $i_0=(50,2)$ is the hole site index, $i=(x,y)$ is the spin site index and $d=|x-50|$ is the distance between the hole and the spin along the ladder. The exponentially decaying correlation functions show that the spin degrees of freedom are exponentially localized close to the dynamic hole for both $\alpha=0.5$ and $\alpha=1.0$. The solid lines show linear fits to the data and $\xi$ is the spin-charge correlation length.}\label{Fig:SpinCharge} \end{figure} \paragraph{Effective mass:}% As the spin and charge of the hole are not separated, it is meaningful to ask whether the doped hole, i.e., the spin-charge bound object, behaves as a quasiparticle or is localized. If the doped hole behaves as a quasiparticle, we expect a finite effective mass $m$, which can be determined from the finite-size scaling of the energy difference \begin{equation} \Delta E_0(L)=E_{\mathrm{0}}^{\mathrm{one-hole}}(L) - E_{\mathrm{0}}^{\mathrm{half-fill}}(L) - \mathrm{const.} \label{Eq:EffectiveMass} \end{equation} Here $E_0^{\mathrm{half-fill}}(L)$ ($E_0^{\mathrm{one-hole}}(L)$) is the ground state energy of the system at half-filling (with a single doped hole), and $L$ is the length of the ladder. For a quasiparticle, $\Delta E_0(L)$ is expected to be proportional to $\pi^2/2mL^2$, where $m$ is the effective mass. On the contrary, if the injected hole is localized, $\Delta E_0(L)$ should decay exponentially with $L$, corresponding to a diverging effective mass.
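The finite-size extrapolation just described can be sketched in a few lines: fitting $\Delta E_0(L)$ linearly against $1/L^2$ yields the effective mass from the slope $\pi^2/2m$. The energies below are synthetic placeholders generated with $m=4.2$, so the fit simply recovers that value.

\begin{verbatim}
import numpy as np

# synthetic energy differences Delta E_0(L) for several lengths L,
# generated here with an effective mass m = 4.2 (placeholder data)
L = np.array([40.0, 60.0, 80.0, 100.0, 140.0])
dE = 0.05 + np.pi**2 / (2 * 4.2 * L**2)

# linear fit of Delta E_0 against 1/L^2:
# slope = pi^2 / (2 m), intercept = const.
slope, const = np.polyfit(1.0 / L**2, dE, 1)
m = np.pi**2 / (2 * slope)
print(f"effective mass m = {m:.2f}")   # recovers 4.20
\end{verbatim}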
We find that $\Delta E_0(L)$ decays as $1/L^2$ in our simulations in both phases, which indicates that the doped hole is not localized in real space. The resulting effective mass $m$ is shown in Fig. \ref{Fig:EffectiveMass}. As seen in the figure, $m$ is finite in both regions $\alpha<\alpha_c$ and $\alpha>\alpha_c$, while it diverges at the phase boundary between the two phases, e.g., at $\alpha_c=0.79$ for $U/t=20$. These results are similar to previous studies of the $t$-$J$ model\cite{Zhu2013TJ,Zhu2015Charge, White2015TJ}, which further suggests that the simple $t$-$J$ model captures the ground state physics of a single doped hole in the Hubbard model. \begin{figure} \includegraphics[width=0.5\textwidth]{effmass.pdf} \caption{(Color online) The effective mass $m$ (in units of $t$) of the single hole (or spin-charge object) of the two-leg Hubbard ladder at $U/t=20$. The effective mass diverges at $\alpha_c=0.79$, but remains finite in both the $\alpha<\alpha_c$ and $\alpha>\alpha_c$ parameter regions. The inset shows the inverse effective mass $1/m$ as a function of $\alpha$.}\label{Fig:EffectiveMass} \end{figure} \section{Elementary excitation energy in the two-leg $t$-$J$ ladder}\label{Sec:Elementary} In the above section, we have shown that a single hole doped into the two-leg Hubbard ladder behaves qualitatively the same as in the two-leg $t$-$J$ ladder. Therefore, we will focus on the $t$-$J$ ladder in this section, since it is much easier to simulate; the results, however, should also apply to the Hubbard ladder. In addition to the ground state properties, it is also crucial to have a good understanding of the elementary excitations. In particular, calculating the energy gap to the (first and/or second) excited states is of fundamental importance for determining the intrinsic behavior of the single hole doped into the two-leg antiferromagnet, and is complementary to the study of the ground state properties. For a direct comparison with previous studies\cite{Zhu2013TJ,Zhu2015Charge, White2015TJ}, we will focus on $t/J=3$ in what follows. There is a standard way to find excited states and gaps using DMRG.\cite{Stoudenmire2002} First, we use DMRG to compute a ground state $|\psi_0\rangle$ with energy $E_0$ of the Hamiltonian $H$ in Eq. (\ref{Eq:TJMODEL}) to high accuracy. Then we define a new Hamiltonian $H_1=H+wP_0$, where $P_0=|\psi_0\rangle\langle \psi_0|$ is a projection operator and $w$ is an energy penalty for states not orthogonal to $|\psi_0\rangle$. If $w$ is large enough, the ground state $|\psi_1\rangle$ of $H_1$, with energy $E_1$, will be the second lowest eigenstate of $H$, i.e., the first excited state or a second ground state. Having found $|\psi_1\rangle$, we can continue to compute the next excited state $|\psi_2\rangle$ with energy $E_2$, if necessary, by including both $P_0$ and $P_1=|\psi_1\rangle\langle \psi_1|$ in a new Hamiltonian $H_2=H_1+wP_1=H+wP_0+wP_1$. For the current simulations, we use $w=100$ to make sure that different eigenstates are orthogonal to each other, with a negligible overlap $|\langle \psi_i|\psi_j\rangle|^2\leq 10^{-13}$.
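The penalty construction itself is generic, so it can be illustrated with exact diagonalization of a small random Hermitian matrix standing in for $H$ (a toy demonstration, not a DMRG calculation): for sufficiently large $w$, the ground state of $H_1=H+w|\psi_0\rangle\langle\psi_0|$ reproduces the second-lowest eigenstate of $H$ and is orthogonal to $|\psi_0\rangle$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 60
H = rng.normal(size=(n, n))
H = (H + H.T) / 2                      # random Hermitian "Hamiltonian"

E, V = np.linalg.eigh(H)
psi0 = V[:, 0]                         # ground state of H

w = 100.0                              # energy penalty
H1 = H + w * np.outer(psi0, psi0)      # H1 = H + w |psi0><psi0|

E1, V1 = np.linalg.eigh(H1)
psi1 = V1[:, 0]                        # ground state of H1

print(np.isclose(E1[0], E[1]))         # True: first excited level of H
print(abs(psi0 @ psi1)**2 < 1e-13)     # True: negligible overlap
\end{verbatim}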
Practically, utilizing the above procedure requires a well-converged ground state. However, this is in general very hard to obtain for the single-hole problem.\cite{Zhu2013TJ,Zhu2015Charge, White2015TJ} Although it is relatively easy to obtain a state with an extended hole density profile by performing a large number of (e.g., hundreds of) DMRG sweeps, this is still not enough to obtain the exact ground state of the system, which must be reflection symmetric since the Hamiltonian itself has the reflection symmetry. To solve this problem, we adopt the following symmetrization strategy: we first obtain a relatively well-converged ground state with an extended hole density profile; we then symmetrize the system by copying all the operators in the left part of the system to the right part, and use this as the initial state for the next stage of the simulation. Generally, such a symmetrization step may raise the energy a bit at the beginning; however, repeating this process several times brings the ``initial'' state close enough to the exact ground state. Eventually, the true ground state of the system is obtained, with a reflection-symmetric hole density distribution and a slightly lower ground state energy (e.g., by $\sim 10^{-5}J$ for $N=140\times 2$). With the ground state thus obtained, we continue to calculate the energies of the excited states as described above. The results are given in Fig. \ref{Fig:Energy}. For $\alpha=0.5$, we find that the first excited state of the system, $|\psi_1\rangle$, is consistent with the conventional quasiparticle picture. For example, the ground state $|\psi_0\rangle$ has a single peak while $|\psi_1\rangle$ has a double peak. Moreover, the elementary excitation energy $\delta E_1=E_1-E_0$ decays as $1/L^2$. These results are consistent with previous studies\cite{Zhu2013TJ,Zhu2015Charge, White2015TJ} and further establish the quasiparticle nature of the single doped hole in the strong-rung limit $\alpha<\alpha_c$. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{gaps_and_density.pdf} \caption{(Color online) Excited state energy gaps and density profiles. (a) Scaling of the excitation energy $E_1 - E_0$ with lattice size at $\alpha = 0.5$. (b) Density profiles of the ground state and the first excited state at $\alpha = 0.5$. (c) Excitation energies $E_1 - E_0$ and $E_2 - \tilde{E}_0$ at $\alpha = 1.0$, where $\tilde{E}_0 = (E_1 + E_0) / 2$. (d) Density profiles of the ground state, first excited state and second excited state at $\alpha=1.0$.}\label{Fig:Energy} \end{figure} In contrast to $\alpha=0.5$, where the energy dispersion $\epsilon(k)$ is minimized at $k=0$, $\epsilon(k)$ is minimized at an incommensurate momentum $k=\pm k_0$ for $\alpha=1$, which gives rise to the oscillations in the charge density distribution (see Fig. \ref{Fig:Energy}(d)), as has been noted before.\cite{Zhu2013TJ,Zhu2015Charge, White2015TJ} Consequently, a ``quasi-two-fold-degenerate'' ground state ($|\psi_0\rangle$ and $|\psi_1\rangle$) may be expected, while $|\psi_2\rangle$ is the ``real'' first excited state, hence $E_2-E_1\gg E_1-E_0$. Indeed, our results are consistent with this expectation and are plotted in Fig. \ref{Fig:Energy}(c). The energy splitting $E_1-E_0$ between the two ``quasi-degenerate'' ground states scales as $1/L^3$ (see Fig. \ref{Fig:Energy}(c)), which is caused by the combination of the charge modulation and the open boundary condition. An explicit example can be found in the Supplementary Materials \ref{Sec:FreeFermion} for comparison.\footnote{Under periodic boundary conditions, the ground states $|\psi_0\protect\rangle$ and $|\psi_1\protect\rangle$ are exactly degenerate.
Without the charge modulation, the degeneracy disappears and $|\psi_1\protect\rangle$ becomes the first excited state.} In order to minimize the possible effect of this ``quasi-degeneracy'', we define a ``proper'' ground state energy $\tilde{E}_0=(E_1+E_0)/2$ and the excitation energy gap $\Delta=E_2-\tilde{E}_0$. Similar to the $\alpha=0.5$ case, we find that the energy gap $\Delta$ at $\alpha=1.0$ also decays as $1/L^2$. The hole density profile of the first excited state $|\psi_2\rangle$ shows double wavepackets, in contrast to the single wavepacket of the ``quasi-degenerate'' ground states $|\psi_0\rangle$ and $|\psi_1\rangle$; this is again similar to the $\alpha=0.5$ case. Our results hence suggest that the single hole at $\alpha=1.0$ also behaves as a quasiparticle when $L\gg\xi$ ($\xi$ denotes the spin-charge correlation length), consistent with previous studies.\cite{White2015TJ} It is worth mentioning that although the doped hole behaves like a ``quasiparticle'' in both phases, there is a significant difference between them. In the strong-rung case ($\alpha<\alpha_c$), the spin and charge of the hole are tightly bound together without internal structure, so the hole behaves as a conventional Bloch quasiparticle. On the contrary, in the charge modulation phase $\alpha>\alpha_c$, the spin and charge of the hole are only loosely bound together, with an interesting internal structure and nontrivial mutual statistics, which leads to an important residual effect to be discussed in the next section. This residual effect can dramatically change the local structure of the ground state wavefunction of the single hole, and cannot be explained by a conventional quasiparticle picture. \section{Residual Effect} \label{Sec:ResidualEffect} As just mentioned, although our results suggest that a single hole doped into the isotropic two-leg Hubbard and $t$-$J$ ladders behaves as a quasiparticle, a mysterious residual effect is present which may not be explained by the conventional Bloch-quasiparticle picture. For a simple ``Bloch'' quasiparticle with an energy dispersion minimized at the incommensurate momenta $k=\pm k_0$, the ground state wavefunction oscillates rapidly and frequently crosses zero (i.e., has nodes) due to the momentum components with $k\neq \pm k_0$, as labeled by the shaded region in Fig. \ref{Fig:WaveFunction}(a) for the free-fermion system. However, this is not true for the strongly interacting Hubbard and $t$-$J$ ladders. Although the hole density profile also shows significant modulation, the ground state wavefunction does not cross zero, as seen in Fig. \ref{Fig:WaveFunction}(b). We argue that such nodes cannot be lifted by a simple finite-ranged Wannier function, suggesting that they are unavoidable in a conventional quasiparticle picture. A possible explanation is that the ground state wavefunction of the system consists of two parts, $|\psi_0\rangle = |\psi_0^L \rangle + |\psi_0^\xi \rangle$. Here $|\psi_0^L \rangle$ represents the long-wavelength contribution, which accounts for the quasiparticle behavior of the single doped hole when the system size $L$ is much larger than the spin-charge correlation length $\xi$, i.e., $L\gg \xi$. Since there is no spin-charge separation, on these scales the system sees the doped hole only as a single object, while its internal structure is hidden. However, on short length scales $\sim \xi$, the spin and charge of the doped hole no longer behave as a single object, since they are not tightly bound together.
Instead, there is a nontrivial mutual statistics between them:\cite{Weng1996} a hole moving on the local antiferromagnetic spin background induces a nontrivial phase-string effect. This has been shown to be relevant for the disappearance of the node structure in the two-leg $t$-$J$ ladder; the corresponding contribution, denoted $|\psi_0^\xi\rangle$ here, comes from the nontrivial mutual statistics between the spin and charge parts of the doped hole.\cite{Wang2015VMC} On the contrary, in the conventional quasiparticle picture, the spin and charge parts of the doped hole are tightly bound together, so there is no internal structure. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{density_compare_new.pdf} \caption{(Color online) Ground state hole density profile $n_x^h$ for (a) the free-fermion model discussed in the Supplementary Materials \ref{Sec:FreeFermion} and (b) the isotropic two-leg $t$-$J$ ladder. It is clear that nodes are present at momentum $k\neq k_0$ for the free-fermion system, while they are absent for the isotropic $t$-$J$ ladder.}\label{Fig:WaveFunction} \end{figure} \section{Conclusion}\label{Sec:Conclusion} In this paper, we have systematically investigated the nature of a single hole doped into a two-leg antiferromagnet using large-scale DMRG simulations. We found that the doped hole in the Hubbard ladder behaves similarly to that in the $t$-$J$ ladder in the ground state. The elementary excitations of the doped hole are consistent with a quasiparticle. Interestingly, although the doped hole behaves like a quasiparticle in the long-distance limit, it differs from a simple Bloch quasiparticle on short length scales comparable to the spin-charge correlation length. In this regime, the nontrivial internal structure of the loosely bound spin-charge object, namely the mutual statistics between the spin and charge of the doped hole, leads to a nontrivial residual effect that dramatically changes the local structure of the ground state wavefunction. This may be caused by a fundamental change of the statistical sign structure, as proposed in previous studies\cite{Weng2011Mott,Zhang2014Sign}. In the future, it will be important to design experiments that can identify this nontrivial effect in other systems, which could probe the role of sign structures in the Hubbard and $t$-$J$ models directly. \section{Acknowledgment}% We thank Steven Kivelson, Xiaoliang Qi, Zheng-Yu Weng and Zheng Zhu for insightful discussions, and especially Zheng-Yu Weng for pointing out the novel residual effect. SL, HCJ and TPD were supported by the Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, under Contract DE-AC02-76SF00515.
\section{Introduction} Cyanoacetylene, HC$_3$N, has been attracting attention due to its abundance in a number of extraterrestrial environments. Among these are interstellar clouds,~\cite{turner71} circumstellar envelopes,~\cite{bieging93} comets~\cite{irvine81} and the atmosphere of Saturn's moon Titan.~\cite{kunde81, coates07} The particular interest in electron collisions with this molecule stems primarily from two sources. The first is the presence of carbon-chain molecular anions such as C$_8$H$^-$, C$_6$H$^-$, C$_4$H$^-$ and C$_3$N$^-$ in the interstellar medium.~\cite{brunker07, cernicharo07, thaddeus08} The second is the 2007 observation by the Cassini mission~\cite{coates07} that the upper atmosphere of Titan contains anions with mass/charge ratios of up to $\approx$ 10000. Extensive investigations have shown that, depending on the altitude, the dominant anion species in Titan's atmosphere are either CN$^-$ and C$_3$N$^-$, or C$_n$H$^-$, with $n = 2, 4, 6$.~\cite{vuitton09} Dissociative electron attachment (DEA) to neutral polyynes (HC$_n$H or HC$_n$N) as a possible dominant source of these anions has been ruled out early on. The DEA studies of C$_2$H$_2$,~\cite{may_acet09} C$_4$H$_2$,~\cite{may_diac08} and HC$_3$N~\cite{graupner06} have shown that while the cross sections are considerable, the fragmentation channels are endothermic. The energetic thresholds for the production of fragment anions lie in all these cases above 1~eV, and are thus inaccessible for thermal electrons. Nonetheless, the formation of transient anions - resonances - leads not only to DEA but, via the competing electron autodetachment channel, also to vibrational excitation of the molecules. This influences both the vibrational energy distribution of the gas and the electron energy distribution function in the above-mentioned astrochemical environments. To our knowledge, the only electron collision experiments with HC$_3$N are the early positive and negative ionization studies of Dibeler~\cite{dibeler60} and Harland~\cite{harland86} and the DEA experiments in the group of T. Field, QU Belfast.~\cite{graupner06, gilmore15} The latter group initially reported the yields of individual fragment ions~\cite{graupner06} and later recalibrated these yields using the signal from background water vapor to determine the absolute partial cross section values.~\cite{gilmore15} Theoretically, the resonances in cyanoacetylene were explored by Sommerfeld and Knecht~\cite{sommerfeld05} with the complex absorbing potential approach, by Sebastianelli and Gianturco~\cite{sebastianelli12} with single-center expansion scattering calculations, and by Kaur et al.~\cite{kaur16} with R-matrix theory. Orel and Chourou~\cite{orel_hc3n11} performed multidimensional nuclear dynamics calculations on the resonant states of HC$_3$N. In the present paper we probe the resonant states in cyanoacetylene by means of electron energy loss spectroscopy. We report the absolute differential elastic and vibrationally inelastic cross sections at a 135$^\circ$ scattering angle. These measurements provide detailed information about the resonant electronic states and the dynamics of the nuclear motion on their potential energy surfaces. The observed selectivity in the excitation of certain vibrational modes facilitates the assignment of the involved resonances. We also report a direct absolute measurement of the DEA cross section.
\begin{table*} \begin {center} \caption{Unoccupied molecular orbitals of neutral HC$_3$N and the corresponding resonance energies formed by capture of an electron into the orbital (in eV).} \label{tab:orbitals} \begin{ruledtabular} \begin{tabular}{lcllll} Symmetry & MO isosurface & Present scaling & CAP~\cite{sommerfeld05} & Scattering calc.~\cite{sebastianelli12} & R-matrix~\cite{kaur16} \\ \hline $\pi_1^*$ & \raisebox{-0.5\totalheight}{\includegraphics[width=4cm]{mo15.png}}& 0.48 & 0.7 & 1.94 & 1.51 \\ $\sigma_1^*$ & \raisebox{-0.5\totalheight}{\includegraphics[width=4cm]{mo16.png}} & 3.09& & & \\ $\pi_2^*$ & \raisebox{-0.5\totalheight}{\includegraphics[width=4cm]{mo18.png}} & 5.50& 6.2 & 8.19 & \\ $\sigma_2^*$ & \raisebox{-0.5\totalheight}{\includegraphics[width=4cm]{mo19.png}} & 5.39 & & 9.24 & 8.0 \\ \end{tabular} \end{ruledtabular} \end {center} \end{table*} \section{Experiment} Three electron-collision setups, recently transferred to Prague from the University of Fribourg, were used for the present experiments. The electron scattering experiments were performed on the electrostatic spectrometer with a hemispherical electron monochromator and analyser.~\cite{allan_ELS92, allan_ELS05} The electrons scattered on the effusive beam of the pure sample gas were analysed at the fixed scattering angle of 135$^\circ$. The energy of the incident beam was calibrated on the 19.365 eV 2$^2$S resonance in helium. The electron-energy resolution was 17 meV. The absolute elastic scattering cross section was calibrated against that of helium using a relative flow method. The detailed error budget of the cross section calibration has been presented in Ref.~\onlinecite{allan_thf07}. The uncertainty of the elastic cross section is $\pm$15\%. The vibrationally inelastic cross sections are normalized with respect to the elastic peak. Since the individual vibrational modes are not fully resolved, the individual vibrational excitation cross sections are much less precise and should be considered as indicative values, which describe the intensity of the inelastic signal at a given energy loss. The absolute dissociative electron attachment cross sections were measured on the absolute DEA spectrometer with a time-of-flight mass analyzer.~\cite{may_acet09, may_diac08} A pulsed magnetically collimated electron beam, produced in a trochoidal electron monochromator, crosses a collision cell filled with a stagnant gas, and the anions produced are extracted towards a short (15 cm) time-of-flight mass analyzer placed perpendicularly to the electron beam. For the cross section calibration, we have used the 4.4~eV band in the O$^-$ production from CO$_2$, with an energy-integrated cross section of 13.3 eV pm$^2$. The same band is used for the electron energy scale calibration and for the determination of the electron beam resolution, which was $\approx$ 250~meV. The uncertainty of the absolute DEA calibration is $\pm$ 20\%, which includes both the systematic and statistical errors. The shape of the DEA bands was additionally measured on the DEA spectrometer with a trochoidal monochromator and a quadrupole mass filter.~\cite{stepanovic99, langer18, zawadzki_pyruvic18} Here, a continuous electron beam crosses the effusive molecular beam and the yield of a certain anion mass, chosen by the quadrupole, is monitored. Due to the absence of pulsing, this spectrometer has a better electron energy resolution of approximately 100~meV. The final DEA cross sections are thus obtained by scaling the high-resolution DEA yields from the quadrupole setup to the absolute values from the time-of-flight setup, using the invariance of the energy-integrated cross sections.~\cite{janeckova_formic13, graupner_ccl2f210}
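A minimal sketch of this rescaling step, assuming the two instruments cover the same band: the relative high-resolution yield is multiplied by the factor that makes its energy integral equal to the absolute energy-integrated cross section $\sigma_I$ from the time-of-flight instrument. The spectrum and numbers below are synthetic placeholders.

\begin{verbatim}
import numpy as np

# synthetic high-resolution relative ion yield (arbitrary units)
energy = np.linspace(3.0, 8.0, 501)            # electron energy (eV)
yield_rel = np.exp(-((energy - 5.5) / 0.9)**2)

# absolute energy-integrated cross section of the same band from
# the calibrated time-of-flight measurement (here: eV pm^2)
sigma_I_abs = 411.0

# scale so that the integral of the high-resolution curve matches
scale = sigma_I_abs / np.trapz(yield_rel, energy)
sigma_abs = scale * yield_rel                  # absolute, in pm^2
print(f"peak cross section ~ {sigma_abs.max():.0f} pm^2")
\end{verbatim}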
The HC$_3$N sample was synthesized by dehydration of propiolamide, prepared by the reaction of methyl propiolate and ammonia, following the method introduced by Miller and Lemmon~\cite{miller67}. During the measurements, the sample (confined in a lecture bottle) was kept at a temperature of 7~$^\circ$C. \section{Results and discussion} \subsection{Electronic structure and resonances} All three scattering processes probed in this work are strongly influenced by the formation of resonances - temporary anion states - in the electron-molecule collision. We thus first review the available information on these states, which will facilitate the interpretation of the results and the further discussion. Since the resonant states are embedded in the continuum, their proper characterization requires advanced scattering calculations or modifications of the traditional quantum chemistry approaches. However, useful insight can be gained from the basic electronic structure of the target molecule and the use of scaling formulas. In a simplified picture, a shape resonance can be imagined as the trapping of the incident electron in an unoccupied molecular orbital of the target molecule. Cyanoacetylene is a linear polyyne with two triple bonds. The lowest four unoccupied orbitals, shown in table~\ref{tab:orbitals}, have antibonding character along some, or all, bonds. For the purpose of this paper we denote them $\pi_1^*, \pi_2^*$ and $\sigma_1^*$, $\sigma_2^*$. Chen and Gallup~\cite{chen90} developed an empirical scaling based on Koopmans' theorem, relating the orbital energies $E_{MO}$ to the corresponding resonance energies as $E_{res} = (E_{MO} - 2.33~\mathrm{eV}) / 1.31$. Values obtained using this formula are listed in table~\ref{tab:orbitals} in the ``present scaling'' column. It should be noted that the sensitivity of such an estimate of $E_{res}$ to the choice of basis set and scaling formula has been explored by Field and co-workers.~\cite{graupner06, millar17} The resulting resonance energies can be considered only as indications; however, as can be seen in table~\ref{tab:orbitals}, they agree surprisingly well with the advanced theoretical approaches.
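For concreteness, the scaling can be written as a one-line function. The virtual-orbital energies below are hypothetical inputs, chosen here only so that the outputs reproduce the ``present scaling'' column of table~\ref{tab:orbitals}; they are not taken from an actual electronic-structure calculation.

\begin{verbatim}
def chen_gallup(e_mo_eV):
    """Empirical Chen-Gallup scaling from a virtual-orbital
    energy to a shape-resonance energy (both in eV)."""
    return (e_mo_eV - 2.33) / 1.31

# hypothetical virtual-orbital energies (eV), chosen to
# reproduce the "present scaling" column of the table
for label, e_mo in [("pi_1*",    2.96), ("sigma_1*", 6.38),
                    ("pi_2*",    9.54), ("sigma_2*", 9.39)]:
    print(f"{label:9s} E_res = {chen_gallup(e_mo):.2f} eV")
\end{verbatim}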
The complex absorbing potential (CAP) method of Sommerfeld and Knecht predicted the $\pi_1^*$ resonance at 0.7~eV (width 0.15~eV) and the $\pi_2^*$ resonance at 6.2~eV (width 1.1~eV). The scattering calculations of Sebastianelli and Gianturco localized the $\pi^*$ resonances at the somewhat higher energies of 1.94~eV (width 0.15~eV) and 8.19~eV (width 0.76~eV), and the $\sigma_2^*$ resonance at 9.24~eV (width 1.16~eV). The R-matrix calculations of Kaur et al. identified the $\pi_1^*$ resonance at 1.51~eV and the $\sigma_2^*$ resonance at 8~eV. An alternative scaling formula, developed recently by Field and co-workers especially for $\pi^*$ states in conjugated systems, predicts the two $\pi^*$ resonances at 0.5 and 5.1~eV. Two notes should be added at this point. First, the figures in table~\ref{tab:orbitals} are the isosurfaces of the molecular orbitals, i.e., unoccupied one-electron states. Sebastianelli and Gianturco~\cite{sebastianelli12} provided graphical representations of the true one-electron scattering wave functions, and they are very similar (basically indistinguishable by eye) to the present isosurfaces. This lends credence to the simplified picture of a temporary orbital occupation by the incoming electron. The nodal planes and electron densities of the unoccupied orbitals will be useful in interpreting the selectivity of the vibrational excitation. The second note concerns the $\sigma^*$ states. The corresponding resonances are expected to be very broad: their coupling with the barrierless s-wave autodetachment channel leads to extremely short lifetimes. Fixed-nuclei scattering calculations, which localize the resonances from the variation of the eigenphase sum, thus often have difficulties in finding such broad resonances~\cite{AAMOP_chapter}: the eigenphase variation can be so weak that it is difficult to distinguish from the background scattering. This might be the case for the $\sigma_1^*$ resonance, lying between the two $\pi^*$ resonances, which was not reported in any of the scattering calculations. However, as will be shown below, this state is manifested in the vibrational excitation cross section of the C-H stretching mode. \subsection{Elastic scattering} Figure~\ref{fig:elas} shows the differential elastic electron scattering cross section at a 135$^\circ$ scattering angle. The cross section peaks sharply towards 0~eV electron energy. This is caused by the dipole moment of HC$_3$N, which is 3.72~Debye.~\cite{crc07} The elastic scattering cross sections of polar targets always reach high values, and in some cases even diverge, at very low energies.~\cite{fabrikant16} It should be noted that the true height of the low-energy spike is of course not accessible in a crossed-beam experiment such as the present one, since the monochromator and the analyzer cannot reliably produce/analyze electrons below some 30~meV kinetic energy. \begin{figure}[tb] \includegraphics[width = 7cm]{Fig1_elastic.pdf} \caption{Cross section for elastic scattering on HC$_3$N at 135$^\circ$. The inset shows a horizontally magnified electron energy scale.} \label{fig:elas} \end{figure} Two interesting features can be observed in the cross section at higher energies. One is the shallow minimum around 5~eV. As shown in the next section, a broad $\pi_2^*$ resonance dominates this region, and the minimum is an imprint of this resonance in the elastic cross section. Since its formation leads to an increase in all vibrational excitation channels, the drop in the elastic channel is caused by the conservation of the probability flux. The second interesting feature in the elastic cross section is the oscillatory structure between 0.4 and 0.8~eV. It is clearly connected with the threshold peaks and the $\pi_1^*$ resonance in the vibrational excitation cross sections in this energy range, discussed below. Sebastianelli and Gianturco~\cite{sebastianelli12} and Kaur et al.~\cite{kaur16} have seen the influence of the resonances on the elastic scattering (in computed integral cross sections). However, since these were fixed-nuclei calculations, which do not reflect the probability flux towards the nuclear motion, the resonances were manifested as peaks in the cross sections, not as the dips observed here.
\begin{figure}[tb] \includegraphics[width = 7cm]{Fig2_EELS.pdf} \caption{Electron energy loss spectra of HC$_3$N at 135$^\circ$, recorded at incident energies of 0.8~eV (top panel) and 5.5~eV (bottom panel).} \label{fig:eels} \end{figure} \subsection{Vibrational excitation} \begin{table} \begin {center} \caption{Experimental vibrational energies of HC$_3$N from Ref.~\onlinecite{leach14}.} \label{tab:vibr} \begin{tabular}{lll} \hline Type & Label & Energy (meV) \\ \hline CCN bend & $\nu_7$ & 28 \\ CCC bend & $\nu_6$ & 62 \\ CCH bend & $\nu_5$ & 82 \\ C$-$C stretch & $\nu_4$ & 109 \\ C$\equiv$C stretch & $\nu_3$ & 257 \\ C$\equiv$N stretch & $\nu_2$ & 282 \\ C$-$H stretch & $\nu_1$ & 412 \\ \hline \end{tabular} \end{center} \end{table} Figure~\ref{fig:eels} shows electron energy loss spectra recorded at two different incident electron energies. The energy loss spectra reflect which vibrational modes are excited upon electron impact, and their relative population with respect to the elastically scattered electrons with zero energy loss. The experimental spectroscopic vibrational energies from Ref.~\onlinecite{leach14} are shown in table~\ref{tab:vibr}. All three bending modes are excited to a certain extent. The softest vibration, the CCN bend (28~meV excitation energy), is visible as a shoulder of the elastic peak at both impact energies. The CCC bending vibration (62~meV) is not visible at 0.8~eV but is present at 5.5~eV impact energy. The most prominent bending vibration is the CCH bend, with an excitation energy of 82~meV and with at least one overtone excited at both incident energies (the possible $v=2$ overtone peak overlaps with the C$\equiv$C stretching mode). The excitation of the stretching modes also shows a certain selectivity: at both incident energies, the C-C stretch is excited only weakly, and the other vibrations have varying strengths. At 0.8~eV, the C$\equiv$N stretch (282~meV) progression dominates the spectrum, while at 5.5~eV the C-H stretch becomes the dominant stretching mode. An interesting peak occurs at 492~meV (unassigned in the figure), which has to originate from a combination vibration of the C-H stretch and the CCH bend ($\nu_1 + \nu_5$). \begin{figure}[tb] \includegraphics[width = 7cm]{Fig3_EDS_long.pdf} \caption{The vibrational excitation cross sections for individual vibrations in HC$_3$N as functions of the incident electron energy.} \label{fig:eds_long} \end{figure} \begin{figure}[tb] \includegraphics[width = 7cm]{Fig4_EDS_short.pdf} \caption{The vibrational excitation cross sections for individual vibrations in HC$_3$N as functions of the incident electron energy, with the low-energy horizontal scale expanded.} \label{fig:eds_short} \end{figure} \begin{figure}[tb] \includegraphics[width = 7cm]{Fig5_2d.pdf} \caption{Two-dimensional electron energy loss spectrum of HC$_3$N. The intensity of the elastic peak (energy loss = 0 eV) is reduced by a factor of 20 with respect to the rest of the spectrum.} \label{fig:2d} \end{figure} Figure~\ref{fig:eds_long} shows the excitation curves of the individual vibrations. Here, the energy difference between the monochromator and the analyzer is kept constant while both are scanned. Such excitation curves are a sensitive probe for the formation of resonances: if a temporary anion is formed at a certain incident electron energy, the probability of energy transfer to nuclear motion (= vibrational excitation) strongly increases.
The observed bands can be divided into two groups: narrow ones at low energies, approximately below 1~eV, and much broader bands at higher energies, above 2~eV. The low-energy part of the spectra is shown separately in figure~\ref{fig:eds_short} and, in the form of a two-dimensional spectrum, in figure~\ref{fig:2d}. The high-energy part (with a reduced number of channels) is shown rescaled in figure~\ref{fig:eds_comp}. Let us first focus on the high-energy part. The dominant contribution to the excitation of all vibrations seems to originate from the formation of the $\pi_2^*$ resonance; however, clear differences in the excitation of the individual modes are demonstrated in figure~\ref{fig:eds_comp}. Since the two $\sigma^*$ resonances are dissociative along the molecular axis and will probably excite the bending vibrations only negligibly, we presume that the ``true'' shape of the $\pi_2^*$ resonance is demonstrated by the CCH bend excitation curve (top panel of figure~\ref{fig:eds_comp}). This places the center of the $\pi_2^*$ resonance at 5.3~eV. \begin{figure}[tb] \includegraphics[width = 7cm]{Fig6_EDS_comp.pdf} \caption{High-energy part of the individual vibrational excitation cross sections and the DEA C$_3$N$^-$ ion yield. The raw data from figure~\ref{fig:eds_long} have been reduced several times by averaging neighbouring channels. For the sake of this comparison, the data are arbitrarily scaled.} \label{fig:eds_comp} \end{figure} The C-H stretch vibration has its maximum clearly shifted to lower energies. The $\sigma_1^*$ orbital (table~\ref{tab:orbitals}) has the largest coefficients on the corresponding carbon and hydrogen atoms and an antibonding character along this bond. We conclude that the C-H stretch vibration is the only one influenced by the formation of the broad $\sigma_1^*$ resonance, centered around 4~eV. The C$\equiv$N stretch excitation curve is shifted to higher energies when compared to the CCH bend. This is caused by the formation of the $\sigma_2^*$ resonance, which has a strong antibonding character across the C$\equiv$N bond. This resonance is also visible in the excitation of the C-C stretch mode as the right shoulder superimposed on the dominant $\pi_2^*$ resonance. We now turn to the low-energy part of the vibrational excitation spectra, shown in detail in figures~\ref{fig:eds_short} and~\ref{fig:2d}. The excitation curves have peculiar shapes. This is caused by an interplay of two effects. The first is related to the strong dipole moment of cyanoacetylene (3.72~Debye), which is expected to lead to threshold peaks in the vibrational excitation cross sections. Such peaks, first observed in hydrogen halides,~\cite{rohr76} are common in all polar molecules. The second effect is the formation of the $\pi_1^*$ resonance around 0.5~eV. The small width of the resonance leads to a pronounced boomerang structure, visible in all vibrational modes. The boomerang structure originates from the vibrational motion of the nuclear wavepacket on the anion potential energy surface. Due to the long lifetime of the resonant state, the nuclei undergo several vibrations prior to the electron detachment. The oscillatory structure originates from the interference of the outgoing and returning nuclear wavepackets.~\cite{herzenberg71} It is commonly manifested as a structure on top of a vibrational excitation band.
The present accidental overlap of the $\pi^*$ resonance and the threshold peak causes the rather exotic accumulation of the boomerang structure on the falling edge of the peak. The present data enable us to judge the accuracy of the different methods used to calculate the resonance energies in table~\ref{tab:orbitals}. So far, the only experimental data on these states came from DEA spectroscopy.~\cite{graupner06} Those are, however, influenced by energetic threshold cutoffs, or by the formation of core-excited resonances. The present data enable an unambiguous determination of the position of the $\pi_2^*$ resonance at 5.3~eV. This compares surprisingly favorably with the value obtained from the scaling formula (5.5~eV) and reasonably well with the CAP value of 6.2~eV. The single-center expansion scattering calculation~\cite{sebastianelli12} overestimates the position of this resonance by almost 3~eV (8.19~eV). For the $\pi_1^*$ resonance, the determination of the experimental center is complicated by the overlap with the threshold peak; however, judging from the boomerang structure in the C-C stretch and CCH bend excitations (figure~\ref{fig:eds_short}), the center can be placed at 0.5~eV. Again, the CAP method predicts this resonance better than the two scattering calculations (0.7~eV vs. 1.94 and 1.51~eV). These two also overestimate the energy of the $\sigma_2^*$ resonance, which has its experimental center between 6 and 7~eV, judging from the C$\equiv$N stretch excitation curve in figure~\ref{fig:eds_comp}. Further insight into the low-energy part can be gained from the two-dimensional spectrum in figure~\ref{fig:2d}. A 2D electron energy loss spectrum~\cite{regeta13} is a collection of many energy loss spectra recorded at various incident energies. It provides a complete picture of the vibrational nuclear dynamics. A horizontal cut through such a spectrum corresponds to an energy loss spectrum, such as shown in figure~\ref{fig:eels}; a vertical cut corresponds to an excitation curve at a given energy loss, such as shown in figure~\ref{fig:eds_short}. The diagonal line $E_i = \Delta E$ is the threshold line, corresponding to outgoing electrons with zero kinetic energies. The 2D spectrum agrees fully with the individual vibrational cross sections. Additionally, it reveals one more feature: approximately above 0.2~eV incident electron energy, the electrons along the diagonal ($\Delta E = E_i$) form a weak continuous stripe instead of appearing only at the sharp energies of the individual vibrations. These electrons are ejected with residual energies close to zero, independent of their incident energy. Note that the analyzer has a low transmission for electrons with residual energies below some 30~meV to 50~meV; hence the threshold signal appears somewhat higher than $E_r = 0$~eV. It is also visible as the high background signal in the energy loss spectrum in the upper panel of figure~\ref{fig:eels}. These threshold electrons can be interpreted using the potential energy surfaces of Sommerfeld and Knecht.~\cite{sommerfeld05} According to their calculations, cyanoacetylene possesses a valence-bound anion; however, its equilibrium geometry is far from the neutral one: it has a trans-bent zig-zag structure, with an adiabatic electron affinity close to zero. Apart from this, HC$_3$N supports a dipole-bound state with a potential energy curve lying several meV below the neutral one; its equilibrium geometry thus corresponds to the neutral's linear structure.
The linear transit between the two anion states (valence and dipole-bound) shows a barrier of approximately 0.2~eV. The origin of the slow electrons is thus the following: if an electron with incident energy $E_i > 0.2$~eV is captured in the low-lying $\pi^*$ resonance, the nuclear framework starts to move towards the geometry of the valence-bound anion, distorting the linear structure towards the trans-bent one. As soon as the geometry reaches the point where the anion surface lies below that of the neutral, the electron detachment is suppressed: it is energetically impossible for the electron to detach. However, the excess energy is stored in the nuclear degrees of freedom and efficiently randomizes over the vibrational degrees of freedom. The motion on the electronically bound part of the potential surface is statistical, so the nuclei may again reach a configuration where the valence anion energy lies above that of the neutral. At this crossing point of the neutral and anion surfaces the electron is unbound again and can detach. A number of previous examples~\cite{allan_habil89, allan_formic_prl07, allan_feco18} show that such electrons detach basically as soon as they can and are thus emitted with close-to-zero residual energies. \begin{figure}[tb] \includegraphics[width = 7cm]{Fig7_DEA_all.pdf} \caption{Partial DEA cross sections for the production of all anionic fragments from HC$_3$N. Red lines: present data, black lines: Gilmore and Field.~\cite{gilmore15} } \label{fig:dea_all} \end{figure} \begin{figure}[tb] \includegraphics[width = 7cm]{Fig8_MS.pdf} \caption{Cumulative negative ion time-of-flight spectrum in the energy range 3 to 8~eV. Lines with points: experimental data, dashed lines: fitted contributions from the peaks with mass-to-charge ratios 24, 25 and 26, red line: sum of the individual contributions.} \label{fig:MS} \end{figure} \begin{figure}[tb] \includegraphics[width = 7cm]{Fig9_DEA_m50.pdf} \caption{Low-energy DEA band for the C$_3$N$^-$ fragment. Red line: present data, black points: Gilmore and Field.~\cite{gilmore15} } \label{fig:dea_m50} \end{figure} \subsection{Dissociative electron attachment} Figure~\ref{fig:dea_all} shows the absolute cross sections for the production of the individual fragment anions from HC$_3$N. The recent data of Gilmore and Field~\cite{gilmore15} are shown for comparison. The two data sets show excellent agreement concerning the shapes of the individual DEA bands. However, there is a consistent quantitative disagreement. We will use the energy-integrated cross section $\sigma_I$ (invariant with respect to the beam resolution) for the discussion. For the main DEA band, spanning 3 to 8~eV, the ratio of our $\sigma_I$ for the C$_3$N$^-$ production (411~eV pm$^2$) to that of Gilmore and Field is 0.47. This disagreement is more or less consistent for all four fragments. The present branching ratios between the fragments C$_3$N$^-$:C$_2^-$:C$_2$H$^-$:CN$^-$ are 1:0.14:0.12:0.95. The branching ratios of Gilmore and Field are 1:0.13:0.15:1.33; these agree very well with ours, apart from CN$^-$, which had a higher abundance in the measurements of Ref.~\onlinecite{gilmore15}. At this point, it should be noted that our time-of-flight analyzer does not fully resolve the three fragments with mass-to-charge ratios 24, 25 and 26. When designed,~\cite{may_acet09} the resolution was traded off in favor of keeping the setup quantitative; there are, for example, no grids separating the two acceleration regions. This on one hand distorts the Wiley-McLaren-type time focusing, on the other hand it means undisturbed transmission of the extracted anions. Still, as is illustrated in figure~\ref{Fig:MS}, the mass resolution is high enough to determine the branching ratios between the three fragments reliably. The spectrum is cumulative~\cite{lengyel_beilstein17, lengyel_hno3_17} - it has been obtained as a sum of the mass spectra in the energy range 3 to 8~eV. The dashed lines show the individual contributions of the three close-lying fragments and the full red line shows their sum.
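A minimal sketch of such a peak decomposition: a sum of three Gaussians is fitted to the cumulative spectrum and the branching ratios are obtained from the fitted peak areas $A\sigma\sqrt{\pi}$. All flight times, amplitudes and noise below are synthetic placeholders, not our measured spectrum.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(t, *p):
    """Sum of three Gaussians; p = (A1,t1,s1, A2,t2,s2, A3,t3,s3)."""
    out = np.zeros_like(t)
    for A, t0, s in zip(p[0::3], p[1::3], p[2::3]):
        out = out + A * np.exp(-((t - t0) / s)**2)
    return out

# synthetic cumulative TOF spectrum around m/z = 24, 25 and 26
t = np.linspace(5.0, 6.0, 400)
true = (1.0, 5.35, 0.05, 0.8, 5.46, 0.05, 6.5, 5.57, 0.05)
spec = (three_gaussians(t, *true)
        + 0.01 * np.random.default_rng(1).normal(size=t.size))

p0 = (1, 5.35, 0.05, 1, 5.46, 0.05, 1, 5.57, 0.05)  # initial guess
popt, _ = curve_fit(three_gaussians, t, spec, p0=p0)

amps, sigs = popt[0::3], popt[2::3]
areas = amps * np.abs(sigs) * np.sqrt(np.pi)
print("relative areas (m/z 24:25:26):",
      np.round(areas / areas.max(), 2))
\end{verbatim}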
Somewhat surprisingly, the quantitative level of agreement between the present data and those of Gilmore and Field is better for the first DEA band in the C$_3$N$^-$ production, shown magnified in figure~\ref{fig:dea_m50}. The ratio of the energy-integrated cross sections for this band is 0.68. The quantitative discrepancy probably originates from the different calibration methods used to obtain the absolute values. Gilmore and Field used the O$^-$ signal from the background water vapor for the cross calibration. The ratio of the HC$_3$N/H$_2$O number densities was obtained from the recorded yields of the positive ions and their absolute cross sections calculated in the BEB formalism. Considering this rather indirect approach, the present agreement of the absolute cross sections within a factor of two can actually be viewed as very good. Both experiments have a quoted uncertainty of $\pm$ 20\%, and the difference between the absolute values is only slightly larger than the combined error limits. Due to the more direct calibration procedure, the present values may be considered more reliable. The comparison with the vibrational excitation cross sections sheds new light on the DEA mechanism. As seen in figure~\ref{fig:eds_comp}, the band at 5.5~eV is very similar in shape to the CCH bend excitation cross section, which suggests that the DEA is mediated by the formation of the $\pi_2^*$ resonance. Graupner et al.~\cite{graupner06} made the same assignment; however, since their reference center of the $\pi_2^*$ resonance was that calculated by Sommerfeld and Knecht~\cite{sommerfeld05} at 6.2~eV, they had to invoke a survival-probability shift in order to explain the different DEA peak position. The current comparison in figure~\ref{fig:eds_comp} shows that the DEA band actually overlaps with the $\pi_2^*$ resonance very well. Still, there is one aspect that invokes caution with this assignment, and this is the large width of the $\pi_2^*$ resonance. The corresponding bands (both in DEA and in the vibrational excitation spectra) are approximately 2~eV broad. The width of a band is determined by two factors: (i) the autodetachment width $\Gamma$ and (ii) the projection of the nuclear wavefunction on the resonant state (reflection principle). In any case, such broad bands suggest that $\Gamma$ itself is rather large, in agreement with the theoretical calculations, which evaluated it to be 1.1~eV (Ref.~\onlinecite{sommerfeld05}) or 0.76~eV (Ref.~\onlinecite{sebastianelli12}). From the uncertainty principle, a resonance width of 1~eV corresponds to a lifetime towards electron autodetachment of 0.3 femtoseconds. It is somewhat surprising that such a short-lived state gives rise to a rather high dissociative cross section.
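For completeness, this estimate can be spelled out. Assuming the half-width convention $\tau = \hbar/2\Gamma$ (with the full-width convention $\tau = \hbar/\Gamma$ the number roughly doubles), and using $\hbar \approx 0.658$~eV\,fs,
\[
\tau \;=\; \frac{\hbar}{2\Gamma} \;=\; \frac{0.658~\mathrm{eV\,fs}}{2\times 1~\mathrm{eV}} \;\approx\; 0.33~\mathrm{fs}.
\]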
An alternative origin of the DEA yield would be a core-excited resonance: neutral HC$_3$N possesses electronically excited states ($^1\Delta_u$) lying between 5.5 and 6.2~eV.~\cite{ferradaz09} Assuming a typical stabilization energy of 0.4~eV, the corresponding Feshbach resonance would be located exactly around the present DEA band. Such resonances are typically very narrow and are not visible in the vibrational excitation cross sections.~\cite{allan_habil89} They also typically lead to a rich fragmentation pattern.~\cite{janeckova_thf14, zawadzki_pyruvic18} The agreement in figure~\ref{fig:eds_comp} thus might be coincidental. It is worth noting that a similar dispute, whether the dominant DEA band is caused by an accidentally overlapping shape $\pi^*$ resonance or by a core-excited resonance, has appeared for diacetylene C$_4$H$_2$.~\cite{allan_diac11, curik_diac14} Only the C$_3$N$^-$ fragment, created by hydrogen abstraction, is observed at lower energies, with a peak at 1.7~eV. It was shown by calculating the threshold energies~\cite{graupner06} that other channels are energetically closed in this energy range. The threshold for C$_3$N$^-$ production is 1.37 $\pm$ 0.2~eV, which causes the sharp onset of the present cross section in figure~\ref{fig:dea_m50}. Two effects can in principle contribute to the origin of this band. (i) As assigned previously,~\cite{graupner06} it can originate from a high-energy shoulder of the $\pi_1^*$ resonance (the center of the resonance lies considerably below the threshold energy). This seemed very reasonable, since this resonance is rather narrow and would thus lead to a high survival factor. However, as can be seen in figure~\ref{fig:eds_short}, all the vibrational excitation cross sections diminish above 1.3~eV, so the DEA band seems to have almost no overlap with the $\pi_1^*$ resonance. (ii) The second option is that the DEA proceeds via the formation of the $\sigma^*$ resonance, whose low-energy tail overlaps with the DEA band, as can be seen in the C-H stretch vibrational excitation in figure~\ref{fig:eds_long}. Judging from the large width of such a resonance alone, it should lead to a negligible DEA cross section, since all electrons would autodetach. However, it is now well established that in molecules with large dipole moments (or even nonpolar molecules with high polarizabilities), the dissociative cross sections of $\sigma^*$ resonances can reach very high values. The interaction of dipole-bound (or virtual) states with the pure $\sigma^*$ states suppresses the autodetachment channel. Cyanoacetylene's dipole moment of 3.72~Debye opens this possibility. It should, however, be noted that such dipole-supported $\sigma^*$ resonances often lead to sharp structures in the DEA cross section. These structures - downward steps or even oscillations - appear at the openings of new vibrational excitation channels along the direction of the dissociating bond, in this case the C-H stretch vibration. Taking into account the anharmonic vibrational levels,~\cite{mallinson76} the 0$\to$4 transition of the C-H stretch vibration opens at 1.56~eV and the 0$\to$5 transition at 1.94~eV. No such structures are visible in figure~\ref{fig:dea_m50}. It should be noted that the DEA spectra of molecules like hydrogen halides~\cite{fedor_hbr07, fedor_hbr08, fedor_hcl10} or formic acid~\cite{janeckova_formic13} do show discernible structures at electron beam resolutions comparable to the present one (approximately 100 meV).
There thus seems to be no unambiguous evidence for either of the two mechanisms being prevalent in the dehydrogenation DEA around 1.7~eV. Our recent results for the HNCO molecule~\cite{zawadzki_prl18} even suggest that often there is no sharp distinction between these two possibilities: upon any out-of-line geometry distortion the $\pi^*$ and $\sigma^*$ states mix, and the actual dissociation mechanism is given by their interplay. \section{Conclusions} In conclusion, we have probed the resonances in cyanoacetylene by measuring cross sections for elastic electron scattering, vibrational excitation and dissociative electron attachment. The data from these three scattering channels are mutually consistent and provide information about both the non-dissociative and the dissociative nuclear dynamics on the transient anion potential surfaces. Several effects influence the probed electron-induced processes. One is the strong dipole moment of HC$_3$N, which is manifested as the low-energy peak in the elastic scattering and as the threshold peaks in all the vibrational excitation channels. The second dominating effect is the formation of the four resonances. The lower $\pi_1^*$ resonance is the narrowest, and its long lifetime leads to pronounced boomerang oscillatory structures in the vibrational excitation cross sections. At higher electron energies, the formation of the broad $\sigma_1^*$ and $\sigma_2^*$ resonances is reflected in the vibrational excitation cross sections of the C-H stretch and C$\equiv$N stretch modes, while the CCH bending mode excitation is probably exclusively mediated by the formation of the $\pi_2^*$ resonance. This resonance also dominates the DEA spectrum and leads to the production of four anionic fragments. The existence of the bound HC$_3$N$^-$ anion and the crossing of its potential energy curve with that of the neutral molecule (the boundary between the resonant and the bound state) is manifested by the threshold signal in the two-dimensional energy loss spectrum. Here the electrons are emitted with close-to-zero residual energies independent of the incident energy, which is caused by the randomization of the vibrational motion on the bound anion surface. \section*{Acknowledgments} This work is part of project Nr. 17-04844S of the Czech Science Foundation. L. B. acknowledges support from the FWF project DK-ALM:W1259-N27; M. P. and J. \v{Z}. acknowledge partial support from CSF project Nr. 17-14200S. We wish to thank Roman \v{C}ur\'{i}k, Prague, for numerous discussions of resonances and of this manuscript. \bibliographystyle{apsrev}
\section{Introduction}\label{sec:introduction} \IEEEPARstart{T}{echnologies} for digital music have become increasingly important, bolstered by rising global expenditures on digital music in excess of 64 billion USD in 2014 alone~\cite{mckinsey2015}. The popularity and relevance of automatic \emph{music generation} has recently been underscored by the launch of Google's Magenta project\footnote{\url{https://magenta.tensorflow.org/welcome-to-magenta}}, ``a research project to advance the state of the art in machine intelligence for music and art generation''. In this research, we develop a music generation system, called MorpheuS~\cite{herremans2016morpheus}, that tackles one of the biggest remaining challenges in the field of automatic music composition: long-term structure. Long-term structure is that which generates coherence over larger time scales, from phrases up to the entire piece; it refers to more than simply traditional ABA form, and includes the modulation of features such as loudness and tension, and the use of recurrent patterns and motivic transformations over time, so as to generate coherence over these large time scales. While most existing music generation systems create pieces that may sound good over a short time span, these outputs often lack long-term structure. MorpheuS can take any existing polyphonic music piece as a template and morph it into a new piece with a predefined tension profile. The new piece will also preserve the same long-term structure (i.e., pattern structure) as the template. To this day, it remains notoriously difficult to enforce constraints (e.g., long-term structure) in music generation systems based on machine learning methods such as Markov models~\cite{pachet2011markov}. In previous research, the first author therefore developed a novel method for constraining long-term structure through an optimization-based approach, combined with machine learning. The proposed framework consisted of an efficient variable neighborhood search (VNS) optimization algorithm that is able to generate melodies (or monophonic music) with a fixed semiotic structure (e.g., AABBACA)~\cite{herremans2013composing, herremans2014thesis, herremans2015generating} and evaluates its solutions through the Euclidean distance between a Markov model built on a corpus and one trained on the generated piece (a minimal sketch of this distance measure is given below). This research showed that the approach offers a viable way of constraining structure. In the current paper, the VNS algorithm is expanded to generate \emph{complex polyphonic music}. Although the algorithm is able to work with any type of polyphonic music, as a proof of concept, we focus on piano music in this research.
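A minimal sketch (in Python) of the distance measure mentioned above: first-order Markov transition matrices are estimated from a corpus sequence and from a generated sequence, and compared by the Euclidean (Frobenius) distance. The random pitch sequences below are placeholders; MorpheuS itself is implemented in Java.

\begin{verbatim}
import numpy as np

def transition_matrix(seq, n_states):
    """Row-normalized first-order Markov transition matrix."""
    T = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        T[a, b] += 1
    rows = T.sum(axis=1, keepdims=True)
    return np.divide(T, rows, out=np.zeros_like(T), where=rows > 0)

n_states = 12                                     # e.g. pitch classes
rng = np.random.default_rng(0)
corpus = rng.integers(0, n_states, size=500)      # placeholder corpus
generated = rng.integers(0, n_states, size=200)   # placeholder piece

d = np.linalg.norm(transition_matrix(corpus, n_states)
                   - transition_matrix(generated, n_states))
print(f"Euclidean distance between models: {d:.3f}")
\end{verbatim}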
The third contribution of this research is the integration of a state-of-the-art pattern detection algorithm~\cite{meredith2013cosiatec}, which is used to find recurring patterns and themes in the template piece. MorpheuS then uses the patterns found to configure the structure of a newly generated piece by introducing the patterns as hard constraints during the generation process. \begin{figure}[ht] \centering \includegraphics[clip, trim=2cm 3.8cm 8.4cm 1.24cm, width=0.48\textwidth]{flow_morpheus.pdf} \caption{Overview of MorpheuS' architecture \cite{herremans2016morpheus}.} \label{fig:arch} \end{figure} MorpheuS' functional architecture is displayed in Figure~\ref{fig:arch}. The system is implemented entirely in Java. The main modules of the algorithm, in bold, will be further discussed in Sections~\ref{sec:tensionmodel},~\ref{sec:patterndetection}, and~\ref{sec:opt}. Before embarking on these discussions, we briefly survey related systems in the next section. \section{Literature review} Before examining the individual components of the MorpheuS system, we give an overview of related research. The first subsection covers different techniques used in automatic music generation; the focus of this overview lies mainly on metaheuristic optimization algorithms. Next, we focus on the two types of long-term structure that are incorporated in the system. A first aspect of long-term structure is ensured through a tension profile. By requiring that the music adhere to a coherent tension profile, MorpheuS can generate music displaying specific tension characteristics throughout the piece. This makes the output particularly well suited to game or film music scenarios. An overview thus follows of previous research on music generation with a narrative and tension. We then address the second aspect of long-term structure in MorpheuS, namely, recurring patterns in generated music, providing a review of how this has previously been tackled by researchers in automatic composition. For a more general survey of current music generation systems, the reader is referred to~\cite{herremans2017taxonomy, fernandez2013ai}. \subsection{Generation techniques for music} The idea that computers could compose music is as old as the computer itself. Ada Lovelace, who worked with Charles Babbage on the Analytical Engine, predicted that the engine, when realised, could one day ``compose elaborate and scientific pieces of music of any degree of complexity or extent''~\cite{lovelace1843notes}. Since then, many automatic systems for music generation have been developed. In the 1950s, the first piece composed entirely by a computer, ``The Illiac Suite'', was generated by a stochastic rule-based system~\cite{hiller1957musical}. More recently, a number of systems based on Markov models were developed, ranging from simple melody generation~\cite{pinkerton1956information, conklin1995multiple} to harmonization~\cite{pachet2001musical, chuan2011generating} and improvisation systems~\cite{dubnov2012music, assayag2006omax, franccois2013mimi4x}. In recent years, deep learning models have entered the scene~\cite{eck2002first, chen2001creating, ICML2012BoulangerLewandowski_590, cancino2017bach, sabathe2017deep, huang2016chordripple, hutchings2017drums, herremans2017modeling}. While many of these systems produce output that sounds good on a note-to-note level, they often lack long-term coherence. We aim to tackle this challenge in this research by employing pattern detection techniques. 
In order to exploit the patterns found, we opt for an optimization-based approach, which allows us to constrain structure. In Section~\ref{sec:opt} the problem of generating music with long-term structure is defined as a combinatorial optimization problem. This problem is computationally complex to solve, as the number of possible solutions grows exponentially with the length of the piece. As an example, a piece consisting of only 32 notes, with 24 possible pitches per note, has $24^{32}$ (roughly $1.5 \times 10^{44}$) possible solutions. There have only been limited attempts at solving music generation problems with \emph{exact methods} such as integer programming. For example, \citet{cunha2016} use integer programming with structural constraints to generate guitar solos based on existing licks. Their objective function is based on music-theoretic rules. \citet{tanaka2015describing} propose a method to generate counterpoint---independent linear voices that combine to form a single harmonic texture. They formulate the generation task as an integer programming problem that uses existing composition rules as constraints to control global structure. However, this work remains a theoretical formulation, with no solution method as yet implemented. In order to overcome the difficulty and often long computational run times required to calculate exact solutions to optimization problems, many practical applications use \emph{metaheuristics}. A metaheuristic is defined by~\citet{sorensen2015metaheuristics} as ``a high-level problem-independent algorithmic framework that provides a set of guidelines or strategies to develop heuristic optimization algorithms. The term is also used to refer to a problem-specific implementation of a heuristic optimization algorithm according to the guidelines expressed in such a framework.'' These techniques often employ a variety of strategies to find a good solution in a limited amount of computing time; they do not guarantee an optimal solution, but typically good solutions are found~\cite{blum2003metaheuristics}. There exist three main groups of metaheuristics: population-based, constructive, and search-based algorithms~\cite{sorensenmetaheuristics}. The first group, which includes evolutionary algorithms, has seen recent gains in popularity in the literature. Population-based algorithms get their name from the fact that they recombine a set of solutions (the population) to create new ones. \citet{horner1991genetic} were the first to develop a genetic algorithm for music generation. These techniques have later been used to generate jazz solos~\cite{biles2001autonomous}, counterpoint style music~\cite{mcintyre1994bach, polito1997musica, phon1999evolving}, and rhythmic patterns~\cite{tokui2000music, horowitz1994generating}, and to combine fragments for orchestration~\cite{carpentier2010solving}. The second group, constructive metaheuristics, gradually build solutions from their constituent parts, for example, by growing individual notes in sequence. An example of this category is ant colony optimization, which was first applied to music in 2007 to harmonize baroque music~\cite{geis2007ant}. The third category, local search-based heuristics, typically make iterative improvements to a single solution. They include algorithms such as iterated local search, simulated annealing, and variable neighborhood search~\cite{sorensenmetaheuristics}. 
An example of these techniques in the field of music generation can be found in the research of~\citet{davismoon2010combining}, who used simulated annealing to generate music according to a fitness function derived from a Markov model. The first author of the current paper was the first to develop a variable neighborhood search (VNS) algorithm able to generate counterpoint music~\cite{herremans2012composing, herremans2014thesis}. This VNS was shown to outperform a genetic algorithm on the same task and has been modified in the current research to generate complex polyphonic music. \subsection{Narrative and tension} The tension profile, which is integrated in the algorithm so as to shape the tension of the generated music, is particularly important when generating music with a narrative, or program music. Program music has a long and illustrious history, a well-known example being Richard Strauss' ``Don Quixote''. Such narrative music tells a story, using a set of organizational, representational, and discursive cues that deliver story information to the audience. Such cues can include tension profiles, leitmotifs (recurring melodic fragments associated with a person, idea, or story situation), and others. All of these elements typically elicit varying emotional responses during the unfolding of a piece when synchronized with simultaneous media such as video or game play. Existing systems in the domain of video and game music are discussed below, followed by a more focused overview of the literature on tension models. \subsubsection{Generating film music} A prominent application of music with narrative is film music. Music has been shown to be an important source of perceived emotion in film~\cite{cohen2001music, parke2007quantitative}. While~\citet{prechtl2014methodological} has conducted research on generating music that evokes basic emotions in the context of games, very little research exists on developing music generation systems that follow the emotional content of films. Even commercial applications such as the web-based music generation app Jukedeck\footnote{\url{jukedeck.com}} do not yet take the emotional narrative into account. Jukedeck generates background music for YouTube videos using a combination of rules and deep learning. A prototype system that generates background music and sound effects for short animation films was developed by~\citet{nakamura1994automatic}. For each scene, music (harmony, melody, rhythm) is generated based on rules from music theory whilst taking into consideration the mood, the intensity of the mood, and the musical key of the previous scene. The sound effects are determined by the characteristics and intensity of the movements on screen. In the next subsection we discuss the related topic of game music. \subsubsection{Game music -- blending} The most dynamic form of narrative in music can be found in computer games, whereby a user creates his or her own unique scenario when moving through the game. The accompanying music needs to follow and support the suspense and emotion of the current game play. Game music is rarely generated on the fly. Short audio files are generally cross-faded together as the player moves through different game states~\citep{collins2008game}. An exception to this common practice can be seen in the game Depression Quest\footnote{\url{https://isaacschankler.bandcamp.com/album/depression-quest-ost}}, in which the music is generated dynamically as the user moves through the different scenarios of the game. 
With current cross-fading techniques, it is not uncommon for two fragments to clash rhythmically or harmonically, causing a jarring change in the music. The automatic DJ-system developed by~\citet{muller2012data} ensures smooth blending, yet the audio fragments need to be harmonically and rhythmically similar for the blending to work successfully. By restricting the range of harmonies and rhythms in this way, one also limits the musical variations and expressive capacity of the music. To overcome this limitation, some games implement procedural music approaches that use control logic or rules to govern playback. One of the first procedural music and audio approaches for computer games can be found in the game `Otocky' for the Famicom platform. Otocky is a side-scrolling shooter, whereby the player controls a ship that fires balls at both enemies and flying musical notes. The melody part is formed by the player's firing behavior and in-game mechanics, and is rendered on top of a two-note bass line~\cite{collins2009introduction}. For an overview of procedural music techniques, the reader is referred to~\citet{collins2009introduction}. More recent work has focused on incorporating elements of tension and emotion into adaptive game music. \citet{prechtl2016adaptive} created a system that generates music from scratch instead of using existing fragments. Prechtl uses a Markov model for chord generation that takes into account emotion parameters such as alarm or danger. His study uncovered correlations between mode and valence, and between tempo/velocity and arousal. \citet{casella2001magenta} created MAgentA (not to be confused with Google's music generation project Magenta), an abstract framework for video game background music generation that aims to create ``film-like'' music based on the mood of the environment using a cognitive model. At the time of publication, the authors mention that the framework was being integrated from the abstract level into the FantasyA Virtual Environment, but no further references could be found. The system developed by~\citet{brown2012mezzo} makes use of the concept of ``leitmotifs'', commonly used in Western opera. \citet{brown2012mezzo}'s system stores different forms of each motif corresponding to varying degrees of harmonic tension and formal regularity. This allows the system to choose the appropriate fragment corresponding to the current state and story of a game. The reader is referred to~\citet{collins2009introduction} for a more complete overview of dynamic music in games. \subsubsection{Generating music with tension} Musical tension is an important tool for evoking emotion. According to \citet{farbood2012parametric}, the way that different listening experiences translate into actual `affect' is a complex process. Musical tension, measured based on musical structures, provides a stepping stone to understanding and quantifying subjective, emotional responses. The link between emotion and tension has become apparent in many studies~\cite{sloboda1991music, krumhansl1997can, scherer2001emotional, steinbeis2006role}. If a music generation system can generate music according to given tension profiles, it becomes directly relevant for applications in game and film music. Recent research has made advances in the quantification of aspects of musical tension, such as tonal tension~\cite{lerdahl2007modeling, Herremans_tenor2016}, even combining them to produce a composite model~\cite{farbood2006quantitative}. 
Based on an extensive empirical experiment,~\citet{farbood2012parametric} built a tension model that takes into account multiple musical parameters to obtain one comprehensive tension rating. Farbood implemented an earlier version of her tension model~\citep{farbood2006quantitative}, which does not yet integrate multiple features, in the graphical computer-assisted composition system Hyperscore, in which users can intuitively edit and visualize musical structures as they compose music~\cite{farbood2007composing}. Hyperscore shows low-level and high-level musical features (such as color, shape, dynamics, harmonic tension) and maps them to graphical elements which can be controlled by the user when crafting compositions. Thus, a user can draw a tension profile and Hyperscore will generate music with a similar profile. Similarly, \citet{browne2009global}'s system arranges pre-written motifs according to a pre-specified tension profile using simulated annealing. An artificial neural network was used to compute tension profiles. The objective function of the algorithm was formed by taking the Kullback-Leibler divergence between the desired and observed tension profiles; the optimal arrangement was then taken to be the one that minimizes this distance. In this study, we focus on multiple aspects of \emph{tonal} tension independently, rather than considering a composite tension characteristic. The tonal component has proven to be a particularly strong structural influence on emotions. In~\citet{rutherford2002experiment}'s scary music study, the authors conclude that scarier music is generated by breaking the rules of Western tonal music. This result was empirically verified by users who rated the scariness of the generated music. The computational model used in this research for calculating tonal tension is discussed in more detail in Section~\ref{sec:tensionmodel}. The next section considers the importance of patterns in music. \subsection{Structural patterns in generated music} Music is more than just a succession of notes that only needs to sound good in the short term. Long-term structure, such as motives, patterns and variations of those patterns, is essential for an enjoyable and satisfying listening experience. Music generation systems that use traditional statistical sampling methods based on Markov models typically only ensure short-term relationships between notes~\citep{herremans2015generating}. One approach to obtaining long-term structure was implemented by~\citet{roig2014automatic}, whose system concatenates rhythmic and melodic patterns in order to form new melodies based on a combination of rules and a statistical method. More complex statistical learning methods, such as recurrent neural networks, have recently gained popularity in the field of music generation due to the availability of large amounts of digital music data and increased computing power. While the first neural network for melody generation was implemented in the late 80s~\cite{todd1989connectionist}, this approach has become more relevant due to the ability of these networks to learn complex relationships between notes given a large enough corpus. Recent research in audio transcription by~\citet{ICML2012BoulangerLewandowski_590} shows promising results for music generation as well. They use a piano roll representation for polyphonic pieces to build a model based on a Recurrent Temporal Restricted Boltzmann Machine (RT-RBM). This model learns harmony and melody, and local temporal coherence. 
Long-term structure, however, is not yet captured by the model. The polyphonic music generation system designed by~\citet{lattner2016imposing} implements convolutional restricted Boltzmann machines and constrains the self-similarity matrix of a generated piece to a template. Another approach to long-term structure is explored by~\citet{herremans2015generating}, who examined the integration of Markov models in an optimization algorithm. By looking at different ways a statistical model can be used to construct an objective function, the approach ensures that the generated music has the same statistical distribution of features as a target dataset of pieces. By treating the problem of music generation as an optimization problem,~\citeauthor{herremans2015generating} were able to impose larger-scale structure (e.g. ABBAC) on the generated music, in addition to short-term statistical constraints. The resulting optimization problem was solved by a VNS metaheuristic to generate music for bagana, an Ethiopian lyre. This approach is extended in the current research to polyphonic music, with automatic detection of more complex long-term patterns in the template piece. The detection method is described in greater detail in Section~\ref{sec:patterndetection}, after the next section, which focuses on the tension model. \section{Quantifying tension in music} \label{sec:tension} \label{sec:tensionmodel} Tension is a composite characteristic, which makes it very hard to capture or measure in a quantitative way. According to Mary Farbood~\cite{farbood2012parametric}, ``increasing tension is a feeling of rising intensity or impending climax, while decreasing tension can be described as a feeling of relaxation or resolution'' (p. 387). In~\citet{Herremans_tenor2016}, the authors developed a model for tonal tension based on the spiral array~\cite{chew2014tonality}, a three-dimensional model for tonality. The relevant part of the model is briefly described below, together with how it was implemented in the MorpheuS system to quantify tension in an optimization context. \subsection{The Spiral Array} The spiral array is a three-dimensional geometric model for tonality~\cite{chew2014tonality}. It consists of an outermost helix representing pitch classes (shown in Figure~\ref{fig:spiral}), and inner helices (not shown) representing higher-level tonal constructs such as chords (major and minor) and keys (major and minor), successively generated from their lower-level components. Any set of points in the spiral array can be weighted and summed to generate a \textit{center of effect} (\textit{c.e.}), representing the aggregate tonal effect of its components. 
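To make this geometry concrete, the following minimal Java sketch (not MorpheuS' actual code) computes positions on the pitch class helix and the c.e. of a weighted cloud of points. Pitch classes are indexed by steps along the line of fifths (e.g. F $=-1$, C $=0$, G $=1$), rising a quarter turn per fifth; the radius and vertical step below are placeholder values, as the calibrated model parameters are given in~\cite{chew2014tonality}. \footnotesize \begin{verbatim}
final class SpiralArraySketch {
    static final double R = 1.0;  // helix radius (placeholder value)
    static final double H = 0.4;  // vertical rise per fifth (placeholder)

    // Position of a pitch class, k steps along the line of fifths.
    static double[] pitchPosition(int k) {
        return new double[] { R * Math.sin(k * Math.PI / 2),
                              R * Math.cos(k * Math.PI / 2),
                              k * H };
    }

    // Center of effect: the normalized weighted sum of a cloud of points.
    static double[] centerOfEffect(double[][] pts, double[] w) {
        double[] ce = new double[3];
        double total = 0.0;
        for (int i = 0; i < pts.length; i++) {
            for (int d = 0; d < 3; d++) ce[d] += w[i] * pts[i][d];
            total += w[i];
        }
        for (int d = 0; d < 3; d++) ce[d] /= total;
        return ce;
    }

    // Euclidean distance between two points in the array.
    static double distance(double[] a, double[] b) {
        double s = 0.0;
        for (int d = 0; d < 3; d++) s += (a[d] - b[d]) * (a[d] - b[d]);
        return Math.sqrt(s);
    }
}
\end{verbatim} \normalsize These two primitives, point positions and weighted centers, are all that the three tension measures introduced next require: each measure reduces to Euclidean distances between points or c.e.'s in the array.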
\begin{figure*}[h] \centering \begin{subfigure}[b]{.3\textwidth} \includegraphics[width=1\textwidth]{spiral_diam2.pdf} \caption{Cloud diameter of a C major chord (small) versus the Tristan chord (large).} \label{fig:diam} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \includegraphics[width=1\textwidth]{spiral_mom.png} \caption{Cloud momentum from a C major chord to a C\# major chord.} \label{fig:mom} \end{subfigure} \begin{subfigure}[b]{.3\textwidth} \includegraphics[width=1\textwidth]{spiral_key.png} \caption{Tensile strain of a C major and C\# major chord in the key of C major.} \label{fig:key} \end{subfigure} \caption{An illustration of the three tension measures in the pitch class helix of the spiral array.} \label{fig:spiral} \end{figure*} Tonal representations in the spiral array mirror close tonal relationships between the entities, such as a perfect fifth between pitches, by their proximity in 3D space. For example, pitches one fifth apart are adjacent to each other in the pitch class helix (e.g. C-G) and pitches one major third apart are arranged vertically above one another (e.g. C-E). Similarly, the most likely key or chord of a cluster of pitches can be identified through a search for the key or chord representation nearest to the c.e. of the pitch cluster. The tension model in MorpheuS uses only the pitch class and the major and minor key helices. The spiral array takes pitch spelling into account, meaning that enharmonically equivalent (but differently-spelled) pitches, such as G\# and Ab, have different spatial representations. The interested reader can refer to~\cite{chew2014tonality} for a full description of the spiral array model. \subsection{A quantitative model of tension} The model, developed by the authors in~\cite{Herremans_tenor2016}, represents tonal tension as captured by the geometry in the spiral array. The software that calculates the tension according to this model is freely available online\footnote{\url{http://dorienherremans.com/tension}}. In order to calculate the tonal tension of a musical fragment, the piece is first divided into equal-length segments, which can be mapped to clouds of points in the spiral array. The segment length is expressed in beats and can be set by the user (the default setting is a $\frac{1}{8}$ note); a more detailed discussion of the effect of the segment length can be found in~\cite{Herremans_tenor2016}. Based on these clouds, three measures of tonal tension can be computed: \begin{description} \item[Cloud diameter] captures the diameter of the cloud of notes, which measures the dispersion of the cloud in tonal space. \item[Cloud momentum] reflects the movement in tonal space between two consecutive clouds of notes, by quantifying the distance between their c.e.'s. \item[Tensile strain] measures the distance between the c.e. of a cloud and the position of the global key in the array. \end{description} Figure~\ref{fig:spiral} illustrates each of the three tension measures with the pitch class helix of the spiral array. On the left, the (small) cloud diameter of a C major triad is shown together with the (much larger) diameter of the Tristan chord, a well-known tense chord~\cite{magee2002tristan}. The large tonal distance traversed by a transition from the C major to the C\# major chord is illustrated in Figure~\ref{fig:mom}, an example of the cloud momentum measure. 
Finally, Figure~\ref{fig:key} visualizes the tonal distance between the c.e.'s of each of these two chords and the key of C major, which shows two contrasting tensile strain measures. For exact mathematical details of how to calculate the three measures of tension, the reader is referred to~\cite{Herremans_tenor2016}. MorpheuS uses these three tension characteristics to evaluate the musical output and match it to given template tension profiles. The weights for each of these characteristics can be set by the user, reflecting the aspect of tension deemed most important. The integration of tension in the objective function of the optimization is discussed in detail in Section~\ref{sec:opt}. The next section focuses on the pattern detection algorithm implemented in MorpheuS to improve long-term coherence. \section{Detecting recurring patterns} \label{sec:patterndetection} Automatic recognition, description, classification and grouping of patterns are important problems in many domains~\cite{jain2000statistical}. Applications include image segmentation~\cite{sclove1983application}, human action recognition~\cite{ji20133d}, face description~\cite{ahonen2006face}, DNA sequence analysis~\cite{gingeras1998simultaneous}, speech recognition~\cite{itakura1975minimum}, music genre recognition~\cite{tzanetakis2002musical}, and affective computing~\cite{picard1997affective}. We focus on pattern analysis for polyphonic music. When listening to a musical piece, a listener is able to recognize structure through perceiving repetition and relationships between parts of the piece of music. In order for a generated musical piece to sound natural, such patterns should exist. MorpheuS uses recurring patterns such as themes and motives from a template piece to fix these structural elements in a new composition. The detected patterns consist of groups of notes that can recur transposed in different places throughout the piece. There has been research on pattern detection techniques for music audio~\cite{dannenberg2003pattern, aucouturier2007bag}, but our focus is on symbolic music (i.e. MIDI). MorpheuS uses two state-of-the-art greedy compression-based algorithms for MIDI, COSIATEC and SIATECCompress~\cite{meredith2013cosiatec}, both based on Meredith's ``Structure Induction Algorithm'' (SIA) and SIATEC. SIA finds all the maximal translatable patterns (MTP) in a point-set and SIATEC discovers all translational equivalence classes (TECs) of MTPs in a point-set~\cite{meredith2002method}. The performance of both algorithms is benchmarked on a compression task in~\cite{meredith2013cosiatec}. The specific application of finding patterns for music generation requires special consideration when applying these algorithms. A discussion of the effect of parameter choices on the chosen pattern detection algorithm can be found in Section~\ref{sec:patternresults}. MorpheuS offers the user a choice of which algorithm to use, as each has its own strengths and weaknesses. When applied to polyphonic MIDI files, the compression algorithms use a point-set representation of the score, which positions each note in a two-dimensional pitch/time space. They then compute a compressed encoding, which takes the form of a set of TECs of maximal-length patterns. An example output of COSIATEC in pitch/time space is shown in Figure~\ref{fig:patterns}, whereby the time is expressed in tatums, i.e., ``the regular time division that mostly coincides with all note onsets''~\citep{bilmes1993timing}. 
Two longer patterns (displayed in red and green) are detected in the figure. A pattern (or a repetition of a pattern) is shown as a connected sequence of points, and its TEC consists of a musical transposition of the original pattern (one translation unit is a semitone). The two main patterns in the fragment recur, transposed, in the other hand. The red pattern, for instance, starts on the fifth note of the right hand (C); it recurs in the second bar (left hand) at the fifth note, transposed two octaves down. The encoded representation of the red (wavy) pattern in the figure is as follows: \footnotesize \begin{verbatim} T(P(p(360,72),p(480,71),p(600,75),p(720,76),p(840,70)), V(v(0,0),v(480,-2),v(1920,-24),v(2400,-26))) \end{verbatim} \normalsize whereby the set of pairs \verb|P()| represents a maximal-length pattern, consisting of individual points \verb|p()| in pitch/time space. The set \verb|V()| contains the translation vectors \verb|v()|, which, when applied to \verb|P()|, form a translationally equivalent pattern. The combination of the pattern and its translation vectors forms \verb|T()|, a translational equivalence class of maximal-length patterns (MTP TEC); a small illustrative sketch of this expansion is given at the start of the next section. \begin{figure}[ht!] \centering \begin{subfigure}[b]{1.0\columnwidth} \includegraphics[width=\columnwidth]{bach20_score.png} \caption{First two bars of Bach's 20th prelude (Book II). } \end{subfigure} \vspace{0.1in} \begin{subfigure}[b]{1.0\columnwidth} \includegraphics[width=\columnwidth]{bach20first2.png} \caption{Patterns detected with COSIATEC~\cite{meredith2013cosiatec}.} \end{subfigure} \caption{COSIATEC applied to a short musical excerpt. Each TEC is represented with a different color~\cite{herremans2016morpheus}. } \label{fig:patterns} \end{figure} The first algorithm implemented in MorpheuS, SIATECCompress, runs SIATEC once to get a list of MTP TECs and then selects a subset of TECs that covers the input dataset~\cite{meredith2015music}. The second algorithm, COSIATEC, on the other hand, iteratively uses SIATEC to find the best TEC, then removes this from the input dataset and repeats the process~\cite{meredith2013cosiatec}. Both algorithms result in a set of TECs with high compression ratios that cover a point-set. The encodings generated by COSIATEC are generally more compressed, meaning that the size of the file listing all TECs is smaller. SIATECCompress produces patterns that may intersect, which may be more relevant in the context of music analysis, as a note may belong to more than one musically meaningful pattern. SIATECCompress performed best in the 2013 and 2014 MIREX competitions on ``Discovery of repeated themes and sections'', and COSIATEC outperformed SIATECCompress on a Dutch folk song classification task with an accuracy rate of 84\%~\cite{meredith2015music}. The next section describes polyphonic music generation as an optimization problem in which hard constraints impose the way the patterns repeat in the generated piece. This allows us to constrain the form and repetition structures of the newly generated piece. \section{Optimization problem} \label{sec:opt} In this research, generating music is modeled as an optimization problem. The main advantage of this is that it allows us to constrain global structure, consisting of repeated patterns, and to optimize the music to fit a tension profile. In this section, the resulting combinatorial optimization problem is formally defined. 
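As referenced in the previous section, the following hedged Java sketch (hypothetical names and data layout, not MorpheuS' actual data structures) shows how a TEC \verb|T(P, V)| expands into the notes it covers: each translation vector, applied pointwise to the pattern, yields one occurrence, with \verb|v(0,0)| reproducing the original pattern itself. \footnotesize \begin{verbatim}
import java.util.ArrayList;
import java.util.List;

final class TecSketch {
    // An (onset, pitch) point in the two-dimensional
    // pitch/time representation of the score.
    record Point(int onset, int pitch) {}

    // Expand a TEC into all of its occurrences:
    // one occurrence per translation vector.
    static List<List<Point>> expand(List<Point> pattern,
                                    List<Point> vectors) {
        List<List<Point>> occurrences = new ArrayList<>();
        for (Point v : vectors) {
            List<Point> occ = new ArrayList<>();
            for (Point p : pattern)   // translate every point of P by v
                occ.add(new Point(p.onset() + v.onset(),
                                  p.pitch() + v.pitch()));
            occurrences.add(occ);
        }
        return occurrences;
    }
}
\end{verbatim} \normalsize Only the pitches of the original pattern occurrence act as decision variables in what follows; every translated occurrence is derived from them, a property the hard constraints below exploit directly.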
\subsection{Variables} The algorithm starts with a template piece whose rhythm and dynamics are treated as constants in the generated piece. The aim of MorpheuS is then to find a new set of pitches $x$, one for each note of the template piece, that minimizes the objective function and satisfies the repeated pattern constraints. \subsection{Objective function} The objective of the optimization problem is to find a solution $x$ that matches a given tension profile as closely as possible. This tension profile can either be calculated from the template piece $t$ or be manually input by the user. It comprises three parts: one for each of the three tension measures $i \in \{0,1,2\}$ from Section~\ref{sec:tension}, represented as a vector $T_i(x)$ with length $l_i$. Since we want to match the tension profile of the solution $x$ to that of the template $t$, we calculate the distance between these two tension profiles: \begin{equation} \label{eq:distance} D_i(x) = \sum_{j = 1}^{l_i}\sqrt{(T_{ij}(x) - T_{ij}(t))^2}. \end{equation} The weighted sum of the distances for each of the tension measures forms the objective function $D(x)$, which we aim to minimize: \begin{equation} \label{eq:distance2} D(x) = \sum_{i = 0}^{2} a_i \times D_i(x), \end{equation} \noindent where $a_i$ is the weight for tension measure $i$. The weights offer the user a way to specify the relative importance of certain tension measures. In this paper, the weights are all set to 1. \subsection{Soft constraints} In addition to the hard constraints to be described in the next section, the user can elect to fix certain pitches in the solution. In order to do this, an additional term is added to the objective function $D(x)$ which imposes an arbitrarily high penalty if $pitch(n_j)$ of note $j$ is not set to the required pitch ($setpitch(n_j)$): \begin{equation} D'(x) = D(x) + b \times \sum_{j} C(n_j), \end{equation} whereby the sum runs over all notes $n_j$ of the piece, \begin{equation*} C(n_j)= \begin{cases} 0, & \text{if $pitch(n_j) = setpitch(n_j)$}\\ 1, & \text{otherwise} \end{cases} \end{equation*} and $b$ is an arbitrarily large number. \subsection{Hard constraints} A number of the variables (pitches) of the solution $x$ are hard-constrained to enforce the patterns detected in the template piece (as described in Section~\ref{sec:patterndetection}). This constraint ensures the recurrence of themes and motives in the output musical piece. The data structure used to store the solution is such that only the pitches of the original occurrence of each pattern \verb|P()| need to be decided. All other occurrences of a pattern are automatically set based on the pitches of the original pattern and the set of translation vectors \verb|V()| of the pattern. This setup speeds up the algorithm, as it drastically reduces the number of variables in the set $x$. In addition to the pattern constraints, an additional hard constraint is imposed on the pitch range for each track. This range is set based on the lowest and highest occurring pitch in the template piece for each track. Within this range, all possible pitches are allowed. \section{Variable Neighborhood Search} In this section, we describe the variable neighborhood search (VNS) algorithm used to solve the optimization problem defined above. Much of the research on the development of metaheuristics stems from more traditional fields such as vehicle routing and scheduling. 
In this research, we chose to implement a VNS algorithm because it has been shown to outperform several other heuristics (including genetic algorithms) on a range of problems~\citep{hansen2001variable}. Since its inception in the late 90s, it has been successfully applied to problems in combinatorial optimization including project scheduling~\citep{fleszar2004solving}, finding extremal graphs~\citep{caporossi2000variable}, vehicle routing~\citep{braysy2003reactive}, graph coloring~\citep{avanthay2003variable}, and piano fingering~\cite{balliauw15}. A VNS algorithm has previously been developed for generating counterpoint music~\cite{herremans2012composing,herremans2013composing}. This algorithm has proven to be efficient and outperformed a genetic algorithm implemented on the same problem. The inner workings of the algorithm have been modified to work with complex polyphonic piano music, and the constraints and objective function described in Section~\ref{sec:opt} have been integrated into the algorithm. \subsection{Local search components} The core of a VNS algorithm is a local search strategy. Local search typically starts from an initial solution $x$, and iteratively makes a small change (i.e. a move) in order to find a better solution. We refer to the set of solutions $x'$ that can be reached by applying one type of move to a solution as the \emph{neighborhood}. Here, the neighborhood consists of all solutions that can be reached by applying one type of move to any of the time slices of the piece. A \emph{first descent strategy} was implemented in MorpheuS, whereby the neighborhood is built for one note/time slice at a time. As soon as a (feasible) solution is found that has a better value for the objective function $D(x')$, this solution is accepted as the new current solution $x$. An additional strategy for accelerating the search applies the moves chronologically from the start to the end of the piece. When a move is successful, this change will affect the tension profile only in its immediate vicinity. Therefore, the algorithm backtracks only 4 time slices and then resumes the search. \vspace{-.4cm} \begin{figure}[h] \centering \includegraphics[width=0.42\textwidth]{Pmovesb.png} \caption{An example of a potential move using each of the three different types of moves~\cite{herremans2016morpheus}. } \label{fig:moves} \end{figure} \vspace{-.1cm} Three types of moves are implemented in MorpheuS, based on~\cite{herremans2012composing}. An example of each type of move is displayed in Figure~\ref{fig:moves} using a very short fragment. The \nbh{change1} move changes the pitch of one note to each of the other possible pitches in the allowed range to form the neighborhood. The \nbh{swap} move consists of all musical pieces that can be created by swapping the pitches of any two notes of the current piece. Finally, \nbh{changeSlice} changes the pitches of two randomly chosen notes in a vertical time slice to each of the other allowed pitches in the range. The respective size of the neighborhood generated by each of these three types of moves is displayed in Table~\ref{tab:size}. In order to speed up the algorithm, a first descent strategy is implemented, in which the neighborhood is built one move at a time. Whereas a steepest descent strategy would generate the full neighborhood, the first descent strategy accepts a new solution as soon as it improves upon the value of the current solution (see previous subsection). 
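As an illustration of the strategy just described, the following hedged Java sketch performs one first-descent pass of the \nbh{change1} move over a flattened pitch array. All names are hypothetical, the cost function stands in for the objective $D(x)$, and the feasibility and pattern-constraint checks of Section~\ref{sec:opt} are omitted for brevity. \footnotesize \begin{verbatim}
import java.util.function.ToDoubleFunction;

final class FirstDescentSketch {
    // Try change1 moves in chronological order and accept the first
    // improving pitch; returns false if no improving move exists,
    // i.e. a local optimum has been reached for this move type.
    static boolean change1(int[] pitches, int lo, int hi,
                           ToDoubleFunction<int[]> cost) {
        double current = cost.applyAsDouble(pitches);
        for (int i = 0; i < pitches.length; i++) {
            int original = pitches[i];
            for (int p = lo; p <= hi; p++) {
                if (p == original) continue;
                pitches[i] = p;                     // tentative move
                if (cost.applyAsDouble(pitches) < current)
                    return true;                    // first descent: accept
            }
            pitches[i] = original;                  // undo and continue
        }
        return false;
    }
}
\end{verbatim} \normalsize When such a pass returns \texttt{false}, the search switches to the next move type, mirroring the neighborhood changes described in the next subsection.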
\begin{table}[h] \centering \caption{Size of the neighborhood generated by each move type for a piece consisting of $n$ chords, each containing $m$ notes, and with a pitch range of $p$.} \label{tab:size} \begin{tabular}{lc} \toprule Move type & Neighborhood size \\ \midrule \nbh{change1} & $m \times n \times p$\\ \nbh{swap} & $\binom{n \times m}{2}$ \\ \nbh{changeSlice} & $p^2$ \\ \bottomrule \end{tabular} \end{table} \subsection{Outline of the VNS} A system diagram of the full algorithm implemented in MorpheuS is shown in Figure~\ref{fig:vns}. The VNS starts from a random (feasible) solution, which is built by assigning pitches from the allowed range in a uniformly random manner. This initial solution is set as the current solution $x$. \begin{figure}\hspace{-1cm} \includegraphics[width = .6\textwidth]{vns_flow_morpheus.pdf} \caption{Flow chart of the variable neighborhood search algorithm~\cite{herremans2016morpheus}.} \label{fig:vns} \end{figure} The algorithm then performs local search using the \nbh{change1} neighborhood. When no improving feasible solution can be found, the VNS switches the local search to a different neighborhood type (e.g. \nbh{changeSlice}), which allows the search to continue~\citep{mladenovi1997variable}. This process is repeated until no better solution can be found in any of the neighborhoods, in which case the algorithm is said to have arrived at a local optimum. The VNS algorithm implements a perturbation strategy to escape this local optimum and then continues the search for the global optimum~\citep{hansen2003variable}. A perturbation re-assigns a significant proportion of the pitches, each to a uniformly random (feasible) pitch. Based on the research of~\citet{herremans2013composing}, the proportion of perturbed pitches was set to 12\%. In a random restart strategy, the algorithm would start again from a totally random solution whenever a local optimum is reached; the perturbation strategy implemented in the VNS, however, leads to far better results~\citep{lourenco2003iterated}. The search process continues until the stopping criterion (i.e. a maximum allowed number of iterations) is reached. The order in which the different types of moves are applied is based on the increasing computational complexity of calculating the full neighborhood. In the next section, we evaluate the implemented algorithm and its musical results. \section{Results} The MorpheuS system is evaluated on three levels. The first examines the effect of pattern detection on the musical outcome. Next, we consider the efficiency of the optimization algorithm. Last but not least, the generated musical output is evaluated and compared to the original template piece. \subsection{Effect of pattern detection algorithm} \label{sec:patternresults} The selected pattern detection algorithm (COSIATEC versus SIATECCompress) exerts a strong influence on the resulting pieces. Short but frequent patterns can overly constrain the generation process, forcing it to converge quickly to the original piece. Infrequently repeated patterns, even though they may be long, do little to constrain long-term structure. The user can specify which algorithm is used: COSIATEC, which captures each note in precisely one pattern, or SIATECCompress, which captures more relationships between different notes, resulting in overlapping patterns. Each of these algorithms in turn has additional settings, such as maximum and minimum pattern lengths. 
We have generated three different pattern sets based on an excerpt of Rachmaninov's ``\'Etude Tableau Op. 39, No. 6'', shown in Figure~\ref{fig:rach3}. \begin{figure*}[hbt!] \centering \includegraphics[width=.8\textwidth, trim={0cm 8cm 0cm 2cm}, clip]{rach2_short.pdf} \caption{Original excerpt from Rachmaninov's ``\'Etude Tableau Op. 39, No. 6''} \label{fig:rach3} \end{figure*} \begin{figure*}[h] \begin{subfigure}[h]{\textwidth} \centering \includegraphics[height=2.6cm, width=.8\textwidth]{rach2A.png} \caption{Pattern set A: COSIATEC, minimum pattern length 5, compression ratio: 1.65, number of TECs: 6} \label{fig:pa1} \end{subfigure} \begin{subfigure}[h]{\textwidth} \centering \includegraphics[height=2.6cm, width=.8\textwidth]{rach2C.png} \caption{Pattern set B: COSIATEC, maximum pattern length 2, compression ratio: 1.67, number of TECs: 6} \label{fig:pa3} \end{subfigure} \begin{subfigure}[h]{\textwidth} \centering \includegraphics[height=2.6cm, width=.8\textwidth]{rach2D.png} \caption{Pattern set C: SIATECCompress (no restrictions on the pattern length), compression ratio: 1.58, number of TECs: 11. Each TEC is represented with a different color.} \label{fig:pa2} \end{subfigure} \caption{Different patterns detected in Rachmaninov's ``\'Etude Tableau Op. 39, No. 6''} \label{fig:patternsRach3} \end{figure*} Based on this excerpt, we have calculated three sets of repeated patterns, as displayed in Figure~\ref{fig:patternsRach3}. Their main properties, namely, compression ratio, number of notes in patterns \verb|P()|, and the size of the TECs, are shown in Table~\ref{tab:patterns}. Each of these three pattern sets was then used as a structural template during music generation in MorpheuS. The resulting pieces, generated with a short run of 10 iterations (less than 1 minute of generation time on a Dell XPS 13 laptop with an Intel Core i7 and 8GB RAM), are displayed in the figure in Appendix~A. An example of each detected pattern, together with its set of translation vectors, is shown on each score in green and orange, respectively. The first pattern set (A), shown in Figure~\ref{fig:patternsRach3}(a), was detected using COSIATEC with a minimum pattern length of 5. This resulted in 6 TECs with a compression ratio of 1.65, and 69 unique notes that needed to be optimized by MorpheuS (see Table~\ref{tab:patterns}). The resulting piece, created by iterating through the VNS 10 times, is displayed in (a) of Appendix~A. The music retains some of the contours of the original piece, but also contains a great deal of new musical content. When constraining COSIATEC to detect only very short patterns of maximum length 2, we obtain a set of TECs (B), shown in Figure~\ref{fig:patternsRach3}(b). This yields a very similar compression ratio, yet the piece generated based on this template pattern is very different. In this case, the original piece is almost replicated exactly due to the many constraints posed by the set of TECs. The prevalence of such short patterns typically severely limits the originality of the music generated. Here, MorpheuS only has 11 notes to optimize; the others were derived from the translation vectors of the patterns. Pattern set C, shown in Figure~\ref{fig:patternsRach3}(c), shows the results of running SIATECCompress without any constraints on pattern length. Although the compression ratio of COSIATEC is often higher than that of SIATECCompress~\cite{meredith2015music}, musically speaking, the latter may have further benefits. 
In SIATECCompress, a note can be contained in multiple sets of TECs, which allows the algorithm to find a different and larger set of TECs than COSIATEC. This may result in the algorithm capturing more meaningful musical relationships. The resulting music generated using pattern set C as a template offers a mix between the highly constrained nature of pattern set B and the freedom of pattern set A. An example of this can be found in the ascending pattern in bars 1 and 3. In pattern set C, both bars have an ascending line, yet the starting note is different. Pattern set B generates a more constrained output, whereby both bars are identical. With pattern set A, we see a much freer interpretation, whereby the two bars bear minimal resemblance to each other. Although each of the three example pattern sets offers a way to constrain long-term structure in generated music, the degree to which they constrain pitch patterns has a significant effect on the musical outcome. \begin{table}[h] \centering \caption{Pattern sets generated for Rachmaninov's ``\'Etude Tableau Op. 39, No. 6''} \label{tab:patterns} \begin{tabular}{lllll} \toprule & Algorithm & CR & UP & TECs \\ \midrule Pattern set A & COSIATEC & 1.65 & 69 & 6 \\ Pattern set B & COSIATEC & 1.67 & 11 & 6 \\ Pattern set C & SIATECCompress & 1.58 & 34 & 11 \\ \bottomrule \end{tabular} \vspace{.2cm} \scriptsize CR: compression ratio, UP: number of pitches to be set by MorpheuS \end{table} \subsection{Evolution of solution quality} A formal comparison with other existing systems was not possible, as MorpheuS is the first algorithm that implements a tension-based objective function with pattern constraints. The use of VNS for generating counterpoint, on which MorpheuS is based, has been tested extensively in~\cite{herremans2012composing} and shown to outperform a genetic algorithm on the same task. We can thus assume that, in the more constrained musical task considered here (due to the imposed patterns), the algorithm will be at least as effective. In order to verify the effectiveness of the algorithm, it was run 100 times on Kabalevsky's ``Clowns'' (from \textit{24 Pieces for Children}, Op. 39 No. 20) with SIATECCompress patterns, and on the Rachmaninov ``\'Etude Tableau Op. 39, No. 6'' shown above (using pattern set C). Figure~\ref{fig:100runs} shows the range and average of the best objective function value found over time for 100 runs of the VNS for both pieces. The experiment was performed on a Dell XPS Ultrabook with an Intel Core i7 and 8GB RAM. The average running time of the VNS was 136 seconds for ``Clowns'' and 526 seconds for the Rachmaninov piece. The size of the solution was 34 and 84 notes, respectively. \begin{figure}[hbt!] \centering \begin{subfigure}[h]{.45\textwidth} \includegraphics[width=1\columnwidth]{plot100rach.pdf} \caption{For Rachmaninov's ``\'Etude Tableau Op. 39, No. 6'' (with pattern set C)} \label{fig:rachvns100} \end{subfigure} \begin{subfigure}[h]{.45\textwidth} \includegraphics[width=1\columnwidth]{plot100clowns.pdf} \caption{Kabalevsky's ``Clowns''.} \label{fig:clownsvns100} \end{subfigure} \caption{Evolution of the objective function over time, for 100 runs of the VNS. The plotted line shows the mean best solution found by the VNS over 100 runs. 
The ribbons show the maximum and minimum objective function values of the best solution found over the 100 runs at each move.} \label{fig:100runs} \vspace{-.4cm} \end{figure} Figures~\ref{fig:rachvns100} and~\ref{fig:clownsvns100} clearly show a steep improvement during the initial seconds of the algorithm's run for both pieces. This pattern can be observed for each of the 100 runs, as the maximum value of the best solution found (i.e. the worst run of the algorithm) goes down quickly. Even after 2 minutes, the algorithm manages to find small improvements to the current solution. \begin{figure}[hbt!] \centering \includegraphics[width=0.8\columnwidth, trim=0.1cm 0cm 0cm 0cm, clip]{plotstime.pdf} \caption{Evolution of the objective function over time for one run of the VNS, for Kabalevsky's ``Clowns''.} \label{fig:clownsvns} \end{figure} In Figure~\ref{fig:clownsvns}, we isolate one particular run of the algorithm on ``Clowns''. A clear descending trend can be observed when looking at the best solution found over time. The peaks in the graph indicate points of perturbation. Whenever the search gets trapped in a local optimum, the current solution is perturbed, leading to a temporarily worse solution. Note that even after 500 moves, the perturbation step manages to escape from a local optimum to find a better solution, thus confirming that the perturbation strategy is successful. \begin{figure*}[hbt!] \centering \begin{subfigure}[h]{.49\textwidth} \includegraphics[width=1\textwidth, clip, trim={7cm 0 5cm 0}]{diam_start.png} \caption{Cloud diameter: random (solid) \& original piece (dashed)} \label{fig:key_random} \end{subfigure} \begin{subfigure}[h]{.49\textwidth} \includegraphics[width=1\textwidth, clip, trim={7cm 0 5cm 0}]{diam_end.png} \caption{Cloud diameter: optimized (solid) \& original piece (dashed)} \label{fig:mom_random} \end{subfigure} \begin{subfigure}[h]{.49\textwidth} \includegraphics[width=1\textwidth, clip, trim={7cm 0 5cm 0}]{cloud_start.png} \caption{Cloud momentum: random (solid) \& original piece (dashed)} \label{fig:key_start} \end{subfigure} \begin{subfigure}[h]{.49\textwidth} \includegraphics[width=1\textwidth, clip, trim={7cm 0 5cm 0}]{cloud_end.png} \caption{Cloud momentum: optimized (solid) \& original piece (dashed)} \label{fig:mom_start} \end{subfigure} \begin{subfigure}[h]{.49\textwidth} \includegraphics[width=1\textwidth, clip, trim={7cm 0 5cm 0}]{key_start.png} \caption{Tensile strain: random (solid) \& original piece (dashed)} \label{fig:key_end} \end{subfigure} \begin{subfigure}[h]{.49\textwidth} \includegraphics[width=1\textwidth, clip, trim={7cm 0 5cm 0}]{key_end.png} \caption{Tensile strain: optimized (solid) \& original piece (dashed)} \label{fig:mom_end} \end{subfigure} \caption{The three types of tension profiles before and after optimization of Kabalevsky's ``Clowns'', together with the template profile (dashed). The y-axes represent the value of each tension score (cloud diameter, tensile strain and cloud momentum), and the x-axes represent score time. } \label{fig:tensionclowns} \vspace{-.4cm} \end{figure*} The three types of tension profiles (cloud diameter, tensile strain and cloud momentum) are shown in Figure~\ref{fig:tensionclowns}. The graphs show the original tension profile of the template piece (dashed line), and snapshots of the tension profiles of the output piece before optimization (the random piece) and after optimization. 
It can be noticed that the tension profile of the original piece fluctuates between tension and relaxation, a dynamic that~\citet{lerdahl1987generative} discuss in their Generative Theory of Tonal Music. The profiles of the random piece, however, are much more erratic and distant from those of the original piece. Overall, the tension is also higher for the random piece, which can in part be explained by the dissonance that we can expect in pieces with random pitches. A striking similarity can be seen between the tension profiles of the original piece (template) and the optimized piece, yet again confirming that the optimization algorithm indeed finds a solution that minimizes the objective function. This is confirmed by the correlation coefficients, which are high between the tension profiles of the optimized and template piece (0.9748, 0.9918, 0.9993), and lower between the random initial solution and the template piece (0.4103, 0.2053, and 0.6174). In the next section we focus on the actual musical results. \subsection{Musical outcome} It must be noted that in the examples given in this paper, the goal of the optimization is to fit the original tension profile of the \emph{template} piece as closely as possible. This explains a tendency to revert to the original piece, but offers us a way to verify that the optimization algorithm performs well. A future version of MorpheuS may include a `similarity penalty' in the objective function, to enforce originality in the generated pieces. Currently, the user is free to design their own tension profile to create an original piece, or to use the tension profile of a template piece. Appendix~B shows the musical output of the optimization process described in the previous section, together with the initial (random) starting piece, based on the template piece ``Clowns'' by Kabalevsky. A significant improvement in musical quality can be noticed between the random and the optimized piece. One of the most evident aspects is that the latter (optimized) piece is much more tonal. We can also see some long-term recurrent structures; for example, the theme from bars 1-5 returns in bars 18-22. The reader is invited to listen to this and other output generated by MorpheuS online\footnote{\url{http://dorienherremans.com/morpheus}}. While these first tests are promising, there is still room for improvement. One interesting improvement would be to add constraints on playability. While the current pitch range constraint ensures that assigned notes occur within the range of the template piece, the output can be far from idiomatic for the instrument, and factors such as the unexpectedness of the note sequences can make it difficult to play. Improved playability could be achieved by using a statistical machine learning approach (e.g. a Markov model or recurrent neural network) to integrate transition probabilities into the current objective function. Following live performances of MorpheuS' pieces, we have received a range of comments from expert musicians, reflecting interesting perspectives on MorpheuS' compositions that inform future developments of the system. 
Upon hearing MorpheuS' version of a Haydn sonata movement, an academic composer remarked that the system shows some competence with the repetition of material, but does not develop that material; that it does not know (like Stravinsky) how to use the `right' kind of `wrong' notes; that its use (or misuse) of cadences, with cadential figures inserted in odd places, was the most obvious anomaly distinguishing it from human composition; and that the evolution of harmony with respect to phrase structure does not work, as the tension levels do not relate to the phrase structure. Several expert listeners remarked on the humor apparent in MorpheuS' pieces, particularly the ways in which it naively violates conventional expectations, often with surprising and sometimes adventurous but plausible results. The advantage of this naivet\'{e}, as one listener put it, is that the system ``has no fear'' and thus ``has courage to try things'', unencumbered by the conditioning that constrains human composers to behave or make choices only in ways that are deemed acceptable. This was in the context of three morphed pieces by Bach and three morphed pieces by Kabalevsky. In a few of these pieces, there were awkward harmonic moments when returning to the beginning of a repeated section, leading one expert listener to comment that MorpheuS lacked the ability to make good transitions. However, the listener found it fascinating to hear the original Bach pieces (from ``A Little Notebook for Anna Magdalena'') through the lens of MorpheuS' compositions. In contrast, another expert listener found the Kabalevsky pieces ``more honest'' than the Bach ones, likely because the original Bach pieces were too recognizable in the morphed ones, yet lacked certain characteristics typically associated with the pieces. \section{Conclusions} MorpheuS is a novel framework that tackles one of the main challenges in the field of automatic music generation: long-term structure. An efficient variable neighborhood search algorithm was developed that enables the generation of complex polyphonic music, with recurring patterns, according to a given tension profile. Two pattern detection techniques extract repeated and translated note patterns from a template piece. This structure is then used as scaffolding to constrain long-term structure in new pieces. The objective function of the optimization problem is based on a mathematical model of tonal tension. This allows for the generation of pieces with a predefined tension profile, which has potential applications in game and film music. The pieces generated by MorpheuS have proved promising and have been tested in live performance scenarios at concerts internationally. In future research it would be interesting to explore the integration of machine learning methods, e.g. deep learning techniques or Markov models, into the objective function of the optimization algorithm. This way, a style could be learned from a large corpus of music and, in turn, be reflected in the generated piece. We expect that this would also improve the playability of the pieces and reduce awkward transitions. A second expansion would be to allow for more flexible pattern detection, such as the recognition of inverted patterns, augmentations, diminutions, and other variations. 
It would equally be interesting to evaluate whether the generated music elicits the emotional responses expected given a tension profile, by measuring physiological responses or by recording listener judgments of tension as described in~\cite{kim2008emotion,agres2015creativity}. The tension model could also be expanded to capture other characteristics of tension such as timbre and cadence. With regard to interaction design, it would also be interesting to test a transformational approach in which the optimization starts from the original piece and searches for a new piece that matches a tension profile provided by the user, thus transforming the original piece. Finally, in the context of adaptive game music generation, the VNS algorithm could be modified to allow for real-time generation, much like the system that \citet{antor13} implemented as the FuX 2.0 mobile music generation app. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 658914. We are grateful to Prof. Dr. David Meredith for providing us with implementations of his COSIATEC and SIATECCompress algorithms. Finally, we thank the expert listeners who provided anecdotal feedback following performances of MorpheuS' output. They were Dr. Uzial Awrat, Dr. Oded Ben-Tal, Dr. Paul Edlin, and Carla Townsend Sturm. \bibliographystyle{IEEEtranN} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{dorien.png}}] {Dorien Herremans} is an Assistant Professor at the Information Systems Technology and Design Pillar at the Singapore University of Technology and Design, with a joint appointment at the Institute of High Performance Computing at the Agency for Science Technology and Research (A*STAR). In 2015, she was awarded the individual Marie Sk\l{}odowska-Curie Fellowship for Experienced Researchers, and worked at the Centre for Digital Music, Queen Mary University of London, on the project: ``MorpheuS: Hybrid Machine Learning – Optimization techniques To Generate Structured Music Through Morphing And Fusion''. Prof. Herremans received her PhD in Applied Economics from the University of Antwerp. Her PhD thesis was titled ``Compose$\equiv$Compute: Computer Generation and Classification of Music through Operations Research Methods''. After graduating as a commercial engineer in management information systems at the University of Antwerp in 2005, she worked as a Drupal consultant and was an IT lecturer at Les Roches University in Bluche, Switzerland. She also worked as a teaching and research assistant (mandaatassistent) at the University of Antwerp, in the domain of operations management, supply chain management and operations research, and was a visiting researcher at the Department of Computer Science and Artificial Intelligence at the University of the Basque Country, San Sebasti\'an. Prof. Herremans' research focuses on the intersection of machine learning/optimization and digital music. She is a Senior Member of the IEEE and co-organizer of the First International Workshop on Deep Learning and Music as part of IJCNN. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{elaine.png}}] {Elaine Chew} is Professor of Digital Media in the School of Electronic Engineering and Computer Science at Queen Mary University of London (QMUL) where she is affiliated with the Centre for Digital Music.
Prior to joining QMUL in 2011, she was a tenured associate professor at the University of Southern California, where she was the inaugural holder of the Viterbi Early Career Chair. She was a recipient of the US Presidential Early Career Award in Science and Engineering and NSF CAREER Award, and the Edward, Frances, and Shirley B. Daniels Fellow at the Radcliffe Institute for Advanced Study. She is also an alum of the NAS Kavli Frontiers of Science and NAE Frontiers of Engineering Symposia. Her research centers on the mathematical and computational modeling of music structure, musical prosody, music cognition, and ensemble interaction. She is author of over 100 peer-reviewed chapters and articles, and author and editor of 8 books and journal special issues on music and computing. She has served as program and general chair of the International Conferences on Music Information Retrieval (2008) and of Mathematics and Computation in Music (2009, 2015), and was invited convenor of the Mathemusical Conversations international workshop in 2015. She was awarded PhD and SM degrees in operations research at the Massachusetts Institute of Technology, and a BAS in mathematical and computational sciences (hon) and music (distinction) at Stanford University. \end{IEEEbiography} \vfill \footnotesize \section{Effect of different patterns on musical output of MorpheuS} \label{sec:patterneffect} \begin{figure*}[h!] \begin{subfigure}[h]{\textwidth} \centering \includegraphics[width=.8\textwidth, trim={0cm 7.6cm 0cm 1.8cm}, clip]{rach2A_b3.pdf} \caption{Generated piece with 10 iterations, based on pattern set A} \label{fig:gpa1} \end{subfigure} \begin{subfigure}[h]{\textwidth} \centering \includegraphics[width=.8\textwidth, trim={0cm 8cm 0cm 2cm}, clip]{rach2Cb2.pdf} \caption{Generated piece with 10 iterations, based on pattern set B}\label{fig:gpa3} \end{subfigure} \begin{subfigure}[h]{\textwidth} \centering \includegraphics[width=.8\textwidth, trim={0cm 7.6cm 0cm 1cm}, clip]{rach2Db2.pdf} \caption{Generated piece with 10 iterations, based on pattern set C} \label{fig:gpa2} \end{subfigure} \caption{Generated music based on the three types of patterns detected in Rachmaninov's ``\'Etude Tableau Op. 39, No. 6''. An example of a TEC found by each algorithm is indicated by arcs in orange, with the original pattern in green.} \label{fig:gen_patternsRach3} \end{figure*} \FloatBarrier \clearpage \section{Example output of MorpheuS} \label{sec:output} \begin{figure*}[hbt!]\centering \begin{subfigure}[h]{.9\textwidth}\centering \includegraphics[width=1\textwidth, trim=0cm 8cm 0cm 1.55cm, clip]{clowns_start.pdf} \caption{Random (feasible) initial starting solution} \end{subfigure} \begin{subfigure}[h]{.9\textwidth} \includegraphics[width=1.0\textwidth, trim=0cm 7cm 0cm 1cm, clip]{clowns_morpheus.pdf} \caption{Best solution found after 25 iterations} \label{fig:clownsEnd} \end{subfigure} \caption{Evolution of MorpheuS' output before and after optimization, based on the template piece ``Clowns'' by Kabalevsky} \label{fig:clowns} \end{figure*} \end{document}
\section{INTRODUCTION} \label{sec:Intro} The discovery of polarized radio emission extending perpendicular to the Galactic plane, such as nonthermal radio filaments \citep{Yusef84} and polarized plumes \citep{Tsuboi86}, has provided early evidence for a substantial poloidal component of the magnetic field in the central region of our Galaxy. High resolution observations at radio wavelengths have revealed more filaments of similar orientations \citep[e.g.,][]{LaRosa04,Yusef04}, where the magnetic fields were found to be predominantly aligned along the filaments. The accumulation of these observational results has led to the hypothesis that most of the volume of the Galactic center (GC) is permeated by a {\it poloidal} magnetic field. Evidence for the existence of a {\it toroidal} magnetic structure comes from far-infrared (FIR) and submillimeter (sub-mm) observations, which detect polarized thermal emission from magnetically aligned dust grains. The rotation axis of the dust grain aligns with the magnetic field, and thus the polarization of the thermal emission indicates the magnetic field direction (the measured direction of the E-vector is orthogonal to the magnetic field). The magnetic fields inferred from these observations \citep[e.g.,][]{Werner88,Morris92,Hildebrand93} run parallel to the Galactic plane. Recent linear polarization observations of the 450-$\mu$m continuum suggest a large-scale toroidal magnetic field extending over a region of $170 \times 30 $ pc \citep{Novak03}. \citet{Chuss03} found an interesting dependence of the magnetic field direction on the sub-mm flux. In low density regions, the field aligns generally perpendicular to the Galactic plane, while in high density regions, the field has a toroidal geometry. One explanation for this is that the global magnetic field in the early Galaxy was initially in a poloidal configuration; the gravitational energy density in dense molecular clouds was strong enough to distort the poloidal field into a toroidal one, while it was insufficient to do so in the low density regions. A model which can connect the poloidal and toroidal magnetic fields was proposed by \citet{Uchida85}, and was extended to a more realistic case by \citet{Shibata86}. This magnetohydrodynamic model was developed to explain the GC lobes found by \citet{Sofue84}. Since magnetic flux is generally frozen into matter, differential rotation and infall can shear an initially poloidal field into a toroidal one. Consequently, a toroidal field is developed close to the Galactic plane, while the field is vertical at high Galactic latitude. The \citet{Uchida85} model predicts that the toroidal component will generally be more dominant closer to the Galactic plane. A map of the direction of the magnetic field could therefore be a simple indicator of how well this model works in the GC. However, dust emissivity is high in dense, warm clouds, and thus observations of dust emission at FIR/sub-mm wavelengths are strongly limited to such regions, which have a patchy distribution in the GC region. In this paper, we present near-infrared (NIR) polarimetry of point sources toward the GC covering a much larger area of the sky than previous observations. We demonstrate that NIR polarization can provide information on the magnetic field structure not only in the Galactic disk, but also in the central region of our Galaxy.
\section{OBSERVATIONS AND DATA REDUCTION} \label{sec:Obs} We conducted NIR polarimetric observations of the GC with the SIRPOL camera on the night of 4 July 2006. SIRPOL consists of a single-beam polarimeter \citep[a half-wave plate rotator unit and a fixed wire-grid polarizer;][]{Kandori06} and the NIR imaging camera SIRIUS \citep[Simultaneous Infrared Imager for Unbiased Survey;][]{Nagas99, Nagay03}, and is attached to the 1.4-m telescope IRSF (Infrared Survey Facility). SIRPOL provides images of a 7\farcm7 $\times$ 7\farcm7 area of sky in three NIR wavebands, $J$ ($1.25\mu$m), $H$ ($1.63\mu$m), and $K_S$ ($2.14\mu$m), simultaneously. The detectors are three 1024 $\times$ 1024 HgCdTe arrays, with a scale of 0\farcs45 pixel$^{-1}$. The filter system of IRSF/SIRPOL is similar to the MKO system \citep{Tok02}. We observed a $20\arcmin \times 20\arcmin$ area (nine SIRPOL fields) centered at the position of Sgr A$^*$ ($17^{\mathrm{h}} 45^{\mathrm{m}} 40.0^{\mathrm{s}}$, $-29\degr 00\arcmin 28\farcs0 $; J2000.0). We took 10-s exposures each at 4 wave plate angles ($0\fdg0$, $22\fdg5$, $45\fdg0$, $67\fdg5$) at each of 10 dithered positions. The weather conditions were photometric, with seeing of $\sim$1\farcs2 ($J$), $\sim$1\farcs1 ($H$), and $\sim$1\farcs0 ($K_S$). The IRAF (Image Reduction and Analysis Facility)\footnote{ IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.} software package was used to perform dark- and flat-field corrections, followed by sky background estimation and subtraction. \section{DATA ANALYSIS AND RESULTS} \subsection{Polarization of Point Sources} \label{sec:PolPoint} To obtain the photometric magnitudes and errors in the three bands, we used the DAOFIND task in the DAOPHOT package \citep{Stetson87} to identify point sources in Stokes $I$ images [$I=(I_{0\degr}+I_{22\fdg5}+I_{45\degr}+I_{67\fdg5})/2$]. The sources were then input to the ALLSTAR task for PSF-fitting photometry. About 10 sources were used to construct the PSF in each image. Each Stokes $I$ image was calibrated with the photometric image of the same position obtained in previous imaging observations \citep{Nishi06a}, in which the standard star \#9172 \citep{Persson98} was used for calibration. We assumed that \#9172 has magnitudes of $J=12.48$, $H=12.12$, and $K_S=12.03$ in the IRSF/SIRIUS system. In Fig. \ref{fig:Col2} we show the $H-K_S$ histogram (top panel) and the $JHK_S$ color-color diagram (bottom panel) for stars with photometric errors of less than 0.11 mag. Also plotted are loci of unreddened giants and dwarfs. The arrow is parallel to the reddening vector, and its length corresponds to $A_{K_S}=1$ mag \citep{Nishi06a}. Considering the large extinction toward the GC, stars with small $H-K_S$ can be attributed to foreground (disk) stars, while the stars constituting the strongest peak in the $H-K_S$ histogram can be attributed to those in the Galactic bulge. \begin{figure}[h] \begin{center} \plotone{./f1.eps} \caption{ $H-K_S$ histogram ({\it top}) and $JHK_S$ color-color diagram ({\it bottom}) for point sources with $J$, $H$, and $K_S$ photometric errors of less than 0.11 mag. The thick and thin curves represent the loci of giants and dwarfs, respectively \citep{Tokunaga00}. The arrow indicates the $A_{K_S}=1$ mag reddening vector \citep{Nishi06a}.
} \label{fig:Col2} \end{center} \end{figure} Astrometric calibration was performed, field by field, with reference to the positions of point sources in the 2MASS point source catalog \citep{Skrutskie06}. Sources with photometric errors of less than 0.05 mag in 2MASS and our catalog were used for the calibration. As a result of this astrometric calibration, we obtained an rms positional difference of better than 0\farcs1 for sources with a $< 0.11$ mag photometric error. The Stokes parameters $I$, $Q$, and $U$ for point sources were determined from aperture polarimetry of combined images as follows. The DAOFIND and APPHOT tasks were used for the point-source identification and the aperture photometry. We then obtained the intensity for each wave plate angle ($I_{0\degr}$, $I_{22\fdg5}$, $I_{45\degr}$, $I_{67\fdg5}$). Since aperture photometry with a small aperture radius gives a better photometric result than PSF-fitting photometry, aperture photometry was applied in the following procedure. The size of the PSF is slightly different among the four images due to variations of the seeing, hence we used different apertures for each image. The aperture diameters were set equal to $2 \times$ the FWHM of the best-fit Gaussian profile (GFWHM) determined by the PSFMEASURE task. The means of the adopted aperture sizes were $\sim$1\farcs2 ($J$), $\sim$1\farcs1 ($H$), and $\sim$1\farcs0 ($K_S$). The position-angle offset of SIRPOL, $\alpha$, was estimated to be $\alpha = 105\degr$ \citep{Kandori06}, which defines the origin of the wave plate angles $0\degr$, $22\fdg5$, $45\degr$, and $67\fdg5$. Based on the intensities at the four angles, we calculated the total intensity $I$ and two ``raw'' Stokes parameters $Q'$ and $U'$ as \[I = (I_{0\degr} + I_{22\fdg5} + I_{45\degr} + I_{67\fdg5})/2,\] \[Q' = I_{0\degr} - I_{45\degr},\] \[U' = I_{22\fdg5} - I_{67\fdg5}.\] The polarization degree $P$ and the position angle $\theta$ were derived by \[ P = \sqrt{Q'^2+U'^2}/I, \] \[ \theta = \frac{1}{2} \arctan(U/Q), \] where \begin{eqnarray} Q &=& Q' \cos(2 \alpha) - U' \sin(2 \alpha), \nonumber \\ U &=& Q' \sin(2 \alpha) + U' \cos(2 \alpha). \label{eq:AngCor} \end{eqnarray} The debiased $P$ was finally derived by $P_{\mathrm{db}} = \sqrt{P^2-\delta P^2}$, where $\delta P$ is the error of $P$, given by \begin{eqnarray} \delta P = \frac{1}{P} \sqrt{ \left( \frac{Q'}{I} \right)^2 \left[ \delta \left( \frac{Q'}{I} \right) \right]^2 + \left( \frac{U'}{I} \right)^2 \left[ \delta \left( \frac{U'}{I} \right) \right]^2 }. \label{eq:PError} \end{eqnarray} We regard sources for which $(P^2-\delta P^2) \leq 0$ as non-polarized sources, and we do not consider such sources further in this paper. The typical magnitudes for $\delta P = 1 \%$ are 14.5 ($J$), 13.5 ($H$), and 12.0 ($K_S$). Fig. \ref{fig:Vmap} plots the degree and position angle of stars with a degree of polarization determined to an accuracy better than 1\%. The orientation of each bar gives the inferred direction of polarization, and the length of the bar is proportional to the degree of polarization. The coordinate offsets (\arcmin) were measured with respect to the location of Sgr A$^*$. Histograms of debiased $P$ and $\theta$ in each band are shown in Fig. \ref{fig:HistAP}. The mean degree and angle are 6.3 \% and 8\fdg0 in the $J$ band, 6.2 \% and 13\fdg6 in the $H$ band, and 4.3 \% and 14\fdg4 in the $K_S$ band.
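For reference, the arithmetic above can be condensed into a short numerical sketch. This is an illustrative outline only, not the pipeline actually used: the function names are ours, the inputs are assumed to be NumPy arrays of sky-subtracted aperture intensities at the four wave plate angles, and \texttt{arctan2} is used in place of the bare arctangent to resolve the quadrant ambiguity.
\begin{verbatim}
import numpy as np

def point_source_polarimetry(I0, I22, I45, I67, alpha_deg=105.0):
    # Total intensity and raw Stokes parameters from the four
    # half-wave plate angles (0, 22.5, 45, 67.5 deg).
    I  = (I0 + I22 + I45 + I67) / 2.0
    Qp = I0 - I45
    Up = I22 - I67
    # Rotate by the instrumental position-angle offset alpha.
    a = np.deg2rad(2.0 * alpha_deg)
    Q = Qp * np.cos(a) - Up * np.sin(a)
    U = Qp * np.sin(a) + Up * np.cos(a)
    P = np.hypot(Q, U) / I                      # degree of polarization
    theta = 0.5 * np.degrees(np.arctan2(U, Q))  # position angle (deg)
    return I, Q, U, P, theta

def debias(P, dP):
    # Debiased polarization; sources with P^2 - dP^2 <= 0 are treated
    # as non-polarized and flagged with NaN.
    P2 = P**2 - dP**2
    return np.where(P2 > 0.0, np.sqrt(np.clip(P2, 0.0, None)), np.nan)
\end{verbatim}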
\citet{Kob80} detected $K$-band emission of unresolved point sources in the central $7\arcmin \times 7\arcmin$ region and obtained an average degree of polarization of 5\%, with position angles of 10\degr ~to 15\degr, in good agreement with our results in the $K_S$ band. The correlation between $\theta$ in the $K_S$ band and $H - K_S$ color is shown in Fig. \ref{fig:DistHKPK}. To clarify the dependence of $\theta$ on $H - K_S$, we divided the $H - K_S$ data set into bins of equal size (0.5 mag) and calculated the mean and the standard deviation of $\theta$ in each bin, represented by red crosses in Fig. \ref{fig:DistHKPK}. We can see a change of the mean position angle at $H - K_S \sim 1.0$: the mean angle is $\sim 5\degr$ at $0 < H - K_S < 1.0$, while it is $\sim 15\degr$ at $H - K_S > 1.0$. Such a change was already indicated by \citet{Kob83}, and our result is in good agreement with theirs. We have identified two distinct populations in Fig. \ref{fig:HistAP}: stars with small $P$ and small $\theta$ (typically $P_J \la 5\%$ and $\theta \la 0\degr$), and stars with larger $P$ and $\theta \ga 10\degr$ (see also Fig. \ref{fig:DistHKPK}). It is conceivable that the former are nearby stars, and the latter are stars distributed in the Galactic bulge. The stars that correspond to the strong peak in the $P_{J}$ histogram at $\sim 2\%$ have a color of $H-K_S \sim$ 0.1$-$0.2 (see also Fig. \ref{fig:Col2}) and $\theta_{J} \sim 0\degr$, and thus are most likely to be nearby dwarfs. This peak can be found in the $H$ band around $P_H \sim 1 \%$, but becomes invisible in the $P_{K_S}$ histogram. In the $K_S$ band, most of the stars detected are red giants in the Galactic bulge, which constitute a strong peak in the $P_{K_S}$ histogram. The strong peak at $P_{K_S} \sim 4\%$ corresponds to those at $\sim 7\%$ in $P_{H}$ and at $\sim 10\%$ in $P_{J}$, because this change in the degree of polarization can be explained by the power law $P_{\lambda} \propto \lambda^{-2}$ of the interstellar polarization \citep{Nagata94}. The distinct populations, and the wavelength dependence of the polarization, will be discussed in another paper (H. Hatano et al., in preparation). Since the vector maps in Fig. \ref{fig:Vmap} are crowded and thus almost illegible, we show the $K_S$-band mean vector map in Fig. \ref{fig:VHmapCol}. The mean degree and position angles were calculated using stars in a $0\farcm8 \times 0\farcm8$ grid with ${P_{K_S}}/\delta P_{K_S}>3.0$. The vectors are superposed on the three color ($J$, $H$, $K_S$) composite image of the same region. At first glance, most of the vectors are well ordered, which can also be seen in the $\theta_{K_S}$ histogram, and are nearly parallel to the Galactic plane. Moving north-eastward across the image, the position angles rotate slightly clockwise. At a few positions where the number density of stars is small, and hence strong foreground extinction exists, the vectors have irregular directions, particularly at the northwestern corner. These irregularities might be explained by the inherent magnetic field configuration in foreground dark clouds. \begin{figure}[] \begin{center} \epsscale{0.9} \plotone{./f2a.eps} \plotone{./f2b.eps} \plotone{./f2c.eps} \caption{ Polarization of the Galactic center for stars with $P > 0\%$ and $\delta P < 1\%$. 2243, 7963, and 9661 stars in the $J$ (top), $H$ (middle), and $K_S$ (bottom) bands, respectively, are plotted. The coordinate offsets (\arcmin) were measured with respect to the location of Sgr A$^*$.
Each bar is drawn parallel to the E-vector of the measured polarization. Their length indicates the measured degree of polarization. } \label{fig:Vmap} \end{center} \end{figure} \begin{figure}[h] \begin{center} \rotatebox{180}{ \plotone{./f3.eps} } \caption{ Histograms of degree of polarization (left) and position angle (right) in the $J$ (top), $H$ (middle), and $K_S$ (bottom) bands, for stars with $P > 0\%$ and $\delta P < 3\%$. 5795 ($J$), 17356 ($H$), and 18632 ($K_S$) stars are employed in these histograms. } \label{fig:HistAP} \end{center} \end{figure} \begin{figure}[h] \begin{center} \rotatebox{90}{ \plotone{./f4.eps} } \caption{ Position angle $\theta$ in the $K_S$ band vs. $H-K_S$ diagram for stars with $\delta H < 0.11$, $\delta K_S < 0.11$, $P_{K_S} > 0\%$, and $\delta P_{K_S} < 3\%$. The red crosses represent the means and standard deviations of $\theta$ in 0.5 mag width bins. } \label{fig:DistHKPK} \end{center} \end{figure} \begin{figure}[h] \begin{center} \rotatebox{90}{ \plotone{./f5.eps} } \caption{ $K_S$-band polarization vector map superposed on the three color ($J$, $H$, $K_S$) composite image of the Galactic center. The Galactic center is the bright yellow blob in the center. The mean $P_{K_S}$ and ${\theta}_{K_S}$ are calculated for each $0\farcm8 \times 0\farcm8$ grid. Note here that stars with $P_{K_S} > 0\%$ and ${P_{K_S}}/\delta P_{K_S}>3.0$ are used for the calculation, so that the mean $P_{K_S}$ might be overestimated. } \label{fig:VHmapCol} \end{center} \end{figure} \subsection{Separating Foreground Polarization and Galactic Center Component} Polarization vectors of stars trace the plane-of-the-sky projection of the magnetic field, and polarimetric measurements of stars at different distances reveal the three-dimensional distribution of the magnetic field orientations. In our observed fields, we can detect stars in the Galactic disk and bulge. From the stars on the near side of the Galactic bulge (referred to hereafter as ``blue stars'' due to their relatively small reddening), we can obtain the degree of polarization and position angle, which are affected mainly by interstellar dust in the Galactic disk. The light from stars on the far side of the bulge (hereafter ``red stars'') is transmitted through the dust in the disk {\it and} the bulge. Therefore, using both blue and red stars, we can obtain the bulge (GC) component of the polarization. The procedure is as follows. (In the following process, only the $K_S$-band polarization of stars is used.) As a first step, we divided the field into $10 \times 10$ sub-fields of $2\arcmin \times 2\arcmin$ and drew $H-K_S$ histograms for each sub-field with stars of $\delta H < 0.11$, $\delta K_S < 0.11$, and $H \leq 15.0$, which are much brighter than the 10$\sigma$ limiting magnitude in the $H$ band (16.7 mag). One of the $H-K_S$ histograms is shown in Fig. \ref{fig:ProcGCMag}, upper left panel. Using the histograms, we evaluated the peak value of the histogram, ${(H-K_S)}_{\mathrm{peak}}$, for each sub-field. Fig. \ref{fig:CMDAll} is an $H$ vs. $H-K_S$ color-magnitude diagram (CMD) for stars with $\delta H < 0.11$ and $\delta K_S < 0.11$. This CMD shows that the criterion $H \leq 15.0$ is bright enough to avoid the influence of the limiting magnitudes ($H \approx 16.7$, $K_S \approx 15.5$) on the determination of ${(H-K_S)}_{\mathrm{peak}}$. Using the $H-K_S$ color, we divided the stars with $\delta P < 3\%$ into three sub-groups: ``nearby'' stars and ``blue'' and ``red'' stars in the bulge.
We assume that nearby stars have a color of $H-K_S<1.0$, because at $H-K_S \sim 1.0$ the number of stars drops and approaches a minimum (Fig. \ref{fig:Col2}, top panel) and a clear change of position angles can be seen (Fig. \ref{fig:DistHKPK}). The ``blue'' stars are redder than $H-K_S=1.0$ and bluer than ${(H-K_S)}_{\mathrm{peak}}$. The stars with $H-K_S > {(H-K_S)}_{\mathrm{peak}}$ are selected as ``red'' stars (lower left panel in Fig. \ref{fig:ProcGCMag}). The average and standard deviation of ${(H-K_S)}_{\mathrm{peak}}$ over the 100 sub-fields are 1.85 and 0.24, respectively. Fig. \ref{fig:CMDRedBlue} shows the $H - K_S$ histogram (top panel) of the red and blue stars, and their location in the $K_S$ vs. $H - K_S$ CMD (bottom panel). The blue stars have a peak at $H - K_S \approx 1.5$, while the red stars have a peak at $H-K_S \approx 2.0$. The red and blue stars of {\it all} sub-fields are plotted, so their distributions overlap in the CMD of Fig. \ref{fig:CMDRedBlue}. There are $\sim 60$ to $\sim 340$ stars with $\delta P$ better than 3\% in each sub-field in the $K_S$ band. As a second step, to subtract the polarization originating in the bulge, $Q'/I$ and $U'/I$ histograms in the $K_S$ band were constructed for the blue and red stars in each sub-field (upper and lower right panels in Fig. \ref{fig:ProcGCMag}). We calculated their means as $<Q'/I>_{\mathrm B}$, $<Q'/I>_{\mathrm R}$, $<U'/I>_{\mathrm B}$, and $<U'/I>_{\mathrm R}$. We then obtained the degree of polarization and position angle for the blue stars, \[ P_{\mathrm B} = \sqrt{ \left< \frac{Q'}{I} \right>^{2}_{\mathrm B} + \left< \frac{U'}{I} \right>^{2}_{\mathrm B} },~~ \theta_{\mathrm B} = \frac{1}{2} \arctan \left( \frac{\left< \frac{U}{I} \right>_{\mathrm B}}{\left< \frac{Q}{I} \right>_{\mathrm B}} \right), \] and for the red stars, \[ P_{\mathrm R} = \sqrt{ \left< \frac{Q'}{I} \right>^{2}_{\mathrm R} + \left< \frac{U'}{I} \right>^{2}_{\mathrm R} },~~ \theta_{\mathrm R} = \frac{1}{2} \arctan \left( \frac{\left< \frac{U}{I} \right>_{\mathrm R}}{\left< \frac{Q}{I} \right>_{\mathrm R}} \right), \] where $< U/I >_{\mathrm B}$ and $< Q/I >_{\mathrm B}$, and $< U/I >_{\mathrm R}$ and $< Q/I >_{\mathrm R}$ are the Stokes parameters whose position-angle offsets are corrected [see equations (\ref{eq:AngCor})]. We obtained $P$ and $\theta$ for the red minus blue components using the following equations \citep{Goodrich86}: \[ P_{\mathrm {R-B}} = \sqrt{ \left( \left< \frac{Q'}{I} \right>_{\mathrm R} - \left< \frac{Q'}{I} \right>_{\mathrm B} \right)^{2} + \left( \left< \frac{U'}{I} \right>_{\mathrm R} - \left< \frac{U'}{I} \right>_{\mathrm B} \right)^{2} }, \] \[ \theta_{\mathrm {R-B}} = \frac{1}{2} \arctan \left( \frac{\left< \frac{U}{I} \right>_{\mathrm R} - \left< \frac{U}{I} \right>_{\mathrm B}} {\left< \frac{Q}{I} \right>_{\mathrm R} - \left< \frac{Q}{I} \right>_{\mathrm B}} \right), \] where $(< U/I >_{\mathrm R} - < U/I >_{\mathrm B})$ and $(< Q/I >_{\mathrm R} - < Q/I >_{\mathrm B})$ are also the Stokes parameters whose position-angle offsets are corrected [see equations (\ref{eq:AngCor})]. The errors of $<Q'/I>$ and $<U'/I>$ were calculated from the standard error on the mean, $\sigma/\sqrt{N}$, of the $Q'/I$ and $U'/I$ histograms, where $\sigma$ is the standard deviation and $N$ is the number of stars. The error of $P_{\mathrm {R-B}}$ was then calculated by propagation of errors [see equation (\ref{eq:PError})].
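The red-minus-blue subtraction can likewise be sketched compactly. The snippet below is a schematic of the procedure just described, with variable names of our own choosing; it assumes per-sub-field NumPy arrays of the normalized raw Stokes parameters $Q'/I$ and $U'/I$ for the blue and red stars.
\begin{verbatim}
import numpy as np

def red_minus_blue(QI_b, UI_b, QI_r, UI_r, alpha_deg=105.0):
    # Mean raw Stokes parameters, then the red-minus-blue difference.
    dq = np.mean(QI_r) - np.mean(QI_b)
    du = np.mean(UI_r) - np.mean(UI_b)
    P_rb = np.hypot(dq, du)    # rotation leaves the degree unchanged
    # Correct the position-angle offset before taking the angle.
    a = np.deg2rad(2.0 * alpha_deg)
    q = dq * np.cos(a) - du * np.sin(a)
    u = dq * np.sin(a) + du * np.cos(a)
    theta_rb = 0.5 * np.degrees(np.arctan2(u, q))
    # Standard errors on the means, propagated into the error of P.
    se = lambda x: np.std(x) / np.sqrt(x.size)
    eq = np.hypot(se(QI_r), se(QI_b))
    eu = np.hypot(se(UI_r), se(UI_b))
    dP_rb = np.hypot(dq * eq, du * eu) / P_rb
    return P_rb, theta_rb, dP_rb
\end{verbatim}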
\begin{figure}[h] \begin{center} \rotatebox{-90}{ \epsscale{1.0} \plotone{./f6.eps} } \caption{ Upper left: $H-K_S$ histogram of a sub-field ($l,b = +1\arcmin, -1\arcmin$) with stars with $\delta H < 0.11$, $\delta K_S < 0.11$, and $H \leq 15.0$. The arrow represents the peak of the histogram. Lower left: $H-K_S$ histograms of the same sub-field for blue (hatched) and red (dotted) stars with $P_{K_S} > 0\%$ and $\delta P_{K_S} < 3\%$. Upper and lower right: $Q'/I$ and $U'/I$ histograms for the blue (hatched) and red (dotted) stars shown in the lower left histogram. } \label{fig:ProcGCMag} \end{center} \end{figure} \begin{figure}[h] \begin{center} \plotone{./f7.eps} \caption{ $H$ vs. $H - K_S$ color-magnitude diagram for stars with $\delta H < 0.11$ and $\delta K_S < 0.11$. The dashed line represents $H = 15.0$. } \label{fig:CMDAll} \end{center} \end{figure} \begin{figure}[h] \begin{center} \plotone{./f8.eps} \caption{ $H - K_S$ histograms (top) and $K_S$ vs. $H - K_S$ color-magnitude diagram (bottom) for red and blue stars with $P_{K_S} > 0\%$ and $\delta P_{K_S} < 3\%$. Theoretical isochrones for different extinctions of $A_{K_S} = 0, 1, 2, 3, 4,$ and 5 mag, are shown by solid curves. The arrow indicates the $A_{K_S}=1$ mag reddening vector \citep{Nishi06a}. } \label{fig:CMDRedBlue} \end{center} \end{figure} We show vector maps for $P_{\mathrm B}$ and $\theta_{\mathrm B}$ (blue bars) and $P_{\mathrm R}$ and $\theta_{\mathrm R}$ (red bars) in Fig. \ref{fig:VmapFGRed}, and the same map for $P_{\mathrm {R-B}}$ and $\theta_{\mathrm {R-B}}$ in Fig. \ref{fig:PABGChuss}. The averages of $P_{\mathrm B}$ and $\theta_{\mathrm B}$ are 3.8 \% and 15\fdg1, and those of $P_{\mathrm R}$ and $\theta_{\mathrm R}$ are 4.3 \% and 15\fdg0, respectively (Fig. \ref{fig:HistPAFGBG}, hatched and dotted histograms). Those of $P_{\mathrm {R-B}}$ and $\theta_{\mathrm {R-B}}$ are also obtained as 0.85 \% and 16\fdg0 (white histogram in Fig. \ref{fig:HistPAFGBG}), only for grids where the polarization is detected with $P_{\mathrm {R-B}}/\delta P_{\mathrm {R-B}} \geq 2$. The averages for $\theta_{\mathrm B}$ and $\theta_{\mathrm {R-B}}$ are similar, but their dispersions are different: the standard deviation of $\theta_{\mathrm B}$ is 6\fdg0, while that of $\theta_{\mathrm {R-B}}$ is 21\fdg5 (the average of $\delta \theta_{\mathrm {R-B}}$ is 7\fdg6). The small dispersion of $\theta_{\mathrm B}$ suggests that the long axes of the interstellar dust grains in the Galactic disk are well aligned perpendicular to the Galactic plane. The histogram of $\theta_{\mathrm {R-B}}$ has a peak at $\sim 20\degr$, which also roughly coincides with the angle of the Galactic plane. A similar result is also obtained for the $H$-band polarization. In Fig. \ref{fig:PABGChuss}, our $P_{\mathrm {R-B}}$ and $\theta_{\mathrm {R-B}}$ are plotted overlaid on the $B$-vectors derived from FIR/sub-mm observations \citep{Dotson00,Novak00,Chuss03}. Although the observations are restricted to the positions of dense molecular clouds, we find good agreement between the FIR/sub-mm and NIR observations in spite of the differences in wavelength and method of deriving the polarization. \begin{figure}[h] \begin{center} \plotone{./f9.eps} \caption{ $K_S$-band polarization map derived from the blue-star component ($P_{\mathrm B}$ \& $\theta_{\mathrm B}$ : {\it blue bars}) and red-star component ($P_{\mathrm {R}}$ \& $\theta_{\mathrm {R}}$ : {\it red bars}).
} \label{fig:VmapFGRed} \end{center} \end{figure} \begin{figure}[h] \begin{center} \epsscale{0.9} \plotone{./f10.eps} \caption{ $K_S$-band polarization map derived from the Galactic center component ($P_{\mathrm {R-B}}$ \& $\theta_{\mathrm {R-B}}$, {\it red bars}), where the polarization is detected with $P_{\mathrm {R-B}}/\delta P_{\mathrm {R-B}} \geq 2$. The polarization map at the center of our Galaxy derived from FIR/sub-mm observations ({\it blue bars}) is also shown. The length of the bars is proportional to the measured degree of polarization, and their orientation is drawn parallel to the inferred magnetic field direction. The FIR/sub-mm wavelengths data sets are from 60 $\mu$m \& 100 $\mu$m polarimetry by \citet{Dotson00}, and 350 $\mu$m polarimetry by \citet{Novak00} and \citet[][see also their Fig. 1]{Chuss03}. Some prominent radio filaments are shown as heavy dark lines. } \label{fig:PABGChuss} \end{center} \end{figure} \begin{figure}[h] \begin{center} \plotone{./f11.eps} \caption{ Top: Histograms of degrees of polarization for $P_{\mathrm B}$ ({\it hatched}), $P_{\mathrm R}$ ({\it dotted}), and $P_{\mathrm {R-B}}$ ({\it white}). Bottom: Histograms of position angles for $\theta_{\mathrm B}$ ({\it hatched}), $\theta_{\mathrm R}$ ({\it dotted}), and $\theta_{\mathrm {R-B}}$ ({\it white}). } \label{fig:HistPAFGBG} \end{center} \end{figure} \section{DISCUSSION} \label{sec:Disc} \subsection{Previous Infrared Polarimetry toward the GC} \label{subsec:PrevNIR} NIR polarimetry for diffuse emission and point sources toward the GC has been conducted since the 1970s \citep{Maihara73}. From the polarization angle aligned nearly along the Galactic plane, and the wavelength dependence of polarization well fitted by a power law \citep{Nagata94}, it has been interpreted that the polarization is of interstellar origin, dominated by dust in the Galactic disk \citep[e.g.,][]{Hough78,Kob80,Lebofsky82}. One of the deepest NIR polarimetric observations toward the GC was carried out by \citet{Eckart95} for a small field of $\sim 13\arcsec \times 13\arcsec$. They established significant polarization for 160 sources fainter than 13 mag in the $K$ band. The mean flux-weighted polarization is 4 \% at 25\degr, nearly parallel to the Galactic plane. A similar result, $4.1 \pm 0.6 \%$ at $30 \pm 10\degr$, was also obtained by \citet{Ott99}. \citet{Eckart95} concluded that most of the polarization is caused via absorption by aligned dust grains in the Galactic plane. A change of the magnetic field configuration along the line of sight toward the GC has been pointed out by \citet{Kob83}. The diagram of position angles in the $K$ band versus $H-K$ for 15 discrete sources \citep[Fig. 5,][]{Kob83} showed that the position angles are smaller and less ordered for sources with $H-K<1.0$, while those with $H-K>1.0$ are confined to a relatively narrow range around $20\degr$. They concluded that this may indicate a change of the magnetic field direction at a distance corresponding to $H-K=1.0$ (5 kpc or more according to their calculation). We can also identify a change of the position angle in the $K_S$ band at $H-K_S \sim 1.0$ (see Fig. \ref{fig:DistHKPK}). As shown in the color-color diagram in Fig. \ref{fig:Col2}, most of the stars with $H-K_S \ga 1.0$ have the color of giants (i.e., their unreddened positions are on the locus of giants), and the strong peak in the $H-K_S$ histogram (the top panel in Fig. \ref{fig:Col2}) suggests that they are distributed in the Galactic bulge.
Hence, this change of the position angle indicates a transition of the magnetic field configuration along the line of sight, between the Galactic disk and bulge. \subsection{``Red'' and ``Blue'' Stars and Their Separation in the Line of Sight} To discriminate between foreground disk stars and those in the Galactic bulge, the criterion $H-K_S=1.0$ is applied in our analysis because most stars with $H-K_S>1.0$ can be attributed to giants in the Galactic bulge. The histogram of $H-K_S$ and the color-color diagram of point sources with photometric errors of less than 0.11 mag in the three bands are shown in Fig. \ref{fig:Col2}. Nearby stars can be found around the unreddened giants' and dwarfs' loci (thick and thin curves), while most of the redder stars, which are located at $H-K_S \ga 1.0$ and $J-H \ga 2.5$, have the colors expected for reddened giants. Toward the GC, the number of stars in the bulge is larger than in the exponential disk by a factor of $\sim 50$ \citep{Wainscoat92}, and thus most of the stars at $H-K_S \ga 1.0$ and $J-H \ga 2.5$ are giants in the Galactic bulge. The bottom panel of Fig. \ref{fig:CMDRedBlue} shows the location of the red and blue stars in the $K_S$ vs. $H - K_S$ color-magnitude diagram. The theoretical Padova isochrones \citep{Girar02} for solar metallicity with an age of 10 Gyr are also plotted in the color-magnitude diagram. The isochrones are placed at the distance of the GC \citep[7.5 kpc;][]{Nishi06b} and shifted along the reddening vector \citep[$A_{K_S}/E_{H-K_S} = 1.44$;][]{Nishi06a} with $K_S$ extinctions of 0, 1, 2, 3, 4, and 5 mag. The distances to the red and blue stars are difficult to estimate accurately, since we do not know the distribution of the interstellar dust along the line of sight; however, most of the giants in the Galactic bulge can be detected at least for $K_S < 15$, and for these stars, we can find a clear peak in the $H-K_S$ histograms. Fig. \ref{fig:CMDAll} clearly shows that our observations are essentially equally sensitive to stars on the near ($H-K_S \la 2.0$) and far ($H-K_S \ga 2.0$) sides of the Galactic bulge for $H \leq 15.0$. The clear peaks found in the $H-K_S$ histograms are thus due to the spatial distribution of stars, not due to a distance effect. This indicates that ${(H-K_S)}_{\mathrm{peak}}$ roughly corresponds to a real peak of the spatial distribution of giants along the line of sight. The color-magnitude diagram of Fig. \ref{fig:CMDRedBlue} also tells us that a large number of blue stars are distributed around the distance where $1.0 \la A_{K_S} \la 2.0$, and most of the red stars are farther than the distance corresponding to an extinction of $A_{K_S} = 2.0$. From the distribution of $H-K_S$ colors, we estimate the depth of the region over which we have mapped the magnetic field configuration shown in Fig. \ref{fig:PABGChuss}. For each sub-field, we calculated the $H-K_S$ color differences between the peak in the $H-K_S$ histogram and the mean colors of blue and red stars, i.e., $(H-K_S)_{\mathrm{peak}} - \langle (H-K_S)_{\mathrm{blue}} \rangle$ and $\langle (H-K_S)_{\mathrm{red}} \rangle -(H-K_S)_{\mathrm{peak}}$.
These color differences and the corresponding extinctions $A_{{K_S}_{\mathrm{blue}}} = 1.44 \times [(H-K_S)_{\mathrm{peak}} - \langle (H-K_S)_{\mathrm{blue}} \rangle]$ and $A_{{K_S}_{\mathrm{red}}} = 1.44 \times [\langle (H-K_S)_{\mathrm{red}} \rangle - (H-K_S)_{\mathrm{peak}}]$, where 1.44 comes from $A_{K_S}/E_{H-K_S} = 1.44$ \citep{Nishi06a}, show the amount of dust extinction in the region where we have obtained polarimetric information. First, the amount of extinction toward the peaks in the $H-K_S$ histograms can be calculated as $A_{{K_S}_{\mathrm{peak}}} = 1.44 \times [(H-K_S)_{\mathrm{peak}}-(H-K_S)_0]$, where the mean intrinsic color $(H-K_S)_0$ of red giants is assumed to be $\sim 0.2$ (see Fig. \ref{fig:CMDRedBlue}). This corresponds to the amount of extinction up to the GC in each sub-field. Next, the extinctions $A_{{K_S}_{\mathrm{blue}}}$ and $A_{{K_S}_{\mathrm{red}}}$ are converted to actual distances from the GC, using a simple model. To make conservative estimates, we use the model by \citet{Davies97}, who derived a rather extended distribution of dust. \citet{Davies97} showed that the extinction to the GC associated with cool diffuse interstellar dust can be calculated using \begin{eqnarray} A = C \times \int_0^{R_0} e^{-r/\alpha_{\mathrm{d}}} dr, \label{eq:Extinction} \end{eqnarray} where $\alpha_{\mathrm{d}} \approx 5.3$ kpc is a scale-length of the dust distribution in the radial direction, $C$ is a constant depending on the wavelength and dust density, and $R_0$ is the distance between the GC and the Sun \citep[$R_0 = 7.5$ kpc;][]{Nishi06b}. The distances from the GC $x_{\mathrm{blue}}$ and $x_{\mathrm{red}}$ corresponding to $A_{{K_S}_{\mathrm{blue}}}$ and $A_{{K_S}_{\mathrm{red}}}$, respectively, can be estimated with the equations \[ \Bigl( \int_0^{x_{\mathrm{blue,red}}} e^{-r/\alpha_{\mathrm{d}}} dr \Bigr) \Bigg/ \Bigl( \int_0^{R_0} e^{-r/\alpha_{\mathrm{d}}} dr \Bigr) = A_{{K_S}_{\mathrm{blue,red}}} \Big/ A_{{K_S}_{\mathrm{peak}}}.\] Since $\int_0^{x} e^{-r/\alpha_{\mathrm{d}}}\, dr = \alpha_{\mathrm{d}} \, (1-e^{-x/\alpha_{\mathrm{d}}})$, these equations can be inverted analytically for $x_{\mathrm{blue}}$ and $x_{\mathrm{red}}$. We obtained the average and standard deviation of $x_{\mathrm{blue}}$ as 0.5 and 0.1 kpc, and those of $x_{\mathrm{red}}$ as 1.0 and 0.5 kpc, respectively. We found a long tail on the far side of the $x_{\mathrm{red}}$ histogram, which inflates both the average and the standard deviation. This estimate suggests that the polarization shown in Fig. \ref{fig:PABGChuss} occurs between the average distances of $(R_0 - 0.5)$ kpc and $(R_0 + 1.0)$ kpc from the Sun, arising probably from the central 1$-$2 kpc region of our Galaxy. In reality, the central part of the Galaxy harbors a strong concentration of gas and dust called the ``Central Molecular Zone'' \citep[$R \la 200$ pc;][]{Morris96}, which approximately corresponds to the concentration of stars called the ``Nuclear Bulge'' \citep[$R \la 300$ pc;][]{Mezger96,Serabyn96}. According to \citet{Launhardt02}, stars belonging to the Nuclear Bulge dominate in the central part of the Galaxy (see their Fig. 14). Therefore, a large portion of the stars we have detected is located in the Nuclear Bulge, and the polarization at the GC originates mostly within the central few hundred pc. \subsection{Magnetic Field Configuration at the GC} As shown in Fig. \ref{fig:PABGChuss}, we present the magnetic field configuration in the central region of our Galaxy by discriminating between the polarization of disk origin and that of GC origin. The peak of the histogram of the position angle is $\sim 20\degr$ (see Fig. \ref{fig:HistPAFGBG}), almost parallel to the Galactic plane.
This coincidence, and the deficiency of $\theta_{\mathrm {R-B}}$ around $-60\degr$, the angle perpendicular to the Galactic plane, indicate a basically toroidal geometry of the magnetic field. We cannot find a clear systematic dependence of position angle on Galactic latitude (from $b \approx -0\fdg27$ to $+0\fdg18$), suggesting that there is no systematic transition of the magnetic field direction in this region. The direction of dust grain alignment at the GC has been investigated from polarized dust emission at mid- and far-infrared and sub-mm wavelengths. The magnetic field implied by the emission is generally parallel to the Galactic plane in the circumnuclear disk \citep[e.g.,][]{Werner88,Morris92,Hildebrand93}. As seen in Fig. \ref{fig:PABGChuss}, the magnetic field configuration we obtained at the GC shows good global agreement with those obtained by \citet{Dotson00}, \citet{Novak00}, and \citet{Chuss03}, which are the highest angular resolution polarimetry data sets at FIR/sub-mm wavelengths. The local features also show an excellent agreement: the X-shaped feature extending from $(\Delta \alpha, \Delta \delta) = (-3\arcmin, +10\arcmin)$ down through ($+5\arcmin, 0\arcmin$) \citep[described in \S 3.1.4 of][]{Chuss03} is also confirmed in our map. The configuration at M$-$0.13$-$0.08 around ($-1\arcmin, -5\arcmin$) is also reproduced. The polarized FIR/sub-mm emission comes from molecular clouds, which are known to be located in the GC. Therefore we conclude that the position angles derived from our NIR polarimetry represent the direction of the aligned dust grains {\it in} the GC. \citet{Chuss03} suggested that the magnetic field aligns generally perpendicular to the Galactic plane in low density regions, while the field has a toroidal configuration in high density regions. They explain this correlation using the idea that in underdense regions, the magnetic field energy density is strong enough to resist gravitational forces, preserving a primordial poloidal magnetic field. In overdense regions like molecular clouds, on the other hand, the gravitational forces are strong enough to shear the magnetic field into a direction parallel to the Galactic plane. In this context, lower density regions should have a poloidal configuration. However, in our vector map, the field shows a predominantly toroidal direction even at locations where the FIR/sub-mm emission is too weak for polarimetry [at the southeastern corner in Fig. \ref{fig:PABGChuss}, see also Fig. 1 in \citet{Chuss03} and Fig. 2 in \citet{Novak03}.] The weak FIR/sub-mm emission suggests a paucity of dense clouds along the lines of sight, and the polarization in this direction can be considered to be interstellar in origin. Hence the ``interstellar'' magnetic field at the GC is dominated by a toroidal configuration, and the primordial poloidal magnetic field, if it existed, is not preserved today in this region. Several radio filaments exist in our observed field, and three of them are prominent: the GC Radio Arc \citep{Yusef84}, and the Northern and Southern Threads \citep[also known as G0.08+0.15 and G359.96+0.09;][]{Morris85}, which are shown in Fig. \ref{fig:PABGChuss}. Polarization studies have confirmed that the emission from the filaments is strongly linearly polarized, and that the internal magnetic field orientations are parallel to the long axes of the filaments \citep{Tsuboi86,Lang99}.
The simplest interpretation of these observations, combined with the discovery of more filaments \citep[e.g.,][]{LaRosa04,Yusef04}, is that poloidal magnetic fields are pervasive throughout the central few hundred pc. Although the polarization originating from the filaments themselves cannot be detected in our observations, the polarization toward the surrounding fields can be. The filaments have a width of less than $\sim 10\arcsec$ \citep{Lang99}, and probably have a depth similar to their width. On the other hand, the size of the grids in our analysis is $2\arcmin \times 2\arcmin$, and the polarization shown in Fig. \ref{fig:PABGChuss} is averaged over the central $\sim$1$-$2 kpc along the line of sight. Hence most of the stars we detected do not show a polarization originating from the filaments. However, we can detect the average polarization near the lines of sight toward the filaments. As shown in Fig. \ref{fig:PABGChuss}, most of the position angles of the grids including the filaments align nearly perpendicular to them rather than parallel. Therefore, Fig. \ref{fig:PABGChuss} suggests that the average magnetic field has a toroidal configuration even around the sight-lines toward the filaments. We note again that the polarization is the average along the line of sight and does not originate from the area close to the filaments. \subsection{NIR Polarization as a New Tool for Mapping the GC Magnetic Field} We have shown that the polarization of starlight can be a probe of the magnetic field near the GC. \citet{Morris98} enumerated five different ways in which the magnetic field near the GC has been studied: morphology, polarization angle, Faraday rotation of the radio continuum, the Zeeman effect, and polarized dust emission at FIR/sub-mm wavelengths. The polarization of starlight has not been employed for such investigations. The optical polarization of starlight can trace the structure of the local magnetic field, but cannot detect stars near the GC due to the large extinction. NIR polarimetry has been carried out toward the GC prior to our observations (see \S \ref{subsec:PrevNIR}), but division of the polarization into several components (e.g., originating from the Galactic disk and the central region) has not been done previously. The wide field-of-view of the NIR polarimeter SIRPOL, and the statistical treatment of tens of thousands of stars, enable us to study the magnetic field near the GC; that is, NIR polarimetry of starlight is a new way to investigate the magnetic field. NIR polarimetry has the advantage of providing information about the magnetic field at locations where FIR/sub-mm emission is weak. NIR polarization of starlight is attributed to extinction along the line of sight by aligned dust grains, while FIR/sub-mm polarization is due to emission from the aligned dust. Hence, NIR polarimetry can probe the magnetic field in regions where FIR/sub-mm polarimetry is absent, if background stars exist. This advantage is clearly shown in Fig. \ref{fig:PABGChuss}. We have detected polarization at positions where blue bars are not shown. The distribution of the position angles, including such low emission regions, shows a globally toroidal magnetic configuration at the GC. \section{SUMMARY} We have measured the near-infrared polarization of point sources toward the Galactic center (GC) in the $20\arcmin \times 20 \arcmin$ region centered at Sgr A$^*$.
The difference in the Stokes parameters between stars on the near and far sides of the GC reveals the polarization originating from the central 1$-$2 kpc region of our Galaxy. The distribution of the position angles for the central region shows good agreement with those obtained from polarized emission of dust in the GC, showing that the near-infrared polarization of point sources can be used as a tool to investigate the magnetic field configuration of the GC. The position angles have a peak at $\sim 20\degr$, which is almost parallel to the Galactic plane, suggesting a global toroidal magnetic field in the region. \acknowledgements We are grateful to Hiroshi Akitaya for his helpful comments, and Jun Hashimoto for his help with our analysis. We thank the staff of the South African Astronomical Observatory (SAAO) for their support during our observations. The IRSF/SIRIUS project was initiated and supported by Nagoya University and the National Astronomical Observatory of Japan in collaboration with the SAAO. SN is financially supported by the Japan Society for the Promotion of Science (JSPS) through the JSPS Research Fellowship for Young Scientists. This work was supported by KAKENHI, Grant-in-Aid for Young Scientists (B) 19740111, Grant-in-Aid for Scientific Research (A) 19204018, and Grant-in-Aid for Scientific Research on Priority Areas 15071204, and also supported in part by Grants-in-Aid for the 21st Century COE ``The Origin of the Universe and Matter: Physical Elucidation of the Cosmic History'' from the MEXT of Japan. This publication makes use of data from the Two Micron All Sky Survey, a joint project of the University of Massachusetts, the Infrared Processing and Analysis Center, the National Aeronautics and Space Administration, and the National Science Foundation.
\section{Introduction}% Subsurface geological formations are often highly heterogeneous and heavily fractured at multiple scales. Heterogeneity of the deformation properties (e.g., elasticity coefficients) can span several orders of magnitude at fine-scale (cm) resolution. The reservoirs also span large scales, on the order of kilometers. Numerical simulation of mechanical deformation for such complex systems is necessary to optimise geo-engineering operations \cite{Zoback_95,Ernest1}, assess their safety, and manage the associated risks (e.g., fracture propagation, fault slip and induced seismicity). Though crucially important, simulation of these systems is beyond the reach of classical numerical schemes. The presence of highly heterogeneous, finely resolved coefficients within large-scale domains has been systematically addressed in the computational geoscience community through the development of multiscale finite element and finite volume methods \cite{Multiscale1_Hadi,Nicolai_MSFEM,Sokolova2019,Jenny2003}. Recent developments also include mechanical deformation coupled with fluid pore pressure dynamics \cite{Deb2017,Ren2016,Castelletto2016,Fumagalli2014,Giovanardi2017}. In the presence of many fractures, however, the complexity of the computational model increases significantly. As such, development of a robust multiscale strategy for deformation of heavily fractured porous media, which also allows for convergent systematic error reduction \cite{Multiscale2_Hadi,YWang2014,CHUNG201454}, is of high interest in the geoscience community. Fractures can be represented explicitly within the computational domain by two approaches: (1) unstructured-grid methods and (2) immersed or embedded methods. \newline The unstructured grid approach \cite{Rashid1998,Bittencourt1996,Cook1995} generates a discrete computational domain in which fractures always lie at the interfaces of elements. This allows for convenient treatment of their effect, at the cost, however, of complex mesh generation. Generating complex meshes for three-dimensional (3D) large-scale domains with many fractures is challenging, especially when fractures dynamically extend their geometries. On the other hand, the immersed or embedded approach allows for independent grids for the matrix and the fractures, by introducing enrichment of the discrete connectivity (for flow) and shape functions (for mechanics) \cite{Wells2001,Tene2017,Khoei2014,Efendiev2014,Wu2015}. These enriched formulations are aimed at representing discontinuities within the overlapping matrix cell, without any adjustment or refinement of the grid \cite{Belytschko1994}. The enrichment strategy for modeling deformation using finite-element schemes in the presence of fractures is referred to as the `extended finite element method (XFEM)'. XFEM enriches the partition of unity (PoU) \cite{MELENK1996} by introducing additional degrees of freedom (DOF) at the existing element nodes. There exist sets of enrichment functions to capture the jump discontinuity in the displacement field, when the fracture cuts through the entire cell, and the tip behavior, when a fracture ends within the domain of an element (i.e., its tip is inside an element) \cite{Moes1999,Belytschko_XFEM,Aragon2017,FPMeer2009,GNwells}. For these jump and tip scenarios, additional shape functions are introduced which are multiplied by the original shape functions and supplement the discrete displacement approximation space.
When it comes to geoscience applications, the XFEM is not an attractive method, due to the excessive number of additional degrees of freedom needed to capture the many fractures. As such, a scalable approach has to be developed in order to allow for accurate yet efficient application of XFEM to simulate deformation in geological formations. This paper develops a multiscale XFEM (referred to as MS-XFEM) which offers a scalable, efficient strategy to model large-scale fractured systems. MS-XFEM imposes a coarse mesh on the given fine-scale mesh. The main novel idea behind MS-XFEM is to use XFEM to computationally solve for local coarse-scale (multiscale) basis functions. These basis functions capture the fractures and coefficient heterogeneity within each coarse element. These local coarse-scale basis functions can be constructed either geometrically or algebraically \cite{HosseiniMehr2020,HosseiniMehr2018,YWang2014}. We prefer the algebraic construction, as it allows for black-box integration of the method within any existing XFEM simulator. Once the basis functions are computed, they are assembled into the prolongation matrix (P), which maps the coarse-scale solution to the fine-scale one. Note that there are no additional multiscale basis functions due to jumps or tips, and that only 4 multiscale basis functions per element exist for 2D structured grids (8 in 3D) in each direction (x, y, and z). The fine-scale XFEM system is then mapped to the coarse grid by using the restriction (R) operator, which is defined, following the FEM approach, as the transpose of the prolongation operator. The approximate fine-scale solution is finally obtained after mapping the coarse-scale solution to the fine scale, by using the prolongation operator. The approximate solution of MS-XFEM may be acceptable for many applications; however, error control and reduction to any desired level are necessary to preserve its applicability for challenging cases. As such, the MS-XFEM is integrated within a two-stage iterative solver, in which the multiscale stage is paired with an efficient iterative smoother (here, ILU(0)) to reduce the error \cite{Chow1997,Zhou2012}. One can also use Krylov subspace methods (e.g., GMRES) to enhance the convergence; this stays outside the scope of this paper. Several proof-of-concept numerical tests are presented to assess the accuracy of the presented MS-XFEM without and with iterative improvements. The test cases include large deformations which may not be realistic in geoscience applications, but which are important to study in order to quantify the errors in large-deformation scenarios. From these results it becomes clear that the MS-XFEM, despite using no enriched basis functions at the coarse scale, presents an efficient and accurate formulation to study deformation of fractured geological media. The structure of this paper is as follows. Next, the governing equations and the fine-scale XFEM method are introduced. Then, the MS-XFEM method is presented in detail, with emphasis on the construction of the local multiscale basis functions and the approximate fine-scale solution. Then, different numerical test cases are presented. Finally, concluding remarks are discussed. \section{Governing Equations and Fine-scale XFEM System} Consider the domain $\Omega$ bounded by $\Gamma$ as shown in figure \ref{comp_domain}. Prescribed displacements (Dirichlet boundary conditions) are imposed on $\Gamma_u$, while tractions are imposed on $\Gamma_t$.
The crack surface $\Gamma_c$ (lines in 2-D and surfaces in 3-D) is assumed to be traction-free. \begin{center} \centering \includegraphics[trim={0cm 0cm 0cm 0cm}, clip, width=0.6\textwidth]{1.png} \captionof{figure} {An illustration of the fractured domain setup}\label{comp_domain} \end{center} The momentum balance equations and boundary conditions read \begin{align} \nabla \cdot \sigma + f = 0 \quad &\text{in } \Omega \label{gov_1}\\ \sigma \cdot \overrightarrow{n} = \bar{t} \quad &\text{on } \Gamma_t \\ \sigma \cdot \overrightarrow{n} = 0 \quad &\text{on } \Gamma_c \\ u = \bar{u} \quad &\text{on } \Gamma_u, \end{align} where $\sigma$ is the stress tensor and $u$ is the displacement field over the whole domain. $\overrightarrow{n}$ is the normal vector pointing outside the domain \cite{White_2020,TEREKHOV2020112357}.\\ The constitutive law under the linear elasticity assumption reads \begin{equation} \label{elastic} \sigma=C: \varepsilon=C: \nabla^{s} u \end{equation} where $\nabla^{s}$ denotes the symmetric gradient operator and $C$ is the property tensor defined as \begin{gather} \nonumber C= \begin{bmatrix} \lambda+2\mu & \lambda & 0 \\ \lambda & \lambda+2\mu & 0 \\ 0 & 0 & \mu \end{bmatrix}, \end{gather} with $\lambda$ and $\mu$ denoting the Lam\'e parameters \cite{wang2017,GASPAR2003487}. The strain tensor $\varepsilon$ is expressed as \begin{equation} \label{strain_h} \varepsilon = \nabla^{s} u=\frac{1}{2} (\nabla u+\nabla^{T} u) \end{equation} where $\nabla$ denotes the gradient operator. Substituting Eqs. \eqref{elastic} and \eqref{strain_h} in the governing equation \eqref{gov_1} results in a 2nd-order Partial Differential Equation (PDE) for the displacement field $u$ \begin{equation} \label{displacementEq} \nabla \cdot (C : \nabla^s u) + f = 0. \end{equation} Equation \eqref{displacementEq} is then solved for computational domains with cracks (representing faults and fractures). This is done by the extended finite element (XFEM) method, which is briefly revisited next. \subsection{Extended Finite Element Method (XFEM)} The FEM with smooth shape functions $N_i$ provides an approximate numerical solution to Eq. \eqref{displacementEq} for displacement unknowns, i.e., \begin{equation}\label{fem} u=\sum_{i\in I} u_i N_i. \end{equation} This formula can be used for computational domains without discontinuities. The FEM approximation is insufficient to capture discontinuities imposed by the existence of fractures and faults. As such, the XFEM method introduces two sets of enrichment to the original FEM in order to allow it to capture the discontinuities without adapting the grid. These enrichment sets are associated with the body and tip of the fractures and faults. The body is enriched by jump functions, and the tip by tip enrichment functions \cite{Moes1999}. Brief descriptions of these two enrichment functions are provided below. \subsubsection{Jump enrichment} The jump enrichment represents the discontinuity in the displacement field across the main body of a fracture or fault. The jump enrichment is often chosen as the step or Heaviside function, which can be expressed as \[ H(x) = \begin{cases} 1 & \text{on } \Omega^+ \\ -1 & \text{on } \Omega^-. \end{cases} \] Note that the $\Omega^+$ and $\Omega^-$ zones are determined based on the normal vector pointing out of the fracture curve. For line fractures, the direction can be any side, as long as all discrete elements use the same + and - sides for a fracture.
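As an illustrative aside, the side selection for the Heaviside function reduces to a signed-distance test against the fracture segment. The sketch below uses our own naming and assumes 2-D line fractures; it returns the $\pm 1$ value used for $H(x)$.
\begin{verbatim}
import numpy as np

def heaviside_side(x, seg_a, seg_b):
    # Sign of point x relative to the fracture segment (seg_a, seg_b):
    # +1 on the side the chosen normal points to, -1 on the other side.
    # Which side is '+' is arbitrary, but must be used consistently
    # for all elements cut by the same fracture.
    t = np.asarray(seg_b, float) - np.asarray(seg_a, float)  # tangent
    n = np.array([-t[1], t[0]])                              # a normal
    s = float(np.dot(np.asarray(x, float) - np.asarray(seg_a, float), n))
    return 1.0 if s >= 0.0 else -1.0
\end{verbatim}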
\subsubsection{Tip enrichment} The tip enrichment represents the discontinuity of the displacement field near the fracture tip. This type of enrichment function, denoted by $F_l$, is based on the auxiliary displacement field near the fracture tip and contains four functions, i.e., \begin{equation} {F_l(r,\theta)}=\Big\{ \sqrt{r} \sin\big(\tfrac{\theta}{2}\big),\ \sqrt{r} \cos\big(\tfrac{\theta}{2}\big),\ \sqrt{r} \sin\big(\tfrac{\theta}{2}\big) \sin(\theta),\ \sqrt{r} \cos\big(\tfrac{\theta}{2}\big) \sin(\theta)\Big\}. \end{equation} These four functions around the fracture tip inside the element are plotted in Figure \ref{tip_func}. The red segment, shown on the base of the plots, represents the fracture which ends in the element. Note that only $\sqrt{r} \sin(\frac{\theta}{2})$ contains a discontinuity around the fracture tip (across the crack faces), while the other functions are smooth.\\ \begin{center} \centering \includegraphics[trim={1cm 0cm 1cm 0cm}, clip, width=0.8\textwidth]{2.png} \captionof{figure} {Four types of tip enrichment functions inside the element. The red segment represents the crack with its tip located at the center point (0,0). The discontinuity can be seen clearly in the top left function, $\sqrt{r} \sin(\frac{\theta}{2})$.} \label{tip_func} \end{center} \subsubsection{Enrichment mechanism} Whether a node is enriched, and by which functions, is determined by the location of the node relative to the fracture. A sketch of the enrichment mechanism is shown in Figure \ref{enrich_illustrate}. More precisely, in this figure, the tip and jump enriched nodes are highlighted in red and black, respectively. \begin{center} \centering \includegraphics[trim={4cm 16.5cm 5cm 2.9cm}, clip, width=0.4\textwidth]{3.pdf} \captionof{figure} {Enrichment mechanism: nodes I and J will be enriched using tip and jump functions.} \label{enrich_illustrate} \end{center} \subsection{XFEM linear system} The XFEM approximates the continuum displacement field $u$ at fine-scale mesh resolution $h$ by $u^h$, which is defined as \begin{equation} \label{XFEM_formula} u \approx u^h=\sum_{i\in \Omega^h} u_i N_i + \sum_{j\in J} a_j N_j H(x) + \sum_{k\in K} N_k \Big[\sum_{l=1}^{4} F_l(x) \, b_k^{l} \Big], \end{equation} where $N$, $H$ and $F_l$ represent, respectively, the classical FEM shape functions, the Heaviside function and the tip enrichment functions. The fine-scale mesh has $\Omega^h$ nodes. Moreover, $u$ denotes the standard degrees of freedom (DOFs) associated with the classical finite element method. $a$ denotes the extra DOFs associated with the jump enriched nodes; in 2D domains, each jump enriched node carries 2 extra DOFs. Furthermore, $b$ indicates the extra DOFs associated with the tip enrichment, which adds four extra DOFs per direction (8 in total in a 2D domain) for each tip inside an element. The first term on the right-hand side (RHS) of Eq. \eqref{XFEM_formula} is the contribution of the classical finite element method. This term captures the smooth deformation, using classical shape functions. The second term represents the contribution of the jump enrichment. Note that the jump enrichment is modeled by weighted Heaviside functions, with the weights being the classical shape functions. There are as many jump enrichment functions as there are fractures inside an element. Finally, the third term on the RHS is the contribution of the fracture tips. Note that if several fracture tips end up in an element, there will be 4 additional DOFs per tip per direction in that element.
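As a minimal illustration of the branch functions above (a sketch in Python, assuming the standard branch-function set of \cite{Moes1999}; the names are ours), one can evaluate them in the local polar coordinates of the tip and verify which one is discontinuous:
\begin{verbatim}
import numpy as np

def tip_branch_functions(r, theta):
    """The four near-tip branch functions F_l(r, theta).

    Only sqrt(r)*sin(theta/2) is discontinuous across theta = +-pi
    (the crack faces); the other three functions are continuous there.
    """
    sr = np.sqrt(r)
    return np.array([
        sr * np.sin(theta / 2.0),
        sr * np.cos(theta / 2.0),
        sr * np.sin(theta / 2.0) * np.sin(theta),
        sr * np.cos(theta / 2.0) * np.sin(theta),
    ])

# jump of the branch functions across the crack faces at r = 1:
eps = 1e-9
jump = (tip_branch_functions(1.0, np.pi - eps)
        - tip_branch_functions(1.0, -np.pi + eps))
print(jump)   # ~[2, 0, 0, 0]: only the first function jumps
\end{verbatim}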
The resulting linear system entails the nodal displacement unknowns $u$, as well as the jump levels $a$ and tip weights $b$ per fracture (and fault). The augmented XFEM linear system $K^h d^h = f^h$, therefore, reads \begin{gather}\label{XFEM_system} \underbrace{\begin{bmatrix} \overline{K}_{uu} & \overline{K}_{ua} & \overline{K}_{ub} \\ \overline{K}_{au} & \overline{K}_{aa} & \overline{K}_{ab} \\ \overline{K}_{bu} & \overline{K}_{ba} & \overline{K}_{bb} \end{bmatrix}}_{K^h} \underbrace{ \begin{bmatrix} \overline{u} \\ \overline{a} \\ \overline{b} \end{bmatrix}}_{d^h} = \underbrace{ \begin{bmatrix} \overline{f}_u \\ \overline{f}_a \\ \overline{f}_b \end{bmatrix}}_{f^h}. \end{gather} Compared to the classical FEM, several additional blocks appear in the stiffness matrix, due to the existence of the discontinuities. The advantage of XFEM is that it does not rely on complex mesh geometry; instead, it allows fractures to overlap with the matrix elements. On the other hand, for geoscientific fractured systems, the additional DOFs introduced by the enrichment procedure result in excessive computational costs. This poses a significant challenge for applying XFEM to geoscience problems. In this paper, we develop a scalable multiscale procedure which constructs a coarse-scale system based on locally supported basis functions. The method is described in the next section. \section{Multiscale Extended Finite Element Method (MS-XFEM)} A multiscale formulation provides an approximate solution $u'^h$ to the fine-scale XFEM deformation $u^h$ through \begin{equation} \label{MS-XFEM_formula} u^h \approx u'^h = \sum_{i\in \Omega^H} N^H_i u^H_i, \end{equation} where $N^H_i$ are the coarse-scale (multiscale) basis functions and $u^H_i$ are the coarse-scale nodal displacements on the coarse mesh $\Omega^H$. Note that this multiscale formulation does not include any enrichment functions; instead, all enrichment functions are incorporated in the construction of accurate coarse-scale basis functions $N^H$. This allows for a significant reduction in computational complexity, and makes the entire formulation attractive for field-scale geoscientific applications. Next, the construction of the coarse-scale system and the basis functions is presented. \subsection{Coarse scale linear system} MS-XFEM solves the linear deformation system on a coarse mesh, imposed on a given fine-scale mesh, as shown in figure \ref{MS_XFEM_mesh}. The coarsening ratio is defined as the ratio between the coarse mesh size and the fine-scale mesh size. \begin{center} \centering \includegraphics[trim={5cm 17cm 5cm 2.9cm}, clip, width=0.5\textwidth]{4.pdf} \captionof{figure}{Illustration of the multiscale mesh imposed on the given fine-scale mesh, with a coarsening ratio of $3\times3$.} \label{MS_XFEM_mesh} \end{center} The multiscale formula \eqref{MS-XFEM_formula} can be algebraically expressed as \begin{equation}\label{approximate_MSXFEM} u^h \approx u'^h = \mathbf{P} \ d^H, \end{equation} where $\mathbf{P}$ is the matrix of basis functions (i.e., the prolongation operator) and $d^H$ is the coarse-scale deformation vector for the $u^H$ unknowns. The algebraic formulation allows for convenient implementation of the proposed MS-XFEM, and for its integration as a black-box tool with any given classical XFEM solver. Therefore, the remainder of the article is devoted to this formulation. The coarse-scale solution $d^H$ needs to be found by solving a coarse-scale system.
To construct the coarse-scale system and solve for $d^H$, one has to restrict (map) the fine-scale linear system ($K^h d^h=f^h$) to the coarse scale, i.e., \begin{equation} \underbrace{(\mathbf{R} \ K^h \ \mathbf{P})}_{K^H} \ d^H = \mathbf{R} \ f^h. \end{equation} Here, $\mathbf{R}$ is the restriction operator, of size $\Omega^H \times \Omega^{h+j+t}$, where $\Omega^{h+j+t}$ is the size of the fine-scale enriched XFEM system, including the jump and tip enrichments. The prolongation operator $\mathbf{P}$ has dimension $\Omega^{h+j+t} \times \Omega^H$. This results in a coarse-scale system matrix $K^H$ of size $\Omega^H \times \Omega^H$. The finite-element-based restriction operator is introduced as the transpose of the prolongation matrix, i.e., \begin{equation} \mathbf{R} = \mathbf{P}^T. \end{equation} Therefore, the coarse-scale matrix $K^H$ is symmetric positive definite (SPD) if $K^h$ is SPD. Once the coarse-scale system is solved on the $\Omega^H$ space for $d^H$, one can find the approximate fine-scale solution using Eq. \eqref{approximate_MSXFEM}. Overall, the multiscale procedure can be summarised as finding an approximate solution $d'^h$ according to \begin{equation}\label{multiscale_algebraic_finalexp} d^h \approx d'^h = \mathbf{P} d^H = \mathbf{P} (\mathbf{R} \ K^h \ \mathbf{P})^{-1} \mathbf{R} \ f^h. \end{equation} Next, the prolongation operator $\mathbf{P}$, i.e., the basis functions, is explained in detail. Once $\mathbf{P}$ is known, all terms in Eq. \eqref{multiscale_algebraic_finalexp} are defined. \subsection{Construction of multiscale basis functions} To obtain the basis functions, the governing equation \eqref{displacementEq} without any source term is solved using XFEM, i.e., system \eqref{XFEM_system}, in each coarse element $\Omega^H$. This can be expressed as solving \begin{equation}\label{basis_c_1} \nabla\cdot(C:(\nabla^{S} N_i^{H}))=0 \quad \text{in} \ \ \Omega^H, \end{equation} subject to local boundary conditions. Here, a reduced-dimensional equilibrium equation is solved for the boundary cells \cite{Nicola_2019,Sokolova2019}, i.e., \begin{equation}\label{basis_c_2} \nabla_{\parallel}\cdot(C_r:(\nabla_{\parallel}^{S} N_i^{H}))=0 \quad \text{on} \ \ \Gamma^H. \end{equation} Here, $\Gamma^H$ denotes the boundary cells of the coarse element $\Omega^H$. In addition, $\nabla_{\parallel}\cdot$ and $\nabla_{\parallel}^{S}$ denote the reduced-dimensional divergence and symmetric gradient operators, which act parallel to the direction of the local domain boundary. For 2D geometries, the reduced-dimensional boundary condition represents a 1D (rod) deformation model along the coarse element edges. Note that the local basis functions also involve transverse equilibrium; therefore the prolongation matrix $\mathbf{P}$ reads \begin{gather} \mathbf{P} = \begin{bmatrix} {P}_{xx} & {P}_{xy} \\ P_{yx} & P_{yy} \end{bmatrix}.
\end{gather} \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={5cm 16.5cm 5cm 2.9cm}, clip, width=\textwidth]{5.pdf} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={5cm 16.5cm 5cm 2.9cm}, clip, width=\textwidth]{6.pdf} \caption{} \end{subfigure} \caption {Illustration of the multiscale local basis functions, constructed using XFEM for the node $H$ in the x direction (a) and the y direction (b).}\label{basis_illustrate_1} \end{figure} Figure \ref{basis_illustrate_1} shows an example of a local system to be solved for the basis functions belonging to the highlighted node $H$ in the x and y directions. Note that a Dirichlet value of 1 is set at $H$ for each directional basis function, while the other 3 coarse mesh nodes are set to 0.\\ Note that, as shown in Fig. \ref{basis_illustrate_1}, the boundary problem is solved on both edges which have the node $H$ as one of their endpoints. More precisely, e.g., to find the basis function in the x direction for node $H$, we set the value $u_x (H) = 1$ at the location of $H$. This causes extension of the horizontal boundary cells and bending of the vertical boundary. \\ Once the boundary values are found, the internal cells are solved subject to Dirichlet values at the boundary cells. An illustration of a basis function obtained using this algorithm is presented in figure \ref{basis_1}. \begin{center} \centering \includegraphics[trim={4cm 17cm 4cm 3cm}, clip, width=0.7\textwidth]{7.pdf} \captionof{figure} {Illustration of a basis function that captures the discontinuity of a fracture. The yellow segment represents the fracture.}\label{basis_1} \end{center} Note that the illustrated basis function captures the fractures, because of the XFEM enrichment procedure. The basis function $N^H_i$ is stored in column $i$ of the prolongation operator $\mathbf{P}$. Once all basis functions are found, the operator $\mathbf{P}$ is known and one can proceed with the multiscale procedure as explained before. Next, we explain how the basis functions can be algebraically computed based on the given XFEM fine-scale system. This crucial step allows for convenient integration of our multiscale method into a given XFEM simulator. \subsection{Algebraic construction of multiscale basis functions} The basis function formulation \eqref{basis_c_1}, subject to the local boundary condition \eqref{basis_c_2}, can be constructed and solved purely algebraically. This is important, since it allows for convenient integration of the devised multiscale method into any existing XFEM simulator. Consider the coarse cell (local domain) shown in figure \ref{localdomain_ill}. The cells are split into the 3 categories of internal, edge and vertex (node) cells, depending on their locations \cite{YWang2014}. \begin{center} \centering \includegraphics[trim={5cm 17cm 5cm 4cm}, clip, width=0.6\textwidth]{natural_order.pdf} \captionof{figure} {Illustration of the 3 categories of Internal, Edge, and Vertex cells, corresponding to the position of each fine cell within a coarse element.}\label{localdomain_ill} \end{center} Note that the vertex nodes are in fact the coarse mesh nodes, where the coarse-scale solution will be computed. The basis functions are needed to interpolate the solution from the vertex cells through the edge and internal cells; a minimal sketch of this classification on a structured grid is given below.
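The following sketch (in Python; a structured 2D grid with uniform coarsening is assumed, and the function name is ours, purely for illustration) classifies fine-grid nodes into the three categories:
\begin{verbatim}
def classify_fine_nodes(nx, ny, cr):
    """Label the (nx+1) x (ny+1) fine-grid nodes as 'V' (vertex),
    'E' (edge) or 'I' (internal) for a coarsening ratio cr.

    Assumes nx and ny are divisible by cr. A node on coarse grid lines
    in both directions is a vertex; in exactly one direction, an edge
    cell; otherwise it is internal to a coarse element.
    """
    labels = {}
    for j in range(ny + 1):
        for i in range(nx + 1):
            on_x, on_y = (i % cr == 0), (j % cr == 0)
            if on_x and on_y:
                labels[(i, j)] = 'V'
            elif on_x or on_y:
                labels[(i, j)] = 'E'
            else:
                labels[(i, j)] = 'I'
    return labels

# example: 6x6 fine mesh with coarsening ratio 3 -> 3x3 = 9 vertex nodes
lab = classify_fine_nodes(6, 6, 3)
print(sum(1 for v in lab.values() if v == 'V'))   # -> 9
\end{verbatim}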
To develop the basis functions, the fine-scale stiffness matrix $K^h$ is first permuted, such that the terms for the internal, then the edge and finally the vertex cells appear. The permutation operator $\mathbf{T}$ accordingly reorders $K^h$ into $\Breve{K}^v$ such that \begin{gather} \Breve{K}^v = \mathbf{T} K^h \mathbf{T}^T = \mathbf{T} \ \begin{bmatrix} {K}_{uu} & {K}_{ua} & {K}_{ub} \\ {K}_{au} & {K}_{aa} & {K}_{ab} \\ {K}_{bu} & {K}_{ba} & {K}_{bb} \end{bmatrix} \ \mathbf{T}^T = \begin{bmatrix} {K}_{II} & {K}_{IE} & {K}_{IV} \\ {K}_{EI} & {K}_{EE} & {K}_{EV} \\ {K}_{VI} & {K}_{VE} & {K}_{VV} \end{bmatrix}. \end{gather} Here, $I$ represents the internal nodes, $E$ the edge nodes and $V$ the vertex nodes. The permuted linear system, therefore, reads \begin{gather} \begin{bmatrix} {K}_{II} & {K}_{IE} & {K}_{IV} \\ {K}_{EI} & {K}_{EE} & {K}_{EV} \\ {K}_{VI} & {K}_{VE} & {K}_{VV} \end{bmatrix} \begin{bmatrix} {d}_I \\ {d}_E \\ {d}_V \end{bmatrix} = \begin{bmatrix} {f}_I \\ {f}_E \\ {f}_V \end{bmatrix}. \end{gather} Note that the permuted system collects all entries of the XFEM discrete system belonging to the I, E, and V cells. Therefore, the XFEM enrichment entries due to tips and jumps are contained within their corresponding I, E, and V entries. The reduced-dimensional boundary condition is now imposed by replacing the 2D equations for the edge (E) cells by a 1D XFEM discrete system. This causes the entry ${K}_{EI}$ to vanish, as there is no connectivity between the edge and internal cells in the reduced edge equations. These 1D edge equations can then be expressed as \begin{equation} {K}_{EE}^R {d}_E+{K}_{EV}^R {d}_V = 0. \end{equation} Knowing that the solution at the vertex cells will be obtained from the coarse-scale system, the reordered fine-scale system matrix can now be reduced to \begin{gather}\label{reduced_bs} \begin{bmatrix} {K}_{II} & {K}_{IE} & {K}_{IV} \\ 0 & {K}_{EE}^R & {K}_{EV}^R \\ 0 & 0 & I_{VV} \end{bmatrix} \begin{bmatrix} {d'}_I \\ {d'}_E \\ {d'}_V \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}. \end{gather} Note that the equations for the basis functions do not have any source terms on their right-hand side. The upper-triangular matrix of Eq. \eqref{reduced_bs} can be easily inverted to give the prolongation operator: given the coarse node solutions $d'_V$, one can obtain the solution at the edges via \begin{equation} {d'}_E = - (K^R_{EE})^{-1} {K^R}_{EV} \ {d}_V' = P_E \ {d}_V'. \end{equation} Similarly, the solution at the internal cells reads \begin{align} {d'}_I &= - K_{II}^{-1} (K_{IE} d'_E + K_{IV} d'_V) \nonumber\\ &= - K_{II}^{-1} (- K_{IE} (K^R_{EE})^{-1} {K^R}_{EV} + K_{IV}) \ d'_V = P_I \ {d}_V'. \end{align} Note that $P_E$ and $P_I$ are the sub-matrices of the prolongation operator, i.e., \begin{equation} d' = \begin{bmatrix} d'_I\\ d'_E\\ d'_V \end{bmatrix} = \underbrace{ \begin{bmatrix} - K_{II}^{-1} (- K_{IE} (K^R_{EE})^{-1} {K^R}_{EV} + K_{IV})\\ - (K^R_{EE})^{-1} {K^R}_{EV}\\ I_{VV} \end{bmatrix}}_{\mathbf{P}} \ d'_V. \end{equation} Here, $I_{VV}$ is the identity matrix whose size equals the number of vertex nodes. After defining the prolongation operator algebraically, based on the entries of the 2D XFEM (for the internal cells) and the 1D XFEM (for the edge cells), one can find the multiscale solution; a compact sketch of this construction is given below.
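In compact form, the construction above amounts to a few block solves. The sketch below (Python with NumPy; dense blocks and our own function names are assumed, purely for illustration) assembles $\mathbf{P}$ from the permuted blocks and applies Eq. \eqref{multiscale_algebraic_finalexp}:
\begin{verbatim}
import numpy as np

def build_prolongation(K_II, K_IE, K_IV, KR_EE, KR_EV):
    """Assemble P = [P_I; P_E; I_VV] in the permuted (I, E, V) ordering.

    KR_EE and KR_EV are the blocks of the reduced 1D edge equations;
    the vertex block of P is the identity, since d'_V is obtained
    directly from the coarse-scale system.
    """
    P_E = -np.linalg.solve(KR_EE, KR_EV)              # edge response to d'_V
    P_I = -np.linalg.solve(K_II, K_IE @ P_E + K_IV)   # internal response
    I_VV = np.eye(KR_EV.shape[1])
    return np.vstack([P_I, P_E, I_VV])

def multiscale_solve(K, f, P):
    """MS-XFEM solution d' = P (R K P)^{-1} R f with R = P^T.

    K and f must use the same permuted (I, E, V) ordering as P.
    """
    R = P.T
    d_H = np.linalg.solve(R @ K @ P, R @ f)           # coarse-scale solve
    return P @ d_H                                    # prolong to fine scale
\end{verbatim}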
\subsection{Iterative multiscale procedure (iMS-XFEM)} The multiscale solution with the accurate XFEM basis functions can provide an efficient approximate solution for many practical applications. However, it is important to be able to control the error and reduce it to any desired tolerance \cite{Multiscale2_Hadi} if needed. As such, the MS-XFEM is paired with a fine-scale smoother (here, ILU(0) \cite{Chow1997}) to allow for error reduction. Note that this iterative procedure can also be used within a GMRES iterative loop \cite{Saad86} to enhance convergence rates. The study of the most efficient iterative strategy to reduce the error is outside the scope of the current paper. The iterative procedure reads (a sketch is given after this list): \begin{itemize} \item Construct the $\mathbf{P}$ and $\mathbf{R}$ operators \item Iterate until $||r^{\nu+1}|| = ||f^h - K^h d'^{\nu+1}|| < e_r$: \begin{itemize} \item [1.] MS-XFEM stage: $\delta{d'}^{\nu+1/2} = \mathbf{P} (\mathbf{R} K^h \mathbf{P})^{-1} \mathbf{R} \ r^{\nu} $ \item [2.] Smoothing stage (apply ILU(0) $n_s$ times): $\delta{d'}^{\nu+1} = ({M^{n_s}_{\text{ILU(0)}}})^{-1} \ r^{\nu+1/2}$, where $r^{\nu+1/2}$ is the residual after the first stage \end{itemize} \end{itemize} Note that $n_s$ is defined by the user.
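A minimal sketch of this two-stage loop (Python with SciPy; SciPy's threshold-based incomplete LU is used here merely as a stand-in for ILU(0), and all names are ours) could read:
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ims_xfem(K, f, P, n_s=3, tol=1e-10, max_iter=100):
    """Two-stage iMS-XFEM: multiscale correction + n_s smoothing steps.

    K: sparse fine-scale XFEM matrix, f: RHS, P: dense prolongation.
    No GMRES acceleration is applied, as in the plain procedure above.
    """
    R = P.T
    coarse = spla.splu(sp.csc_matrix(R @ (K @ P)))   # factorize R K P once
    smoother = spla.spilu(sp.csc_matrix(K), fill_factor=1.0)  # ~ ILU(0)
    d = np.zeros(K.shape[0])
    for _ in range(max_iter):
        r = f - K @ d
        if np.linalg.norm(r) < tol:
            break
        d = d + P @ coarse.solve(R @ r)              # stage 1: MS-XFEM
        for _ in range(n_s):                         # stage 2: smoothing
            d = d + smoother.solve(f - K @ d)
    return d
\end{verbatim}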
\section{Numerical Test Cases} In this section, several test cases are considered to investigate the performance of MS-XFEM, both as an approximate solver and integrated within the iterative error-reduction procedure. \subsection{Test case 1: Single fracture in a heterogeneous domain} In the first test, a square 2D domain of $L \times L$ with $L=10$ [m] is considered, which contains a single horizontal fracture in its centre, as shown in figure \ref{Test1_f1}a. The fine-scale mesh consists of $40\times 40$ cells, while the MS-XFEM uses only $5\times 5$ coarse grid cells. This results in a coarsening ratio of 8 in each direction. The heterogeneous Young's modulus distribution is shown in figure \ref{Test1_f1}b, while the Poisson's ratio is assumed to be constant, 0.2, everywhere. The fracture tip coordinates are shown in figure \ref{Test1_f1}a. The Dirichlet boundary condition is set at the south face, while the north boundary is under a distributed upward load of magnitude $q = 5 \times 10^{5}$ [N/m]. \begin{figure}[H] \centering \begin{subfigure}{0.4\textwidth} \centering \includegraphics[trim={6cm 18cm 6cm 4cm}, clip, width=\textwidth]{test1setup.pdf} \caption{} \end{subfigure} \begin{subfigure}{0.4\textwidth} \centering \includegraphics[trim={4cm 8cm 4cm 8cm}, clip, width=\textwidth]{test1Emap.pdf} \caption{} \end{subfigure} \caption {Test case 1: (a) illustration of the model setup, (b) heterogeneous Young's modulus distribution. Note the units are SI.}\label{Test1_f1} \end{figure} Results are shown in figure \ref{Test1_f1_r}. The black lines in figures \ref{Test1_f1_r} (b) and (c) represent the coarse-scale mesh. It is clear that the MS-XFEM result on only $5 \times 5$ grid cells is in reasonable agreement with that of the fine-scale XFEM solver using a $40 \times 40$ mesh. Note that no enrichment is used for the MS-XFEM at the coarse scale, and the basis functions are computed using the XFEM method on local domains. \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={5cm 9cm 4cm 8cm}, clip, width=0.8\textwidth]{test1fine.pdf} \caption{} \end{subfigure}\\ \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={5cm 9cm 4cm 8cm}, clip, width=0.8\textwidth]{test1MS.pdf} \caption{$||e_y||=2.6512 \times 10^{-4}$} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={5cm 9cm 4cm 8cm}, clip, width=0.8\textwidth]{test1MS_3.pdf} \caption{$||e_y||=1.0172 \times 10^{-4}$} \end{subfigure} \caption {Test Case 1: displacement field for a heterogeneous fractured reservoir using (a) fine-scale XFEM, (b) MS-XFEM, and (c) iMS-XFEM after 3 iterations.}\label{Test1_f1_r} \end{figure} A basis function for a fractured local domain is illustrated in figure \ref{Test1_basis}. Note that the discontinuity is captured by the basis functions, since XFEM is used to solve for them. \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 17cm 4cm 3cm}, clip, width=0.8\textwidth]{Pxx_MSXFEM1.pdf} \caption{} \end{subfigure}% \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 17cm 4cm 3cm}, clip, width=0.8\textwidth]{Pyx_MSXFEM1.pdf} \caption{} \end{subfigure} \caption {Basis functions of the single-fracture test case. The single discontinuity is captured by the axial equilibrium and transverse equilibrium solutions.}\label{Test1_basis} \end{figure} The effect of the coarsening ratio is shown in figure \ref{Test1_cr}. The error $e$ in this figure is computed as \begin{equation} e_i=\frac{||u_{i,MS}-u_{i,f}||_2}{N}, \ \ \ \forall i\in \{x,y\}, \end{equation} where $N$ is the number of fine-scale mesh nodes, and $u_{i,MS}$ and $u_{i,f}$ denote the prolonged MS-XFEM displacement field and the fine-scale displacement field, respectively. \begin{center} \centering \includegraphics[trim={4cm 8cm 4cm 8cm}, clip, width=0.5\textwidth]{errorsratio1.pdf} \captionof{figure} {Change of errors with different coarsening ratios}\label{Test1_cr} \end{center} The MS-XFEM errors are due to the local boundary conditions used to calculate the basis functions, and also because no additional enrichment functions are imposed at the coarse scale. Note that this means that, for heterogeneous domains, there can be intermediate coarse-cell resolutions at which the local boundary conditions impose more error than coarser resolutions. In spite of this, figure \ref{Test1_cr} clearly shows, for this example, a decaying trend of the error as the coarse mesh is refined. \subsubsection{Iterative MS-XFEM} As discussed in section 3.4, one can employ the MS-XFEM in an iterative strategy in which the error is reduced to any desired level \cite{YWang2014}. Figure \ref{Test1_f1_r} (c) shows that, with 3 fine-scale smoothing steps applied in the second stage, after 3 iterations the MS-XFEM result has moved much closer to the fine-scale result and the error has decreased significantly. Results of the iterative MS-XFEM procedure (iMS-XFEM) are shown in figure \ref{Test1_imsxfem}. Different numbers of smoothing steps per iteration $n_s$ are used. Note that neither GMRES \cite{Saad86} nor any other convergence-enhancing iterative procedure is used here. Clearly, one can reduce the multiscale errors to machine accuracy by applying iMS-XFEM iterations.
In particular, for practical applications, one can stop the iterations after a few counts, once the error norm is below the level of uncertainty $\tau$ within the parameters of the problem, i.e., \begin{equation} e_{i}\leqslant\tau, \ \ \ i\in\{x,y\}, \end{equation} where $\tau$ is chosen as $10^{-10}$ here.\\ Figure \ref{Test1_imsxfem} shows that convergence is achieved with $n_s$ rounds of the fine-scale smoother applied in the second stage.\\ \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 8cm 4cm 9cm}, clip, width=\textwidth]{error_x_iters.pdf} \caption{} \end{subfigure} \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 8cm 4cm 9cm}, clip, width=\textwidth]{error_y_iters.pdf} \caption{} \end{subfigure} \caption {Iteration history of the iMS-XFEM procedure with different numbers of smoothing steps per iteration. Errors for the displacement in the x (a) and y (b) directions are shown.}\label{Test1_imsxfem} \end{figure} \subsection{Test case 2: heterogeneous reservoir with multiple fractures} The second test case models deformation in a heterogeneous reservoir with more fractures. The size and the heterogeneous properties of this test case are the same as those in test case 1; here, more fractures are considered. In addition, compared to test case 1, the east and west boundaries also carry distributed loads, as shown in figure \ref{test2_1}. \begin{center} \centering \includegraphics[trim={6cm 18cm 6cm 4cm}, clip, width=0.5\textwidth]{test2setup.pdf} \captionof{figure} {Test case 2: Multiple fractures within a heterogeneous reservoir under tensile stress on three boundaries.}\label{test2_1} \end{center} Simulation results for both fine-scale XFEM and MS-XFEM are shown in figure \ref{test2_2}. The black lines in figures \ref{test2_2} (b) and (c) represent the coarse-scale mesh. It is clear that the MS-XFEM (without iterations) results in a relatively accurate representation of the deformation field, compared with the fine-scale XFEM, using $8 \times 8$ times fewer grid cells and no coarse-scale enrichment functions. \begin{figure}[H] \centering \begin{subfigure}{0.33\textwidth} \centering \includegraphics[trim={5cm 9cm 4cm 8cm}, clip, width=0.9\textwidth]{test2fine.pdf} \caption{} \end{subfigure}\\ \begin{subfigure}{0.33\textwidth} \centering \includegraphics[trim={5cm 9cm 4cm 8cm}, clip, width=0.9\textwidth]{test2MS.pdf} \caption{$||e_y||=1.5 \times 10^{-3}$} \end{subfigure} \begin{subfigure}{0.33\textwidth} \centering \includegraphics[trim={5cm 9cm 4cm 8cm}, clip, width=0.9\textwidth]{test2MS_3.pdf} \caption{$||e_y||=4.0566 \times 10^{-4}$} \end{subfigure} \caption {Test Case 2: displacement field for a heterogeneous medium with multiple fractures using (a) fine-scale XFEM, (b) MS-XFEM without the iterative strategy, and (c) iMS-XFEM after 3 iterations.}\label{test2_2} \end{figure} An example of two basis functions for this test case is shown in figure \ref{test2_3}. The local plots illustrate how 1 (\ref{test2_3}a) and 2 (\ref{test2_3}b) fractures are captured by the basis functions.
\begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 17cm 4cm 3cm}, clip, width=0.8\textwidth]{Pyy_MSXFEM21.pdf} \caption{} \end{subfigure}% \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 17cm 4cm 3cm}, clip, width=0.8\textwidth]{Pyy_MSXFEM22.pdf} \caption{} \end{subfigure} \caption {Illustration of the basis functions $P_{xx}$ for two different parts of the domain, where two (a) and one (b) discontinuities are captured.}\label{test2_3} \end{figure} The iMS-XFEM procedure, as explained before, is now applied to reduce the multiscale errors to machine precision. As already visible in figure \ref{test2_2} (c), the result quality improves considerably after 3 iterations with 3 fine-scale smoothing steps applied in the second stage. Note that neither GMRES nor a complete smoother is used here; only the incomplete smoother ILU(0) is applied, for its efficiency. Results are shown in figure \ref{test2_4}. \begin{figure}[H] \centering \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 8cm 4cm 9cm}, clip, width=\textwidth]{test2_errx_iters.pdf} \caption{} \end{subfigure}% \begin{subfigure}{0.49\textwidth} \centering \includegraphics[trim={4cm 8cm 4cm 9cm}, clip, width=\textwidth]{test2_erry_iters.pdf} \caption{} \end{subfigure} \caption {Iteration history of the iMS-XFEM procedure with different numbers of smoothing steps per iteration. Errors for the displacement in the x (a) and y (b) directions are shown.}\label{test2_4} \end{figure} \section{Conclusion}\label{sec:conclusion} A multiscale procedure for XFEM is proposed to model the deformation of heterogeneous fractured geological fields. The method resolves the discontinuities through local multiscale basis functions, which are computed using XFEM subject to local boundary conditions. The coarse-scale system is obtained algebraically using the basis functions and, in contrast to the local basis-function systems, it does not involve any additional enrichment functions. This makes the MS-XFEM procedure very efficient. Also, by combining it with a fine-scale smoother, an iterative MS-XFEM (iMS-XFEM) procedure is developed, which allows the error to be reduced to any desired level of accuracy. Two heterogeneous test cases were studied as proof-of-concept, to investigate the performance of the MS-XFEM. It was shown that MS-XFEM results in acceptable solutions when no iterations are applied. By applying iterations, one can further improve the results. For practical applications, when parameters are uncertain, only a few iterations may be applied to maintain (and control) the quality of the MS-XFEM solution. \section*{Acknowledgements} Fanxiang Xu is sponsored by the Chinese Scholarship Council (CSC). The authors acknowledge Yaolu Liu of TU Delft and all members of the Delft Advanced Reservoir Simulation (DARSim) and ADMIRE research groups for fruitful discussions.
\section{Introduction} With rapid progress in various quantum simulator platforms, the study of quantum many-body states in noisy intermediate-scale quantum (NISQ) platforms has emerged as an active area of research, attracting growing interest from both the condensed matter and quantum information communities. In particular, various exotic quantum states have been realized in these platforms~\cite{scar2017, earlyToric, SPT2019, KZ2019, googleToric, SL2021, TimeCrystal}, demonstrating the possibility of realizing and manipulating quantum matter in near-term quantum devices. Among them, symmetry-protected topological (SPT) \cite{Gu0903.1069, Chen1008.3745, Pollmann0909.4059} phases are of great importance, since SPT states provide a class of non-trivially entangled quantum systems that can be realized in NISQ platforms, and they are also relevant to measurement-based quantum computation and state preparation~\cite{1Dcluster_GHZ, 2Dcluster, 2Dcluster_toric, 3dCluster_fracton1, 3dCluster_fracton2, Stephen2017, Raussendorf2019, NatMeasurement, NatRydberg,CSScode, CSScode2, ClusterCSS, Lu2022, Lee2022, Zhu2022, nonAbelianNat, nonAbelianNat2}. However, despite the fact that quantum decoherence of various types inevitably occurs in nature, most theoretical studies have focused on \emph{pure} SPT states, with only a few exceptions~\cite{statTI, statTI2, MaWang2022}. Therefore, it is timely to ask which defining features of this highly interesting class of topological quantum phases can still be identified in the presence of decoherence. It is commonly accepted that SPT states usually exhibit \emph{trivial} bulk features, in the sense that properties such as correlations between local order parameters are indistinguishable from those of trivial quantum disordered states. Since the defining symmetries of the SPT states are preserved, there are no bulk observables such as a macroscopic magnetization. As a result, characteristic features of SPT states are often understood in terms of nontrivial boundary physics, or nontrivial quantum numbers carried by symmetry defects. However, it is not easy to directly extract this information from the bulk ground state wave function of an SPT phase. In particular, when the defining symmetry of the SPT state involves spatial symmetries, an arbitrary open boundary may break the spatial symmetry and render the boundary state trivial, unless the boundary is carefully designed to preserve all the spatial symmetries. In order to overcome these complexities, the notion of the \emph{strange correlator} was proposed~\cite{YouXu2013} to diagnose nontrivial SPT properties using just the bulk ground state wave function, without reference to either boundaries or defects. In analogy with a Wick-rotated correlator evaluated in imaginary space-time at the interface against a trivial disordered state, the strange correlator, though defined purely from the bulk wave function, must be ``nontrivial'', i.e., it must either saturate to a constant or decay as a power law for $1d$ or $2d$ SPT states. As we will demonstrate in this work, the strange correlator can also be viewed as a general ``order parameter'' for the $1d$ and $2d$ SPT phases, analogous to the well-known string order parameter of the Haldane phase~\cite{SOP1989, SOP2008}. Hence the behavior of the strange correlator can be regarded as a defining feature of an SPT state.
The strange correlator has been adopted as a useful tool in both conceptual understanding and numerical studies of SPT phases, for both bosonic and fermionic systems~\cite{scwierschem1,scwierschem2,scwierschem3,sczohar1,scmeng1,sczohar2,scmeng2,scwei,scmeng3,scwierschem4,sczhong,scscaffidi,scfrank1,scfrank2,scfrank3,scfrank4,scfrank5,schsieh,scfan,wu2020,scsagar,scmeng4}. \begin{figure}[!t] \centering \includegraphics[width = 0.45 \textwidth]{SC_pathintegral.pdf} \caption{\label{fig:sc_pathintegral} Schematic representations of: (a) The density matrix of an SPT state under decoherence, viewed as an imaginary-time path integral. The two boundaries in the temporal direction are decoupled for a pure state; decoherence acts as a coupling between the two boundaries, collapsing the doubled symmetry down to its diagonal subgroup. (b) Under the Wick rotation $(x,\tau) \rightarrow (\tilde{x},\tilde{\tau})$, the two boundaries along the temporal direction $\tau$ become two opposite boundaries along the spatial direction $\tilde{x}$; decoherence then acts as a perturbation connecting the two spatial boundaries. (c) The type-I strange correlator, where the blue squares corresponding to charged operators (order parameters) are present only at the $\tau = -\infty$ boundary. The grey (white) sheet corresponds to the path integral formulation of the SPT (trivial) state. (d) The type-II strange correlator, where charged operators are present at both $\tau = \pm \infty$ boundaries. } \end{figure} In any experimental implementation and transmission of an entangled quantum state, the state experiences a certain amount of decoherence that may erase characteristic signatures defined for a pure state, and the key information of the original quantum state then needs to be \emph{decoded} by the receiving end of the process. In this work, we exploit the idea of strange correlators to illustrate that SPT states under decoherence can still retain nontrivial topological information that is, in principle, decodable. Nature gives quantum systems a natural mechanism of decoherence: thermalization. At thermal equilibrium, all degrees of freedom of the system experience decoherence whose strength depends on the temperature, and such ``massive'' thermal decoherence drives SPT phases with on-site symmetries to be trivial~\cite{SPT_finiteT_2017}. Instead, we will consider ``selective'' decoherence of certain degrees of freedom of the system, and we will demonstrate that this exposes an extremely rich set of possibilities for physics under decoherence. With decoherence, the quantum state of interest becomes an \emph{ensemble} described by a density matrix. In the density matrix formalism, two types of strange correlators naturally emerge, which we denote as ``type-I'' and ``type-II'' strange correlators, \jyl{whose graphical illustration is shown in \figref{fig:sc_pathintegral}(c,d)}. In the pure state limit, the former is essentially identical to the original definition of the strange correlator~\cite{YouXu2013}, while the latter is the squared version of the original strange correlator. From both exact lattice model and field-theoretic calculations, we will show that the type-II strange correlator still retains the information of an SPT state under certain selective decoherence; hence the type-II strange correlator may serve as \jyl{a tool to decode the mixed state originating from an SPT state sent through a noisy quantum channel}, and also as a general method of classifying mixed state density matrices.
We also find that for some SPT states, the type-I strange correlator can retain SPT features below a critical decoherence strength, above which the type-I strange correlator becomes short-ranged and decays with an ``area law''. Taking further steps, we show that nontrivial behaviors of type-I strange correlators can be probed in experiments. Since the strange correlator is defined as an operator matrix element between two different states, it is experimentally challenging to measure, and it has mostly been used as a conceptual notion and a numerical tool to diagnose SPT phases in the past. However, we show that for a broad class of SPT states with decorated defect constructions, a \emph{marginalized} version of the type-I strange correlator can be experimentally probed using measurements and additional classical computational steps~\cite{Lee2022}. More interestingly, we reveal that this marginalized type-I strange correlator provides a unifying framework to understand the non-local order parameters of an SPT state and the feasibility of preparing a long-range entangled quantum state by measuring the SPT state. Finally, the type-II strange correlator can in principle be probed by fidelity estimation using randomized measurements~\cite{Ohliger1204.5735, Elben2203.11374, Notarnicola2112.11046}. The rest of the paper is organized as follows. In \secref{sec:SC_lattice}, we define the type-I and type-II strange correlators. With exact calculations on stabilizer Hamiltonians, and numerical computations for models away from the exactly soluble limit, we illustrate that under selective decoherence the type-I strange correlator may be short-ranged, while the type-II strange correlator stays nontrivial and retains the memory of an underlying SPT phase. Furthermore, using the example of the 2d cluster state, we show that the type-I strange correlator can stay nontrivial for weak decoherence and undergo a transition into a short-ranged phase at a critical decoherence strength. In \secref{sec:field_theory}, we evaluate both the type-I and type-II strange correlators using effective field theory descriptions of generic $1d$ and $2d$ SPT states. In this formalism, the strange correlators of decohered SPT states in $d$ spatial dimensions reduce to ordinary correlation functions in the space-time of the boundary of the SPT state, with the two opposite boundaries coupled by interactions in the selected channel. In \secref{sec:doubled}, we map a decohered SPT mixed state to a pure state in the doubled Hilbert space using the so-called Choi-Jamiołkowski isomorphism \cite{JAMIOLKOWSKI1972, CHOI1975}. We also argue that the type-II strange correlator can be viewed as a tool to decode the key information of a quantum state transmitted through a noisy channel. In \secref{sec:measure}, we establish the relation between the type-I strange correlators and more general non-local order parameters, such as the string order parameter in the Haldane phase, showing that the type-I strange correlators provide an upper bound for non-local order parameters. We also propose a method to experimentally probe the type-I strange correlator. \section{Strange Correlators} \label{sec:SC_lattice} In this section, we investigate the strange correlators of SPT states defined by stabilizer Hamiltonians subject to decoherence. We remark that any stabilizer SPT with zero correlation length has a corresponding decorated domain wall (defect) construction~\cite{Chen2014}, which facilitates analytic calculations.
\subsection{Basic formalism} Let us first elaborate on the basic formalism used in the paper. Decoherence is the process that incurs the loss of quantum coherence, evolving pure states into mixed states. Throughout this paper, we consider local decoherence models described by a quantum channel $ {\cal E} _i$ in the Kraus representation: \begin{equation} \label{eq:noise_Kraus} {\cal E} _i[\rho] = \sum_m K^{\vphantom{\dagger}}_{m,i} \rho K_{m,i}^\dagger, \end{equation} where $K_{m,i}$ is a Kraus operator with local support in the neighborhood of the $i$-th site, satisfying $\sum_m K_{m,i}^\dagger K^{\vphantom{\dagger}}_{m,i}=1$. The global decoherence model is defined by the composition of local decoherence models, $ {\cal E} = \circ_i {\cal E} _i$. Once we use the density matrix to describe a state that is invariant under a certain symmetry $G$, the density matrix enjoys ``doubled'' symmetry transformations, i.e., the density matrix $\rho = |\Psi\rangle \langle \Psi |$ is invariant under separate ``left'' and ``right'' multiplications of the symmetry transformation, $\rho = g_L \rho g^\dagger_R$ for $g_L, g_R \in G$, since $|\Psi\rangle$ is invariant under $G$. However, the notion of symmetry may be weakened to the density matrix being invariant only under the adjoint action of the symmetry from both left and right, $\rho=g \rho g^\dagger$ (for $g\in G$), which is connected to the average symmetry discussed in \cite{MaWang2022, Kimchi1710.06860}. An average symmetry $G$ was defined in Ref.~\onlinecite{MaWang2022} for a random ensemble $\{\ket{\Psi}\}$ of states subject to a probability distribution $P(\ket{\Psi})$ that is invariant under the symmetry transformation, i.e., $P(g\ket{\Psi})=P(\ket{\Psi})$ for $g\in G$, even though $g\ket{\Psi}\neq\ket{\Psi}$. This directly implies that the density matrix $\rho=\sum_{\Psi} p(\ket{\Psi})\ket{\Psi}\bra{\Psi}$ is only invariant under the adjoint action $\rho=g \rho g^\dagger$ of the average symmetry, as introduced above. For some of the prototypical SPT states considered in this work, the system is defined with two symmetries, $G_A$ and $G_B$, and the ground state wave function $|\Psi\rangle$ is symmetric under both $G_A$ and $G_B$, meaning its density matrix is invariant under separate left and right transformations of $G_A$ and $G_B$. Throughout the work, we often introduce \emph{symmetric} decoherence on degrees of freedom charged under $G_B$, but not $G_A$, which implies that the mixed density matrix under decoherence, denoted by $\rho^D$, is invariant under \emph{simultaneous} left and right transformations of $G_B$, i.e., $\rho^D = g_B \rho^D g_B^{\dagger}$ for $g_B \in G_B$, while $\rho^D$ still remains invariant under separate left and right transformations of $G_A$. \subsection{$1d$ Cluster State} \label{sec:1dcluster} Let $\ket{\Psi}$ be a nontrivial SPT state, and $\ket{\Omega}$ be a trivial disordered state. Then the quantity \begin{equation} \label{eq:strange_basic} C(r) \equiv \frac{\langle \Psi | O(0) O(r) | \Omega \rangle }{\langle \Psi | \Omega \rangle } \end{equation} is called the \emph{strange correlator}, and it is expected to either saturate to a non-zero constant value or decay as a power law for SPT states in $1d$ and $2d$~\cite{YouXu2013}. Here, $O$ is an operator that transforms nontrivially under the symmetries that define the SPT state.
For noninteracting or weakly interacting fermionic topological insulators (TIs) and topological superconductors (TSCs), the strange correlators in higher dimensions should also decay as a power law. It is helpful to write the strange correlator in a slightly different form: $ C(r) \equiv \langle \Psi | \hat{C}(r) | \Psi \rangle$, where \begin{eqnarray} \hat{C}(r) = \frac{1}{|\langle \Psi | \Omega \rangle|^2} \Big[ O(0) O(r) |\Omega\rangle \langle \Omega| \Big]. \end{eqnarray} Hence the strange correlator can be viewed as the expectation value of an ``order parameter'' $\hat{C}(r)$ of $1d$ and $2d$ SPT phases, and the nontrivial behavior of this order parameter, either long-ranged or power-law decaying in its expectation value, can be viewed as a defining feature of an SPT wave function $|\Psi\rangle$. As an example, let us consider a $1d$ cluster state~\cite{1Dcluster_GHZ} with $2N$ sites, defined by the stabilizer Hamiltonian with $\mathbb{Z}_2 \times \mathbb{Z}_2 $ symmetry: \begin{equation} \label{eq:1d_cluster_ham} H = - \sum_i Z_{i-1} X_i Z_{i+1}, \end{equation} where the symmetry action is defined by the product of $X$ on the even/odd sublattices. The periodic boundary condition is assumed, such that the Hamiltonian has a unique SPT ground state without degeneracies arising from boundary modes. \jyl{We remark that under open boundaries, the system has anomalous boundary zero modes (spin-1/2) at each end, whose degeneracy is protected by the symmetries; this is often considered to be a defining feature of the SPT state. } To evaluate the strange correlator, we use the following product state for the trivial disordered state: $\ket{\Omega} = \ket{+}^{\otimes 2N}$ for a $1d$ system with $2N$ sites, where $\ket{+}$ denotes the $X_n=+1$ eigenstate on each site. For the strange correlator of $\mathbb{Z}_2^{\textrm{odd}}$-charged operators separated by $2n$ lattice spacings, we get \begin{align} \label{eq:1d_cluster_even} C_{\textrm{odd}}(2n) &= \frac{\langle \Psi | Z_1 Z_{2n+1} | \Omega \rangle }{\langle \Psi | \Omega \rangle } \nonumber \\ &= \frac{\langle \Psi | \prod_{m=1}^n X_{2m} | \Omega \rangle }{\langle \Psi | \Omega \rangle } = 1, \end{align} where we have used $Z_1\prod_{m=1}^{n}X_{2m}Z_{2n+1}\ket{\Psi}=\ket{\Psi}$ for the SPT state $\ket{\Psi}$. This result is expected from the presence of a spin-1/2 zero mode at the boundary in the Wick-rotated picture of the strange correlator. Similarly, the strange correlators of $\mathbb{Z}_2^{\textrm{even}}$-charged operators $Z_0$ and $Z_{2n}$ take a unit value, where we identify $Z_0 \equiv Z_{2N}$. On the other hand, if we replace $|\Psi\rangle$ with a trivial product state, the strange correlators vanish. Later, in \secref{sec:measure}, we will show that the strange correlator for the stabilizer Hamiltonian discussed here is directly connected to the well-known string order parameter of the Haldane phase~\cite{SOP1989, SOP2008}; therefore, the strange correlator can indeed be viewed as an ``order parameter'' defining the SPT phase. This unit value can also be verified by brute force on a small chain, as in the sketch below.
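The following minimal sketch (Python with NumPy; 0-indexed sites and our own helper names are assumed, purely for illustration) constructs $\ket{\Psi}$ by acting with CZ gates on every bond of $\ket{+}^{\otimes L}$ and evaluates the strange correlator directly:
\begin{verbatim}
import numpy as np
from functools import reduce

L = 8                                        # periodic chain, sites 0..L-1
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

def op_at(O, site):
    """Embed a single-site operator at `site` (site 0 = most significant)."""
    return reduce(np.kron, [O if k == site else I2 for k in range(L)])

omega = reduce(np.kron, [plus] * L)          # trivial state |+...+>

# cluster state |Psi> = prod_{<ij>} CZ_{ij} |+>^L; CZ is diagonal, so we
# accumulate a -1 phase whenever both bits of a bond are 1
phases = np.ones(2 ** L)
for i in range(L):
    j = (i + 1) % L
    for b in range(2 ** L):
        if (b >> (L - 1 - i)) & 1 and (b >> (L - 1 - j)) & 1:
            phases[b] *= -1.0
psi = phases * omega

ZZ = op_at(Z, 0) @ op_at(Z, 4)               # same sublattice, distance 4
print((psi @ ZZ @ omega) / (psi @ omega))    # -> 1.0, as in the text
\end{verbatim}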
\subsection{Decoherence} The strange correlators probe nontrivial information of the SPT state. What happens to the strange correlators if the SPT state is decohered through a noise channel that destroys, say, the $\mathbb{Z}_2^{\textrm{even}}$ symmetry? First of all, we remark that to discuss decoherence, we should use the density matrix formalism. In this case, the strange correlator expression in \eqnref{eq:strange_basic} generalizes as follows: \begin{equation} \label{eq:strange_typeI} C^\textrm{I}(r) \equiv \frac{\tr(\rho_\textrm{spt} O(0) O(r) \rho_0)}{\tr(\rho_\textrm{spt} \rho_0)}, \end{equation} where $\rho_\textrm{spt}$ and $\rho_0$ are the density matrices of the decohered SPT and trivial states, respectively, and the superscript ``$\textrm{I}$'' stands for the ``type-I'' strange correlator. In the pure state limit, the above expression reduces to \eqnref{eq:strange_basic}. Now, consider a noise channel defined as the composition of local noises: \begin{align} \label{eq:noise_basic} &\mathcal{E}_i: \rho \rightarrow (1-p) \rho + p Z_i \rho Z_i, \quad {\cal E} \equiv {\cal E} _2 \circ {\cal E} _4 \circ \cdots \circ {\cal E} _{2N} \nonumber \\ &\Rightarrow {\cal E} [\rho] = \sum_{\bm{\eta}}P(\bm{\eta})\, K^\dagger_{\bm{\eta}} \rho K^{\vphantom{\dagger}}_{\bm{\eta}},\quad K_{\bm{\eta}} \equiv \prod_m Z_{2m}^{\eta_{2m}}, \end{align} where $p$ is the probability of having a dephasing noise locally, ${\bm{\eta}} \equiv (\eta_2, \eta_4, ..., \eta_{2N})$ is a bit-string of $\{0,1\}$ characterizing the $Z$ noise operator $K_{\bm{\eta}}$, and $P(\bm{\eta})=\prod_{m=1}^{N} P(\eta_{2m})$ with $P(0)=1-p$ and $P(1)=p$ is the probability of having the noise $\bm{\eta}$. This noise channel perturbs the system in a way that locally breaks the $\mathbb{Z}_2^{\textrm{even}}$ symmetry along every quantum trajectory. Under this channel, the pure SPT state becomes a mixed state ensemble, denoted by $\rho^D_\textrm{spt} \equiv {\cal E} [\rho_\textrm{spt}]$. Now we evaluate the type-I strange correlator of $\rho^D_\textrm{spt}$ against the trivial state density matrix $\rho_0 = |\Omega \rangle \langle \Omega |$: \begin{align} \textrm{tr}\big(\rho^D_\textrm{spt} \rho^{\vphantom{\dagger}}_0\big) &= \sum_{ \bm{\eta} } P(\bm{\eta}) |{\langle \Psi| K_{\bm{\eta}} | \Omega \rangle }|^2 = \frac{1}{2} | {\langle \Psi | \Omega \rangle } |^2. \end{align} This is because an odd number of $Z$ operators flips the $\mathbb{Z}_2^{\textrm{even}}$ parity, so that such an operator has vanishing matrix elements between two states with the same parity. This case corresponds to $\sum_m \eta_{2m} \equiv 1 \mod 2$, which happens half the time in the limit $N \rightarrow \infty$. Interestingly, this implies that we can use the trivial product state $\ket{\Omega}$ with an arbitrary parity, since the decoherence flips the parity of the SPT state half the time, i.e., $ {\cal E} [\rho_\textrm{spt}] = \frac{1}{2} \rho_\textrm{spt}^\textrm{e} + \frac{1}{2}\rho_\textrm{spt}^\textrm{o}$ in the limit $N \rightarrow \infty$, where $\tr \rho_\textrm{spt}^\textrm{e,o} =1$. For an even-parity error $K_{\bm{\eta}}$ such as $Z_{2a} Z_{2b}$, one has $K_{\bm{\eta}} \ket{\Psi} = \prod_{m=a}^{b-1} X_{2m+1} \ket{\Psi}$, and the resulting product of $X$ operators acts trivially on $\langle \Omega|$.
Similarly, the numerator of the strange correlator for the $\mathbb{Z}_2^{\textrm{odd}}$-charged operators is \begin{align} &\textrm{tr}\big(\rho^D_\textrm{spt} Z^{\vphantom{\dagger}}_1 Z^{\vphantom{\dagger}}_{2n+1} \rho^{\vphantom{\dagger}}_0\big) \nonumber \\ & \quad = \sum_{ \bm{\eta} } P(\bm{\eta}) {\langle \Psi| K_{\bm{\eta}} Z_1 Z_{2n+1} | \Omega \rangle } {\langle \Omega| K^\dagger_{\bm{\eta}} | \Psi \rangle } \nonumber \\ & \quad = \sum_{ \bm{\eta} } P(\bm{\eta}) {\langle \Psi| \prod_{m=1}^n X_{2m} K_{\bm{\eta}} | \Omega \rangle } {\langle \Omega| K^\dagger_{\bm{\eta}} | \Psi \rangle } \nonumber \\ &\quad = \frac{1}{2} | {\langle \Psi | \Omega \rangle } |^2 \Big[ \prod_{m=1}^n \sum_{ \eta_{2m} } (-1)^{\eta_{2m}} P(\eta_{2m}) \Big] \nonumber \\ & \quad = \frac{1}{2} | {\langle \Psi | \Omega \rangle } |^2 e^{-n/\xi}, \qquad \xi = 1/\ln\big(1/(1-2p)\big). \end{align} However, for the strange correlator of $\mathbb{Z}_2^{\textrm{even}}$-charged operators, the numerator is \begin{equation} \tr(\rho^D_\textrm{spt} Z_0 Z_{2n} \rho_0)=\tr(\rho^D_\textrm{spt} \rho_0)=\frac{1}{2}| {\langle \Psi | \Omega \rangle } |^2, \end{equation} because the operator $Z_0 Z_{2n}$ commutes through $K_{\bm{\eta}}$ to act on the state $\bra{\Psi}$, becomes $\prod_{m=1}^{n}X_{2m-1}$, and commutes back to hit $\ket{\Omega}$. Following \eqnref{eq:strange_typeI}, we obtain \begin{equation}\label{eq:strange_typeI_result} C^\textrm{I}_{\textrm{even}}(2n) = 1, \qquad C^\textrm{I}_{\textrm{odd}}(2n) = e^{-n/\xi} \end{equation} for the SPT state decohered under the $\mathbb{Z}_2^{\textrm{even}}$ noise channel. Therefore, the odd-site strange correlator decays exponentially, with a length scale set by the strength of the decoherence. For $p \ll 1$, $\xi$ is very large, and the strange correlator looks the same as that of a pure SPT state for small system sizes. However, in the thermodynamic limit, the odd-site strange correlator always decays exponentially to zero, while the even-site strange correlator behaves the same as in the pure state. A recent work pointed out that the notion of SPT order can still be defined even if the symmetry only holds in the average sense~\cite{MaWang2022}. Under decoherence, although the $\mathbb{Z}_2^{\textrm{even}}$ symmetry is broken in every quantum trajectory, the ensemble of all trajectories still preserves the symmetry on average, thus an ``average SPT'' order should still be expected. Our result \eqnref{eq:strange_typeI_result} appears to suggest that this average SPT behavior cannot be detected. However, one can define a new type of correlator as follows: \begin{equation} \label{eq:strange_typeII} C^\textrm{II}(r) \equiv \frac{\tr(\rho^D_\textrm{spt} O(0) O(r) \rho^{\vphantom{\dagger}}_0 O(r)^\dagger O(0)^\dagger )}{\tr(\rho^D_\textrm{spt} \rho^{\vphantom{\dagger}}_0)}, \end{equation} which is called the ``type-II'' strange correlator. Then, following the same calculation as above, one can show that for $O=Z$, \begin{equation} C^\textrm{II}_{\textrm{even}}(2n) = C^\textrm{II}_{\textrm{odd}}(2n) = 1, \end{equation} which is nontrivial for both even and odd sites. This is because the noise only flips the sign of the overlap between $|\Omega\rangle$ and $K_{\bm{\eta}} |\Psi \rangle$, and this sign squares to one in the evaluation of the type-II strange correlator. Hence the type-II strange correlator is analogous to the Edwards-Anderson correlator associated with the type-I strange correlator.
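Continuing the brute-force sketch above (Python; the helpers \texttt{op\_at}, \texttt{psi}, \texttt{omega} and \texttt{Z} from the previous listing are assumed, and the small chain carries finite-size corrections relative to the $N\to\infty$ formulas), one can dephase one sublattice and compare the two types of strange correlators:
\begin{verbatim}
p = 0.2                                       # local dephasing probability
rho = np.outer(psi, psi)                      # pure SPT density matrix
for s in (1, 3, 5, 7):                        # dephase one sublattice
    Zs = op_at(Z, s)
    rho = (1 - p) * rho + p * Zs @ rho @ Zs

rho0 = np.outer(omega, omega)                 # trivial state |+...+><+...+|
O2 = op_at(Z, 0) @ op_at(Z, 4)                # charged pair on the other
                                              # (undecohered) sublattice
den = np.trace(rho @ rho0)
C_I = np.trace(rho @ O2 @ rho0) / den         # suppressed by the noise:
                                              # the stabilizer X-string
                                              # crosses the noisy sites
C_II = np.trace(rho @ O2 @ rho0 @ O2) / den   # pinned at exactly 1
print(C_I, C_II)
\end{verbatim}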
\begin{figure}[!t] \centering \includegraphics[width = 0.49 \textwidth]{1d_cluster.pdf} \caption{\label{fig:1d_numerics} The DMRG numerics are performed at system size $L=100$ for $H = -\sum_n (Z_{n-1} X_n Z_{n+1} + h X_n)$ under periodic boundary conditions. (a) Odd-site strange correlators at distance $L/2$ as functions of the transverse field $h$. (b) Odd-site strange correlators as functions of distance at $h=1/2$. Red (blue) curves represent type-I (type-II) strange correlators. Solid (dashed) lines are obtained at decoherence strength $p=0$ ($p=0.1$). Here, the decohered density matrix is obtained from an ensemble of $4 \times 10^3$ different noise realizations on the even sites. } \end{figure} \subsection{Away from the fixed point} So far, we have discussed the strange correlators of the $1d$ $\mathbb{Z}_2 \times \mathbb{Z}_2$ SPT state in the stabilizer limit with zero correlation length ($\xi=0$). Even away from the stabilizer limit, the aforementioned properties of the type-I and type-II strange correlators generally hold: while the type-I strange correlator decays exponentially with distance under decoherence, the type-II strange correlator remains long-ranged. This behavior will be discussed more systematically in a later section using field theory. To demonstrate the behavior, consider introducing a transverse field $-h \sum_n X_n$ to the Hamiltonian in \eqnref{eq:1d_cluster_ham}, which drives the ground state away from the soluble limit. In \figref{fig:1d_numerics}, we numerically obtain both types of $\mathbb{Z}_2^{\textrm{odd}}$-charged strange correlators, $C^\textrm{I}_{\textrm{odd}}$ and $C^\textrm{II}_{\textrm{odd}}$, computed against $|+\rangle^{\otimes L}$ as functions of $r$ at $h=0.5$, with and without decoherence on the even sites as in \eqnref{eq:noise_basic}. The plot illustrates two important points: $(i)$ away from the fixed point, the magnitudes of the strange correlators decrease, vanishing completely at the known topological-trivial transition point $h=1$; and $(ii)$ under decoherence, while $C^\textrm{I}$ decays exponentially with distance, $C^\textrm{II}$ remains robust. Therefore, the type-II strange correlator serves as a probe for the SPT physics under decoherence. As a side remark, we find that the magnitudes of the type-I strange correlators in different quantum trajectories are very close to one another; the main difference among them is their signs. Although not shown in the figure, we remark that for $h>0$, both $C^\textrm{I}_{\textrm{even}}$ and $C^\textrm{II}_{\textrm{even}}$ may increase under the decoherence on the even sites. This is because, while $C^\textrm{I,II}_{\textrm{even}}$ probe correlations between exactly localized charged operators, away from the fixed point the $\mathbb{Z}_2$-charged operator becomes diffused over the correlation length $\xi$. As decoherence effectively diffuses a localized charged operator $Z$ over multiple sites, it can enhance the strange correlator. \subsection{Generic Noise Model} One can consider more generic decoherence channels, as defined in \eqnref{eq:noise_Kraus}. For example, one may consider the following depolarization channel on site $i$: \begin{equation} \label{eq:generic_noise} \mathcal{E}_i: \rho \rightarrow (1-p) \rho + \frac{p}{3} (X_i \rho X_i + Y_i \rho Y_i + Z_i \rho Z_i).
\end{equation} When we turn on this noise on all the even sites, it is straightforward to see that the behavior of the type-I and type-II strange correlators remains the same as discussed in the previous sections. Instead, one may also consider a noise model coherent over multiple sites, i.e., the Kraus operators $K_{m,i}$ in \eqnref{eq:noise_Kraus} are extended over multiple sites. In this case, the global decoherence channel acts as a stochastic random short-depth local unitary transformation. While doing so, we can still require the noise channel to respect the doubled symmetry of $\mathbb{Z}_2^{\textrm{odd}}$ and the adjoint action of $\mathbb{Z}_2^{\textrm{even}}$. For each quantum trajectory, such a noise action can be decomposed into a strictly local noise as in \eqnref{eq:noise_basic} and short-depth local unitary transformations. Accordingly, a generic, complicated noise model can change the magnitude of the type-II strange correlator, just as a generic unitary transformation away from the fixed point decreases the magnitude of the type-II strange correlator in \figref{fig:1d_numerics}(b). \subsection{2d Cluster State} A 2d generalization of the cluster state~\cite{2Dcluster, 2Dcluster_toric} on the Lieb lattice is an SPT state~\cite{Yoshida2016} with a mixed anomaly between the 0-form $\mathbb{Z}_2^{(0)}$ and 1-form $\mathbb{Z}_2^{(1)}$ symmetries, defined on the square lattice with qubits residing on both vertices and edges. For an $L \times L$ square lattice, there are $N_v = L^2$ vertex qubits and $N_e = 2L^2$ edge qubits. Its stabilizer Hamiltonian is defined as \begin{equation} \label{eq:2dclusterHam} H = - \sum_{v} \qty( X_v \prod_{e \in \mathrm{d}v } \bm{Z}_e ) - \sum_{e} \qty( \bm{X}_e \prod_{v \in \partial e} Z_v ), \end{equation} where $\partial$ stands for the boundary operator on the lattice and $\mathrm{d}=\star\partial\star$ is the coboundary operator (with $\star$ being the Hodge dual). Bold symbols $\bm{Z}$ and $\bm{X}$ act on edges, and unbold symbols $Z$ and $X$ act on vertices. Here, all terms in the Hamiltonian commute with one another, and the ground state is the common $+1$ eigenstate of all terms. This implies that $B_p \equiv \prod_{e \in \partial p} \bm{X}_e=1$ for any plaquette $p$. We denote the symmetry groups by $G_A \equiv \mathbb{Z}_2^{(0)}$ and $G_B \equiv \mathbb{Z}_2^{(1)}$, where the 0-form symmetry charge (generator) is $g \equiv \prod_{v} X_v \in G_A$ and the 1-form symmetry charges are $h_\gamma \equiv \prod_{e \in \gamma} \bm{X}_e \in G_B$ for any closed loop $\gamma$ along the bonds. Again, the ground state of \eqnref{eq:2dclusterHam} has the decorated domain wall (defect) structure: the defect of the 1-form symmetry, measured by $\prod_{e \ni v} \bm{Z}_e$, is bound to the charge of the 0-form symmetry, measured by $X_v$; likewise, the defect (domain wall) of the 0-form symmetry, measured by $Z_v Z_{v'}$, is bound to the charge of the 1-form symmetry, measured by $\bm{X}_e$. \jyl{As shown in \cite{Yoshida2016}, the mixed anomaly between the $\mathbb{Z}_2$ 0-form and $\mathbb{Z}_2$ 1-form symmetries in the SPT state enforces the boundary to be nontrivial under open boundary conditions; when vertex qubits are exposed, the boundary spontaneously breaks the 0-form symmetry, forming a $\mathbb{Z}_2$ ferromagnet. } Now, we are ready to calculate the strange correlators for this state.
There are two operators we can inspect: $Z_v Z_{v'}$, which is the correlation of $G_A$-charged operators, and $\prod_{e \in \gamma^\star} \bm{Z}_e$, which is the Wilson loop of $G_B$-charged operators, where $\gamma^\star$ is a closed loop on the dual lattice. First, we evaluate the strange correlators for the stabilizer state. Let $\rho_0 = | \Omega \rangle \langle \Omega |$, where $|\Omega \rangle = |+\rangle^{\otimes(N_e+N_v)}$. Then, \begin{align} C_{A}^\textrm{I} \equiv \frac{\tr(\rho_\textrm{spt} Z_v Z_{v'} \rho_0)}{\tr(\rho_\textrm{spt} \rho_0)} &= \frac{\tr(\rho_\textrm{spt} \prod_{e \in l} \bm{X}_e \rho_0)}{\tr(\rho_\textrm{spt} \rho_0)} = 1, \end{align} where $l$ is an open string connecting the two vertices $v$ and $v'$. Since $B_p = 1$ for both $\ket{\Psi}$ and $\ket{\Omega}$, the RHS is independent of the choice of $l$. Similarly, for any closed loop $\gamma = \partial {\cal A}$, where ${\cal A}$ is the region enclosed by the loop $\gamma$, \begin{align} C_{B}^\textrm{I} &\equiv \frac{\textrm{tr}(\rho_\textrm{spt} \prod_{e \in \partial {\cal A}} \bm{Z}_e \rho_0)}{\tr(\rho_\textrm{spt} \rho_0)} = \frac{\tr(\rho_\textrm{spt} \prod_{v \in {\cal A}} X_v \rho_0)}{\tr(\rho_\textrm{spt} \rho_0)} = 1. \end{align} Accordingly, the type-II strange correlators would simply be $C_A^\textrm{II} = C_B^\textrm{II} = 1$. Interestingly, under decoherence, $C_A$ and $C_B$ exhibit a qualitative difference. For example, consider the decoherence of the $G_B = \mathbb{Z}_2^\textrm{(1)}$ symmetry by the noise channel in \eqnref{eq:noise_basic} for edge qubits, i.e., $\rho_\textrm{spt}^D = {\cal E}_B[\rho_\textrm{spt}]$. It is convenient to define the projection operators ${\cal P}$ for vertex and edge qubits as follows: \begin{align} \label{eq:projector} {\cal P}_0^v &= \prod_{v} \frac{1 + X_v}{2}, \qquad {\cal P}_0^e = \prod_{e} \frac{1 + \bm{X}_e}{2}. \end{align} Then, the trivial state density matrix is given by the product of the two projectors, $\rho_0 = {\cal P}_0^v \otimes {\cal P}_0^e$. Using this, \begin{align} \tr( {\cal E}_B[\rho_\textrm{spt}] \rho_0) &= \tr( \rho_\textrm{spt} {\cal E}_B[\rho_0]) = \langle \Psi | {\cal P}^v_0 \otimes {\cal P}^e_{\cal E} | \Psi \rangle, \nonumber \\ {\cal P}^e_{\cal E} \equiv {\cal E}[{\cal P}^e_0] &= \prod_{e} \frac{1}{2} (1 + (1-p) \bm{X}_e + p \bm{Z}_e \bm{X}_e \bm{Z}_e), \end{align} where we use the self-adjointness of the decoherence channel in the first line. Note that ${\cal P}_{\cal E}^e = \prod_{e} \frac{1}{2} (1 + (1-2p) \bm{X}_e )$ since $ZXZ = -X$. By expanding the above expressions and using the Lemma in \appref{app:lemma}, we obtain \begin{align} \label{eq:1form_partition} \textrm{tr}\big( \rho_\textrm{spt}^D \rho^{\vphantom{\dagger}}_0 \big) &= \frac{2 }{2^{N_e} \cdot 2^{N_v} } \sum_{\gamma} (1-2p)^{|\gamma|}, \end{align} where the summation is taken over all closed loops $\gamma$. Similarly, \begin{align} \label{eq:1form_corr} &\tr( {\cal E}_B[\rho_\textrm{spt}] Z_v Z_{v'} \rho_0) = \tr( \rho_\textrm{spt} Z_v Z_{v'} {\cal E}_B[\rho_0]) \nonumber \\ & = \langle \Psi | \prod_{e \in l} \bm{X}_e {\cal P}^v_0 \otimes {\cal P}^e_{\cal E} | \Psi \rangle = \frac{2 }{2^{N_e} \cdot 2^{N_v} } \sum_{\gamma'} (1-2p)^{|\gamma'|}, \end{align} where the summation over $\gamma'$ is taken over all configurations obtained by adding (mod 2) a closed loop to the open string $l$. The channel ${\cal E}_B$ can be moved onto $\rho_0$ because it acts trivially on the vertices. Therefore, we see that the strange correlator \begin{align} \label{eq:1form_strangeIA} C_A^\textrm{I} = \frac{\textrm{tr}( \rho^D_\textrm{spt} Z_v Z_{v'} \rho_0)}{ \textrm{tr}( \rho^D_\textrm{spt} \rho_0) } = \expval{Z_v Z_{v'}}_\beta^\textrm{2dIsing}, \end{align} which is the correlation function of the classical 2d ferromagnetic Ising model at the inverse temperature $\beta = \tanh^{-1}(1-2p)$. In other words, the strange correlator exhibits a transition from long-ranged to short-ranged behavior depending on the decoherence strength $p$.
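This mapping can be illustrated with a minimal Metropolis sketch (Python; not part of the original calculation, and the lattice size, sweep counts, and measured distance are arbitrary choices): sampling the 2d Ising model at $\beta = \tanh^{-1}(1-2p)$ shows the correlation crossing over from long-ranged to short-ranged as $p$ passes $p_c = (2-\sqrt{2})/2 \approx 0.29$, the decoherence strength at which $\beta$ reaches the Onsager critical point.
\begin{verbatim}
import numpy as np

def ising_corr(p, L=32, sweeps=2000, burn=500, seed=1):
    """<Z_v Z_v'> at distance L/2 in the 2d Ising model
    at beta = arctanh(1 - 2p), cf. Eq. (eq:1form_strangeIA)."""
    rng = np.random.default_rng(seed)
    beta = np.arctanh(1 - 2 * p)
    s = rng.choice([-1, 1], size=(L, L))
    acc, n = 0.0, 0
    for t in range(sweeps):
        for _ in range(L * L):          # one Metropolis sweep
            i, j = rng.integers(L), rng.integers(L)
            nb = (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            dE = 2 * s[i, j] * nb       # energy cost of flipping s[i, j]
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] *= -1
        if t >= burn:
            acc += s[0, 0] * s[0, L // 2]
            n += 1
    return acc / n

# p_c = (2 - sqrt(2))/2 ~ 0.293: long-ranged below, short-ranged above
for p in (0.1, 0.25, 0.45):
    print(p, ising_corr(p))
\end{verbatim}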
\jyl{To compute the type-II strange correlator, we calculate its numerator as follows: \begin{align} \label{eq:1form_corr_SCII} &\tr( {\cal E}_B[\rho_\textrm{spt}] Z_v Z_{v'} \rho_0 Z_v Z_{v'}) = \tr( \rho_\textrm{spt} Z_v Z_{v'} {\cal E}_B[\rho_0] Z_v Z_{v'}) \nonumber \\ & = \langle \Psi | \prod_{e \in l} \bm{X}_e {\cal P}^v_0 \otimes {\cal P}^e_{\cal E} \prod_{e \in l} \bm{X}_e | \Psi \rangle = \textrm{tr}\big( \rho_\textrm{spt}^D \rho_0 \big). \end{align} Therefore, $C^\textrm{II}_A = 1$. Note that $C_B^\textrm{I,II}$ are unaffected by the noise ${\cal E}_B$, taking a unit value. } \jyl{It is instructive to understand how the conventional non-local order parameter for the 2d SPT state behaves. In the 2d SPT pure state, the following string order parameter takes a finite value in the limit where its length diverges~\cite{Yoshida2016}: \begin{equation} \lim_{|l| \rightarrow \infty} \Big\langle Z_v \Big[\prod_{e \in l} \bm{X}_e \Big] Z_{v'} \Big \rangle \neq 0, \end{equation} where $l$ is an open string along the bonds and $v,v'$ are the two vertices where the string ends. This can be easily understood if we imagine applying a symmetric quantum circuit $U$ that moves the state away from the fixed point wave function. Since $U$ commutes with a string of $\bm{X}$, it only acts nontrivially on $Z_{v,v'}$. Accordingly, any order parameter of the above form is only modified by a constant amount as long as one stays within the same phase. However, this order parameter is actually short-ranged for the decohered mixed state. Using the above formalism, it is straightforward to show that \begin{align} \label{eq:2dSPT_NLO_decoherence} &\textrm{tr}\Big( {\cal E}_B[\rho_\textrm{spt}] Z_v \Big[\prod_{e \in l} \bm{X}_e \Big] Z_{v'} \Big) \nonumber \\ &\quad = (1-2p)^{\abs{l}} \textrm{tr}\Big( \rho_\textrm{spt} Z_v \Big[\prod_{e \in l} \bm{X}_e \Big] Z_{v'} \Big) = e^{-\abs{l}/\xi}, \end{align} where $\xi = 1/\ln(1/(1-2p))$. Therefore, the conventional non-local order parameter fails to detect the nontrivial structure of the underlying SPT state in the presence of decoherence, while the type-I or type-II strange correlator can still detect it. This issue will be discussed further in \secref{sec:measure}. } On the other hand, if we consider the decoherence of the $G_A = \mathbb{Z}_2^\textrm{(0)}$ symmetry by the noise channel in \eqnref{eq:noise_basic} for vertex qubits, we can evaluate \begin{align} \label{eq:1form_strangeIB} C_B^{\textrm{I}}(\partial {\cal A}) &= (1-2p)^{| {\cal A} |}, \end{align} where $|{\cal A}|$ is the number of vertices enclosed by the loop. This indicates that the type-I strange correlator probing the 1-form symmetry decays in an area-law manner, i.e., it is short-ranged.
This result is consistent with the earlier discussion of the $1d$ $\mathbb{Z}_2 \times \mathbb{Z}_2$ SPT state: upon compactification into a cylindrical geometry, which is effectively $1d$, the 1-form symmetry becomes a 0-form symmetry, and we have shown that the corresponding type-I strange correlator is short-ranged under the decoherence of the other symmetry.

\section{Effective Field Theory Evaluation} \label{sec:field_theory}

Many SPT phases classified and constructed through the group cohomology formalism in Ref.~\onlinecite{wenspt,wenspt2} can be described by a nonlinear sigma model (NLSM) effective field theory~\cite{ashvinsenthil2012,binlsm}. Various different SPT phases share similar physics captured by an NLSM with a topological $\Theta$-term. For example, $1d$ bosonic SPT phases can often be viewed as the descendants of the previously well-known Haldane phase of a spin-1 chain, through reducing the SO(3) spin symmetry down to its subgroups, as long as the subgroup still has a nontrivial projective representation carried by the edge states. Hence all these $1d$ bosonic SPT states can be described as an O(3) NLSM in the $(1+1)d$ space-time with a $\Theta-$term at $\Theta = 2\pi$~\cite{haldane1}. The $\Theta-$term of the NLSM reduces at the physical boundary to a WZW term in the $(0+1)d$ space-time~\cite{ng1994}, which gives us two-fold degenerate boundary states carrying a projective representation of the underlying symmetry group. The effective field theory description also makes the physical interpretation of the construction of these SPT states (such as the decorated domain wall or, more generally, decorated defect construction) transparent. Hence an effective field theory evaluation of the strange correlator of the SPT states under decoherence would be more universal, and is applicable to situations with continuous symmetries, as well as higher dimensions.

\subsection{$1d$ SPT states}\label{sec:1d field theory}

As was observed in Ref.~\onlinecite{sptwf,YouXu2013}, the wave function of many bosonic SPT states in $1d$ can be inferred from the bulk topological $\Theta-$term of the NLSM, and written as \begin{align} &\quad |\Psi \rangle \sim \int D[\vect{n}(x)] \exp\left( - {\cal S}[\vect{n}(x)] \right) |\vect{n}(x) \rangle \nonumber \\ &\quad {\cal S} = \int dx \ \frac{1}{g} (\nabla_x \vect{n}(x))^2 + \mathrm{WZW}[\vect{n}(x)] \nonumber \\ &\mathrm{WZW}[\vect{n}(x)] = \int_0^1 du \int dx \ \frac{2\pi \mathrm{i}}{4\pi} \epsilon_{abc} \tilde{n}^a \partial_u \tilde{n}^b \partial_x \tilde{n}^c, \label{1dwf} \end{align} where $\vect{n}$ is a three-component unit vector ($|\vect{n}| = 1$), and $\tilde{\vect{n}}(x, u)$ is an extension of $\vect{n}(x)$ into the space $(x, u)$ with $\tilde{\vect{n}}(x, 0) = \vect{n}(x)$ and $\tilde{\vect{n}}(x, 1) = \hat{z}$. This wave function can describe SPT phases in $1d$ with symmetries $\mathrm{SO}(3)$, $\mathrm{U}(1) \rtimes Z_2$, $Z_2 \times Z_2$, $\mathrm{U}(1) \times Z_2^T$, $Z_2^T$, etc. The Wess-Zumino-Witten term in Eq.~\ref{1dwf} can be viewed as the termination of the bulk topological $\Theta-$term at a temporal boundary~\cite{sptwf}, which is the ``space-time dual" of the WZW term at the physical real space boundary~\cite{ng1994} of the SPT state.
To make connection with the computation based on the stabilizer Hamiltonian of the SPT states presented in the previous section, let us assume that the symmetry of the system is $Z_2^A \times Z_2^B$, which acts on the vector $\vect{n}$ as \begin{eqnarray} && Z_2^A: (n_x, n_y, n_z) \rightarrow (- n_x, -n_y, n_z); \cr\cr && Z_2^B: (n_x, n_y, n_z) \rightarrow (n_x, -n_y, - n_z). \end{eqnarray} The density matrix of the pure SPT state $|\Psi\rangle \langle \Psi|$ is given as follows in the basis of $|\vect{n}(x) \rangle$: \begin{align} \label{puredm} &\rho_{\textrm{spt}} \sim \int {\cal D}\{\vect{n}(x), \vect{n}'(x)\} \ e^{- {\cal S}[\vect{n}] - {\cal S}^\ast [\vect{n}'] } | \vect{n}(x) \rangle \langle \vect{n}'(x) |. \end{align} For this pure state density matrix, all the symmetries of the system are manifestly ``doubled," as we explained in the introduction: the pure state density matrix is invariant under separate ``left" and ``right" symmetry transformations, which act on $\vect{n}(x)$ and $\vect{n}'(x)$ respectively. We would like to consider a mixed density matrix built upon the SPT wave function $|\Psi\rangle$. One way to achieve this is to increase the ``weight'' of the diagonal elements of the density matrix to decrease the purity (or increase the number of non-zero Schmidt eigenvalues). This is equivalent to turning on some ``interaction" ${\cal S}^{\textrm{int}}[\vect{n}(x), \vect{n}'(x)]$ between $\vect{n}(x)$ and $\vect{n}'(x)$ in the density matrix, as diagrammatically represented in \figref{fig:sc_pathintegral}(a): \begin{align} \label{mixeddm} \rho^{D}_{\textrm{spt}} &\sim \int {\cal D} \{\vect{n}, \vect{n}' \} e^{ - {\cal S}[\vect{n}] - {\cal S}^\ast [\vect{n}'] - {\cal S}^{\textrm{int}}[\vect{n}, \vect{n}'] } | \vect{n}(x) \rangle \langle \vect{n}'(x) |. \end{align} Since ${\cal S}^{\textrm{int}}[\vect{n}(x), \vect{n}'(x)]$ should favor specific combinations of $\vect{n}(x)$ and $\vect{n}'(x)$, the system would no longer be invariant under all left and right symmetry transformations; still, it must remain invariant under the simultaneous left and right transformations on $\vect{n}(x)$ and $\vect{n}'(x)$, as long as ${\cal S}^{\textrm{int}}$ is invariant under the simultaneous actions. To proceed, we define the trivially disordered state $|\Omega\rangle$ as an equal weight superposition of all the configurations of $\vect{n}(x)$: \begin{eqnarray} |\Omega \rangle \sim \int D \vect{n}(x) \, |\vect{n}(x)\rangle. \end{eqnarray} Then, the type-I \eqref{eq:strange_typeI} and type-II \eqref{eq:strange_typeII} strange correlators are expressed in terms of the vectors $\vect{n}$ as follows: \begin{align} C^{\mathrm{I}}_{ab}(x) &=\frac{\mathrm{tr}\left( n^a (x) n^b(0)\rho^{D}_{\textrm{spt}} \rho_0 \right)}{\mathrm{tr}\left(\rho^{D}_{\textrm{spt}} \rho_0 \right)}, \nonumber \\ C^{\mathrm{II}}_{ab}(x) &=\frac{\mathrm{tr}\left( n^a(x) n^b(0)\rho^{D}_{\mathrm{spt}} n^a(x) n^b(0) \rho_0 \right)}{\mathrm{tr}\left(\rho^{D}_{\textrm{spt}} \rho_0 \right)}.
\end{align} Formally, after the space-time rotation ($x \rightarrow \tau$) \jyl{illustrated in \figref{fig:sc_pathintegral}(b)}, the original type-I strange correlator is mapped to the spin-spin correlation along the temporal direction of two interacting spin-1/2 degrees of freedom (the $0d$ boundary of a $1d$ SPT state), one from $\bm{n}$ and the other from $\bm{n}'$: \begin{equation} \label{scItemporal} C^\mathrm{I}_{ab}(x) \sim \langle n_a(x) n_b (0) \rangle \sim \langle S_a(\tau) S_b(0) \rangle, \end{equation} as in the path integral formalism, an isolated spin-1/2 is represented by a $(0+1)d$ NLSM with a WZW term at level-1. The evaluation of the type-II strange correlator is formally mapped to an evaluation of the ``doubled" temporal spin-spin correlation of the two interacting spins, $\vect{S}$ and $\vect{S}^\prime$: \begin{eqnarray} C^\mathrm{II}_{ab}(x) \sim \langle S_a(\tau) S_b(0) \ S^{\prime}_a(\tau) S^{\prime}_b(0) \rangle. \label{scIItemporal} \end{eqnarray} As an example, let us introduce the following ``interaction" in the density matrix: \begin{eqnarray} \mathcal{S}^{\textrm{int}}_{1d, 1}[\vect{n}(x), \vect{n}'(x)] \sim \int dx \ u \left( \vect{n}(x)\cdot \vect{n}'(x) \right). \label{int1d1} \end{eqnarray} Physically this interaction corresponds to introducing decoherence of all degrees of freedom of the system, as now the density matrix in Eq.~\ref{mixeddm} is no longer invariant under any separate left or right $Z_2^A$ or $Z_2^B$ transformation, though it is still invariant under simultaneous left and right transformations; in other words, the decoherence introduced by $\mathcal{S}^{\textrm{int}}_{1d,1}$ should be analogous to introducing temperature to the density matrix, which thermalizes all degrees of freedom. A similar idea of generating mixed density matrices has been explored for states constructed with loop degrees of freedom~\cite{chamon}. Most naturally, when the density matrix is driven into a mixed state under decoherence, the interaction $\mathcal{S}^{\textrm{int}}_{1d, 1}$ should favor the ``diagonal" configurations $\vect{n}(x) \sim \vect{n}'(x)$, $i.e.$ we need $u < 0$ in Eq.~\ref{int1d1}. When $\vect{n}(x) \sim \vect{n}'(x)$, the two WZW terms, $i.e.$ $\mathrm{WZW}[\vect{n}(x)]$ and $\mathrm{WZW}[\vect{n}'(x)]$, tend to cancel each other. Without the WZW term, both the type-I and type-II strange correlators would be short-ranged, for all components $a$, $b$. This is consistent with the picture under space-time rotation, as the interaction term $\mathcal{S}^{\textrm{int}}_{1d,1}$ would translate into the following spin-spin interaction of the zero-dimensional system: \begin{eqnarray} H^{\textrm{int}}_{0d,1} \sim J \vect{S} \cdot \vect{S}' + \cdots. \end{eqnarray} The ellipsis includes terms that reduce the SO(3) spin symmetry down to $Z_2^A \times Z_2^B$. When $J > 0$, the ground state is a spin singlet; when $J < 0$, as long as there are terms that reduce the symmetry to $Z_2^A \times Z_2^B$, the ground state is in general nondegenerate; hence any spin correlation along the temporal direction would still be short-ranged. As was understood before, multiple copies of fermionic topological insulators (TIs) and topological superconductors (TSCs) may be mapped to bosonic SPT states under interaction~\cite{ashvin2014,wangsenthil2014, bridge,youxu2014}.
For example, four copies of Kitaev's chains of Majorana fermions, or two copies of the spinless Su–Schrieffer–Heeger (SSH) models of complex fermions~\cite{SSH}, can be mapped to the Haldane phase with different defining symmetries. Our evaluation above also implies that, under generic decoherence on fermion bilinear operators that respects the symmetry of the system, the classification of the fermionic TIs and TSCs would collapse, analogous to the collapse of the classification of TIs and TSCs under short-range interactions~\cite{fidkowski1,fidkowski2}. Similar behavior of TIs and TSCs under decoherence in higher dimensions is also expected, and it was noticed that the averaged symmetry caused by disorder could also lead to the collapse of classifications of TIs~\cite{MaWang2022}. We will defer a more complete discussion of decohered TIs and TSCs to future work. Now let us consider another type of interaction: \begin{eqnarray} \mathcal{S}^{\textrm{int}}_{1d,2}[\vect{n}(x), \vect{n}'(x)] \sim \int dx \ u \left( n_z (x)n'_z(x) \right). \label{int2} \end{eqnarray} Here only the $z$ components of the two $\vect{n}$ vectors are coupled, which corresponds to introducing decoherence on $Z_2^B$ but not $Z_2^A$, as now the density matrix is only invariant under one simultaneous $Z_2^B$ transformation on $n_z(x)$ and $n_z'(x)$, but it is still invariant under two separate $Z_2^A$ symmetry transformations on $\vect{n}$ and $\vect{n}'$. Then after the space-time rotation the strange correlator calculation is mapped to a temporal spin-spin correlation (either Eq.~\ref{scItemporal} or Eq.~\ref{scIItemporal}) of the following two interacting spins: \begin{eqnarray} H^{\textrm{int}}_{0d,2} \sim J S_z S'_z. \end{eqnarray} The ground state of the two-spin system would then be a doublet, rather than a singlet. For example, when $J < 0$, the ground states of the two-spin system are \begin{eqnarray} |\mathrm{GS}_1\rangle = |S_z = + 1/2, \ S'_z = +1/2\rangle, \cr \cr |\mathrm{GS}_2\rangle = |S_z = -1/2, \ S'_z = -1/2\rangle. \end{eqnarray} The type-I strange correlator $C^\mathrm{I}_{xx}(x)$ defined in Eq.~\ref{eq:strange_typeI} and evaluated as Eq.~\ref{scItemporal} would be short-ranged, as a single $S_x$ operator does not connect these two states within the doublet; however, the type-II strange correlator $C^{\mathrm{II}}_{xx}(x)$ is still long-ranged, as the operator $S_x S'_x$ can connect the two states within the doublet. Both $C^\mathrm{I}_{zz}$ and $C^{\mathrm{II}}_{zz}$ are long-ranged, as both ground states are eigenstates of $S_z$ and $S'_z$.
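This doublet mechanism can be verified with a minimal exact-diagonalization sketch (Python; the value $J=-1$ and the imaginary-time separations are illustrative choices): the single-spin correlator of Eq.~\ref{scItemporal} decays with the gap of $H^{\textrm{int}}_{0d,2}$, while the doubled correlator of Eq.~\ref{scIItemporal} saturates to a constant.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

Sx = np.array([[0.0, 0.5], [0.5, 0.0]])
Sz = np.diag([0.5, -0.5])
I2 = np.eye(2)

J = -1.0
H = J * np.kron(Sz, Sz)          # H^int_{0d,2}
E0 = H.min()                     # H is diagonal here
gs = np.zeros(4); gs[0] = 1.0    # |S_z=+1/2, S'_z=+1/2>, one state of the doublet

def corr(A, tau):
    """Imaginary-time correlator <GS| A e^{-tau (H - E0)} A |GS>."""
    return gs @ A @ expm(-tau * (H - E0 * np.eye(4))) @ A @ gs

SxI  = np.kron(Sx, I2)   # single S_x: type-I, Eq. (scItemporal)
SxSx = np.kron(Sx, Sx)   # S_x S'_x:  type-II, Eq. (scIItemporal)
for tau in (1.0, 5.0, 20.0):
    print(tau, corr(SxI, tau), corr(SxSx, tau))
# type-I decays as exp(-tau/2); type-II stays pinned at 1/16
\end{verbatim}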
It was concluded before that the edge state of the SPT state with $\mathbb{Z}_2 \times \mathbb{Z}^\text{avg}_2$ symmetry is trivial (though the SPT state was still considered nontrivial)~\cite{MaWang2022}, where $\mathbb{Z}^\text{avg}_2$ is the average symmetry that only exists after disorder average. Our calculation indicates that the boundary state, which is directly related to the type-I strange correlator, may not be the best diagnostic for systems under decoherence; instead, the type-II strange correlator serves as another method of characterizing mixed-state SPT phases. The type-II strange correlator is to some extent analogous to the Edwards-Anderson correlator of spin glass systems. Indeed, as remarked in the numerical lattice model calculation, for each quantum trajectory the type-I strange correlator fluctuates in its sign while its magnitude stays robust, signalling the presence of nontrivial correlation. Although the type-I strange correlator vanishes upon averaging, the squared order parameter, i.e., the type-II strange correlator, still captures the nontrivial information from the SPT state.

\subsection{$2d$ SPT states}

Now let us consider the wave function of a class of $2d$ SPT states: \begin{align} |\Psi \rangle & \sim \int D[\vect{n}(\mathbf{x})] \exp\left( - \mathcal{S}[\vect{n}(\mathbf{x})] \right) |\vect{n}(\mathbf{x}) \rangle \nonumber \\ \mathcal{S} & = \int d^2x \ \frac{1}{g} (\partial \vect{n}(\mathbf{x}))^2 + \mathrm{WZW}[\vect{n}(\mathbf{x})] \nonumber \\ \mathrm{WZW}[\vect{n}(\mathbf{x})] & = \int_0^1 du \int d^2x \ \frac{2\pi \mathrm{i}}{\Omega_3} \epsilon_{abcd} \tilde{n}^a \partial_u \tilde{n}^b \partial_x \tilde{n}^c \partial_y \tilde{n}^d. \label{2dnlsm} \end{align} The pure density matrix is analogous to Eq.~\ref{puredm}. This formalism can describe SPT phases with symmetries $\mathrm{O}(4)$, $\mathrm{U}(1) \times \mathrm{U}(1)$, $\mathrm{SO}(3) \rtimes Z_2$, $\mathrm{SO}(3) \times Z_2^T$, $Z_2 \times Z_2$, or even just one $\mathrm{U}(1)$ or $Z_2$ symmetry, etc. Let us start with the case with the maximal $\mathrm{O}(4)$ symmetry. We can introduce some weak decoherence on $\vect{n}$, and consider the mixed density matrix Eq.~\ref{mixeddm}. We start with an interaction term analogous to Eq.~\ref{int1d1}: \begin{eqnarray} {\cal S}^{\textrm{int}}_{2d,1} = \int d^2x \ u \left( \vect{n}(\mathbf{x}) \cdot \vect{n}'(\mathbf{x}) \right). \end{eqnarray} This interaction respects the simultaneous left and right $\mathrm{O}(4)$ symmetry (which acts on $\vect{n}(\mathbf{x})$ and $\vect{n}'(\mathbf{x})$ simultaneously). As in the $1d$ case, the decoherence is supposed to drive the system into a mixed state with enhanced weight on the diagonal configurations, hence $u$ is most naturally negative. But regardless of the sign of $u$, this interaction turns the computation of both type-I and type-II strange correlators into a field theory calculation of correlation functions of a $(2+0)d$ or $(1+1)d$ NLSM without any WZW term. Both type-I and type-II strange correlators should be short-ranged, as an NLSM in $(2+0)d$ without any topological term will flow to the disordered phase. Now let us consider a $2d$ bosonic symmetry protected topological state with symmetry $\mathrm{U}(1)^A \times \mathrm{U}(1)^B$. This is an SPT phase considered in Ref.~\onlinecite{levinsenthil}. The two $\mathrm{U}(1)$ symmetries act on the four-component vector $\vect{n}$ as follows: \begin{eqnarray} \mathrm{U}(1)^A &:& (n_1 + \mathrm{i} n_2) \rightarrow e^{\mathrm{i} \theta} (n_1 + \mathrm{i} n_2), \cr\cr \mathrm{U}(1)^B &:& (n_3 + \mathrm{i} n_4) \rightarrow e^{\mathrm{i} \phi} (n_3 + \mathrm{i} n_4). \label{u1ab} \end{eqnarray} Let us turn on the following interaction, which corresponds to introducing decoherence on $\mathrm{U}(1)^B$ charges: \begin{eqnarray} {\cal S}^{\textrm{int}}_{2d,2}[\vect{n}(\mathbf{x}), \vect{n}'(\mathbf{x})] \sim \int d^2x \ \sum_{a = 3}^4 u ( n_a (\mathbf{x}) n'_a(\mathbf{x})). \label{int2d2} \end{eqnarray} This interaction can be conveniently analyzed through Abelian bosonization of the $(1+1)d$ NLSM. Under Abelian bosonization, the NLSM of $\vect{n}(\mathbf{x})$ with the WZW term corresponds to the following $(1+1)d$ Lagrangian, and its dual: \begin{eqnarray} \label{abelian} {\cal L} = \frac{1}{2K} \left( (\partial_\tau \theta)^2 + v^2 (\partial_x \theta)^2 \right) , \cr \cr {\cal L}_d = \frac{K}{2}\left( (\partial_\tau \phi)^2 + v^2 (\partial_x \phi)^2 \right).
\end{eqnarray} The four-component vector $\vect{n}$ has the following schematic representation in the Abelian bosonized formalism: \begin{eqnarray} \vect{n} \sim \left(\cos\theta, \sin\theta, \cos (2\pi\phi), \sin (2\pi\phi) \right). \end{eqnarray} The computation of the strange correlators, especially type-II, requires making two copies of the system of Eq.~\ref{abelian}, and the interaction term ${\cal S}^{\textrm{int}}_{2d,2}$ then becomes the following coupling in the Abelian bosonization language: \begin{eqnarray} \label{eq:interaction} H^{\textrm{int}}_{1d,2} \sim \int dx \ \alpha \cos(2\pi \phi - 2\pi \phi'). \end{eqnarray} We evaluate the following type-I and type-II strange correlators for the operator $\Phi \sim n_1 - \mathrm{i} n_2 \sim e^{-\mathrm{i} \theta}$ charged under $\mathrm{U}(1)^A$: \begin{eqnarray} C^{\mathrm{I}}(\mathbf{x}) &=& \frac{\mathrm{tr}\left( \Phi (0) \Phi^\ast(\mathbf{x}) \rho^{D}_{\mathrm{spt}} \rho^{\vphantom{\dagger}}_0 \right)}{\mathrm{tr}\left(\rho^{D}_{\mathrm{spt}} \rho^{\vphantom{\dagger}}_0 \right)} \sim \langle e^{- \mathrm{i} \theta(0,0)} \ e^{+ \mathrm{i} \theta(x,\tau)} \rangle, \cr\cr\cr C^{\mathrm{II}}(\mathbf{x}) &=& \frac{\mathrm{tr}\left( \Phi (0) \Phi^\ast (\mathbf{x})\rho^{D}_{\mathrm{spt}} \Phi (0) \Phi^\ast (\mathbf{x}) \rho_0 \right)}{\mathrm{tr}\left(\rho^{D}_{\mathrm{spt}} \rho^{\vphantom{\dagger}}_0 \right)} \cr\cr \cr &\sim& \langle e^{- \mathrm{i} ( \theta(0,0) + \theta'(0,0))} \ e^{+ \mathrm{i} (\theta(x,\tau) + \theta'(x,\tau))} \rangle, \label{sc2d} \end{eqnarray} where the expectation values are taken with respect to the two copies of Luttinger liquids coupled by \eqnref{eq:interaction}. As the scaling dimension of $e^{i2\pi \phi}$ is given by $\pi/K$, the coupling $H^{\textrm{int}}_{1d,2}$ is relevant (irrelevant) when $K\,{>}\,K_c$ ($K\,{<}\,K_c$) with $K_c = \pi$. When $K\,{>}\,K_c$, $i.e.$ when $H^{\textrm{int}}_{1d,2}$ is relevant, it will gap out the channel $\theta_- = (\theta - \theta')/\sqrt{2}$, $\phi_- = (\phi - \phi')/\sqrt{2}$, and the type-I strange correlator becomes short-ranged. Hence by tuning $K$, there is a phase transition signified by the behavior of the type-I strange correlator. Since the Luttinger parameter $K$ can receive renormalization from $\alpha$ in Eq.~\ref{eq:interaction}, this phase transition of the type-I strange correlator can also be driven by the strength of the interaction. But the $\theta_+ = (\theta + \theta')/\sqrt{2}$ and $\phi_+ = (\phi + \phi')/\sqrt{2}$ channel remains a gapless CFT, regardless of the fate of $H^{\textrm{int}}_{1d,2}$ under the renormalization group. Since the type-II strange correlator in \eqnref{sc2d} only involves the symmetric linear combinations, it always decays with a power law. Here once again the type-II strange correlator captures the ``memory" of the pure state SPT wave function.
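The relevance criterion can be visualized with a minimal sketch of the leading-order flow (Python; the flow equation below is the generic form $d\alpha/dl = (2-\Delta)\alpha$ with $\Delta = 2\pi/K$, twice the quoted dimension $\pi/K$ of $e^{\mathrm{i}2\pi\phi}$, and the feedback of $\alpha$ on $K$ is neglected, so this is an assumption-laden illustration rather than a full Kosterlitz-Thouless analysis):
\begin{verbatim}
import numpy as np

def flow(K, alpha0=0.05, dl=1e-3, steps=200000):
    """Integrate d(alpha)/dl = (2 - 2*pi/K) * alpha at fixed K."""
    a = alpha0
    for _ in range(steps):
        a *= 1 + dl * (2 - 2 * np.pi / K)
        if a > 1.0:   # runaway: the (theta_-, phi_-) channel gaps out
            return float('inf')
    return a

for K in (2.0, np.pi, 5.0):   # K_c = pi
    print(K, flow(K))
# K < pi: alpha flows to zero (type-I remains power-law)
# K > pi: alpha runs away     (type-I becomes short-ranged)
\end{verbatim}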
\subsection{Bosonic Integer Quantum Hall state}

We would also like to discuss the $2d$ bosonic integer quantum Hall (bIQH) state considered in Ref.~\onlinecite{levinsenthil}. The bIQH state can be obtained from the same NLSM description Eq.~\ref{2dnlsm}, by reducing the $\mathrm{U}(1)^A \times \mathrm{U}(1)^B$ symmetry in Eq.~\ref{u1ab} to one diagonal $\mathrm{U}(1)$ symmetry. The edge state of the bIQH state is nonchiral; its left-moving modes carry the $\mathrm{U}(1)$ charge, while the right-moving modes are neutral. The Hall conductivity of the bIQH state must be an even integer due to the bosonic nature of the underlying particles. If we only look at the degrees of freedom that carry the $\mathrm{U}(1)$ charge, the density matrix of the bIQH state can be mapped to a pair of chiral bosons in $(1+1)d$ space-time under Wick rotation: \begin{eqnarray} \mathcal{S} &=& \int d^2x \Big[+ \frac{1}{2\pi}\partial_\tau \varphi \partial_x \varphi + v^2 K (\partial_x \varphi)^2\Big], \cr\cr \mathcal{S}' &=& \int d^2x \Big[- \frac{1}{2\pi}\partial_\tau \varphi' \partial_x \varphi' + v^2 K (\partial_x \varphi')^2\Big]. \end{eqnarray} Again the $\mathrm{U}(1)$ symmetry is doubled, carried by $\varphi$ and $\varphi'$ respectively. If we turn on decoherence (or noise) on the phase angle of the charges, this effect is mapped to the following interaction between $\varphi$ and $\varphi'$: \begin{eqnarray} H^{\textrm{int}}_{1d,3} = \int dx \ \alpha\cos(\varphi - \varphi'). \end{eqnarray} Again, depending on the parameter $K$, $H^{\textrm{int}}_{1d,3}$ may become relevant, and render the strange correlators short-ranged. Here we note that a similar analysis also applies to the fermionic integer quantum Hall states, whose boundary states also become chiral bosons after bosonization. A full analysis of fermionic topological insulators under decoherence will be presented in another work.

\subsection{Boundary states under decoherence}

As we have seen in this section, the density matrix of an SPT state corresponds to a system evolving in imaginary time and terminating on two opposite boundaries at $\tau = \pm \infty$ in the temporal direction; see \figref{fig:sc_pathintegral}(a). After Wick rotation, these two temporal boundaries become the left and right boundaries of the system in real space, and calculations about the decoherence on the density matrix are formally mapped to a system whose two boundaries are coupled in the selected channel, as in \figref{fig:sc_pathintegral}(b). One may also ask about the fate of the physical boundary state of the system under decoherence. This can be answered by first deriving the pure state density matrix of the boundary theory only, and then turning on decoherence; in this process, the bulk is ignored. Let us again take the $2d$ SPT state with $\mathrm{U}(1)^A \times \mathrm{U}(1)^B$ symmetry as an example. The boundary theory is captured by the $(1+1)d$ Lagrangians Eq.~\ref{abelian}, and the schematic density matrix of the ground state of Eq.~\ref{abelian} reads \begin{eqnarray} && \rho_b \sim \int {\cal D}\{\phi(x), \phi'(x)\} e^{- {\cal S}[\phi] - {\cal S}[\phi']} |\phi(x)\rangle \langle \phi'(x)|, \cr\cr && {\cal S}[\phi] \sim \sum_q K |q| |\phi_q|^2. \end{eqnarray} The problem is now formally analogous to the transmission of a Luttinger liquid through a zero-dimensional defect~\cite{kanefisher}. The decoherence on $\mathrm{U}(1)^B$ is still modeled with a coupling between $\phi$ and $\phi'$, $\sim \alpha \cos(2\pi\phi - 2\pi\phi')$, and there is again a critical $K_c$ above which the coupling $\alpha$ becomes relevant, and the correlation function at the physical boundary is rendered short-ranged by the decoherence. Hence the type-I strange correlator indeed reflects what could happen at the physical boundary under decoherence, though the exact transition point of the boundary depends on the details of the boundary Hamiltonian. In this section, the effective field theories used to compute the strange correlators were unitary theories after Wick rotation. This is not necessarily true for all wave functions within the SPT phase.
But in most cases, there exists a submanifold of the parameter space in which the SPT phase admits a description in terms of a Lorentz invariant field theory; since the strange correlator is then mapped to ordinary correlation functions at the physical boundary of the system under Wick rotation, the strange correlators can indeed be evaluated with a unitary field theory.

\section{Doubled System} \label{sec:doubled}

In the previous section, we developed a field-theoretic understanding of the two types of strange correlators and their behaviors under decoherence. In doing so, the idea of doubled systems has been crucial. In the following, we make this idea more concrete using the Choi-Jamiołkowski isomorphism~\cite{JAMIOLKOWSKI1972, CHOI1975}.

\subsection{Choi Representation of Decohered SPT States} \label{sec:doubleH}

Let $\rho_\textrm{spt}$ be the density matrix of the pure SPT state with $G\,{=}\,G_A\,{\times}\,G_B$ symmetry. By using the Choi-Jamiołkowski isomorphism, we can represent an operator as a state in the following way: \begin{equation} \Vert \rho \rangle \hspace{-2pt} \rangle \equiv \frac{1}{\sqrt{ \textrm{Dim}[\rho] }} \sum_i |i \rangle \otimes \rho |i\rangle. \end{equation} Accordingly, the pure state density matrix can be represented as two copies of independent SPT states: \begin{equation} \Vert \rho_\textrm{spt} \rangle \hspace{-2pt} \rangle = |\Psi^* \rangle_u \otimes | \Psi \rangle_l = | \Psi^*_u, \Psi_l \rangle, \end{equation} where $u$ and $l$ denote the upper and lower copies respectively. The Choi state is defined on the doubled Hilbert space ${\cal H}_{ul} \equiv {\cal H}_u \otimes {\cal H}_l$. Here, the star superscript denotes that the amplitude is complex-conjugated relative to the original wavefunction. In this doubled Hilbert space, the symmetry group is also doubled as $(G^{u}_A\,{\times}\,G^u_B)\,{\times}\,(G^l_A\,{\times}\,G^l_B)$, and the Choi state is an SPT state under the doubled symmetry group. See \appref{app:Choi} for more details. Similarly, the Choi state of a trivial disordered state density matrix is given as \begin{equation} \Vert \rho_0 \rangle \hspace{-2pt} \rangle = |\Omega^* \rangle_u \otimes |\Omega \rangle_l \equiv | \tilde{\Omega} \rangle, \end{equation} which is a trivial disordered state in the doubled Hilbert space. Under the Choi isomorphism, the decoherence quantum channel ${\cal E}$ in \eqnref{eq:noise_Kraus} maps into an operator $\hat{{\cal E}}$ acting in the doubled Hilbert space as follows: \begin{align} \hat{{\cal E}} = \sum_{m} K^*_{m,u} \otimes K_{m,l}, \end{align} where the $K_m$ are Kraus operators. For example, the aforementioned local $Z$ noise channel under the Choi mapping would be expressed as \begin{equation} \label{eq:noise_model1} \hat{{\cal E}} = \prod_i \big[ (1-p) \mathbb{I}_{i,u} \otimes \mathbb{I}_{i,l} + p Z_{i,u} \otimes Z_{i,l} \big]. \end{equation} Then, the decohered SPT state can be represented as \begin{equation} |\tilde{\Psi} \rangle \equiv \Vert {\cal E}[\rho_\text{spt}] \rangle \hspace{-2pt} \rangle = \hat{{\cal E}} | \Psi^*_u, \Psi_l \rangle. \end{equation} Although $\hat{{\cal E}}$ is not a unitary map that preserves the norm of the state, it is nevertheless a positive semi-definite map for the Choi states. Therefore, $|\tilde{\Psi}\rangle$ is a valid state in the doubled Hilbert space, whose squared norm is proportional to the purity of the decohered density matrix, $\textrm{tr}(\rho^D_\textrm{spt} \rho^D_\textrm{spt}) > 0$.
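This dictionary can be checked with a self-contained numerical sketch (Python; the single-qubit $Z$ channel and the convention $\Vert \rho \rangle\hspace{-2pt}\rangle = \sum_i |i\rangle \otimes \rho|i\rangle$ with the overall normalization dropped are our own illustrative choices): one verifies $\Vert {\cal E}[\rho] \rangle\hspace{-2pt}\rangle = \hat{{\cal E}} \Vert \rho \rangle\hspace{-2pt}\rangle$ with $\hat{{\cal E}} = \sum_m K_m^* \otimes K_m$.
\begin{verbatim}
import numpy as np

Z = np.diag([1.0, -1.0])
I = np.eye(2)
p = 0.3

def vec(rho):
    """||rho>> = sum_i |i> (x) rho|i> (normalization dropped)."""
    return rho.T.reshape(-1)

def channel(rho):
    """Local Z noise with Kraus operators {sqrt(1-p) I, sqrt(p) Z}."""
    return (1 - p) * rho + p * Z @ rho @ Z

# a random single-qubit density matrix
A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
rho = A @ A.conj().T
rho /= np.trace(rho)

# doubled-space operator E_hat = sum_m K_m^* (x) K_m
Ks = [np.sqrt(1 - p) * I, np.sqrt(p) * Z]
E_hat = sum(np.kron(K.conj(), K) for K in Ks)

print(np.allclose(vec(channel(rho)), E_hat @ vec(rho)))  # True
\end{verbatim}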
The Choi representation $|\tilde{\Psi} \rangle$ of the mixed state density matrix can be shown to be the ground state of a certain parent Hamiltonian in the doubled Hilbert space, which is perturbed from the doubled SPT Hamiltonian. Starting from the pure state limit, we consider a pure SPT state $\ket{\Psi}$ as the ground state of a parent Hamiltonian $H_\text{spt}$ in the standard Hilbert space. With some constant energy shift, it is always possible to make $H_\text{spt}$ a positive semidefinite Hermitian operator, such that the ground state has exactly zero energy, i.e.~$H_\text{spt}\ket{\Psi}=0$. Correspondingly, in the doubled Hilbert space, the Choi representation of the SPT density matrix $\Ket{\rho_\text{spt}}=\ket{\Psi^*}_u\otimes\ket{\Psi}_l$ should be the ground state of the following double Hamiltonian \begin{equation} \hat{{\cal H}}_\text{spt}=H_{\text{spt},u}^*\otimes \mathbb{I}_l + \mathbb{I}_u\otimes H_{\text{spt},l}, \end{equation} such that $\hat{{\cal H}}_\text{spt}\Ket{\rho_\text{spt}}=0$. The double Hamiltonian $\hat{{\cal H}}_\text{spt}$ inherits the positive semidefinite property of the single Hamiltonian $H_\text{spt}$, such that it admits a Cholesky decomposition \begin{equation} \hat{{\cal H}}_\text{spt}=\hat{{\cal A}}^\dagger\hat{{\cal A}}, \end{equation} with $\hat{{\cal A}}$ being some generic (possibly non-Hermitian) operator in the doubled Hilbert space. The fact that $\Ket{\rho_\text{spt}}$ is a zero-energy eigenstate of $\hat{{\cal H}}_\text{spt}$ implies \begin{equation} \Bra{\rho_\text{spt}}\hat{{\cal H}}_\text{spt}\Ket{\rho_\text{spt}}=\Bra{\rho_\text{spt}}\hat{{\cal A}}^\dagger\hat{{\cal A}}\Ket{\rho_\text{spt}}=0; \end{equation} therefore $\hat{{\cal A}}\Ket{\rho_\text{spt}}$ must be the zero vector, i.e.~the ground state $\Ket{\rho_\text{spt}}$ is annihilated by the Cholesky operator $\hat{{\cal A}}$. In our setup, the decohered SPT state $\rho_\text{spt}^D={\cal E}[\rho_\text{spt}]$ is always given by sending the pure SPT state $\rho_\text{spt}=|\Psi \rangle \langle \Psi |$ through a decoherence channel ${\cal E}$. In the Choi representation, this can be written as $\Ket{\rho_\text{spt}^D}=\hat{{\cal E}}\Ket{\rho_\text{spt}}$. We claim that the state $\Ket{\rho_\text{spt}^D}$ must be the ground state of the following deformed double Hamiltonian \begin{equation}\label{eq:HsptD_nonlocal} \hat{{\cal H}}_\text{spt}^D=(\hat{{\cal E}}\hat{{\cal A}}\hat{{\cal E}}^{-1})^\dagger (\hat{{\cal E}}\hat{{\cal A}}\hat{{\cal E}}^{-1}), \end{equation} which is still a Hermitian positive semidefinite operator in the doubled Hilbert space. To see this, we apply $\hat{{\cal H}}_\text{spt}^D$ to the state $\Ket{\rho_\text{spt}^D}$ and obtain $\hat{{\cal H}}_\text{spt}^D\Ket{\rho_\text{spt}^D} =(\hat{{\cal E}}\hat{{\cal A}}\hat{{\cal E}}^{-1})^\dagger \hat{{\cal E}}\hat{{\cal A}}\Ket{\rho_\text{spt}}=0$, which proves our claim. In this way, given the original SPT Hamiltonian $H_\text{spt}$ and the decoherence channel ${\cal E}$, we can in principle construct the parent Hamiltonian $\hat{{\cal H}}_\text{spt}^D$ that stabilizes the decohered SPT state $\Ket{\rho_\text{spt}^D}$ as its unique ground state. However, one concern is that the Hamiltonian constructed in \eqnref{eq:HsptD_nonlocal} may not be a local Hamiltonian.
Nevertheless, if the original SPT Hamiltonian $H_\text{spt}$ is made of local commuting projectors (e.g.~a stabilizer Hamiltonian), it is possible to define a set of local Cholesky operators $\hat{{\cal A}}_i$ (using local projectors) such that $\hat{{\cal H}}_\text{spt}=\sum_{i}\hat{{\cal A}}_i^\dagger \hat{{\cal A}}_i$; the same construction then leads to a local Hamiltonian $\hat{{\cal H}}_\text{spt}^D=\sum_i(\hat{{\cal E}}\hat{{\cal A}}_i\hat{{\cal E}}^{-1})^\dagger (\hat{{\cal E}}\hat{{\cal A}}_i\hat{{\cal E}}^{-1})$~\cite{WITTEN1982, Wouters2021, pivot}. We now apply this general construction principle to the $1d$ and $2d$ cluster state examples discussed previously.

\begin{figure}[!t] \centering \includegraphics[width = 0.47 \textwidth]{doubled.pdf} \caption{\label{fig:doubled} \jyl{(a) 1d $\mathbb{Z}_2^{\textrm{odd}} \times \mathbb{Z}_2^{\textrm{even}}$ symmetric cluster state density matrix and (b) 2d $\mathbb{Z}_2^{(0)} \times \mathbb{Z}_2^{(1)}$ symmetric cluster state density matrix under the Choi isomorphism. The decoherence of the symmetry $G_B$ maps to the coupling between $G_B^u$ and $G_B^l$ charges in the two layers. Accordingly, the two respective symmetries become identified, and the SPT classification collapses. } } \end{figure}

\subsubsection{1d cluster state}

For the $1d$ cluster state, the SPT projector Hamiltonian can be written as \begin{equation} H_\text{spt}= \sum_{i} \hat{{\cal A}}_i^2, \quad \hat{{\cal A}}_i \equiv \frac{1-Z_{i-1}X_iZ_{i+1}}{2}. \end{equation} Consider the decoherence model in \eqnref{eq:noise_model1}, which has the following convenient form: \begin{align} \label{eq:noise_model} \hat{{\cal E}} &= \prod_m \frac{ e^{\tau Z_{2m,u} Z_{2m,l}} }{\cosh \tau}, \quad \tanh \tau = \frac{p}{1-p}. \end{align} Then, we see that \begin{align} \hat{{\cal E}}\hat{{\cal A}}_{2n} \hat{{\cal E}}^{-1} = \frac{1}{2} \big( 1 - e^{2\tau Z_{2n,u} Z_{2n,l}} Z_{2n-1} X_{2n} Z_{2n+1} \big), \end{align} while $\hat{{\cal E}} \hat{{\cal A}}_{2n-1} \hat{{\cal E}}^{-1}\,{=}\,\hat{{\cal A}}_{2n-1}$. As depicted in \figref{fig:doubled}(a), the parent Hamiltonian $\hat{{\cal H}}_\text{spt}^D$ for the decohered SPT state takes the form of \begin{equation}\label{eq: HsptD} \hat{{\cal H}}_\text{spt}^D= \hat{{\cal H}}_{\text{spt},u}^D+\hat{{\cal H}}_{\text{spt},l}^D+\hat{{\cal H}}_{\textrm{int}}^D, \end{equation} where the upper layer Hamiltonian reads \begin{align} \hat{{\cal H}}_{\text{spt},u}^D &= \sum_m \frac{\cosh^2 2\tau - \cosh 2\tau\, Z_{2m-1,u} X_{2m,u} Z_{2m+1,u}}{2} \nonumber \\ &\,+ \sum_m \frac{1 - Z_{2m,u} X_{2m+1,u} Z_{2m+2,u}}{2}, \end{align} and the lower layer Hamiltonian $\hat{{\cal H}}_{\text{spt},l}^D$ is essentially the same as $\hat{{\cal H}}_{\text{spt},u}^D$ with all the labels $u$ replaced by $l$, together with the interlayer coupling \begin{equation} \hat{{\cal H}}_{\textrm{int}}^D=-\frac{\sinh 4\tau}{2}\sum_{m}Z_{2m,u}Z_{2m,l}. \end{equation} The coupling vanishes in the pure state limit when the decoherence strength $p\to 0$ ($\tau \to 0$), and diverges in the strong decoherence limit as $p\to 1/2$ (which corresponds to measuring the $Z_{2m}$ operators projectively and forgetting the measurement outcomes).
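The conjugation identity above can be verified by brute force on a small doubled system (a numerical sketch in Python with three sites per layer; the value of $\tau$ and the system size are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

def op(paulis, n):
    """Pauli string on n qubits; paulis = {site: matrix}."""
    return reduce(np.kron, [paulis.get(i, I) for i in range(n)])

# three sites per layer: upper layer = qubits 0-2, lower layer = qubits 3-5
n, tau = 6, 0.7
ZXZ_u  = op({0: Z, 1: X, 2: Z}, n)   # Z_{2n-1} X_{2n} Z_{2n+1} on the upper layer
ZZ_int = op({1: Z, 4: Z}, n)         # interlayer Z_{2n,u} Z_{2n,l}
E = expm(tau * ZZ_int)               # Choi-space noise operator (normalization dropped)

A_2n = (np.eye(2 ** n) - ZXZ_u) / 2  # Cholesky operator A_{2n}
lhs  = E @ A_2n @ np.linalg.inv(E)
rhs  = (np.eye(2 ** n) - expm(2 * tau * ZZ_int) @ ZXZ_u) / 2
print(np.allclose(lhs, rhs))         # True
\end{verbatim}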
This parent Hamiltonian $\hat{{\cal H}}_\text{spt}^D$ in \eqnref{eq: HsptD} explicitly shows that the decohered SPT state $\Ket{\rho_\text{spt}^D}$ can be interpreted as two identical layers of $\mathbb{Z}_2^{\textrm{odd}}\times\mathbb{Z}_2^{\textrm{even}}$ SPT states coupled together by the interlayer $\mathbb{Z}_2^{\textrm{even}}$ charge tunneling (i.e.~the ferromagnetic interlayer $ZZ$ coupling on every even site). The resulting state is a nontrivial SPT state protected by the $\mathbb{Z}_2^{{\textrm{odd}},u}\times\mathbb{Z}_2^{{\textrm{odd}},l}\times\mathbb{Z}_2^{{\textrm{even}}}$ symmetry in the doubled Hilbert space. This lattice model analysis is in full agreement with the field theory understanding in \secref{sec:1d field theory}.

\subsubsection{2d cluster state}

For the $2d$ cluster state under the local $Z$ noise on edge qubits described by \eqnref{eq:noise_model}, the Choi state of the decohered density matrix can be shown to be the ground state of a Hamiltonian of the form \eqnref{eq: HsptD}, where \begin{align} \label{eq:2dSPT_couple} \hat{{\cal H}}_{\text{spt},u}^D &= - \cosh 2 \tau \sum_e \bm{X}_{e} \prod_{v \in \partial e} Z_{v} - \sum_v X_v \prod_{e \in \dd v} \bm{Z}_e \nonumber \\ \hat{{\cal H}}_{\textrm{int}}^D&= - \sinh 4\tau \sum_{e} \bm{Z}_{e,u}\bm{Z}_{e,l}. \end{align} In this doubled system, the coupling $\hat{{\cal H}}_\textrm{int}^D$ breaks the two one-form symmetries down to their diagonal subgroup, and the remaining global symmetry would be $\mathbb{Z}_2^{(0),u} \times \mathbb{Z}_2^{(0),l} \times \mathbb{Z}_2^{(1)}$, as illustrated in \figref{fig:doubled}(b). However, a 1-form symmetry can often be robust under perturbation; even if it does not exist at a microscopic level, it can appear as an emergent symmetry~\cite{Tupitsyn2010, Nahum2021}. The robustness of an SPT state under the local decoherence of higher-form symmetries can be thought of as a consequence of extensively many conserved quantities, which is in contrast with the single globally conserved charge of a 0-form symmetry. Accordingly, some decoherence in the local charge configuration cannot destroy the macroscopic stability of emergent higher-form symmetries, and the corresponding anomaly acquires noise resilience. Therefore, for $p \ll 1$, the two emergent one-form symmetries may remain robust, providing a bulk anomaly inflow such that if one imagines a fictitious boundary of the doubled system, the boundary would exhibit spontaneous symmetry breaking of the individual $\mathbb{Z}_2^{(0),u}$ and $\mathbb{Z}_2^{(0),l}$ symmetries. This, in turn, implies that the type-I strange correlator for $G_A^u \equiv \mathbb{Z}_2^{(0),u}$ charged operators must be nontrivial in the doubled (pure) state for small $p$. In other words, \begin{align} C^\textrm{I}_{A_u}(v,v') &\equiv \frac{ \langle \tilde{\Omega} | \, Z_v Z_{v'} \otimes \mathbb{I} \,| \tilde{\Psi} \rangle }{ \langle \tilde{\Omega} | \tilde{\Psi} \rangle } \end{align} would be nontrivial. Since this expression is nothing but the type-I strange correlator for the original decohered mixed state, the nontrivial type-I strange correlator under decoherence in this model can be understood in terms of the stability of the emergent one-form symmetries in the doubled system.

\subsubsection{Back to the original mixed state}

So far, we have examined the cases where the effect of decoherence can be considered as perturbing the parent Hamiltonian for the Choi state.
In the doubled Hilbert space, decoherence generally amounts to a non-unitary state evolution that only respects the diagonal part of the doubled symmetry, and breaks the respective symmetry of each layer. Thus, one might expect that any doubled SPT order protected by the doubled symmetry group would immediately collapse under decoherence. Interestingly, we have shown that this is not the case in several examples, including the 2d cluster state (lattice model) and 2d SPT phases (field theory). In the Choi state language, the type-I and type-II strange correlators of the decohered SPT mixed state are expressed as type-I strange correlators for different symmetries: \begin{align} \label{eq:boundary} C^\textrm{I}_{A} &= \frac{ \langle \tilde{\Omega} | \, (O^A_i O^A_j)_u \otimes \mathbb{I}_l \,| \tilde{\Psi} \rangle }{ \langle \tilde{\Omega} | \tilde{\Psi} \rangle } = C^{d,\textrm{I}}_{A_u} \\ C^\textrm{II}_A &= \frac{ \langle \tilde{\Omega} | \, (O^A_i O^A_j)_u \otimes (O^A_i O^A_j)_l \,| \tilde{\Psi} \rangle }{ \langle \tilde{\Omega} | \tilde{\Psi} \rangle } = C^{d,\textrm{I}}_{A_u\cdot A_l}, \end{align} where the superscript $d$ indicates that the strange correlator is for the pure doubled state, and the subscript $A_u \cdot A_l$ means that the strange correlator is for the operator charged under both $G_A^u$ and $G_A^l$. Interestingly, the original type-II strange correlator of the mixed state $\rho_\textrm{spt}^D$ for $G_A$ becomes the type-I strange correlator of the Choi state $\Vert \rho_\textrm{spt}^D \rangle \hspace{-2pt} \rangle$ for the operator charged under both $G_A^u$ and $G_A^l$. In the Choi state, this strange correlator naturally probes the anomaly inflow of the $G_A^u \times G_A^l \times G_B$ SPT order. Since the decoherence acts on the doubled SPT state like a shallow symmetry-respecting quantum circuit, as long as the decoherence is not too strong, the $G_A^u \times G_A^l \times G_B$ SPT order should be robust. Therefore, the Choi isomorphism naturally provides a mechanism for the stability of the type-II strange correlator under selective decoherence.

\begin{figure}[!t] \centering \includegraphics[width = 0.43 \textwidth]{information2.pdf} \caption{\label{fig:information} Schematic diagram of how information is encoded, decohered, and decoded. After the encoding step, a label $x$ is encoded in the quantum correlations of a prepared state $|\psi(x)\rangle$. The state undergoes decoherence, becoming a mixed state $\rho^D(x)$. Finally, a receiver measures this mixed state to learn about the label $x$ (assuming the state is repeatedly prepared). After enough measurements, the receiver may generate an output $\hat{x}$, which is an estimate of the label $x$. The decoding is successful if $\hat{x} = x$. } \end{figure}

\subsection{Implications of the strange correlators}

As remarked, a nontrivial and robust type-I strange correlator can be interpreted as signalling the presence of nontrivial boundary physics. From the conventional understanding, the boundary of a $1d$ or $2d$ topological phase is equipped with an associated anomaly that makes the boundary nontrivial, i.e., the boundary must either spontaneously break the symmetry or be power-law correlated. Compared with the type-I strange correlator, a nontrivial type-II strange correlator has a more subtle implication. The type-II strange correlator can be interpreted as an information-theoretic quantity that allows one to identify what the underlying state is for a given mixed-state density matrix.
As illustrated in \figref{fig:information}, one may imagine a procedure where the \emph{label} information $x$ is encoded as the quantum circuit that can repeatedly generate a certain quantum state. For example, given a label $x\,{=}\,\textrm{SPT}$, a corresponding quantum circuit that prepares a certain SPT state can be constructed, whose output quantum state is sent to an observer. While doing so, decoherence occurs for various reasons, such as imperfections in the state preparation or transmission noise from the environment, converting the pure quantum state into a mixed state density matrix. Finally, the observer repeatedly measures the mixed state to learn about the input label $x$. This last step is formally denoted as \emph{decoding}. Whether one can accurately identify the label or not crucially depends on the decoding strategy. For example, if there is no decoherence, decoding would be straightforward; one can simply measure the type-I strange correlator, which as we argued can be viewed as a non-local SPT order parameter for a given pure state density matrix, and its value would immediately allow one to determine the decoded label $\hat{x}$ (see \secref{sec:measure} for a more precise protocol). Such a decoding strategy would also be efficient in the sense that the required number of measurements would at most scale polynomially with the system size. Although the type-I strange correlator may become short-ranged with strong enough decoherence, at weak decoherence the type-I strange correlator may remain nontrivial. Hence for weak decoherence, the type-I strange correlator can still tell us whether the underlying state was a trivial disordered state or a nontrivial SPT state. But with strong decoherence, we have shown that there are examples where the type-I strange correlator becomes short-ranged, while the type-II strange correlator remains nontrivial. Indeed, the robustness of the type-II strange correlator implies that there must be a method to distinguish between the decohered mixed states originating from an SPT state and those originating from a trivial disordered state. Without referring to any decoding strategy, the fact that the decohered trivial state and the decohered SPT state are distinguished by the type-II strange correlator implies the existence of a fundamental information-theoretic distinction between these two labels. Although the corresponding decoding strategy may not be efficient, the behavior of the type-II strange correlator provides a fundamental distinction between the two classes of density matrices. Finally, when both the type-I and type-II strange correlators are trivial, one cannot tell whether the underlying state is an SPT or trivial state based on any feature that involves the mixed state density matrix. Therefore, the behaviors of the type-I and type-II strange correlators allow us to distinguish three different regimes of the mixed state density matrix under decoherence, as illustrated in \figref{fig:phase}. \begin{figure}[!t] \centering \includegraphics[width = 0.48 \textwidth]{information_phase2.pdf} \caption{\label{fig:phase} \jyl{We draw schematic plots of the type-I and type-II strange correlators as a function of decoherence strength, which distinguish three different information-theoretic phases. For $p < p_{c,1}$, $C^\textrm{I}$ is nontrivial, implying that the boundary physics of the original system should be nontrivial. Furthermore, the SPT label information may be efficiently decoded (see \secref{sec:measure}).
For $p_{c,1} < p < p_{c,2}$, $C^{\textrm{II}}$ is nontrivial, quantifying the information that the original state under decoherence was an SPT order. However, this quantity is very difficult to extract from bulk measurements. For $p > p_{c,2}$, the SPT label information cannot be identified by the type-II strange correlator. In the local decoherence models we considered in the lattice calculations, $p_{c,2}$ does not exist, since the type-II strange correlator is always finite. However, for a generic decoherence model, we would expect the transition into the ``trivial'' information phase to exist. } } \end{figure} Here we comment on the efficiency of measuring the type-II strange correlator. Note that any observable in the doubled Hilbert space that involves operators in both the upper and lower Hilbert spaces would involve two density matrices, which makes the quantity challenging to measure in general. This is because the numerator and denominator of the type-II strange correlator must be evaluated separately, and the denominator, which is the purity of the density matrix $\rho^D$, is exponentially small in general; it is straightforward to show that \begin{equation} \textrm{tr}(\rho^D_\textrm{spt} \rho^D_\textrm{spt}) \sim \exp(-N), \end{equation} where $N$ is the number of sites, for any $p > 0$ in the decoherence model. Likewise, the numerator decays exponentially. Therefore, although in principle we can define a quantity to characterize an ordering in the doubled Hilbert space, it only characterizes the presence of some nontrivial quantum entanglement whose verification is exponentially hard. A similar situation occurs in the famous black hole information paradox: the reconstruction of a quantum state that has fallen into an \emph{old} black hole based on the Hawking radiation is theoretically possible, but it would require an exponential computational complexity in general~\cite{Hayden_2007, Yoshida2017, You1803.10425}.

\section{Experimentally probing the strange correlator} \label{sec:measure}

\jyl{ So far, we have discussed how strange correlators can detect nontrivial SPT features even under decoherence. Naively, strange correlators are just theoretical tools, since we cannot sandwich two different states in actual experiments. However, in this section, we will show that with the help of classical computations, one can experimentally probe nontrivial signatures of the strange correlators through measurements, via so-called computationally assisted observables (CAO)~\cite{Lee2022, Garratt2022}. }

\subsection{Strange correlator against a generic trivial state}

To proceed, we consider a strange correlator against a generic trivial disordered state $\ket{\Omega_{\bm{s}}}$, defined as the product state in the $X$-basis characterized by the bit-string ${\bm{s}} = \{s_1,s_2,...,s_N\}$: \begin{equation} \label{eq:generic_product} \ket{\Omega_{\bm{s}}} \equiv \bigotimes_{n=1}^N \big[ Z^{(1-s_n)/2}|+\rangle_n \big],\quad s_n \in \{-1,1\}, \end{equation} which is invariant under the global symmetry. Essentially, the label ${\bm{s}}$ specifies the symmetry charge configuration of the trivial state of interest. Then, we define $C^\textrm{I}_{\bm{s}}(r)=\tr(\rho_\text{spt}O(0)O(r)\rho_{0,{\bm{s}}})/\tr(\rho_\text{spt}\rho_{0,{\bm{s}}})$ as the type-I strange correlator computed against the ${\bm{s}}$-labeled trivial state density matrix $\rho_{0,{\bm{s}}} \equiv |\Omega_{\bm{s}} \rangle \langle \Omega_{\bm{s}} |$.
We emphasize that for the strange correlator to be well-defined, the overlap between the SPT and trivial states, $\tr(\rho_\textrm{spt} \rho_{0,{\bm{s}}})$, has to be non-zero. For that, it is necessary that they have components carrying the same total charge under the global symmetry. In general, one may expect that this generalized strange correlator $C^\textrm{I}_{\bm{s}}(r)$ still serves as a suitable probe for the SPT physics, as the locally-charged background (specified by ${\bm{s}}$) should not change the qualitative behaviors. In the following, we compute strange correlators of pure $G_A \times G_B$ SPT states against generic trivial states for two exemplary cases, with and without decoherence, where the decoherence model is given as in \eqnref{eq:noise_basic}: \vspace{5pt} \noindent $(i)$ \emph{1d cluster $\mathbb{Z}_2^{\textrm{odd}} \times \mathbb{Z}_2^{\textrm{even}}$ SPT state}: For $|\Omega_{\bm{s}} \rangle$ to have a non-zero overlap with the 1d cluster state, the trivial state should have an even parity under both symmetries, i.e., $\prod_n s_{2n+1} = \prod_n s_{2n} = 1$. Then, \begin{align} \label{eq:generic_SCI} C^\textrm{I}_{{\textrm{odd}},{\bm{s}}}(2n) = \prod_{m=1}^n s_{2m},\quad C^\textrm{I}_{{\textrm{even}},{\bm{s}}}(2n) = \prod_{m=1}^n s_{2m-1}. \end{align} Under decoherence of $\mathbb{Z}_2^{\textrm{even}}$, $\rho_\textrm{spt}^D$ becomes an ensemble of configurations with different total $\mathbb{Z}^{\textrm{even}}_2$ charges, and the constraint $\prod_n s_{2n} = 1$ no longer has to hold for the trivial state. In this case, $C^\textrm{I}_{{\textrm{even}},{\bm{s}}}(2n)$ remains unchanged while $C^\textrm{I}_{{\textrm{odd}},{\bm{s}}}(2n)$ becomes short-ranged as in \eqnref{eq:strange_typeI_result}. Even away from the stabilizer limit, the numerical results in \figref{fig:1d_numerics} have shown that the above sign structure persists, and the magnitude of $C^\textrm{I}_{{\textrm{odd}},{\bm{s}}}$ does not depend on the $\mathbb{Z}_2^{\textrm{even}}$-charge configuration of the trivial state ${\bm{s}}_{\textrm{even}} = \{s_2,s_4,...\}$. However, the magnitude of $C^\textrm{I}_{{\textrm{odd}},{\bm{s}}}$ can change depending on ${\bm{s}}_{\textrm{odd}} = \{s_1, s_3, ... \}$. \vspace{5pt} \noindent $(ii)$ \emph{2$d$ cluster $\mathbb{Z}^{\textrm{(0)}}_2 \times \mathbb{Z}_2^{\textrm{(1)}}$ SPT state}: For $|\Omega_{\bm{s}} \rangle$ to have a non-zero overlap with the 2d cluster state, there are two necessary conditions: $(a)$ $\prod_v s_v = 1$ (trivial 0-form global charge), and $(b)$ $B_p \equiv \prod_{e \in p} s_e = 1$ for all plaquettes $p$ (trivial 1-form charge configurations). Then, the type-I strange correlators of the ordinary correlation $Z_v Z_{v'}$ for $G_A = \mathbb{Z}^{\textrm{(0)}}_2$ and of the Wilson loop operator $\prod_{e \in \partial {\cal A}} \bm{Z}_e$ for $G_B = \mathbb{Z}^{\textrm{(1)}}_2$ are given as follows: \begin{align} C^\textrm{I}_{A,{\bm{s}}}(\partial l) = \prod_{e \in l} s_{e},\quad C^\textrm{I}_{B,{\bm{s}}}(\partial {\cal A} ) = \prod_{v \in {\cal A} } s_{v}, \end{align} where $l$ is an open string connecting two vertices $v$ and $v'$, i.e., $\partial l = \{v,v'\}$. Under decoherence of $G_A$ (0-form), the above sign structure for $C^\textrm{I}_B$ remains robust, while the magnitude can change depending on the decoherence strength as in \eqnref{eq:1form_strangeIB}. In this case, $C^\textrm{I,II}_A$ are unaffected. Under decoherence of $G_B$ (1-form), however, $C^\textrm{I}_A$ gets modified in an interesting way.
It is straightforward to show that under decoherence,
\begin{align}
C^\textrm{I}_{A,{\bm{s}}} = \langle Z_{v} Z_{v'} \rangle_\beta^{\textrm{RBIM}({\bm{s}})},
\end{align}
which is the correlation function of the random bond Ising model (RBIM) at the inverse temperature $\beta = \tanh^{-1}(1-2p)$ for the bond configuration ${\bm{s}}$. Being an RBIM correlation, the magnitude of $C^\textrm{I}_{A,{\bm{s}}}$ depends only on the bond frustration, which corresponds to the 1-form charge configuration $\{ B_p \}$ of the trivial state $|\Omega_{\bm{s}} \rangle$. Interestingly, unlike in the 1d cluster state case, this implies that the type-I strange correlator of the mixed state $\rho^D_\textrm{spt}$ behaves differently depending on the trivial state we compute against. For example, when $s_e = 1$ for all edges, $C^\textrm{I}_{A,{\bm{s}}}$ behaves as the correlation of the ferromagnetic Ising model. If we choose $s_e$ such that $B_p = +1$ for half the plaquettes and $B_p = -1$ for the other half, $C^\textrm{I}_{A,{\bm{s}}}$ behaves as the correlation of the maximally frustrated RBIM, decaying exponentially in distance for any $p$. This raises the question of which strange correlator one should use to diagnose the nontrivial behavior of the underlying SPT physics. In fact, a proper way is to take a weighted average of all type-I strange correlators, whose weight is determined by the overlap between each trivial state and the given density matrix, $P_{\bm{s}} = \langle \Omega_{\bm{s}} | \rho_\textrm{spt}^D | \Omega_{\bm{s}} \rangle$. In this formalism, for small $p$ the weight for the latter scenario is extremely small, and the \emph{average} behavior of the type-I strange correlator would still be long-ranged. We will examine the implications of this in the next section.

\subsection{With post-selection}
With measurements and post-selections, it is possible to experimentally extract ($i$) a quantity very similar to the type-I strange correlator and ($ii$) the type-II strange correlator. The idea is that measurements and post-selections allow one to project the SPT state into a certain trivial product state to some degree, a step essential to compute the strange correlator. To illustrate the idea, consider the $1d$ cluster state discussed in the previous section. Assume that we want to understand the behavior of the type-I strange correlators for the $\mathbb{Z}_2^{\textrm{odd}}$ symmetry. As the type-I strange correlator is not a physical quantity that can be directly measured, we instead define the \emph{marginalized} type-I strange correlator:
\begin{align} \label{eq:marginalized_strangeI}
\tilde{C}^\textrm{I}_{{\textrm{odd}}, {\bm{s}}^\textrm{e}}(2n) \equiv \frac{\textrm{tr}(\rho^D_\textrm{spt} Z_1 Z_{2n+1} {\cal P} _{{\bm{s}}^\textrm{e}} )}{\textrm{tr}(\rho^D_\textrm{spt} {\cal P} _{{\bm{s}}^\textrm{e}} )} = \langle Z_1 Z_{2n+1} \rangle_{ {\cal P} _{{\bm{s}}^\textrm{e}} \rho^D_\textrm{spt}}
\end{align}
where $ {\cal P} _{{\bm{s}}^\textrm{e}}$ is the projector onto a specific $\mathbb{Z}_2^{\textrm{even}}$-charge configuration defined as
\begin{align} \label{eq:projector2}
{\cal P} _{{\bm{s}}^\textrm{e}} \equiv \prod_{m=1}^{N} \frac{1 + s_{2m} X_{2m}}{2}
\end{align}
with the bit-string ${{\bm{s}}^\textrm{e}}=(s_2,s_4,...,s_{2N})$. We remark that the form of the marginalized type-I strange correlator is very similar to that of the type-I strange correlator.
The crucial difference is that instead of evaluating against a trivial state density matrix $\rho_{0,{\bm{s}}}$, which is essentially the projector onto a specific $\mathbb{Z}_2^{\textrm{odd}} \times \mathbb{Z}_2^{\textrm{even}}$ charge configuration labeled by ${\bm{s}}$, here we evaluate against the projector that only specifies the $\mathbb{Z}_2^{\textrm{even}}$ charge configuration. Thus, in the computation of $\tilde{C}^\textrm{I}_{{\textrm{odd}}, {\bm{s}}^\textrm{e}}$, the $\mathbb{Z}_2^{\textrm{odd}}$ charge configuration is \emph{marginalized}. One may immediately notice that the marginalized type-I strange correlator is the correlation function of $Z$ operators for the \emph{projected} mixed state $ {\cal P} _{{\bm{s}}^\textrm{e}} \rho^D_\textrm{spt} {\cal P} _{{\bm{s}}^\textrm{e}}$. The projected mixed state can be obtained by measuring all the even sites in the $X$-basis and \emph{post-selecting} the resulting state with measurement outcomes equal to ${\bm{s}}^{\textrm{e}}$. Being a correlation function of a physical mixed state, the marginalized type-I strange correlator is experimentally measurable.

Also, as the name suggests, it is related to the type-I strange correlators. To show this, we define $ {\cal P} _{{\bm{s}}^{\textrm{o}}}$ as the projector onto a specific $\mathbb{Z}_2^{\textrm{odd}}$ configuration labeled by ${\bm{s}}^{\textrm{o}} = (s_1,s_3,...,s_{2N-1})$. Together, $ {\cal P} _{{\bm{s}}^{\textrm{o}}}$ and $ {\cal P} _{{\bm{s}}^{\textrm{e}}}$ project onto a configuration where all symmetry charges are specified, i.e., a trivial disordered state $|\Omega_{{\bm{s}}}\rangle$ with ${\bm{s}} = ({\bm{s}}^{\textrm{o}}, {\bm{s}}^{\textrm{e}})$. Therefore, by inserting a completeness relation $\mathbb{I} = \sum_{{\bm{s}}^{\textrm{o}}} {\cal P} _{{\bm{s}}^\textrm{o}}$ in \eqnref{eq:marginalized_strangeI}, we can show that
\begin{align} \label{eq:tilde_strange_typeI}
\tilde{C}^\textrm{I}_{{\textrm{odd}},{\bm{s}}^\textrm{e}}(2n) &= \sum_{{\bm{s}}^\textrm{o}} \frac{\textrm{tr}\big(\rho^D_\textrm{spt} {\cal P} _{{\bm{s}}^\textrm{e}} {\cal P} _{{\bm{s}}^\textrm{o}} \big)}{\textrm{tr}\big(\rho^D_\textrm{spt} {\cal P} _{{\bm{s}}^\textrm{e}} \big)} \frac{ \textrm{tr}\big(\rho^D_\textrm{spt} Z_1 Z_{2n+1} {\cal P} _{{\bm{s}}^\textrm{e}} {\cal P} _{{\bm{s}}^\textrm{o}} \big)}{ \textrm{tr}\big(\rho^D_\textrm{spt} {\cal P} _{{\bm{s}}^\textrm{e}} {\cal P} _{{\bm{s}}^\textrm{o}} \big) } \nonumber \\
&= \sum_{{\bm{s}}^\textrm{o}} P({\bm{s}}^\textrm{o} | {\bm{s}}^\textrm{e}) \, C^\textrm{I}_{{\textrm{odd}}, ({\bm{s}}^{\textrm{o}}, {\bm{s}}^{\textrm{e}})} (2n),
\end{align}
where $P({\bm{s}}^\textrm{o}|{\bm{s}}^\textrm{e} )$ is the probability of observing the measurement outcome ${\bm{s}}^\textrm{o}$ on odd sites conditioned on ${\bm{s}}^\textrm{e}$ on even sites for the decohered SPT state $\rho^D_\textrm{spt}$. Therefore, as a weighted average of the type-I strange correlators, the experimentally accessible quantity $\tilde{C}^\textrm{I}_{{\textrm{odd}}, {\bm{s}}^{\textrm{e}}}(2n)$ quantifies the overall non-trivialness of the type-I strange correlator. In the stabilizer limit, as we showed in \eqnref{eq:generic_SCI}, the type-I strange correlator $C^\textrm{I}_{{\textrm{odd}}, {\bm{s}}}$ is independent of the values of ${\bm{s}}$ on odd sites. Accordingly, the marginalized type-I strange correlator $\tilde{C}^\textrm{I}_{{\textrm{odd}}, {\bm{s}}^{\textrm{e}}}$ would be equal to ${C}^\textrm{I}_{{\textrm{odd}}, {\bm{s}}}$ for any ${\bm{s}}$ that matches ${\bm{s}}^{\textrm{e}}$ on the even sites.
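As a concrete sanity check of \eqnref{eq:generic_SCI} in the stabilizer limit, the following minimal Python sketch evaluates $C^\textrm{I}_{{\textrm{odd}},{\bm{s}}}(2n) = \langle \Omega_{\bm{s}}| Z_1 Z_{2n+1} |\Psi\rangle / \langle \Omega_{\bm{s}}|\Psi\rangle$ for a small cluster chain at $p=0$. The chain length, the periodic boundary conditions, and the dense state-vector representation are illustrative choices, not part of the derivation above.
\begin{verbatim}
import numpy as np
from functools import reduce

plus = np.array([1., 1.]) / np.sqrt(2)    # X = +1 eigenstate
minus = np.array([1., -1.]) / np.sqrt(2)  # X = -1 eigenstate

def kron_all(factors):
    return reduce(np.kron, factors)

n_sites = 8                                # periodic chain, sites 1..8
dim = 2 ** n_sites
# bits[idx, j] = value of qubit j in computational basis state idx
bits = (np.arange(dim)[:, None] >> np.arange(n_sites - 1, -1, -1)) & 1

# 1d cluster state: CZ on every nearest-neighbor pair applied to |+>^n
psi = kron_all([plus] * n_sites)
for j in range(n_sites):
    k = (j + 1) % n_sites
    psi = psi * (1 - 2 * (bits[:, j] & bits[:, k]))  # phase (-1)^{b_j b_k}

def omega(s):
    """Trivial product state |Omega_s> with X_n eigenvalues s_n."""
    return kron_all([plus if sn == 1 else minus for sn in s])

def apply_zz(j, k, vec):
    """Apply Z_j Z_k (0-indexed sites) to a state vector."""
    return vec * (1 - 2 * bits[:, j]) * (1 - 2 * bits[:, k])

s = (1, -1, 1, -1, 1, 1, 1, 1)   # even parity on both sublattices
om = omega(s)
assert abs(om @ psi) > 1e-12     # the overlap must be nonzero

for n in (1, 2, 3):
    sc = (om @ apply_zz(0, 2 * n, psi)) / (om @ psi)
    prediction = np.prod([s[2 * m - 1] for m in range(1, n + 1)])
    print(f"2n={2*n}: C^I = {sc:+.6f}, prod_m s_2m = {prediction:+d}")
\end{verbatim}
The output matches $\prod_{m=1}^n s_{2m}$ independently of the odd-sublattice values in ${\bm{s}}$, consistent with the stabilizer-limit statement above.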
The type-II strange correlator can be expressed as the ratio of two fidelities, $\textrm{tr}(\rho_\textrm{spt}^D \rho_{0,{\bm{s}}}^{\vphantom{\dagger}})$ and $\textrm{tr}(\rho_\textrm{spt}^D {\rho}_{0,{\bm{s}}'}^{\vphantom{\dagger}})$, where $\rho_{0,{\bm{s}}'} \equiv Z_1 Z_{2n+1} \rho_{0,{\bm{s}}} Z_{2n+1} Z_1$ is also a pure state density matrix, whose label ${\bm{s}}'$ differs from ${\bm{s}}$ in the signs of two sites:
\begin{align} \label{eq:typeII_strange}
C^\textrm{II}_{{\textrm{odd}},{\bm{s}}}(2n) &= \frac{\textrm{tr}\big(\rho^D_\textrm{spt} Z_{1} Z_{2n+1} \rho^{\vphantom{\dagger}}_{0,{\bm{s}}} Z_{2n+1} Z_1 \big)}{\textrm{tr}\big(\rho^D_\textrm{spt} \rho^{\vphantom{\dagger}}_{0,{\bm{s}}} \big)} \nonumber \\
&=\frac{\textrm{tr}\big(\rho^D_\textrm{spt} \rho^{\vphantom{\dagger}}_{0,{\bm{s}}'} \big)}{\textrm{tr}\big(\rho^D_\textrm{spt} \rho^{\vphantom{\dagger}}_{0,{\bm{s}}} \big)} = \frac{P({\bm{s}}')}{P({\bm{s}}\,)}.
\end{align}
Therefore, if we repeatedly measure the charge configuration ${\bm{s}}$ of the mixed state and estimate the probabilities $P({\bm{s}})$ and $P({\bm{s}}')$ of the measurement outcomes ${\bm{s}}$ and ${\bm{s}}'$, we can calculate $C^\textrm{II}_{{\textrm{odd}},{\bm{s}}}(2n)$. Without post-selection, this is an extremely challenging task since there are exponentially many possible outcomes ${\bm{s}}$. However, assuming we can post-select, one can compare the relative frequency between ${\bm{s}}'$ and ${\bm{s}}$ and estimate the type-II strange correlator.

Another possibility to perform the fidelity estimation is to use classical shadow tomography \cite{Huang2002.08953}. In this approach, the decohered SPT state $\rho_\text{spt}^D$ will be repeatedly prepared and then measured in a random Clifford basis. The measurement will collapse $\rho_\text{spt}^D$ to a random stabilizer state $\ket{\sigma}$ with probability $\bra{\sigma}\rho_\text{spt}^D\ket{\sigma}$. These post-measurement states $\ket{\sigma}$ will be recorded and post-processed on a classical computer. In particular, the fidelity $\textrm{tr}(\rho_\text{spt}^D\rho_{0,{\bm{s}}})$ (both in the numerator and the denominator) in \eqnref{eq:typeII_strange} can be estimated as $\textrm{tr}(\rho_\text{spt}^D\rho_{0,{\bm{s}}})=\mathbb{E}_{\ket{\sigma}}\big[(2^{2N}+1)\bra{\sigma}\rho_{0,{\bm{s}}}\ket{\sigma}\big]-1$, where $\bra{\sigma}\rho_{0,{\bm{s}}}\ket{\sigma}$ can be efficiently computed given that $\ket{\sigma}$ is a stabilizer state and we have the freedom to choose the trivial reference state $\rho_{0,{\bm{s}}}$ to be a stabilizer state as well. Thus both the numerator and the denominator of \eqnref{eq:typeII_strange} can be obtained by fidelity estimation via classical shadow tomography. However, this is not more efficient than the post-selection method: because both fidelities are exponentially small ($\sim 2^{-2N}$) in the system size $2N$, the number of measurements needed to control the estimation standard deviation below the level of $2^{-2N}$, and hence to accurately determine the numerator and the denominator, is still exponentially large ($\sim 4^{2N}$).

\subsection{Without post-selection}
Post-selecting specific measurement outcomes allows us to access the type-II strange correlator as well as a weighted average of type-I strange correlators. However, the number of samples required for post-selection grows exponentially with the system size, and the whole idea becomes experimentally impractical.
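To make the sample-complexity issue explicit, the sketch below estimates the ratio $P({\bm{s}}')/P({\bm{s}}\,)$ in \eqnref{eq:typeII_strange} by simple counting of post-selected snapshots. The outcome distribution here is synthetic, drawn from a Dirichlet prior purely for illustration; in an experiment it would be the Born distribution of the $X$-basis charge measurement on $\rho^D_\textrm{spt}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for X-basis charge snapshots: draw M bit-strings
# (encoded as integers in [0, 2^L)) from an assumed distribution P.
L, M = 10, 200_000
P = rng.dirichlet(np.ones(2 ** L))          # hypothetical outcome weights
samples = rng.choice(2 ** L, size=M, p=P)   # M measured configurations

def type_II_estimate(s, s_prime, samples):
    """Estimate C^II = P(s')/P(s) from relative post-selected counts;
    the estimate is undefined if the outcome s is never observed."""
    n_s = np.count_nonzero(samples == s)
    n_sp = np.count_nonzero(samples == s_prime)
    return n_sp / n_s if n_s > 0 else float("nan")

s = int(np.argmax(P))      # a comparatively likely configuration s ...
s_prime = s ^ 0b101        # ... and s' differing from s in two signs
print("estimated C^II:", type_II_estimate(s, s_prime, samples))
print("exact ratio   :", P[s_prime] / P[s])
\end{verbatim}
Even in this tiny example, each specific bit-string occurs with probability $\sim 2^{-L}$, so the two post-selected counts use only a vanishing fraction of the $M$ snapshots; the required $M$ therefore grows exponentially with the system size.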
Therefore, for this method to be feasible, it is essential to avoid the post-selection issue.

\subsubsection{Without decoherence}
Before addressing this issue, it is important to understand that in the pure SPT state, a nontrivial behavior of the type-I strange correlator is intimately tied to the presence of finite non-local SPT order parameters, which can be experimentally measured to diagnose the underlying SPT order. For example, the following \emph{string order parameter} in the 1d $\mathbb{Z}^{\textrm{odd}}_2 \times \mathbb{Z}^{\textrm{even}}_2$ SPT phase allows one to distinguish the SPT phase from the trivial phase~\cite{SOP1989, SOP2008}:
\begin{equation}
S_{2a+1,2b+1} = Z_{2a+1} Z_{2b+1} \hspace{-3pt} \prod_{m={a+1}}^{b} \hspace{-3pt} X_{2m},
\end{equation}
which takes a non-zero expectation value as $|a-b| \rightarrow \infty$ in the SPT phase. One way to interpret this quantity is the following: in the SPT phase, the application of the $\mathbb{Z}_2^{\textrm{even}}$ symmetry transformation on a partial region is equivalent to threading a symmetry defect (flux) at the boundaries of the region. For this SPT state, by definition, such a $\mathbb{Z}_2^{\textrm{even}}$ symmetry defect should induce a $\mathbb{Z}_2^{\textrm{odd}}$ charge, which can be canceled by the application of the charge creation/annihilation operator. If we define $S$ as such a partial symmetry transformation combined with the charge creation/annihilation operators at the boundary, $S$ would take a finite expectation value, while for a trivial disordered state, the defect does not induce any charge and $S$ would take a value exponentially decaying in its size. For a given SPT state $\rho_\textrm{spt}$, the expectation value of the string order parameter can be expressed in terms of strange correlators as follows:
\begin{align} \label{eq:string_order}
\textrm{tr}\big( \rho_\textrm{spt} S_{1,2n+1} \big) &= \sum_{{\bm{s}}} P_{\bm{s}} \Big[ \prod_{m=1}^{n} s_{2m} \Big]\, C^\textrm{I}_{{\textrm{odd}}, {\bm{s}}}(2n) \nonumber \\
&= \sum_{{\bm{s}}^e} P_{{\bm{s}}^e} \Big[ \prod_{m=1}^{n} s_{2m} \Big]\, \tilde{C}^\textrm{I}_{{\textrm{odd}}, {\bm{s}}^e}(2n)
\end{align}
where we used the completeness of the product states $\sum_{{\bm{s}}} \rho_{0,{\bm{s}}} = \mathbb{I}$ for the derivation, and $P_{\bm{s}} = \textrm{tr}( \rho_\textrm{spt} \rho_{0,{\bm{s}}} )$ is the probability of obtaining the measurement outcome ${\bm{s}}$ by measuring $\rho_\textrm{spt}$ in the $X$-basis, satisfying $\sum_{\bm{s}} P_{\bm{s}} = 1$. The expression implies that the string order parameter is a weighted average of \emph{signed} strange correlators. The implications of \eqnref{eq:string_order} can be summarized as follows:
\begin{itemize}
\item If the expectation value of the non-local parameter is non-trivial, the signed average of the type-I strange correlators should be non-trivial.
\item If $(i)$ the type-I strange correlators are nontrivial and $(ii)$ their sign structure cancels the prefactor $\prod s_{2m}$, the non-local parameter $S$ must be non-trivial. However, even if the condition $(i)$ holds, the condition $(ii)$ may not hold under decoherence.
\end{itemize}
In the pure SPT state, where the expectation value of the non-local order parameter is nontrivial, the strange correlators are indeed nontrivial and their signs are generally canceled by the term $\prod s_{2m}$.
In other words,
\begin{equation}
C^\textrm{I}_{{\textrm{odd}},{\bm{s}}} = \prod_{m=1}^n s^{\vphantom{\dagger}}_{2m} C^\textrm{I}_{{\textrm{odd}},\bar{{\bm{s}}}} \,\, \Rightarrow \,\, \expval{S_{1,2n+1}} = \tilde{C}^\textrm{I}_{{\textrm{odd}}, \bar{{\bm{s}}}^e}
\end{equation}
where $\bar{{\bm{s}}}$ is defined in such a way that its even part is $\bar{{\bm{s}}}^e = (1,1,...)$ and its odd part is $\bar{{\bm{s}}}^o = {\bm{s}}^o$. Therefore, the string order parameter would be given as the marginalized type-I strange correlator discussed in the previous section. Since the string order parameter can simply be measured from the bulk wavefunction without any post-selection, this implies that in the pure SPT state, measuring string order parameters allows one to learn about the type-I strange correlators.

To be more concrete, we outline the following experimental procedure to measure non-local order parameters: for a given $\rho_\textrm{spt}^D$, measure all even sites in the $X$-basis and all odd sites in the $Z$-basis, where ${\bm{s}}^\textrm{e} = (s_2, s_4,...)$ and ${\bm{\sigma}}^\textrm{o} = (\sigma_1, \sigma_3, ...)$ are the measurement outcome bit-strings. We repeat this procedure and obtain a set of $M$ bit-strings $\{ {\bm{s}}^{\textrm{e},(i)} \}_{i=1}^M$ and $\{ {\bm{\sigma}}^{\textrm{o},(i)} \}_{i=1}^M$. Then,
\begin{align} \label{eq:exp_quantity_typeI}
& \frac{1}{M} \sum_{i=1}^M \Big[ \prod_{m=1}^n s^{(i)}_{2m} \Big]\,\sigma^{(i)}_{1} \sigma^{(i)}_{2n+1} \rightarrow \tilde{C}^\textrm{I}_{{\textrm{odd}},\bar{{\bm{s}}}^e}(2n)
\end{align}
whose statistical uncertainty decreases \emph{polynomially} with the number of measurement snapshots $M$. This is exactly equivalent to the measurement of the marginalized type-I strange correlator, except that we multiply by the additional factor $\prod s_{2m}$. In fact, the absence of post-selection is exactly compensated by the presence of this additional factor. Therefore, even without post-selection, we can identify a nontrivial behavior of the marginalized type-I strange correlator with polynomial sample complexity.

\jyl{ To generalize this idea to higher dimensions, consider a $G_A \times G_B$ SPT state characterized by the mixed anomaly between the $G_A$ and $G_B$ symmetries; in other words, the domain wall (defect) of $G_B$ is decorated with $G_A$-SPT states~\cite{Chen2014}. In this case, the non-local order parameter $S^A$ associated with the $G_A$-symmetry on the region $ {\cal A} $ is defined as follows \cite{Yoshida2016,2022arXiv221002485Z}:
\begin{align}
S^A_g( {\cal A} ) \equiv O_g(\partial {\cal A} ) \cdot U_g( {\cal A} )
\end{align}
where $g \in G_B$ is an element of the $G_B$ symmetry group, $O_g(\partial {\cal A} )$ is the operator that creates a $G_A$-SPT state along the boundary $\partial {\cal A} $ decorating the $g$-domain wall, and $U_g( {\cal A} )$ is the partial $G_B$ symmetry transformation on the region $ {\cal A} $. By definition, this non-local order parameter exhibits a perimeter law in the SPT phase, i.e., it decays exponentially with its perimeter $| \partial {\cal A} |$:
\begin{equation}
\langle S_g^A( {\cal A} ) \rangle \propto e^{-a | \partial {\cal A} | }.
\end{equation}
When $G_B$ is an $n$-form symmetry, the region of support $ {\cal A} $ is $(d\,{-}\,n)$-dimensional. Accordingly, its boundary is $(d\,{-}\,n\,{-}\,1)$-dimensional. Therefore, if $G_B$ is a $(d\,{-}\,1)$-form symmetry, there always exists a non-local order parameter for $G_B$ that takes a finite value in the SPT phase.
On the other hand, if $G_B$ is a $(d\,{-}\,2)$-form symmetry, the region of support is 2-dimensional and its boundary is 1-dimensional. Accordingly, in the SPT phase, the associated non-local order parameter would decay exponentially with its boundary length; however, for the trivial state, this order parameter would decay exponentially with its area, so one can distinguish an SPT state from a trivial disordered state. By inserting a completeness relation in the $G_B$-charge basis $\mathbb{I} = \sum_{{\bm{s}}^B} {\cal P} _{{\bm{s}}^B}$, where ${\bm{s}}^B$ is the string specifying a $G_B$-charge configuration, the non-local order parameter expectation value can be expressed as
\begin{align} \label{eq:SC_NLO}
\textrm{tr}( \rho_\textrm{spt}^D S_g ) &= \sum_{{\bm{s}}^B} P_{{\bm{s}}^B} \, c(U_g|{\bm{s}}^B) \, \tilde{C}^\textrm{I}_{A,{\bm{s}}^B}( O_g )
\end{align}
where each term is defined as
\begin{align}
P_{{\bm{s}}^B} &\equiv \tr( {\cal P} _{{\bm{s}}^B} \rho_\textrm{spt}^D ), \nonumber \\
c(U_g|{\bm{s}}^B) & \equiv \bra{\Omega_{{\bm{s}}^B}} U_g( {\cal A} )\ket{\Omega_{{\bm{s}}^B}}, \nonumber \\
\tilde{C}^\textrm{I}_{A, {\bm{s}}^B}(O_g) &\equiv \sum_{{\bm{s}}^A} P({\bm{s}}^A| {\bm{s}}^B) \, C^\textrm{I}_{A,({\bm{s}}^A,{\bm{s}}^B)}(O_g).
\end{align}
Here, $P_{{\bm{s}}^B}$ is the probability of obtaining the measurement outcome ${\bm{s}}^B$ on $\rho_\textrm{spt}^D$, $c(U_g|{\bm{s}}^B)$ is the phase factor obtained by applying $U_g( {\cal A} )$ on the product state labeled by ${\bm{s}}^B$, and $\tilde{C}^\textrm{I}_{A,{\bm{s}}^B}( O_g )$ is the marginalized type-I strange correlator. Therefore, the points made earlier for the 1d $\mathbb{Z}_2 \times \mathbb{Z}_2$ SPT state immediately extend to these SPT states. }

\subsubsection{With decoherence}
Despite \eqnref{eq:SC_NLO}, a nontrivial behavior of the type-I strange correlator under decoherence does not imply that the non-local order parameters are also nontrivial under decoherence. As we have examined in \eqnref{eq:2dSPT_NLO_decoherence}, the non-local order parameter immediately becomes short-ranged under decoherence, while the type-I strange correlator remains nontrivial for a weak decoherence strength. The reason is that $c(U_g|{\bm{s}}^B)$ does not cancel the sign of $\tilde{C}^\textrm{I}_{A,{\bm{s}}^B}$, and the weighted average of the marginalized strange correlators with unbalanced oscillating signs becomes short-ranged. However, this issue can be circumvented through some computational assistance~\cite{Lee2022}. Theoretically, the question can be rephrased as finding a decoding quantum channel $ {\cal D} $ such that if we evaluate a non-local order parameter of $ {\cal D} [\rho_\textrm{spt}^D]$ instead of $\rho_\textrm{spt}^D$, the quantity becomes non-trivial. Roughly speaking, this can be achieved by multiplying an additional ${\bm{s}}^B$-dependent factor $d(S_g|{\bm{s}}^B)$ that cancels the oscillating sign in the summation:
\begin{align}
\textrm{tr}( { {\cal D} }[\rho_\textrm{spt}^D] S_g ) &= \sum_{{\bm{s}}^B} P_{{\bm{s}}^B} \, c(U_g|{\bm{s}}^B) \, \tilde{C}^\textrm{I}_{A,{\bm{s}}^B}( O_g ) \cdot d(S_g|{\bm{s}}^B)
\end{align}
where $d(S_g|{\bm{s}}^B)$ is defined in such a way that $c \cdot \tilde{C}^\textrm{I} \cdot d \geq 0$ for as many ${\bm{s}}^B$ as possible. In principle, if $\tilde{C}^\textrm{I}$ is nontrivial for each ${\bm{s}}^B$, there must exist a $d$ such that the above summation becomes non-trivial. \jyl{ Experimentally, the procedure is as follows. First, we measure all $G_B$-charges.
Then, based on the measurement outcome ${\bm{s}}^B$, we can predict or \emph{decode} the sign of the corresponding marginalized type-I strange correlator, and apply a relevant transformation on the remaining qubits. This step is captured by the decoding quantum channel $ {\cal D} _{{\bm{s}}^B}$ acting on $G_A$-charges~\footnote{Although it sounds like we apply unitary transformations, this procedure can be stochastic, implying that the decoding step is captured by a quantum channel.}. The combined action of measurement and decoding can be described by the following quantum channel:
\begin{equation}
{\cal D} \big[ \rho_\textrm{spt}^D \big] \equiv \sum_{{\bm{s}}^B} {\cal D} _{{\bm{s}}^B} \big[ { {\cal P} }_{{\bm{s}}^B} \rho_\textrm{spt}^D { {\cal P} }_{{\bm{s}}^B} \big],
\end{equation}
where $ {\cal P} _{{\bm{s}}^B}$ is the projector onto the measurement outcome ${\bm{s}}^B$ on the $G_B$-charges. In this representation,
\begin{equation}
d(S_g|{\bm{s}}^B) \equiv \frac{\tilde{C}^\textrm{I}_{A,{\bm{s}}^B}( {\cal D} _{{\bm{s}}^B}[O_g] )}{\tilde{C}^\textrm{I}_{A,{\bm{s}}^B}( O_g )}
\end{equation}
where we used the self-adjointness of the decoding channel $ {\cal D} _{{\bm{s}}^B}$, which acts only on the $G_A$-charges associated with the $O_g$ operator. Finally, we measure $O_g$ for this state projected onto ${\bm{s}}^B$, which is equivalent to measuring the marginalized type-I strange correlator as in \eqnref{eq:marginalized_strangeI}. Let ${\bm{\sigma}}^A$ be the label for the measurement outcome of $O_g$. One can repeat this procedure, and then use the sequence $\{ ({\bm{\sigma}}^{A,(i)},{\bm{s}}^{B,(i)})\,|\, i=1,2,... \}$ to estimate the \emph{computationally assisted observable}. }

If this computationally assisted observable exhibits nontrivial behavior in the decohered SPT phase, then the marginalized type-I strange correlators are guaranteed to be non-trivial, as can be immediately seen from the following inequality:
\begin{align} \label{eq:inequality}
\Big| \sum_{{\bm{s}}^B} P_{{\bm{s}}^B} \, c^ {\cal D} (U|{\bm{s}}^B) \, \tilde{C}^\textrm{I}_{A,{\bm{s}}^B}( O ) \Big| \leq \sum_{{\bm{s}}^B} P_{{\bm{s}}^B} \Big| \tilde{C}^\textrm{I}_{A,{\bm{s}}^B}( O ) \Big|.
\end{align}
For example, if the LHS decays as a power law, the inequality implies that the marginalized strange correlators cannot decay exponentially, i.e., they are non-trivial. Therefore, this method would allow one to identify the SPT information from the decohered mixed state using a polynomial number of measurement snapshots. Conversely, if the type-I strange correlators are short-ranged, then no matter how well we choose a phase factor $c^ {\cal D} $, the LHS must be trivial. Put differently, there does not exist a decoding channel $ {\cal D} $ such that a computationally assisted observable takes a nontrivial value. Therefore, the behavior of the type-I strange correlator imposes a fundamental bound on the topological information we can extract from a decohered mixed state with polynomial sample complexity. \eqnref{eq:inequality} has another important consequence in the context of measurement-assisted state preparation.
It has been pointed out that by measuring SPT (short-range entangled) states followed by appropriate feedback, one can realize long-range entangled quantum states, from GHZ states to non-abelian topological orders, and even fractons~\cite{1Dcluster_GHZ, 2Dcluster, 2Dcluster_toric, 3dCluster_fracton1, 3dCluster_fracton2, Stephen2017, Raussendorf2019, NatMeasurement, NatRydberg,CSScode, CSScode2, ClusterCSS, Lu2022, Lee2022, Zhu2022, nonAbelianNat, nonAbelianNat2}. In measurement-assisted state preparation, one prepares an SPT state with two symmetries $G_A$ and $G_B$, and the charges of one symmetry, say $G_B$, are measured~\cite{NatMeasurement}. Then, the resulting state has a nontrivial correlation structure for $G_A$-charged operators; note that this correlation function is exactly equivalent to our definition of the marginalized type-I strange correlators in \eqnref{eq:tilde_strange_typeI}. Finally, one applies appropriate feedback based on the measurement outcomes of the $G_B$-charges so that this nontrivial correlation of $G_A$-charged operators is transformed in a way that is independent of the measurement outcomes. The last step corresponds to finding a decoding quantum channel $ {\cal D} $ in our language. If the last step fails, the measurement-assisted state preparation fails as well. Since our inequality imposes a fundamental bound on this last step, we get the following theorem:

\vspace{5pt}
{\noindent {\bf Theorem} Under decoherence, an SPT state can be diagnosed sample-efficiently only if a non-trivial marginalized type-I strange correlator exists. Furthermore, a scheme to prepare a long-range entangled state by measuring an SPT state can exist only if the corresponding marginalized type-I strange correlator is nontrivial. }
\vspace{5pt}

As we have examined for the 2d cluster state SPT, the type-I strange correlator is generally nontrivial under weak decoherence of the 1-form symmetry, and its marginalized version can be shown to be nontrivial for small $p$. Then, the theorem implies that there must exist a decoding protocol to capture this quantity in a sample-efficient manner, and also to facilitate the measurement-assisted state preparation. Indeed, in Ref.~\cite{Lee2022}, this was shown explicitly, where the decoding transformation $ {\cal D} _{{\bm{s}}^B}$ corresponds to spin-flips on the vertex qubits based on the measurement outcomes ${\bm{s}}^B$ on the edges (a rotation about the $Z$-axis followed by a measurement in the $X$-basis is equivalent to decoherence by a dephasing channel). We remark that the theorem does not provide a decoding protocol, and finding an optimal decoding transformation can be computationally challenging. However, there exist several cases with sub-optimal yet computationally efficient decoding solutions. Indeed, Ref.~\cite{Lee2022} also demonstrates a sample- and computation-efficient protocol for the 2d SPT state where the 1-form symmetry is decohered.

\section{Conclusion and Outlook}
In this work, we proposed several quantities that can be used to diagnose nontrivial topological features of \emph{mixed state} density matrices, namely the type-I and type-II strange correlators. Using these strange correlators, we have shown that SPT order can still persist under \emph{selective} decoherence. We also introduced a ``doubled Hilbert space'' formalism based on the Choi-Jamiołkowski isomorphism, in which the mixed state is mapped to a pure state, and this doubled formalism gives us a unified picture of both type-I and type-II strange correlators.
\jyl{Under selective decoherence, the mapping provides a natural explanation for the robustness of the type-II strange correlator: it is a signature of the pure SPT state in the doubled Hilbert space, robust under a shallow-depth symmetric quantum circuit. Finally, we interpreted the type-I and type-II strange correlators as indicators of information-theoretic phases, quantifying the presence of nontrivial topological information in the mixed-state density matrix that may not be readily accessible in experiments. }

\jyl{A natural yet speculative direction is to ask for the full implications of the Choi-Jamiołkowski isomorphism in understanding mixed state properties. The mapping allows one to understand nontrivial properties of the density matrix as physical properties of the corresponding Choi state in the doubled Hilbert space. Accordingly, one may classify density matrices based on the possible quantum phases of Choi states constrained by Hermiticity, positive semi-definiteness, and average symmetry. This may give an interesting way to quantify the information content of density matrices. }

Many subjects remain unexplored along the direction of decohered quantum states of matter. As we have mentioned, we leave a more systematic study of decohered fermionic TIs and TSCs to the future. For each element of the ten-fold way classification, there can be various types of decoherence channels, which likely leads to very rich physics. As we have seen in the paper, decoherence may be mapped to an interaction between the two boundaries of the SPTs; hence decoherence may lead to various phenomena such as symmetric mass generation~\cite{Wang1307.7480,wen2013,Slagle1409.7401,Ayyar1410.6474,you2014,yougrand,Catterall1510.04153,You1705.09313,You1711.00863,Xu2103.15865, Tong2104.03997, Wang2204.14271}, and a decoherence-collapsed classification of TIs and TSCs~\cite{Fidkowski0904.2197, Fidkowski1008.4138, Turner1008.4346, Ryu1202.4484, Qi1202.3983, Yao1202.5805, Gu1304.4569, Wang1401.1142, Metlitski1406.3032, You1409.0168, Cheng1501.01313, Yoshida1505.06598, Gu1512.04919, Song1609.07469, Queiroz1601.01596, Witten1605.02391, Wang1703.10937, Kapustin1701.08264}. When the classification of TIs and TSCs is collapsed due to interaction, it was shown that the original topological transition between the topological insulator and the trivial state may become an ``unnecessary transition''~\cite{bisenthil,jianxune}, namely, this transition is still a stable fixed point in the parameter space, but the two sides of the transition can be connected adiabatically. We expect that the unnecessary transition may also occur due to decoherence \jyl{in the doubled Hilbert space}.

Another big class of quantum states of matter is the gapless quantum states, including quantum critical points. The effect of weak measurement on $(1+1)d$ conformal field theory (CFT) was studied in Ref.~\onlinecite{Garratt2022}, and it was shown that the effect of weak measurement can be mapped to the boundary operators of the CFT. In the past few years, the boundary of $(2+1)d$ quantum critical points has attracted a lot of attention from both the theoretical and numerical communities~\cite{groverashvin,zhanglong1,zhanglong2,wessel1,wessel2,wessel3,xuboundary1,xuboundary2,max1,max2,max3,shang,toldin1,zhanglong3}. The problem becomes particularly interesting when the $(2+1)d$ bulk is an SPT state driven to a critical point, so that two different boundary effects are at play.
Quantum critical points under decoherence will be another direction worth exploring.

\begin{acknowledgments}
We thank Ehud Altman, Soonwon Choi, Matthew P. A. Fisher, Sam Garratt, Joel Moore, Xiao-Liang Qi, Shinsei Ryu, and Ryan Thorngren for inspiring discussions. We would like to thank the KITP program ``Quantum Many-Body Dynamics and Noisy Intermediate-Scale Quantum Systems'' where the collaboration of this work was initiated. J.Y.L. is supported by the Gordon and Betty Moore Foundation under the grant GBMF8690 and by the National Science Foundation under the grant PHY-1748958. Y.Z.Y. is supported by a startup fund at UCSD. C. X. is supported by the NSF Grant No. DMR-1920434, and the Simons Investigator program.

\emph{Note Added:} While finishing up this work, we became aware of an independent related work~\cite{ZhangToAppear}, which should appear on arXiv on the same day as ours.
\end{acknowledgments}
\section{Introduction}
Adaptive control, or self-learning control, is a set of techniques to automatically adjust controllers for uncertain systems. In the traditional problem of adaptive control, a parameterized system is considered where the parameters are assumed to be constant, but their values are initially unknown to the controller. The goal is to achieve some desired performance while the parameters are (possibly indirectly) estimated online. The solution to this problem can be extended to scenarios where parameters change infrequently or vary slowly. Numerous adaptive control methods have been developed since the 1950s \cite{aastrom2013adaptive,krsticnonlinear,slotine1991applied,ioannou2012robust}. The main theoretical guarantee sought in all conventional adaptive control techniques is stability, whether it is specified in terms of tracking a set-point, a trajectory, or a reference model.

One particular limitation of current adaptive control methods is handling systems that involve discontinuities. Most adaptive control techniques rely on the continuity of the model and its parameterization. In many realistic models, states, controls, or parameters take values from both continuous and discrete domains. Among methods that do not entirely depend on the continuity of the model, a promising direction is using multiple models/controllers \cite{morse1996supervisory,narendra2000adaptive,anderson2001multiple,hespanha2003overcoming}, where the objective is to achieve stability via designing a switching law (supervisory control) to coordinate the controllers. Model reference adaptive control (MRAC) of specific forms of scalar-input piecewise affine systems was studied in \cite{di2013hybrid,di2016extended}. However, it is still not clear how to deal with general discrete or hybrid systems.

Another remaining open problem in adaptive control is dealing with specifications richer than stability. In many engineering applications, we are interested in complex requirements composed of safety (something bad never happens), liveness (something good eventually happens), sequentiality of tasks, and reactiveness. Temporal logics \cite{baier2008principles} provide a natural framework for specifying such requirements. The main challenge in designing adaptive control techniques from formal specifications is handling hard constraints on the evolution of the system. Even for the simpler problem of constraints defined as a safe set in the state-space, designing adaptive control strategies is challenging. Existing works on this problem \cite{guay2012adaptive,aswani2013provably,tanaskovic2014adaptive,di2016indirect,he2016adaptive} apply robust control techniques to ensure infinite-time constraint satisfaction for all admissible parameters. This approach may be severely conservative: if a robust control strategy does not exist for all admissible parameters, it does not necessarily follow that the constraints cannot be satisfied after some measurements are taken from the system and a more accurate model is available. Even though \cite{aswani2013provably,tanaskovic2014adaptive,di2015indirect} update the model and synthesize controls in a receding horizon manner, they decouple constraint satisfaction and learning. However, there exists a deep coupling: when synthesizing controls, not only must the constraints be taken into account, but the evolution of the system should also lead to subsequent measurements that are more informative about the uncertainties in the model.
In other words, control decisions have an indirect influence on the way the model is updated.

We use tools from formal methods \cite{clarke1999model,baier2008principles} to develop a framework for correct-by-design adaptive control that can deal with complex systems and specifications. Formal methods have been increasingly used in control theory in recent years \cite{tabuada2009verification,belta2017book}. We consider discrete-time systems with constant but initially unknown parameters. We describe system specifications using linear temporal logic (LTL) \cite{baier2008principles}. As in any other adaptive control technique, we require an online parameter estimator. Our parameter estimator maps the history of the evolution of the system to the set of ``all possible'' parameters, which contains the actual parameters. We embed the parameterized system in a (non-deterministic) parametric transition system (PTS), from which we construct a (non-deterministic) adaptive transition system (ATS) that contains all the possible combinations of transitions with the unfoldings of the parameter estimator. The main results and contributions of this paper are as follows:
\begin{itemize}
\item For finite systems, the LTL adaptive control problem reduces to a Rabin game \cite{thomas2002automata} on the product of the finite ATS and the Rabin automaton corresponding to the LTL specification. The method is correct by design and it is complete, i.e. it finds a solution if one exists;
\item For infinite systems, we construct finite quotient ATSs by partitioning the state and the parameter space and quantizing the control space. Once an adaptive control strategy is found for the quotient, it is guaranteed to also ensure the satisfaction of the LTL formula for the original infinite system. The method may be conservative.
\end{itemize}

This paper is related to recent works that seek a formal approach to combining learning and control. The authors in \cite{quindlen2016region,kozarev2016case} provided statistical certificates for MRAC subject to safety constraints. The idea was based on implementing MRAC from a set of different initial conditions and parameters and observing whether the trajectories were safe. However, the design of MRAC itself did not take into account the constraints. Moreover, given a temporal logic specification and a system model with parametric uncertainty, it is not clear how a reference model should be chosen for implementing MRAC. Even if a reference model is able to satisfy the specification, the matching condition may not hold, i.e. there may not exist a controller for the original system to behave as the reference model. Therefore, classic MRAC may not be suitable for the purpose of this paper, as it requires a careful search over reference models subject to matching conditions. Reinforcement learning (RL) methods are conceptually similar to adaptive control, but are used in a probabilistic framework and require a reward mechanism to generate control policies. The authors in \cite{sadigh2014learning} studied RL from LTL specifications, where large rewards were assigned to the pairs in the Rabin automaton to incentivize the system to visit them regularly or avoid them. In \cite{aksaray2016q}, Q-learning was applied to control MDPs from signal temporal logic (STL) specifications, where the reward was the STL robustness score - a measure of distance to satisfaction.
Other closely related works include \cite{fu2014adaptive,leahy2016integrate}, where the problem of LTL control was modeled as a game between a player (controller) and an adversary (environment). The controller inferred the ``grammar'' of actions taken by the environment. However, this approach also decoupled adaptation (learning) and control. If the LTL formula was violated during the grammar learning, the control software stopped. While these methods (including RL) have the advantage that they require less prior knowledge about the system, they are not suitable for performance-critical systems with constraints that should never be violated, even during the learning process.

This paper is organized as follows. First, we provide the necessary background on LTL, transition systems and LTL control in Sec. \ref{sec:back}. The adaptive control problem is formulated in Sec. \ref{sec:problem}. We define PTSs in Sec. \ref{sec:parametric}. Technical details for the solutions for finite and infinite systems are explained in Sec. \ref{sec:finite} and \ref{sec:infinite}, respectively. Finally, two case studies are presented in Sec. \ref{sec:case}.

\section{Background}
\label{sec:back}
\subsection{Notation}
The sets of real and Boolean values are denoted by $\mathbb{R}$ and $\mathbb{B}$, respectively. The empty set is denoted by $\emptyset$. Given a set ${S}$, we use $|S|$, $2^S$, $2^S_{-\emptyset}$ to denote its cardinality, power set, and power set excluding the empty set, respectively. An alphabet $\mathcal{A}$ is a finite set of symbols $\mathcal{A}=\{a_1,a_2,\cdots,a_A\}$. A finite (infinite) word is a finite-length (infinite-length) string of symbols in $\mathcal{A}$. For example, $w_1=a_1a_2a_1$ is a finite word, and $w_2=a_1a_2\overline{a_1}$ and $w_3=a_1\overline{a_2a_1}$ are infinite words over $\mathcal{A}=\{a_1,a_2\}$, where the over-line stands for infinitely many repetitions. We use $\mathcal{A}^*$ and $\mathcal{A}^\omega$ to denote the set of all finite and infinite words that can be generated from $\mathcal{A}$, respectively.

\subsection{Linear Temporal Logic}
The formal definition of LTL syntax and semantics is not provided here as it can be found in the literature \cite{baier2008principles}. Here we provide an informal introduction and the necessary notation. LTL consists of a finite set of atomic propositions $\Pi$, temporal operators ${\bf G}$ (globally/always), ${\bf F}$ (future/eventually), ${\bf U}$ (Until), and Boolean connectives $\wedge$ (conjunction), $\vee$ (disjunction), and $\neg$ (negation). LTL semantics are interpreted over infinite words over $2^\Pi$. The set of all infinite words that satisfy an LTL formula $\varphi$ is denoted by $L(\varphi)$, $L(\varphi) \subset (2^\Pi)^\omega$, and is referred to as the \emph{language} of $\varphi$.
\begin{definition}
A Deterministic Rabin Automaton (DRA) is defined as the tuple $\mathcal{R}=(S,s^0,\mathcal{A},\alpha,\Omega)$, where:
\begin{itemize}
\item $S$ is a set of states;
\item $s^0$ is the initial state;
\item $\mathcal{A}$ is a finite set of inputs (alphabet);
\item $\alpha$ is a transition function $\alpha:S \times \mathcal{A} \rightarrow S$;
\item $\Omega=\left \{(F_1,I_1),\cdots,(F_r,I_r) \right\}$ is a finite set of pairs of sets of states, where $F_i,I_i \subset S$, $i=1,\cdots,r$.
\end{itemize}
\end{definition}
An infinite word $w \in \mathcal{A}^\omega$ determines a sequence of inputs for $\mathcal{R}$ that results in the \emph{run} $\zeta(w)=s_0s_1\cdots$, where $s_{k+1}=\alpha(s_k,a_k)$, $s_0=s^0$, and $a_k$ is the $k$'th input appearing in $w$. We define $Inf(\zeta)=\left\{ s | s \text{ appears infinitely often in } \zeta \right\}$. A run $\zeta$ is \emph{accepted} by $\mathcal{R}$ if there exists $i \in \{1,\cdots,r\}$ such that $Inf(\zeta) \cap F_i = \emptyset$ and $Inf(\zeta) \cap I_i \neq \emptyset$. In other words, $F_i$ is visited finitely many times and $I_i$ is visited infinitely often for some $i$. The language of $\mathcal{R}$, denoted by $L(\mathcal{R})$, $L(\mathcal{R}) \subset \mathcal{A}^\omega$, is defined as the set of all elements in $\mathcal{A}^\omega$ that produce accepting runs. It is known that given an LTL formula $\varphi$ over $\Pi$, one can construct a DRA $\mathcal{R}_\varphi$ with input set $\mathcal{A}=2^\Pi$ such that $L(\mathcal{R}_\varphi)=L(\varphi)$ \cite{thomas2002automata}. Therefore, verifying whether an infinite word satisfies an LTL formula becomes equivalent to checking the Rabin acceptance condition. There exist well-established algorithms and software for this procedure \cite{klein2006experiments}.
\begin{example}
Consider $\varphi={\bf G F} \pi_1 \wedge {\bf F} \pi_2$, which is an LTL formula over $\Pi=\{\pi_1,\pi_2\}$, stating that ``$\pi_1$ holds infinitely often, and $\pi_2$ eventually holds''. The DRA $\mathcal{R}_\varphi$ corresponding to this formula is illustrated in Figure \ref{fig:rabin}. For example, we have $\{\pi_2\}\overline{\{\pi_1,\pi_2\}} \models \varphi$ ($\varphi$ is satisfied), but $\overline{\{\pi_1\}} \not \models \varphi$ ($\varphi$ is violated since $\pi_2$ never appears), and $\{\pi_1\}\overline{\emptyset \{\pi_2\} } \not \models \varphi$ (because $\pi_1$ does not hold infinitely often).
\end{example}
\begin{figure}
\centering
\vspace{0.2in}
\begin{tikzpicture}[>=latex',shorten >=0.5pt,node distance=2.1cm,on grid,auto,semithick]
\tikzset{font={\fontsize{8pt}{8}\selectfont}}
\node[state,initial,fill=red!50] (s0) {$s_0$};
\node[state,fill=green!50] (s2) [below right=of s0] {$s_2$};
\node[state] (s1) [below left=of s0] {$s_1$};
\path[->] (s0) edge [bend right,pos=0.8,above left] node {$\{\pi_2\}$} (s1);
\path[->] (s0) edge [loop above] node {$\{ \{\pi_1\},\emptyset\}$} (s0);
\path[->] (s0) edge [bend left,pos=0.8] node {$\{ \{\pi_1,\pi_2\} \}$} (s2);
\path[->] (s1) edge [bend left,pos=0.5] node {$\{\{\pi_1\},\{\pi_1,\pi_2\} \}$} (s2);
\path[->] (s1) edge [loop left] node {$\{ \{\pi_2 \},\emptyset \}$} (s1);
\path[->] (s2) edge [loop right] node {$\{ \{\pi_1 \},\{\pi_1,\pi_2\} \}$} (s2);
\path[->] (s2) edge [bend left,pos=0.5] node {$\{ \emptyset,\{\pi_2 \} \}$} (s1);
\end{tikzpicture}
\caption{Example 1: DRA corresponding to $\varphi={\bf G F} \pi_1 \wedge {\bf F} \pi_2$, where $F_1=\{s_0\}$ (red), $I_1=\{s_2\}$ (green).
Runs that visit the green state infinitely many times and visit the red state finitely many times satisfy $\varphi$.}
\label{fig:rabin}
\end{figure}

\subsection{Transition Systems}
\label{sec:transit}
\begin{definition}
A transition system is defined as the tuple $\mathcal{T}=\left(X, U, \beta, \Pi, O \right)$, where:
\begin{itemize}
\item $X$ is a (possibly infinite) set of states;
\item $U$ is a (possibly infinite) set of control inputs;
\item $\beta$ is a transition function $\beta:X \times U \rightarrow 2^X$;
\item $\Pi=\{\pi_1,\pi_2,\cdots,\pi_{m}\}$ is a finite set of atomic propositions;
\item $O: X \rightarrow 2^\Pi$ is an observation map.
\end{itemize}
\end{definition}
We assume that $\mathcal{T}$ is \emph{non-blocking} in the sense that $|\beta(x,u)| \neq 0$ for all $x \in X, u \in U$. \footnote{ If $\mathcal{T}$ is blocking, we can make it non-blocking by adding an additional state $x^{sink}$: for all $x \in X, u \in U$ with $|\beta(x,u)| = 0$, we set $\beta(x,u)=\{x^{sink}\}$. Also, we add transitions $\beta(x^{sink},u)=\{x^{sink}\}, \forall u \in U$. In order to prevent blocking, we find a control strategy such that $x^{sink}$ is not reachable. } A transition system $\mathcal{T}$ is \emph{deterministic} if $|\beta(x,u)| = 1, \forall x \in X, \forall u \in U$, and is \emph{finite} if $X$ and $U$ are finite sets. A trajectory of $\mathcal{T}$ is an infinite sequence of visited states $x_0x_1x_2\cdots$. The infinite {word} produced by such a trajectory is $O(x_0)O(x_1)O(x_2)\cdots$. Note that the alphabet here is $2^\Pi$. The set of all infinite words that can be generated by $\mathcal{T}$ is a subset of ${(2^\Pi)}^\omega$.
\begin{definition}
A {control strategy} $\Lambda$ is a function $\Lambda:X^* \times U^* \rightarrow U$ that maps the history of visited states and applied controls to an admissible control input, where $u_k=\Lambda(x_0\cdots x_k,u_0\cdots u_{k-1}), \forall k\in \mathbb{N}$.
\end{definition}
\begin{definition}
Given a transition system $\mathcal{T}=\left(X, U, \beta, \Pi, O \right)$, a control strategy $\Lambda$ and a set of initial states $X_0 \subseteq X$, we define:
\begin{equation*}
\begin{array}{ll}
L(\mathcal{T},\Lambda,X_0):=\Big\{ & O(x_0)O(x_1)\cdots \in {(2^\Pi)}^\omega \Big | \\ & x_0 \in X_0, x_{k+1} \in \beta(x_k,u_k), k \in \mathbb{N} \Big \},
\end{array}
\end{equation*}
where $u_k=\Lambda(x_0\cdots x_k,u_0\cdots u_{k-1})$.
\end{definition}

\subsection{Quotient Transition System}
\label{sec:quotient}
Consider a transition system $\mathcal{T}=\left(X, U, \beta, \Pi, O \right)$. A (finite) set $Q \subset 2^X$ is a (finite) partition for $X$ if 1) $\emptyset \not \in Q,$ 2) $\bigcup_{q\in Q}q=X$, and 3) $q \cap q'= \emptyset, \forall q,q' \in Q, q\neq q'$. A partition $Q$ is \emph{observation preserving} if for all $q\in Q$, we have $O(x)=O(x'), \forall x,x' \in q$.
\begin{definition} \label{def:quotient}
Given a transition system $\mathcal{T}=\left(X, U, \beta, \Pi, O \right)$ and an observation preserving partition $Q$ for $X$, the \emph{quotient transition system} is defined as the tuple $\mathcal{T}_Q=\left(Q, U, \beta_Q, \Pi, O_Q \right)$ such that:
\begin{itemize}
\item for all $q\in Q$, we have $q' \in \beta_Q(q,u)$ if and only if $\exists x \in q$, $\exists x' \in q'$ such that $x' \in \beta(x,u)$;
\item for all $q\in Q$, we have $O_Q(q)=O(x)$ for any $x \in q$.
\end{itemize}
\end{definition}
Given a control strategy for the quotient $\Lambda_Q:Q^* \times U^* \rightarrow U$, and a set of initial conditions $Q_0$, we construct $\Lambda^{(Q)}:X^* \times U^* \rightarrow U$ such that $\Lambda^{(Q)}(x_0\cdots x_k,u_0\cdots u_{k-1})=\Lambda_{Q}(q_0\cdots q_k,u_0\cdots u_{k-1})$, $x_i \in q_i$, $0\le i \le k$, $k \in \mathbb{N}$, and $X_0^{(Q)}=\{x_0| x_0 \in q_0, q_0 \in Q_0\}$. It is easy to show that $L(\mathcal{T},\Lambda^{(Q)},X_0^{(Q)}) \subseteq L(\mathcal{T}_Q,\Lambda_Q,Q_0)$, which stems from the fact that $\mathcal{T}_Q$ simulates $\mathcal{T}$. We refer to $L(\mathcal{T}_Q,\Lambda_Q,Q_0) \setminus L(\mathcal{T},\Lambda^{(Q)},X_0^{(Q)})$ as the set of spurious infinite words (SIW). In order to have $L(\mathcal{T},\Lambda^{(Q)},X_0^{(Q)}) = L(\mathcal{T}_Q,\Lambda_Q,Q_0)$ (empty SIW), a sufficient condition is that $\mathcal{T}_Q$ and $\mathcal{T}$ are \emph{bisimilar} \cite{belta2017book}. For infinite $X$, there is no general guarantee that a finite $Q$ exists such that $\mathcal{T}_Q$ is bisimilar to $\mathcal{T}$. In order to ``shrink'' the SIW, $Q$ is refined. In the most extreme case, the SIW remains nonempty unless $Q=X$. Further details on simulation and bisimulation relations are not required for this paper and the interested reader is referred to the related works in the literature, such as \cite{fernandez1991fly,tabuada2009verification,belta2017book}.

\subsection{LTL Control}
\label{sec:ltlcon}
Given a finite transition system $\mathcal{T}=\left(X, U, \beta, \Pi, O \right)$ and an LTL formula $\varphi$ over $\Pi$, we are interested in finding a control strategy $\Lambda$ and the largest set of initial conditions $X_0^{\max}$ such that $L(\mathcal{T},\Lambda,X_0^{\max}) \subseteq L(\varphi)$. In other words, we require $\varphi$ to be satisfied for all trajectories that are allowed by the non-determinism in $\mathcal{T}$.
\begin{definition}
Given a transition system $\mathcal{T}=\left(X, U, \beta, \Pi, O \right)$ and a DRA $\mathcal{R}_\varphi=(S,s^0,\mathcal{A},\alpha,\Omega)$ corresponding to LTL formula $\varphi$, the product automaton $\mathcal{T}_\varphi^P=\mathcal{T} \otimes \mathcal{R}_\varphi$ is defined as the tuple $\left(X^P,X^{P,0},U, \beta^P, \Omega^P \right)$, where:
\begin{itemize}
\item $X^P=X \times S$ is the set of product states;
\item $X^{P,0}=\{(x,s^0) | x\in X\}$ is the set of initial product states;
\item $U$ is the set of control inputs;
\item $\beta^P:X^P \times U \rightarrow 2^{X^P}$ is the product transition function, where $x^{P'}\in \beta^P(x^P,u)$, $x^P=(x,s),x^{P'}=(x',s')$, if and only if $x' \in \beta(x,u)$ and $s'=\alpha(s,O(x))$;
\item $\Omega^P=\left \{(F^P_1,I^P_1),\cdots,(F^P_r,I^P_r) \right\}$ is a finite set of pairs of sets of states, where $F^P_i=\{(x,s) | x\in X, s \in F_i\}$, $I^P_i=\{(x,s) | x\in X, s \in I_i\}$, $i=1,\cdots,r$.
\end{itemize}
\end{definition}
The product automaton $\mathcal{T}_\varphi^P$ is a (non-deterministic) automaton (with control inputs) capturing both the transitions in $\mathcal{T}$ and the acceptance condition of $\varphi$. The solution to the problem of finding a control strategy to satisfy $\varphi$ is accomplished by solving the Rabin game on the product automaton. The details are not presented here but can be found in \cite{chatterjee2012survey}. It can be shown that the control strategy is memoryless on the product automaton in the form $\Lambda: X \times S \rightarrow U$. In other words, the history of the system is incorporated into the state of the Rabin automaton.
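For concreteness, a minimal sketch of this product construction for a finite $\mathcal{T}$ and a DRA $\mathcal{R}_\varphi$ is given below; the dictionary-based encodings of $\beta$, $O$, and $\alpha$ are illustrative choices, and solving the resulting Rabin game (e.g.\ following \cite{chatterjee2012survey}) is not shown.
\begin{verbatim}
from itertools import product

def product_automaton(X, U, beta, O, S, s0, alpha, Omega):
    """Build T (x) R_phi for a finite transition system and a DRA.

    beta : dict (x, u) -> set of successor states
    O    : dict x -> frozenset of atomic propositions (the observation)
    alpha: dict (s, observation) -> next DRA state
    Omega: list of Rabin pairs (F_i, I_i), each a set of DRA states
    """
    XP = set(product(X, S))                 # product states (x, s)
    XP0 = {(x, s0) for x in X}              # initial product states
    betaP = {}
    for (x, s) in XP:
        sp = alpha[(s, O[x])]               # DRA moves on observation O(x)
        for u in U:
            betaP[((x, s), u)] = {(xn, sp) for xn in beta[(x, u)]}
    OmegaP = [({(x, s) for x in X for s in F},
               {(x, s) for x in X for s in I}) for (F, I) in Omega]
    return XP, XP0, betaP, OmegaP
\end{verbatim}
A Rabin game solved on $(X^P,\beta^P,\Omega^P)$ then yields the memoryless strategy $\Lambda: X \times S \rightarrow U$ discussed above.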
The largest set of admissible initial conditions $X_0^{\max}$ corresponds to the winning region of the Rabin game. If the transition system $\mathcal{T}$ is infinite, a finite quotient is constructed. If $U$ is infinite, it can be quantized to obtain a finite set \footnote{An alternative (better) approach was proposed in \cite{Yordanov2012} for piecewise affine systems, where the authors computed a finite set of sets of control inputs that enabled transitions with minimal non-determinism in the quotient system.}. It is straightforward to show that if a control strategy satisfying $\varphi$ exists for the finite quotient, it also satisfies $\varphi$ if implemented on the original system. However, unless the quotient and the original transition system are bisimilar, the non-existence of a control strategy for the quotient does not indicate that one does not exist for the original system. Hence the approach of using finite quotients may be conservative \cite{tabuada2009verification,belta2017book}.

\section{Problem Formulation and Approach}
\label{sec:problem}
We are interested in discrete-time systems of the following form:
\begin{equation} \label{eq:system}
\begin{array}{rl}
x^+= &F(x,u,\theta,d), \\ y_i= & \mu_i(x), i=1,\cdots,m,
\end{array}
\end{equation}
where $x\in X$ is the state, $u \in U$ is the control input, $\theta \in \Theta$ represents the parameters of the system, $d \in D$ is the disturbance (adversarial input), $F:X \times U \times \Theta \times D \rightarrow X$ is the system evolution function, and $y_i, i=1,\cdots,m$, are Boolean system outputs, where $\mu_i: X \rightarrow \mathbb{B}$. We define the set of atomic propositions $\Pi=\{\pi_1,\cdots,\pi_m\}$ such that $x \models \pi_i \Leftrightarrow \mu_i(x)=\text{True}, i=1,\cdots, m$. The sets $X,U,\Theta,D$ are the admissible sets for states, controls, parameters and disturbances, respectively. All sets may be finite or infinite. System \eqref{eq:system} is finite if $X,U,\Theta,D$ are all finite.
\begin{example}
A prominent class of systems encountered in adaptive control are parameterized linear systems, where $F(x,u,\theta,d)=A(\theta)x+B(\theta)u+d$. We have $X \subset \mathbb{R}^{n_x}$, $U \subset \mathbb{R}^{n_u}$, $\Theta \subset \mathbb{R}^{n_\theta}$, $D \subset \mathbb{R}^{n_d}$. $A,B$ are matrices with appropriate dimensions that depend on $\theta$. It is also common to assume that the outputs are Boolean evaluations of linear predicates $\mu_i=(r_i^T x \le \rho_i)$, where $r_i \in \mathbb{R}^{n_x}$ and $\rho_i \in \mathbb{R}$. Thus, each proposition $\pi_i$ defines a closed half space in $\mathbb{R}^{n_x}$.
\end{example}
As mentioned in the introduction, we distinguish between the uncertainty in parameters and disturbances. Disturbances usually have unknown (fast) variations in time. In this paper, we assume that $\theta$ is a constant but its value $\theta^*$ is initially unknown. If we treat the uncertainties in parameters and disturbances in the same way, we are required to design control strategies that are robust against all values in both $\Theta$ and $D$. This approach is severely conservative and often fails to find a solution. The key idea of adaptive control is to take advantage of the fact that $\theta^*$ can be (approximately) inferred from the history of the evolution of the system. Therefore, adaptive control is often significantly more powerful than pure robust control, and it is also more difficult to design and analyze.
In engineering applications, parameters are related to the physical attributes of the plant, whereas disturbances are related to effects of a stochastic nature such as imperfect actuators/sensors and perturbations in the environment.
\begin{problem} \label{problem}
Given system \eqref{eq:system} and an LTL formula $\varphi$ over $\Pi$, find a control strategy $\Lambda: X^* \times U^* \rightarrow U$ and a set of initial states $X_0 \subseteq X$ such that all the trajectories of the closed-loop system starting from $X_0$ satisfy $\varphi$.
\end{problem}
\vspace{0.2in}
{ Our aim is to convert Problem \ref{problem} to an LTL control problem described in Sec. \ref{sec:ltlcon} and use the standard tools for Rabin games. To this end, we need to incorporate adaptation into control synthesis. The central tool in any adaptive control technique is parameter estimation. Note that an adaptive control strategy has the form $\Lambda: X^* \times U^* \rightarrow U$, since parameters are estimated using the history of the evolution of the system. We take the following approach to convert Problem \ref{problem} into an LTL control problem. We embed system \eqref{eq:system} in a parametric transition system (PTS), which is defined in Sec. \ref{sec:parametric}. We construct a finite adaptive transition system (ATS) from a finite PTS. An ATS is an ordinary transition system as in Sec. \ref{sec:transit}, but parameters are also incorporated into its states and transitions in an appropriate way, which is explained in Sec. \ref{sec:finite}. We deal with an infinite PTS by constructing a finite quotient PTS in Sec. \ref{sec:infinite}. }

\section{Parametric Transition System}
\label{sec:parametric}
\begin{definition}
A parametric transition system (PTS) is defined as the tuple $\mathcal{T}^\Theta=\left(X, U, \Theta, \gamma, \Pi, O \right)$, where:
\begin{itemize}
\item $X$ is a (possibly infinite) set of states;
\item $U$ is a (possibly infinite) set of control inputs;
\item $\Theta$ is a (possibly infinite) set of parameters;
\item $\gamma$ is a transition function $\gamma:X \times U \times \Theta \rightarrow 2^X$;
\item $\Pi=\{\pi_1,\pi_2,\cdots,\pi_{m}\}$ is a finite set of atomic propositions;
\item $O: X \rightarrow 2^\Pi$ is an observation map.
\end{itemize}
\end{definition}
The only difference between a PTS and a transition system is that its transitions depend on parameters. Note that if $|\Theta|=1$, a PTS becomes a transition system. Now we explain how to represent \eqref{eq:system} in the form of a PTS. The sets $X,U,\Theta$ are inherited from \eqref{eq:system} (which is why we have used the same notation). The transition function $\gamma$ is constructed such that
\begin{equation}
\gamma(x,u,\theta)=\left \{ F(x,u,\theta,d) \Big| d \in D \right\}.
\label{eq:embed_parameteric}
\end{equation}
The observation map $O:X \rightarrow 2^\Pi$ is given by:
\begin{equation}
O(x)=\left \{\pi_i \Big|\mu_i(x)=\text{True}, i=1,\cdots,m \right\}.
\end{equation}
Therefore, $\mathcal{T}^\Theta=\left(X, U, \Theta, \gamma, \Pi, O \right)$ captures everything in system \eqref{eq:system}. We refer to $\mathcal{T}^\Theta$ as the \emph{embedding} of \eqref{eq:system}. One can interpret a PTS as a (possibly infinite) family of transition systems. The actual transitions are governed by a single parameter $\theta^*$, which is initially unknown to the controller. Therefore, the controller has to find out which transition system is the ground truth.
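When all of the sets involved are finite, the embedding can be tabulated directly. The following minimal Python sketch mirrors \eqref{eq:embed_parameteric} and the observation map; the dictionary encoding and the list \texttt{mu} of Boolean output maps are illustrative assumptions, not a prescribed implementation.
\begin{verbatim}
from itertools import product

def embed_pts(X, U, Theta, D, F, mu):
    """Embed the parameterized system into a PTS: gamma(x, u, theta)
    collects F(x, u, theta, d) over all admissible disturbances d, and
    O(x) collects the indices of the atomic propositions holding at x."""
    gamma = {(x, u, th): {F(x, u, th, d) for d in D}
             for x, u, th in product(X, U, Theta)}
    O = {x: frozenset(i + 1 for i, mu_i in enumerate(mu) if mu_i(x))
         for x in X}
    return gamma, O
\end{verbatim}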
\section{Control Synthesis for Finite Systems} \label{sec:finite} In this section, we assume the PTS embedding system \eqref{eq:system} is finite. \subsection{Parameter Estimation} \begin{definition} A \emph{parameter estimator} $\Gamma$ is a function \begin{equation} \label{eq:estimator-map} \Gamma: X^* \times U^* \rightarrow 2^\Theta_{-\emptyset} \end{equation} that maps the history of visited states and applied controls to a subset of parameters. We have $\vartheta_k=\Gamma(x_0 \cdots x_k; u_0\cdots u_{k-1})$, where: \begin{equation} \label{eq:estimator} \vartheta_k = \left \{ \theta \in \Theta \Big | x_{i+1} \in \gamma(x_i,u_i,\theta), 0\le i \le k-1 \right\}. \end{equation} \end{definition} One can see that the parameter estimator \eqref{eq:estimator} is ``sound'' in the sense that $\theta^* \in \vartheta_k, \forall k\in \mathbb{N}$. We have $\vartheta_0=\Gamma(x_0)=\Theta$, by definition. Note that our definition of a parameter estimator is different from the traditional ones, which are often of the form $X^* \times U^* \rightarrow \Theta$, as they return only an estimate $\hat{\theta}$ rather than the set of all possible parameters. For our formal setup, it is vitally important that the controller take into account all possible ground truth parameters at all times. Otherwise, guaranteeing the specification is impossible. The following proposition enables us to make \eqref{eq:estimator} recursive. \begin{proposition} The following recursive relation holds: \begin{equation} \vartheta_{k+1}=\left \{ \theta \in \vartheta_k \Big | x_{k+1} \in \gamma(x_k,u_k,\theta) \right\}. \end{equation} \end{proposition} \begin{proof} Substitute $\vartheta_k$ from \eqref{eq:estimator}: \begin{equation*} \begin{array}{rl} & \left \{ \theta \in \vartheta_k \Big | x_{k+1} \in \gamma(x_k,u_k,\theta) \right\} \\ = & \left \{ \theta \in \Theta \Big | \theta \in \vartheta_k, x_{k+1} \in \gamma(x_k,u_k,\theta) \right\} \\ = & \left \{ \theta \in \Theta \Big | x_{i+1} \in \gamma(x_i,u_i,\theta), 0\le i \le k \right\} = \vartheta_{k+1}. \end{array} \end{equation*} \end{proof} \begin{corollary} The set of estimated parameters never grows: $\vartheta_{k+1} \subseteq \vartheta_k, \forall k\in \mathbb{N} $. \end{corollary} Therefore, we obtain a recursive parameter estimator $\Gamma_{rec}:2^\Theta_{-\emptyset} \times X \times U \times X \rightarrow 2^\Theta_{-\emptyset}$ as $\vartheta_{k+1}=\Gamma_{rec}(\vartheta_k,x_k,u_k,x_{k+1})$. Note that $\Gamma_{rec}$ is deterministic.
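The recursive estimator $\Gamma_{rec}$ is a one-line set filter. A minimal sketch, reusing the finite `gamma` map and `Theta` from the previous snippet:

```python
def gamma_rec(vartheta, x, u, x_next, gamma):
    """One estimator update: keep only the parameters consistent
    with the observed transition (x, u) -> x_next."""
    return {th for th in vartheta if x_next in gamma[(x, u, th)]}

vartheta = set(Theta)                                   # vartheta_0 = Theta
vartheta = gamma_rec(vartheta, "x1", "u1", "x2", gamma)
print(vartheta)  # the set only shrinks and always contains theta*
```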
\begin{figure*} \centering \begin{tikzpicture}[>=latex',shorten >=1pt,node distance=1.5cm,on grid,auto] \node[state] (s0) {$x_1$}; \node[state] (s1) [below right=of s0] {$x_2$}; \node[state] (s2) [below left=of s0] {$x_3$}; \path[->] (s0) edge node {$u_1$} (s1); \path[->] (s0) edge [bend left] node {$u_2$} (s2); \path[->] (s1) edge [loop right] node {$u_1,u_2$} (s1); \path[->] (s2) edge [loop left] node {$u_1$} (s2); \path[->] (s2) edge [bend left] node {$u_2$} (s0); \end{tikzpicture} ~~~~~~~ \begin{tikzpicture}[>=latex',shorten >=1pt,node distance=1.5cm,on grid,auto] \node[state] (s0) {$x_1$}; \node[state] (s1) [below right=of s0] {$x_2$}; \node[state] (s2) [below left=of s0] {$x_3$}; \path[->] (s0) edge node {$u_1$} (s1); \path[->] (s0) edge [bend right] node {$u_2$} (s2); \path[->] (s1) edge [] node {$u_1$} (s2); \path[->] (s1) edge [loop right] node {$u_2$} (s1); \path[->] (s2) edge [loop left] node {$u_1,u_2$} (s2); \end{tikzpicture} \\ {$\mathcal{T}^{\theta_1}$ \hspace{2.6in} $\mathcal{T}^{\theta_2}$} \\ \vspace{0.2in} \begin{tikzpicture}[>=latex',shorten >=1pt,node distance=3.2cm,on grid,auto] \node[state] (s1) {$x_1,\{\theta_1,\theta_2\}$}; \node[state] (s2) [right=of s1] {$x_2,\{\theta_1,\theta_2\}$}; \node[state] (s3) [left=of s1] {$x_3,\{\theta_1,\theta_2\}$}; \node[state] (s11) [above=of s1] {$x_1,\{\theta_1\}$}; \node[state] (s21) [above right=of s1] {$x_2,\{\theta_1\}$}; \node[state] (s31) [above left=of s1] {$x_3,\{\theta_1\}$}; \node[state] (s12) [below=of s1] {$x_1,\{\theta_2\}$}; \node[state] (s22) [below right=of s1] {$x_2,\{\theta_2\}$}; \node[state] (s32) [below left=of s1] {$x_3,\{\theta_2\}$}; \path[->] (s1) edge node {$u_1$} (s2); \path[->] (s1) edge node {$u_2$} (s3); \path[->] (s2) edge [loop right] node {$u_2$} (s2); \path[->] (s2) edge node {$u_1$} (s21); \path[->] (s2) edge [sloped] node {$u_1$} (s32); \path[->] (s3) edge [loop left] node {$u_1$} (s3); \path[->] (s3) edge node {$u_2$} (s32); \path[->] (s3) edge [bend right, sloped] node {$u_2$} (s11); \path[->] (s11) edge [] node {$u_1$} (s21); \path[->] (s11) edge [bend left] node {$u_2$} (s31); \path[->] (s21) edge [loop right] node {$u_1,u_2$} (s21); \path[->] (s31) edge [loop left] node {$u_1$} (s31); \path[->] (s31) edge [sloped,bend left] node {$u_2$} (s11); \path[->] (s12) edge [] node {$u_1$} (s22); \path[->] (s12) edge [] node {$u_2$} (s32); \path[<-] (s32) edge [] node {$u_1$} (s22); \path[->] (s22) edge [loop right] node {$u_2$} (s22); \path[->] (s32) edge [loop left] node {$u_1,u_2$} (s32); \end{tikzpicture} \\ {$\mathcal{T}^{adp}$} \caption{Example 3: [Top] A PTS with two possible parameters $\theta_1,\theta_2$, and the corresponding transition systems. [Bottom] The corresponding ATS.} \label{fig:ATS} \end{figure*} \subsection{Adaptive Transition System} As mentioned in the introduction, a primary challenge of provably correct adaptive control is coupling parameter estimation and control synthesis. In order to combine these two, we provide the following definition.
\begin{definition} Given a PTS $\mathcal{T}^\Theta=\left(X, U, \Theta, \gamma, \Pi, O \right)$, we define the adaptive transition system (ATS) as the tuple $\mathcal{T}^{adp}=\left(X^{adp}, U, \gamma^{adp}, \Pi, O^{adp} \right)$, where $U,\Pi$ are inherited from $\mathcal{T}^{\Theta}$ with the same meaning and \begin{itemize} \item $X^{adp} \subseteq X \times 2^\Theta_{-\emptyset}$ is the set of states; \item $\gamma^{adp}:X^{adp} \times U \rightarrow 2^{X^{adp}}$ is the transition function, where we have $(x',\vartheta') \in \gamma^{adp}((x,\vartheta),u)$ if and only if $x' \in \gamma(x,u,\theta)$ for some $\theta \in \vartheta$ and $\vartheta' = \Gamma_{rec}(\vartheta,x,u,x')$; \item $O^{adp}:X^{adp} \rightarrow 2^\Pi$ is the observation function, where $O^{adp}(x,\vartheta)=O(x), \forall x\in X, \vartheta \in 2^\Theta_{-\emptyset}$. \end{itemize} \end{definition} \begin{example} Consider a PTS with $X=\{x_1,x_2,x_3\},U=\{u_1,u_2\},$ and $\Theta=\{\theta_1,\theta_2\}.$ The transition systems corresponding to $\theta_1$ and $\theta_2$ are illustrated in Fig. \ref{fig:ATS} [top]. The corresponding ATS is shown in Fig. \ref{fig:ATS} [bottom]. \end{example} The number of states in the ATS is upper-bounded by $|X|(2^{|\Theta|}-1)$, which shows an exponential blow-up with the number of parameters. Fortunately, not all states in $X \times 2^\Theta_{-\emptyset}$ are reachable from the set $\{(x,\Theta) \,|\, x \in X \}$, which is the set of possible initial states in the ATS. Algorithm \ref{alg:ats} constructs the ATS consisting of only these reachable states. \begin{algorithm} \caption{Procedure for Constructing ATS from a PTS} \label{alg:ats} \begin{algorithmic}[0] \Require{$\mathcal{T}^\Theta=\left(X, U, \Theta, \gamma, \Pi, O \right)$} \State{$X^{adp,new}=\{ (x, \Theta) | x \in X \}$} \State{$X^{adp}=X^{adp,new}$} \While{$X^{adp,new} \neq \emptyset$ } \State $ X^{adp,new} \gets \emptyset$ \For {$ (x,\vartheta) \in X^{adp}$} \For {$ u \in U$} \State $ \gamma^{adp}((x,\vartheta),u)=\emptyset$ \For {$\theta \in \vartheta$} \For {$x' \in \gamma(x,u,\theta)$} \State $ \vartheta'=\emptyset$ \For {$\theta' \in \vartheta$} \If { $x' \in \gamma(x,u,\theta')$ } \State {$ \vartheta' \gets \vartheta' \cup \{\theta'\}$} \EndIf \EndFor \State{$\gamma^{adp}((x,\vartheta),u) \gets \gamma^{adp}((x,\vartheta),u) \cup \{(x',\vartheta')\}$} \If {$(x',\vartheta') \not \in X^{adp}$} \State{$X^{adp,new} \gets X^{adp,new} \cup \{(x',\vartheta')\}$} \State{$X^{adp} \gets X^{adp} \cup \{(x',\vartheta')\}$} \State{$O^{adp}(x',\vartheta')=O(x')$} \EndIf \EndFor \EndFor \EndFor \EndFor \EndWhile \State \textbf{return} $\mathcal{T}^{adp}=\left(X^{adp}, U, \gamma^{adp}, \Pi, O^{adp} \right)$ \end{algorithmic} \end{algorithm} \subsection{Control Synthesis} Finally, given an ATS $\mathcal{T}^{adp}$ and an LTL formula $\varphi$, we construct the product automaton $\mathcal{T}^{adp} \otimes \mathcal{R}_\varphi$ as explained in Sec. \ref{sec:ltlcon}, and find the memoryless control strategy on $\mathcal{T}^{adp} \otimes \mathcal{R}_\varphi$ by solving the Rabin game. We also find the largest set of admissible initial conditions $X_0^{adp,\max}$ as the winning region of the Rabin game. In order to find $X_0^{\max}$, we perform the following projection: \begin{equation} X_0^{\max}=\left\{x_0 \Big| (x_0,\Theta) \in X_0^{adp,\max} \right\}.
\end{equation} The adaptive control strategy takes the memoryless form $\Lambda: X \times 2^\Theta_{-\emptyset} \times S \rightarrow U$, which maps the current state in the PTS, the set of currently possible ground truth parameters and the state in the Rabin automaton to an admissible control action. \begin{theorem} Given a finite system \eqref{eq:system}, an initial condition $x_0 \in X$ and an LTL formula $\varphi$ over $\Pi$, there exists a control strategy $\Lambda^*: X^* \times U^* \rightarrow U$ such that $O(x_0)O(x_1)\cdots \models \varphi$, $\forall \theta \in \Theta, \forall d_k \in D$, where $x_{k+1}=F(x_k,u_k,\theta,d_k), \forall k\in \mathbb{N}$, if and only if $x_0 \in X_0^{\max}$. \end{theorem} \begin{proof} (sketch) The completeness property follows from two facts. First, the solutions to Rabin games on finite automata are complete. Second, every possible behavior of a finite PTS embedding \eqref{eq:system} and parameter estimator \eqref{eq:estimator} is captured in the ATS. If $x_0 \not \in X_0^{\max}$, then it can be shown that there exists a $\theta \in \Theta$ and a disturbance sequence $d_0d_1\cdots$ such that there does not exist any control strategy to satisfy the LTL specification. \end{proof} \section{Control Synthesis for Infinite Systems} \label{sec:infinite} In this section, we assume that the PTS embedding \eqref{eq:system} is not finite, which means that at least one of the sets $X,U,\Theta$ is infinite. We provide the general solution for the case when all sets are infinite. We note that the approach in this section is still preliminary and we leave further investigation to our future work. We consider a finite observation preserving (see Sec. \ref{sec:quotient}) partition $Q_X=\{q_X^1,\cdots,q_X^{p_X} \}$ for $X$ and a finite partition $Q_\Theta=\{q_\Theta^1,\cdots,q_\Theta^{p_\Theta} \}$ for $\Theta$. We also quantize $U$ to obtain a finite $U_{\text{qtz}}=\{u_{qtz}^1,\cdots,u_{qtz}^{p_u} \}$. In this paper, we do not consider any particular guideline for how to partition and leave this problem to our future work. In general, the finer the partitions, the less conservative the method is, at the price of higher computational effort. ``Smart'' partition refinement procedures were studied in \cite{yordanov2013formal,nilsson2014incremental}. Once partitions and quantizations are available, we compute the transitions. We denote the successor (post) of the set $q_X$, under parameter set $q_\Theta$ and control $u$, by \begin{equation} \label{eq:post} \small \text{Post}(q_X,q_\Theta,u) := \Big\{ x' \in X \big | \exists x \in q_X, \exists \theta \in q_\Theta, x' \in \gamma(x,u,\theta) \Big\}. \end{equation} A computational bottleneck is performing the post computation in \eqref{eq:post}. For additive parameters, the post computation is exact for piecewise affine systems using polyhedral operations \cite{Yordanov2012}. For multiplicative parameters, an over-approximation of the post can be computed \cite{yordanov2008formal}, which introduces further conservativeness but retains correctness. Finally, we construct the quotient PTS from the infinite PTS. The procedure is outlined in Algorithm \ref{alg:quotient}.
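As an illustration of \eqref{eq:post} before the algorithm, for the scalar affine dynamics $x^+=(1+\theta_1)x+\theta_2u+\theta_3+d$ used later in Sec. \ref{sec:safety}, the post of an interval cell under an interval parameter box can be computed with interval arithmetic; since the only nonlinearity is the bilinear term $(1+\theta_1)x$, whose extrema over a box are attained at corners, the corner enumeration below is exact. The interval bounds in the usage line are placeholders.

```python
import itertools

def interval_post(x_box, th_box, u, d_box=(-0.1, 0.1)):
    """Exact interval post for x+ = (1 + th1) x + th2 u + th3 + d.
    x_box = (xl, xu); th_box = ((th1 bounds), (th2 bounds), (th3 bounds))."""
    (t1l, t1u), (t2l, t2u), (t3l, t3u) = th_box
    # (1 + th1) * x is bilinear, so take min/max over the four box corners
    prods = [(1 + t1) * x for t1, x in itertools.product((t1l, t1u), x_box)]
    lo = min(prods) + min(t2l * u, t2u * u) + t3l + d_box[0]
    hi = max(prods) + max(t2l * u, t2u * u) + t3u + d_box[1]
    return lo, hi

# Post of the cell [-0.2, 0.2] under one parameter box and input u = 0.4
print(interval_post((-0.2, 0.2), ((-0.5, 0.0), (1.0, 1.5), (-0.2, 0.0)), u=0.4))
```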
\begin{algorithm} \caption{Procedure for Constructing quotient PTS from infinite PTS} \begin{algorithmic}[0] \Require{$\mathcal{T}^\Theta=\left(X, U, \Theta, \gamma, \Pi, O \right)$} \Require{$Q_X,Q_\Theta,U_{\text{qtz}}$} \For{$q_X \in Q_X$} \State{$O^Q(q_X)=O(x)$ for some $x \in q_X$} \For{$q_\Theta \in Q_\Theta$} \For{$u_{qtz} \in U_{\text{qtz}}$} \State{$X_{\text{post}}=\text{Post}(q_X,q_\Theta,u_{qtz})$} \State{$\gamma^{Q}(q_X,u_{qtz},q_\Theta)=\emptyset$} \For{$q_X' \in Q_X$} \If{$X_{\text{post}} \cap q_X' \neq \emptyset$} \State{$\gamma^{Q}(q_X,u_{qtz},q_\Theta) \gets \gamma^{Q}(q_X,u_{qtz},q_\Theta) \cup q_X'$} \EndIf \EndFor \EndFor \EndFor \EndFor \State \textbf{return} $\mathcal{T}^{Q,\Theta}=\left(Q_X, U_{\text{qtz}}, Q_\Theta, \gamma^{Q}, \Pi, O^Q \right)$ \end{algorithmic} \label{alg:quotient} \end{algorithm} \section{Case Studies} \label{sec:case} We present two case studies. The first one is a simple finite deterministic system. The second case study involves a linear parameterized system that is infinite and non-deterministic due to the presence of additive disturbances. \subsection{Persistent Surveillance} \begin{figure*}[t] \begin{center} \includegraphics[width=0.29\textwidth]{r0}~ \includegraphics[width=0.29\textwidth]{r1}~\includegraphics[width=0.29\textwidth]{r2} \caption{Case Study 1: (Left) The robot (shown in black) and its environment. (Middle) Snapshots of the executed motion at time $k=33$, and (Right) $k=62$. The robot satisfies the specification. } \label{fig:robot} \end{center} \end{figure*} We consider a robot motion planning problem. The environment is modeled as a finite number of cells illustrated in Fig. \ref{fig:robot}. Each cell corresponds to a state in $X$. We have $|X|=150$. The set of control inputs is given by $U=\{$ {\bf left, right, up, down}$\}$, where the transition enabled by each input corresponds to its unambiguous meaning. There exists a constant drift in the horizontal direction in the purple region, but its direction (left or right) and its intensity are unknown. The set of possible drifts is $\Theta=\{+2,+1,0,-1,-2\}$, where a positive sign corresponds to the left direction. At each time, if the robot is in a purple cell, the drift is added to its subsequent position. For example, if the robot applies $u=${\bf right} and $\theta^*=2$, the robot actually ends up one cell to the left. Similarly, if $u=${\bf up} and $\theta^*=-2$, the robot moves a cell up and two cells to the right. The red cells are ``unsafe'' regions that must be avoided, and the green cells $A,B$ are ``interesting'' regions, which have to be persistently visited. The LTL formula describing this specification is: \begin{equation*} \varphi= {\bf G} {\bf F} A ~\wedge ~{\bf G} {\bf F} B ~\wedge~ {\bf G} (\neg \text{unsafe}). \end{equation*} We implemented the procedure outlined in Sec. \ref{sec:finite}. It is worth noting that there does not exist a pure robust control solution to this problem. In other words, if the robot ignores estimating the drift, it cannot find a control strategy. For example, if the robot enters the purple region around the middle and persistently applies ${\bf up}$, a maximum drift in either direction can drive the robot into the unsafe cells before it exits the purple region. Therefore, the only way the robot can fulfill the specification is to learn the drift. The robot first enters the drifty region to find out its value and then moves back and re-plans its motion.
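The drift arithmetic in this example is easy to mis-read, so a sketch of one plausible encoding of the gridworld transition may help; the grid size, cell indexing and purple-region set below are hypothetical, not the exact environment of Fig. \ref{fig:robot}.

```python
MOVES = {"left": (-1, 0), "right": (1, 0), "up": (0, 1), "down": (0, -1)}

def robot_step(col, row, u, theta, purple, n_cols=15, n_rows=10):
    """Deterministic gridworld step; theta > 0 drifts left (negative column)."""
    dc, dr = MOVES[u]
    if (col, row) in purple:
        dc -= theta          # drift is added to the horizontal position
    # saturate at the boundary of the grid
    col = min(max(col + dc, 0), n_cols - 1)
    row = min(max(row + dr, 0), n_rows - 1)
    return col, row

purple = {(7, r) for r in range(10)}  # hypothetical purple column
print(robot_step(7, 3, "right", theta=2, purple=purple))  # net: one cell left
```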
Notice that this procedure is fully automated using the solution of the Rabin game on the product $\mathcal{T}^{adp} \otimes \mathcal{R}_\varphi$. Two snapshots of the executed motion for the case $\theta^*=+2$ are shown in Fig. \ref{fig:robot}. \subsection{Safety Control} \label{sec:safety} Consider a one-dimensional linear system of the following form: \begin{equation} x^+=(1+\theta_1)x+\theta_2u+\theta_3+d, \end{equation} where $\theta_1 \in [-0.5,0.5]$, $\theta_2 \in [1,2]$, and $\theta_3 \in [-0.2,0.2]$ are fixed parameters, and $d\in D$ is the additive disturbance, $D= [-0.1,0.1]$. The set of admissible control inputs is $U=[-1,1]$. We desire to restrict $x$ to the $[-1,1]$ interval at all times, which is described by the following LTL formula: \begin{equation*} \varphi= {\bf G} (x \le 1) \wedge {\bf G} (x \ge -1). \end{equation*} We have $\Theta=[-0.5,0.5] \times [1,2] \times [-0.2,0.2]$. We partitioned the intervals of $\theta_1$, $\theta_2$, $\theta_3$, and $X$ into 2, 2, 4, and 10 evenly spaced intervals, respectively. Thus, we have partitioned $\Theta$ into $16$ cubes ($|Q_\Theta|=16$) and $X$ into 10 intervals ($|Q_X|=10$). $U$ is quantized to obtain $U_{qtz}=\{-1,-0.8,\cdots,0.8,1\}$. We implemented Algorithm \ref{alg:quotient} to obtain the quotient PTS and Algorithm \ref{alg:ats} to find the corresponding ATS. The computation times were 0.1 (Algorithm \ref{alg:quotient}) and 152 (Algorithm \ref{alg:ats}) seconds on a 3.0 GHz MacBook Pro. Even though $|Q_X \times 2_{-\emptyset}^{Q_\Theta}|=655350$, the number of reachable states obtained from Algorithm \ref{alg:ats} was 14146. We solved the safety game on the ATS, which took less than a second, and found a winning region containing 14008 states. The winning region in the state-space is $X_0=[-0.6,0.6]$. Since the solution is conservative, $X_0^{\max}$ may be larger if a finer partitioning is used. We also found that the winning region is empty if a pure robust control strategy is sought. We simulated the system for 100 time steps starting from $x_0=0$. The values of the disturbances at each time are chosen randomly with a uniform distribution over $D$. We observe that the specification is satisfied, and the sets given by the parameter estimator shrink over time and always contain the ground truth parameter, which in this case is $\theta_1^*=0.45$, $\theta_2^*=1.11$, $\theta_3^*=-0.18$. The results are shown in Fig. \ref{fig:safety}. \begin{figure}[t] \centering \includegraphics[height=0.18\textwidth]{g0} \caption{Case Study 2: Trajectory of the system versus time, which is always between $-1$ and $1$. } \end{figure} \begin{figure}[t] \centering \vspace{0.2in} \includegraphics[width=0.24\textwidth]{s0}\includegraphics[width=0.24\textwidth]{s1} \\~\includegraphics[width=0.24\textwidth]{s2}\includegraphics[width=0.24\textwidth]{s3}~ \caption{Case Study 2: Snapshots of $\vartheta_k$ at various times, which are illustrated by the shaded regions. They always contain the ground truth parameter $\theta_1^*=0.45$, $\theta_2^*=1.11$, $\theta_3^*=-0.18$.} \label{fig:safety} \end{figure} \section{Conclusion and Future Work} We developed a framework to combine the recent advances in applications of formal methods in control theory with classical adaptive control. We used the concepts of transition systems, finite quotients, and product automata to introduce adaptive transition systems and correct-by-design adaptive control. Like most other applications of formal methods, our results suffer from high computational complexity.
As discussed in the paper, the number of states in the ATS can be very large. Also, constructing finite quotients for infinite systems is computationally difficult. We believe that this paper opens up several research directions. Besides improving the way we combine adaptive control and formal methods, we plan to develop efficient methods to construct finite adaptive transition systems for special classes of hybrid systems, such as mixed-monotone systems and piecewise affine systems. We also plan to include optimal control. \balance \bibliographystyle{IEEEtran}
\section{Introduction} \label{intro} Isospin is a useful and fundamental quantum number in nuclear and particle physics. In nuclei, we consider neutron and proton to be the two isospin states of a common entity called a nucleon with isospin $T=1/2$. The third component of isospin is different for the two states, $T_3=+1/2$ for the neutron and $T_3=-1/2$ for the proton. This convention is just the opposite of what is normally used in particle physics. We plot a graph similar to the nuclear landscape in Fig.~\ref{driplines}, but with isospin $T_3$ on one axis. We note that $T_3=(N-Z)/2$ for a given nucleus and is a well defined quantity. The line in the middle (black in color) shows the line of $\beta$-stability. We have the neutron-rich nuclei on the left of the line of stability and the neutron-deficient nuclei on the right, plotted on the basis of the presently known experimental data. Theoretical predictions for the drip lines are shown by the wavy lines, while the presently known experimental limits are shown by joining the known data points. We can see that the experimentally known neutron-rich nuclei are quite far from the theoretical predictions. On the other hand, the experimental data touch the theoretical predictions for the proton drip line at least up to $A=200$. Around $A=240$, the experimentally known neutron-rich and neutron-deficient nuclei merge into the near-stability line, suggesting that a large number of neutron-rich and neutron-deficient isotopes are yet to be found. \begin{figure}[h] \centering \includegraphics[width=10cm]{driplines} \caption{Experimental neutron-rich nuclei (upper solid line), $\beta$-stable nuclei (mid-line) and neutron deficient nuclei (lower solid line), plotted from the known experimental data. The wavy lines show the theoretical predictions for the drip line nuclei.} \label{driplines} \end{figure} There are two main features in Fig.~\ref{driplines} that we would like to emphasize. Firstly, as we go from light to heavy nuclei, the isospin increases due to the fact that $N>Z$ in heavier nuclei. Secondly, in a given chain of isotopes, the range of isospin values could be very large. For example, in the chain of Pb isotopes, the isospin values could range from near zero in the neutron-deficient isotopes to nearly 50 in the neutron-rich isotopes. In the most stable lead isotope $^{208}$Pb, the ground state isospin would be $T=22$ as $T_3=22$. It is thus obvious that very large $T$ values are involved in dealing with heavy nuclei. These two factors make the heavy nuclei very interesting to study. We also note that the fission fragments coming out from such neutron-rich nuclei will be even more neutron-rich. Therefore, the experimental data on fission fragment distributions of heavy nuclei provide a good testing ground for verifying the conservation of isospin in heavy nuclei. However, the data where this could be directly tested are very scarce. Fission fragment distribution data are available in plenty, but the mass resolution is about 3-4 units. This means that the $Z$ and $A$ of each fragment are not known precisely in most of the cases. In many situations, particularly thermal neutron fission, the fragment mass distribution of heavy fragments is still not known. Only recently, more precise fragment distribution data of heavy-ion (HI) induced fission are becoming available, where gamma ray spectroscopy of fragments is being used to identify each fragment, although only in even-even nuclei so far.
This in itself presents a huge opportunity for experimentalists, so that good fragment distribution data become available for all the partitions and also, eventually, for odd-$A$ fragments. In 1962, Lane and Soper~\cite{lane} first demonstrated theoretically that isospin may remain a good quantum number in heavy nuclei as in light nuclei. In simple terms, if we consider a nucleus having $N$ neutrons and $Z$ protons, we may look upon it as made up of a core ($N=Z$) nucleus having isospin zero and ($N-Z$) valence neutrons. These excess neutrons act in a way so as to reduce the isospin impurity in the nucleus. As the number of excess neutrons rises, isospin tends to become a purer quantum number. Sliv and Kharitonov (1965)~\cite{sliv} calculated the isospin admixture in light ($N=Z$) nuclei and heavy ($N>Z$) nuclei by using harmonic oscillator shell model wave functions and showed that the isospin admixture in the ground state of $^{16}$O is nearly the same as in $^{208}$Pb. A detailed discussion of some of these developments may be found in the review by Auerbach~\cite{auerbach}. Against the backdrop of this discussion, we analyze the heavy-ion induced fusion-fission reaction $^{208}$Pb($^{18}$O, f), in which neutron-rich fission fragments are emitted. We treat the isospin to be a conserved quantity and follow the isospin algebra. We calculate the fission fragment mass distribution using this concept of goodness of isospin. An important conjecture given by Kelson~\cite{kelson} is quite helpful in this respect. Kelson has given a theoretical explanation of how n-emission in fission leads to the formation of isobaric analog states (IAS) in the final fission products. Kelson’s ideas help in resolving our problem of assigning isospin values to various fission fragments. We find that our calculated values match the experimental data reasonably well. There are some deviations, which may be due to the presence of shell closures or the presence of isomers. \section{Formalism} \label{sec-1} Consider a projectile $Y$ with isospin $T_Y=T_{3_{Y}}$ incident upon a target with $T_X=T_{3_{X}}$, leading to a compound nucleus ($CN$) with $T_{CN}=\mid T_X-T_Y \mid,\ldots,T_X+T_Y$. However, from isospin algebra, which behaves quite similarly to the SU(2) algebra of spin while dealing with neutrons and protons, $T_{CN} \geqslant T_{3_{CN}}$ and $T_{3_{CN}}=T_X+T_Y$. We assume that only ground or low-lying states of the target and projectile are involved. Therefore, the $CN$ always has only one possible value of isospin, $T_{CN}= T_{3_{CN}}=T_X+T_Y$. For the reaction under consideration in the present work, $^{208}$Pb($^{18}$O, f), we have $T_X(^{208}Pb)=T_{3_{X}}=22$ and $T_Y(^{18}O)=T_{3_{Y}}=1$. The isospin of the $CN$ ($^{226}$Th) can have values $T_{CN}=21, 22$ or 23, but since $T_{3_{CN}} =23$, there is only one allowed value, $T_{CN}=23$. This CN further fissions to give two fragments $F_1$ ($T_{F1},T_{3_{F1}}$) and $F_2$ ($T_{F2},T_{3_{F2}}$) with the emission of $n$ neutrons. This is a many-body problem and, to simplify it to a two-body problem, we invoke the concept of a residual compound nucleus ($RCN$), formed after the emission of $n$ neutrons and having an isospin $T_{RCN}$.
Now, our complete reaction will look like \begin{eqnarray*} Y(T_Y,T_{3_{Y}})+X(T_X,T_{3_{X}}) \rightarrow CN(T_{CN},T_{3_{CN}}) \\ \rightarrow RCN(T_{RCN},T_{3_{RCN}}) +n \\ \rightarrow F_1(T_{F1},T_{3_{F1}})+F_2(T_{F2},T_{3_{F2}})+n \end{eqnarray*} and the isospin of the $RCN$ should satisfy the two conditions \begin{eqnarray*} \begin{split} \mid T_{CN}-n/2 \mid \leq T_{RCN} \leq (T_{CN}+n/2)\\ \textrm{and} \quad \mid T_{F1}-T_{F2} \mid \leq T_{RCN} \leq (T_{F1}+T_{F2}). \end{split} \end{eqnarray*} Now, we would like to assign isospin values to the various fission fragments emitted in different partitions, which is not so straightforward. We formulate two conjectures based on the ideas put forth by Kelson~\cite{kelson} for assigning the isospin values to fission fragments. The first conjecture states that neutron emission from the $CN$ leads to the formation of highly excited states with $T >T_3$. Using this conjecture, we fix the isospin of the $RCN$ as $T_{RCN}=T_{F1}+T_{F2}$. Kelson’s second conjecture states that the fission fragments are preferably emitted in isobaric analog states (IAS). We thus assign isospin to fission fragments on the basis of this second conjecture. We take three isobars of each mass number, having projections $T_3$, $T_3-2$ and $T_3-4$. Then, we assign $T=T_3$ to that particular mass number, since this is the minimum value of isospin required to generate all the members of the isobaric multiplet formed corresponding to that mass number. For $^{208}$Pb($^{18}$O, f), we have six experimentally observed partitions. We consider eight nuclides on the lighter side and eight on the heavier side of a partition, and for each mass number we take three isobars. Now, we assign isospin to each mass number using Kelson’s second conjecture as described above. For example, for $A=112$, we have three isobars, namely $^{112}$Ru with $T_3=12$, $^{112}$Pd with $T_3=10$ and $^{112}$Cd with $T_3=8$. We assign the maximum of the three $T_3$ values, which is 12, to $A=112$, as it can generate all the three isobars $^{112}$Ru, $^{112}$Pd and $^{112}$Cd. Once we have assigned the isospin values, we proceed to calculate the relative yields of fragments by using the isospin part of the wave function. For a particular pair of fragments emitted in a given $n$-emission channel, the isospin wave function for the $RCN$ can be written as \begin{eqnarray} \begin{split} \mid{T_{RCN},T_{3_{RCN}}}\rangle_n = \langle{T_{F1}T_{F2}T_{3_{F1}}T_{3_{F2}} \mid T_{RCN}T_{3_{RCN}}}\rangle \\ \mid{T_{F1},T_{3_{F1}}}\rangle \mid{T_{F2},T_{3_{F2}}}\rangle \end{split} \end{eqnarray} The first factor on the R.H.S. is the isospin Clebsch-Gordan coefficient (CGC). The intensity can be calculated by taking the square of the CGC, \begin{equation} I_n = \langle{CGC}\rangle^2 = \langle{T_{F1}T_{F2}T_{3_{F1}}T_{3_{F2}} \mid T_{RCN}T_{3_{RCN}}}\rangle^2 \end{equation} To calculate the relative yield of any fragment, we multiply the intensity by the weight factor ($w_n$) of that particular $n$-emission channel. The weight factors are obtained by relative normalization with respect to the $n$-emission channel having the maximum number of counts in the corresponding partition. Therefore, the final yield of any fragment is \begin{equation} I = \sum_{n} I_n \times w_n = \sum_{n} \langle{CGC}\rangle^2 \times w_n \end{equation} where the summation runs over all the experimentally known $n$-emission channels. In the same way, we calculate the yields of all the lighter and heavier fragments in a partition.
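The intensity and yield formulas above are straightforward to evaluate with symbolic Clebsch-Gordan coefficients, for example with SymPy. A minimal sketch follows; the fragment isospins correspond to the $A=112$ example above, while the weight factors are placeholders, not the counts extracted from the data of \cite{bogachev}, and the channel dependence of $I_n$ is suppressed for brevity.

```python
from sympy import S
from sympy.physics.quantum.cg import CG

def cgc_sq(T1, T31, T2, T32, T, T3):
    """Squared isospin CGC  <T1 T2 T31 T32 | T T3>^2."""
    return float(CG(S(T1), S(T31), S(T2), S(T32), S(T), S(T3)).doit() ** 2)

# Fragment pair with T_F1 = T3_F1 = 12 and T_F2 = T3_F2 = 10,
# coupled to T_RCN = T_F1 + T_F2 = 22 (Kelson's first conjecture).
I_n = cgc_sq(12, 12, 10, 10, 22, 22)

w = {0: 0.3, 1: 1.0, 2: 0.6}               # placeholder weight factors w_n
I = sum(w_n * I_n for w_n in w.values())   # weighted yield sum over channels
print(I_n, I)
```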
As we are not calculating the exact yields and want only relative yields, we normalize the yields of all the fragments with respect to the maximum yield. We perform this normalization separately for the lighter and heavier sides of a partition. Similarly, we obtain the relative yields of fragments in all the partitions. Our calculations do not give the absolute yields at any point. We also calculate the total mass distribution of fragments, although again only relative yields can be calculated. For this calculation, we use the same procedure of assigning the isospin values to fragments and then, in the same way, we calculate the CGC of a pair of fragments emitted in an $n$-emission channel in a partition. Since we are now looking at the complete mass distribution, we have to take into account the weight of each partition simultaneously. For this, we normalize the weight factors ($w'_n$) of each $n$-emission channel of all the six partitions with respect to the $n$-emission channel having the maximum number of counts among all the partitions. Therefore, our equation for calculating the intensity of a fragment is \begin{equation} I' = \sum_{n} I_n \times w'_n = \sum_{n} \langle{CGC}\rangle^2 \times w'_n \end{equation} \begin{figure}[h] \centering \includegraphics[width=9.5cm, height= 10.5cm]{mergecgs16} \caption{Comparison of calculated relative yields of fragments emitted in all the six partitions with the experimental data from Bogachev et al.~\cite{bogachev}.} \label{merge} \end{figure} After calculating the yields of all the fragments in individual partitions, we simply add the yields of all the three members of an isobaric multiplet to obtain the yield corresponding to that particular mass number. Then we again normalize all the yields of the mass numbers with respect to the mass number with the maximum yield. This gives us the relative yields of all the fragments present in the total mass distribution. \section{Results and Discussion} \label{sec-2} We have plotted in Fig.~\ref{merge} the calculated relative yields of fragments for each partition separately and compare them with the available experimental data of Bogachev \textit{et al.}~\cite{bogachev}. There is good agreement between the calculated and experimental data in most of the partitions. The agreement is not so good for the Zr-Sn and Kr-Xe partitions. One reason for this deviation could be the presence of closed shell configurations at $A=124$ ($Z=50$) and $A=136$ ($N=82$) and their complementary fragments at $A=84$ and $A=92$~\cite{danu}. Another possibility is the presence of isomers at some of the points. Also, Bogachev \textit{et al.}~\cite{bogachev} estimated an error of 10-30\% in the experimental data. Considering these factors, we feel that our calculations reproduce the experimental trends and data reasonably well. Next, we have plotted in Fig.~\ref{total} the relative total yield of fission fragments and compare it with the fission fragment mass distribution given in Bogachev \textit{et al.}~\cite{bogachev}. The two curves look very similar to each other, with the exceptions at the shell closures as discussed above. These results confirm that isospin and its conservation seem to be valid in heavy neutron-rich nuclei. \section{Conclusion} We have calculated the relative yields of fission fragments partition-wise and also the total mass distribution of fragments. In doing so, we have assumed that the basic concept of isospin and its conservation remains valid in heavy nuclei.
We assign the isospin values by using Kelson’s conjectures, which are based on sound physics arguments. We then follow the isospin algebra and find that the fission fragments are preferably formed in isobaric analog states forming isospin multiplets. A given isospin multiplet is assigned a $T$ value corresponding to the maximum isospin projection in the isobaric multiplet. A reasonable agreement of the calculated results with the observed fission fragment distribution provides direct experimental evidence of the validity of isospin in heavy nuclei. Isospin, therefore, seems to emerge as a powerful tool in neutron-rich heavy nuclei and may play an important role in many phenomena and applications. \begin{figure}[h] \centering \includegraphics[width=9.5cm, height=8cm]{totalcgs16} \caption{Comparison of calculated relative total yields of fragments with the experimental data from Bogachev et al.~\cite{bogachev}.} \label{total} \end{figure} \section*{Acknowledgment} Support from the Ministry of Human Resource Development (Government of India) to SG in the form of a fellowship is gratefully acknowledged. The authors also acknowledge the financial support in the form of a travel grant from the IIT Roorkee Alumni funds and IIT Roorkee.
\section{Introduction} Although the determination of the capacity region of the interference channel (IC) has been a long-standing open problem, several interesting recent results shed light on the problem from various perspectives. Among these we may cite the capacity region obtained for special cases \cite{Han:81,Sato:81,Motahari:09,Shang:09}, or obtained for general channel classes in both scalar and MIMO settings up to approximations with bounded gaps \cite{Etkin:08,Karmakar:11}. When specializing to the large SNR regime, it is known that the characterization of the full capacity region can be conveniently replaced with the determination of the so-called degree-of-freedom (DoF) region. Progress on that particular front was reported in \cite{Jafar:2007} with the derivation of the DoF region for the two-user MIMO interference channel with $M_1$, $M_2$ transmit antennas and $N_1$, $N_2$ receive antennas, where the sum DoF $\min\{M_1+M_2, N_1+N_2,\max(M_1,N_2),\max(M_2,N_1)\}$ is shown to be optimal. Most of these advances suggest achievable schemes which require the full knowledge of channel state information (CSI) at both the transmitter and receiver sides. In fact, the crucial role of CSI at the transmitter side in particular is demonstrated in such works as \cite{Huang:2012,Zhu:2012,Vaze:2009}, where the DoF region is shown to shrink dramatically when zero CSIT is available. The intermediate scenario of {\em limited} or {\em incomplete} CSIT was also considered in \cite{Bolcskei:2009,Krishnamachari:2010}. In \cite{Bolcskei:2009}, the rate of limited feedback needed to preserve the DoF optimality in an interference alignment-enabled IC is provided. More recently, the impact of feedback delays providing the transmitter with outdated CSI over MIMO channels was considered in \cite{Maddah-Ali:10} for the broadcast channel (BC) and later extended to the IC \cite{Vaze:2011,Ghasemi:2011}. The key contribution in \cite{Maddah-Ali:10} was to establish the usefulness of even completely outdated channel state information in designing precoders achieving significantly better DoF than what is obtained without any CSIT. Considering the worst case scenarios, including those where the feedback delay extends beyond the coherence period of the time-varying fading channels, the authors in~\cite{Maddah-Ali:10} propose a space-time interference alignment-inspired strategy achieving an optimal sum DoF of 4/3 for the two-user MISO BC, in a setting where the no-CSIT case yields no more than 1 DoF. The essential ingredient of the proposed scheme in \cite{Maddah-Ali:10} lies in the use of a multi-slot protocol initiating with the transmission of unprecoded information symbols to the user terminals, followed by the analog forwarding of the interference overheard in the first time slot. Recently, this strategy was generalized under similar principles to the interference channel setting \cite{Vaze:2011,Ghasemi:2011}, again establishing DoF strictly beyond the ones obtained without CSIT in scenarios where the delayed CSIT bears no correlation with the current channel realization. Albeit inspiring and fascinating in nature, such results nonetheless rely on the somewhat over-pessimistic assumption that no estimate for the {\em current} channel realization is available to the transmitter. Owing to the finite Doppler spread behavior of fading channels, it is however the case in many real-life situations that the past channel realizations can provide information about the current one.
Therefore a scenario where the transmitter is endowed with delayed CSI in addition to some (albeit imperfect) estimate of the current channel is of practical relevance. This form of combined delayed and imperfect current CSIT was recently introduced in \cite{Kobayashi:2012} for the multiple-antenna broadcast channel, whereby a novel transmission scheme is proposed which extends beyond the MAT algorithm in allowing the exploitation of precoders designed based on the current CSIT estimate. The full characterization of the optimal DoF for the hybrid CSIT was reported in \cite{Yang:2012} and independently in \cite{Gou:2012}. The key idea behind the schemes in \cite{Kobayashi:2012,Yang:2012} lies in the modification of the MAT protocol where i) the initial time slot involves transmission of {\em precoded} symbols, followed by the forwarding of the {\em residual} interference overheard in the first time slot, and ii) the reduced power of the residual interference (compared with the full-power interference in MAT) is taken advantage of based on a suitable quantization method and digital transmission. In this paper, we extend the results in~\cite{Kobayashi:2012,Yang:2012} and consider the two-user time-correlated multiple-antenna interference channel. A similar hybrid CSIT scenario is considered whereby each transmitter has access to delayed channel samples for the links it is connected to, as well as an imperfect estimate of the current channel. The current CSIT estimate could be obtained from, e.g., a linear prediction applied to past samples \cite{Lapidoth:2005,Caire:2010}, although the prediction aspects are not specified in this paper. Instead, the quality level of the current CSIT estimate is simply modeled in terms of an exponent of the transmit power level, allowing DoF characterization for various ranges of current CSIT quality. Thus our model bridges between previously reported CSIT scenarios such as the pure delayed CSIT of \cite{Maddah-Ali:10,Vaze:2011,Ghasemi:2011} and the pure instantaneous CSIT scenarios. We assume each receiver has access to its own perfect instantaneous CSI and the perfect delayed CSI of other receivers (as in e.g. \cite{Maddah-Ali:10,Vaze:2011,Ghasemi:2011}), in addition to the imperfect current CSI. In what follows we obtain the following key results: \begin{itemize} \item We establish an outer bound on the DoF region for the two-user temporally-correlated MISO interference channel with perfect delayed and imperfect current CSIT, as a function of the current CSIT quality exponent. This result is initially derived for two-antenna transmitters and then generalized. \item We propose two schemes which achieve the key vertices of the outer bound with perfect delayed and imperfect current CSIT. The schemes build on the principles of a time-slotted protocol, starting with the ZF-precoded transmission of information symbols from the two interfering transmitters simultaneously, followed by the forwarding of the residual interference. As in the BC case, the residual interference reflects the quality of the initial precoder and can be quantized and power-scaled in a suitable way to achieve the optimal DoF. \item Our results coincide with previously reported DoF results for the perfect CSIT setting (current CSIT of perfect quality) and the pure delayed CSIT setting (current CSIT of zero quality). \item The DoF region of certain MIMO cases is also provided as a function of the current CSIT quality exponent and the number of receive antennas.
\end{itemize} \textbf{Notation}: Matrices and vectors are represented as uppercase and lowercase letters, and matrix transpose, Hermitian transpose, inverse and determinant are denoted by $\Am^\T$, $\Am^\H$, $\Am^{-1}$ and $\det(\Am)$, respectively. $\hv^{\bot}$ is the normalized orthogonal component of any nonzero vector $\hv$. The approximation $f(P) \sim g(P)$ is in the sense of $\lim_{P \to \infty} \frac{f(P)}{g(P)}=C$, where $C$ is a constant that does not scale with $P$. $\Am \succeq 0$ means that the square matrix $\Am$ is positive semidefinite, and $\Am \preceq \Bm$ means that $\Bm-\Am$ is positive semidefinite when both $\Am$ and $\Bm$ are square matrices. \section{System Model} We consider a two-user MISO interference channel, where two transmitters, each equipped with $2$ antennas\footnote{The generalization to an arbitrary number of antennas is considered in Section VI.}, wish to send two private messages to their respective receivers, each with a single antenna, as shown in Fig.~1. The discrete-time baseband signal model is given by \begin{subequations} \begin{align} y(t) &= \hv_{11}^\H(t) \xv_1(t) + \hv_{12}^\H(t) \xv_2(t) + e(t) \\ z(t) &= \hv_{21}^\H(t) \xv_1(t) + \hv_{22}^\H(t) \xv_2(t) + b(t), \end{align} \end{subequations} for any time instant $t$, where $\hv_{ji}(t) \in \CC^{2\times 1}$ is the channel vector from Tx-$i$ to Rx-$j$; $e(t), b(t) \sim \CN[0,1]$ are normalized additive white Gaussian noise~(AWGN) at the respective receivers; the coded input signal $\xv_i(t)$ is subject to the power constraint $\E( \norm{\xv_i(t)}^2 ) \le P$, $\forall\,t$. \begin{figure}[htb] \centering \includegraphics[width=0.4\columnwidth]{IC} \caption{The two-user MISO interference channel.} \label{fig:DoF} \end{figure} \begin{assumption} [mutually independent fading] At any given time instant $t$, the channel vectors $\{\hv_{ji}(t)\}$ are mutually independent and identically distributed (i.i.d.) with zero mean and covariance matrix $\Id_2$. \end{assumption} \begin{assumption} [perfect delayed local CSIT and imperfect current local CSIT] At each time instant $t$, Tx-$i$ knows perfectly the delayed local CSIT $\{{\hv}_{1i}(k),{\hv}_{2i}(k),k=1,\dots,t-1\}$ (for the links to which it is connected), and predicts/estimates imperfectly the current local CSIT $\{\hat{\hv}_{1i}(t),\hat{\hv}_{2i}(t)\}$, which can be modeled by \begin{align} \hv_{ji}(t) = \hat{\hv}_{ji}(t) + \tilde{\hv}_{ji}(t) \end{align} where the estimate $\hat{\hv}_{ji}(t)$ and the estimation error $\tilde{\hv}_{ji}(t)$ are independent and assumed to be zero-mean with covariance matrices $(1-\sigma^2)\Id_2$ and $\sigma^2 \Id_2$, respectively ($0 \le \sigma^2 \le 1$). \end{assumption} \begin{assumption} [perfect delayed CSIR, imperfect current CSIR and perfect current local CSIR] At each time instant $t$, Rx-$i$ knows perfectly the delayed CSIR up to instant $t-1$ for all links, i.e., $\{{\Hm}(k)\}_{k=1}^{t-1}$, where \begin{align} {\Hm}(k) \defeq \{{\hv}_{11}(k),{\hv}_{12}(k),{\hv}_{21}(k),{\hv}_{22}(k)\}, \end{align} and the imperfect current CSIR (modeled similarly as at the transmitters) up to instant $t$ for all links, i.e., $\{\hat{\Hm}(k)\}_{k=1}^{t}$, where \begin{align} \hat{\Hm}(k) \defeq \{\hat{\hv}_{11}(k),\hat{\hv}_{12}(k),\hat{\hv}_{21}(k),\hat{\hv}_{22}(k)\}, \end{align} as well as the perfect current local CSIR, i.e., $\{{\hv}_{i1}(t),{\hv}_{i2}(t)\}$.
\end{assumption} We assume that the estimation error $\sigma^2$ can be parameterized as an exponential function of the power $P$, so that we can characterize the DoF of the MISO IC with respect to this exponent. To this end, we introduce a parameter $\alpha \ge 0$, such that \begin{align} \alpha \defeq -\lim_{P \to \infty} \frac {\log \sigma^2}{\log P}. \label{eq:alpha-def} \end{align} This $\alpha$ indicates the quality of the current CSIT at high SNR. While $\alpha=0$ reflects the case with no current CSIT, $\alpha \to \infty$ corresponds to that with perfect instantaneous CSIT. As a matter of fact, when $\alpha \ge 1$, the quality of the imperfect current CSIT is sufficient to avoid the DoF loss, and ZF precoding with this imperfect CSIT is able to achieve the maximum DoF~\cite{Caire:2010}. Therefore, we focus on the case $\alpha \in [0,1]$ hereafter. The connections between the above model and linear prediction over existing time-correlated channel models with prescribed user mobility are highlighted in \cite{Kobayashi:2012}. According to the definition of the estimated current CSIT, we have \begin{align} \E (|{\hv}_{ji}^{\H}(t) \hat{\hv}_{ji}^{\perp}(t)|^2)&=\E (|\hat{\hv}_{ji}^{\H}(t) \hat{\hv}_{ji}^{\perp}(t)|^2) + \E (|\tilde{\hv}_{ji}^{\H}(t) \hat{\hv}_{ji}^{\perp}(t)|^2)\\ &=\E \left(\hat{\hv}_{ji}^{\perp\H}(t)\, \E \left[\tilde{\hv}_{ji}(t)\tilde{\hv}_{ji}^{\H}(t)\right] \hat{\hv}_{ji}^{\perp}(t)\right)\\ &= \sigma^2 \\ & \sim P^{-\alpha} \label{eq:P-to-alpha} \end{align} where the first term vanishes since $\hat{\hv}_{ji}^{\H}(t)\hat{\hv}_{ji}^{\perp}(t)=0$, the second line uses the independence of $\tilde{\hv}_{ji}(t)$ and $\hat{\hv}_{ji}^{\perp}(t)$ together with $\Norm{\hat{\hv}_{ji}^{\perp}(t)}=1$, and (\ref{eq:P-to-alpha}) is obtained from (\ref{eq:alpha-def}). \section{The Degree of Freedom Region} A rate pair $(R_1,R_2)$ is said to be achievable for the two-user interference channel with perfect delayed CSIT and imperfect current CSIT if there exists a $\left(2^{nR_1},2^{nR_2},n\right)$ code scheme consisting of: \begin{itemize} \item two message sets $[1:2^{nR_1}]$ at the Tx-1 and $[1:2^{nR_2}]$ at the Tx-2, from which two independent messages $\Mc_1$ and $\Mc_2$, intended respectively for the Rx-1 and Rx-2, are uniformly chosen; \item one encoding function at the Tx-$i$: \begin{align} \xv_i(t) &= f_{i} \left(\Mc_i,\{{\hv}_{1i}(k)\}_{k=1}^{t-1},\{{\hv}_{2i}(k)\}_{k=1}^{t-1},\{\hat{\hv}_{1i}(k)\}_{k=1}^t,\{\hat{\hv}_{2i}(k)\}_{k=1}^t\right) \label{eq:enc-fun}; \end{align} \item and one decoding function at its corresponding receiver, e.g., \begin{align} \hat{\Mc}_j &= g_{j} \left(\{y(t)\}_{t=1}^{n},\{\Hm(t)\}_{t=1}^{n-1},\{\hat{\Hm}(t)\}_{t=1}^{n},{\hv_{j1}}(n), {\hv_{j2}}(n)\right) \label{eq:dec-fun} \end{align} for the Rx-1 when $j=1$, and it is similarly defined for the Rx-2 by replacing $y(t)$ with $z(t)$, \end{itemize} such that the average decoding error probability $P_{e}^{(n)}$, defined as \begin{align} P_{e}^{(n)} \defeq \E [\P[(\Mc_1, \Mc_2) \neq (\hat{\Mc}_1,\hat{\Mc}_2)]] , \end{align} vanishes as the code length $n$ tends to infinity. The capacity region $\Cc$ is defined as the set of all achievable rate pairs. Accordingly, the DoF region can be defined as follows: \begin{definition} [the degree-of-freedom region] The degree-of-freedom (DoF) region for the two-user MISO interference channel is defined as \begin{align} \Dc &= \left\{ (d_1,d_2)\in \mathbb{R}_{+}^2 | \forall (w_1,w_2) \in \mathbb{R}_{+}^2, w_1d_1+w_2d_2 \le \limsup_{P \to \infty} \left( \sup_{(R_1,R_2) \in \Cc} \frac{w_1R_1+w_2R_2}{\log P}\right) \right\}. \end{align} \end{definition} Consequently, the DoF region for the two-user time-correlated MISO interference channel is stated in the following theorem.
\begin{theorem} In the two-user MISO interference channel with perfect delayed CSIT and imperfect current CSIT (as stated in Assumption~2), the optimal DoF region can be characterized by \begin{subequations} \begin{align} d_1 &\le 1 \label{eq:dof-bound-1}\\ d_2 &\le 1 \label{eq:dof-bound-2}\\ 2 d_1 + d_2 &\le 2+\alpha \label{eq:dof-bound-3}\\ d_1 + 2 d_2 &\le 2+\alpha \label{eq:dof-bound-4}. \end{align} \end{subequations} \end{theorem} \textbf{Remark}: Interestingly, the above DoF region is identical to that of the two-user MISO broadcast channel with perfect delayed CSIT and imperfect current CSIT \cite{Yang:2012,Gou:2012}. In fact, this result is consistent with previous results on the pure delayed CSIT case ($\alpha=0$), where it was shown that the DoF regions for the two-user BC and the two-user IC coincide, and also on the special case of perfect instantaneous CSIT ($\alpha=1$). For illustration, the DoF region for the two-user MISO IC is provided in Fig.~2. The DoF regions with no CSIT, pure perfect delayed CSIT, and perfect instantaneous CSIT are also plotted for comparison. It shows that the DoF region with perfect delayed CSIT and imperfect current CSIT is strictly larger than that with pure delayed CSIT and quickly approaches the region with perfect CSIT as the quality of the current CSIT increases. \begin{figure}[htb] \begin{center} \includegraphics[width=0.55\columnwidth]{Region} \caption{DoF region for the two-user MISO interference channel (when $\alpha=0.5$).} \label{fig:Region} \end{center} \end{figure} Given an $\alpha$, the DoF region is a polygon whose vertices are: $(0,1)$, $(\alpha,1)$, $\left(\frac{2+\alpha}{3},\frac{2+\alpha}{3}\right)$, $(1,\alpha)$ and $(1,0)$. In the following, we first characterize the outer bound, and then propose two schemes to show that its key vertices are achievable, so that the entire region can be achieved by time sharing. \section{Outer Bound} We adopt a strategy reminiscent of that in \cite{Vaze:2011} to obtain the genie-aided outer bound, by assuming that (i) both receivers know the CSI ${\Hm}(t)$ perfectly and instantaneously as well as the imperfect current CSI $\hat{\Hm}(t)$ at time $t$, and (ii) the Rx-2 has instantaneous knowledge of the Rx-1's received signal $y(t)$. Define \begin{align} y'(t) &\defeq \hv_{12}^\H(t) \xv_2(t) + e(t)\\ z'(t) &\defeq \hv_{22}^\H(t) \xv_2(t) + b(t)\\ \Tc &\defeq \{\Hm(t),\hat{\Hm}(t)\}_{t=1}^{n}\\ \Uc(t) &\defeq \left\{\{ y'(i) \}_{i=1}^{t-1},\{ z'(i) \}_{i=1}^{t-1},\{\Hm(i)\}_{i=1}^{t-1} ,\{\hat{\Hm}(i)\}_{i=1}^{t} \right\} \end{align} where $\Tc$ denotes the channel state information and its estimated version available at the receivers from the beginning up to time instant $n$. To ease our presentation, we denote \begin{align} n\epsilon_n \defeq 1+nR P_{e}^{(n)} \end{align} where $\epsilon_n$ tends to zero as $n \to \infty$ by the assumption that $\lim_{n \to \infty}P_{e}^{(n)}=0$.
Then, we can upper-bound the achievable rate of Rx-1 by applying Fano's inequality: {\small \begin{align} &nR_1 \\ &\le I(\Mc_1;\{y(t)\}_{t=1}^{n}|\Tc) + n \epsilon_n\\ &= I(\Mc_1,\Mc_2;\{y(t)\}_{t=1}^{n}|\Tc) - I(\Mc_2;\{y(t)\}_{t=1}^{n}|\Mc_1,\Tc) + n \epsilon_n \\ &\le n \log P - I(\Mc_2;\{y(t)\}_{t=1}^{n}|\Mc_1,\Tc) +n \cdot O(1) + n \epsilon_n \label{eq:R-1-bound}\\ &= n \log P - h(\{y(t)\}_{t=1}^{n}|\Mc_1,\Tc) + h(\{y(t)\}_{t=1}^{n}|\Mc_1,\Mc_2,\Tc) +n \cdot O(1) + n \epsilon_n\\ &= n \log P - h(\{y(t)\}_{t=1}^{n}|\Mc_1,\Tc) +n \cdot O(1) + n \epsilon_n \label{eq:epsilon}\\ &= n \log P - h(\{ y'(t) \}_{t=1}^{n}|\Tc) +n \cdot O(1) + n \epsilon_n \label{eq:R-1-remove-m1}\\ &\le n \log P - \sum_{t=1}^n h( y'(t)|\Tc,\{ y'(i) \}_{i=1}^{t-1},\{ z'(i) \}_{i=1}^{t-1}) +n \cdot O(1) + n \epsilon_n \label{eq:R-1-remove-m2}\\ &= n \log P - \sum_{t=1}^n h( y'(t)|\Uc(t),\Hm(t)) +n \cdot O(1) + n \epsilon_n \label{eq:R1-bound} \end{align} } where (\ref{eq:R-1-bound}) follows from the fact that the rate of the point-to-point MISO channel (i.e., Tx-1 together with Tx-2 are treated as the transmitter by cooperation while Rx-1 is the receiver) is bounded by $n\log P + n \cdot O(1)$; (\ref{eq:epsilon}) is due to the fact that (a) the transmitted signals $\{\xv_i(t)\}_{t=1}^n$ are determined given the messages, the channel matrices up to $n$ and the encoding functions defined in~(\ref{eq:enc-fun}), (b) translation does not change differential entropy, and (c) the noise is independent of the channel matrices, the transmitted signals and the messages; (\ref{eq:R-1-remove-m1}) is obtained because (a) the transmitted signals $\{\xv_1(t)\}_{t=1}^n$ are determined provided the channel matrices, $\Mc_1$ and the encoding functions according to~(\ref{eq:enc-fun}), and (b) translation preserves differential entropy; (\ref{eq:R-1-remove-m2}) follows from the chain rule of differential entropy and the fact that conditioning reduces differential entropy; the last equality is obtained because $y'(t)$ is independent of $\{\Hm(k)\}_{k=t+1}^n$ and $\{\hat{\Hm}(k)\}_{k=t+1}^n$. By applying Fano's inequality, we then also upper-bound the achievable rate of Rx-2 as \begin{align} &nR_2 \\ &\le I(\Mc_2;\{y(t)\}_{t=1}^{n},\{z(t)\}_{t=1}^{n}|\Tc) + n \epsilon_n\\ &\le I(\Mc_2;\{y(t)\}_{t=1}^{n},\{z(t)\}_{t=1}^{n},\Mc_1|\Tc) + n \epsilon_n \\ &= I(\Mc_2;\{y(t)\}_{t=1}^{n},\{z(t)\}_{t=1}^{n}|\Mc_1,\Tc) + n \epsilon_n \label{eq:R-2-chain-rule}\\ &= I(\Mc_2;\{y'(t)\}_{t=1}^{n},\{z'(t)\}_{t=1}^{n}|\Tc) + n \epsilon_n \label{eq:R2-remove-M1}\\ &= \sum_{t=1}^n I(\Mc_2;y'(t),z'(t)|\Tc,\{ y'(i) \}_{i=1}^{t-1},\{ z'(i) \}_{i=1}^{t-1}) + n \epsilon_n \nonumber \\ &\le \sum_{t=1}^n I(\xv_2(t);y'(t),z'(t)|\Tc,\{ y'(i) \}_{i=1}^{t-1},\{ z'(i) \}_{i=1}^{t-1}) + n \epsilon_n \label{eq:R2-markov-chain}\\ &= \sum_{t=1}^n \left(h(y'(t),z'(t)|\Tc,\{ y'(i) \}_{i=1}^{t-1},\{ z'(i) \}_{i=1}^{t-1}) \right. \nonumber \\ &~~~~\left.
-h(y'(t),z'(t)|\xv_2(t),\Tc,\{ y'(i) \}_{i=1}^{t-1},\{ z'(i) \}_{i=1}^{t-1} ) \right) + n \epsilon_n \label{eq:R2-noise-independence}\\ &\le \sum_{t=1}^n h(y'(t),z'(t)|\Tc,\{ y'(i) \}_{i=1}^{t-1},\{ z'(i) \}_{i=1}^{t-1}) + n \epsilon_n \label{eq:last-eq-2}\\ &=\sum_{t=1}^n h(y'(t),z'(t)|\Uc(t),\Hm(t)) + n \epsilon_n \label{eq:R2-bound} \end{align} where (\ref{eq:R-2-chain-rule}) is obtained because of the chain rule of mutual information and the independence between $\Mc_1$ and $\Mc_2$; (\ref{eq:R2-remove-M1}) is due to the facts that (a) the transmitted signals $\{\xv_1(t)\}_{t=1}^n$ are determined given the message $\Mc_1$, the channel matrices and the encoding functions, and (b) $\Mc_2$ and $\{y'(t),z'(t)\}$ are independent of $\Mc_1$; (\ref{eq:R2-markov-chain}) is obtained by the Markov chain $\Mc_2 \to \xv_2(t) \to \{y'(t),z'(t)\}$ and the data processing inequality; (\ref{eq:last-eq-2}) is because (a) translation does not change differential entropy, (b) the Gaussian noise terms are independent from instant to instant, and are also independent of the channel matrices and the transmitted signals, and (c) the differential entropy of Gaussian noise is nonnegative; and the last equality is obtained due to the independence of $\{y'(t),z'(t)\}$ from $\{\Hm(k)\}_{k=t+1}^n$ and $\{\hat{\Hm}(k)\}_{k=t+1}^n$. According to the Markov chain $\{\xv_2(t)\}_{t=1}^n \to \left(\{y(t)\}_{t=1}^n, \{z(t)\}_{t=1}^n\right) \to \{y(t)\}_{t=1}^n$, we upper-bound the weighted sum rate as \begin{align} &n(2R_1+R_2) \\ &\le 2n \log P + \sum_{t=1}^n \left(h(y'(t),z'(t)|\Uc(t),\Hm(t)) - 2 h(y'(t)|\Uc(t),\Hm(t))\right) + n \cdot O(1) + n \epsilon_n \label{eq:R1-2R2}. \end{align} Before proceeding further, we introduce the following lemma stated in~\cite{Yang:2012}. \begin{lemma} [\cite{Yang:2012}] For an $m \times 1$ random vector $\hv = \hat{\hv}+\tilde{\hv}$, where $\tilde{\hv} \sim \CN(0,\sigma^2 \Id_m)$ is independent of $\hat{\hv}$, given any $\Km \succeq 0$ with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_m$, we have the following upper and lower bounds: \begin{align} \log (1+e^{-\gamma} \sigma^2 \lambda_1) + O(1) &\le \E_{\tilde{\hv}} \log (1+\hv^\H \Km \hv) \le \log (1+\norm{\hat{\hv}}^2 \lambda_1) + O(1). \end{align} The difference of the upper and lower bounds can be further bounded by \begin{align} &\log (1+\norm{\hat{\hv}}^2 \lambda_1) - \log (1+e^{-\gamma} \sigma^2 \lambda_1) \le - \log (\sigma^2) + O(1) \end{align} where $\gamma$ is Euler's constant.
\end{lemma} With the definitions \begin{align} \Sm(t) &\eqdef \Bmatrix{\hv_{12}^\H(t) \\ \hv_{22}^\H(t) }\\ \hat{\Sm}(t) &\eqdef \Bmatrix{\hat{\hv}_{12}^\H(t) \\ \hat{\hv}_{22}^\H(t) }\\ \wv(t) &\eqdef [e(t) \ b(t) ]^\T\\ \Km(t) &\eqdef \E\{\xv_2(t)\xv_2^{\H}(t)|\Uc(t)\} \end{align} we further upper-bound the weighted difference of the two conditional differential entropies derived above, i.e., {\small \begin{align} &h(y'(t),z'(t)|\Uc(t),\Hm(t)) - 2 h(y'(t)|\Uc(t),\Hm(t))\\ &=h(y'(t),z'(t)|\Uc(t),\Sm(t)) - 2 h(y'(t)|\Uc(t),\Sm(t)) \label{eq:gaussian-input-1}\\ &\le \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}} \max_{\substack{p(\Uc(t)),\\p(\xv_2(t)|\Uc(t)) \\ \Km(t) \preceq \Cm}} (h(y'(t),z'(t)|\Uc(t),\Sm(t)) - 2 h(y'(t)|\Uc(t),\Sm(t)))\\ &= \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}} \max_{\substack{p(\Uc(t))\\ \Km_u(t) \preceq \Cm}} ( h( \Sm(t) \uv(t)+\wv(t) |\Uc(t),\Sm(t)) - 2 h( \hv_{12}^\H(t) \uv(t)+e(t) |\Uc(t),\Sm(t)) ) \label{eq:gaussian-input-2}\\ &= \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}}\max_{\substack{p(\hat{\Sm}(t))\\ \Km_u(t) \preceq \Cm}} ( h( \Sm(t) \uv(t)+\wv(t) |\Sm(t),\hat{\Sm}(t)) - 2 h( \hv_{12}^\H(t) \uv(t)+e(t) |\Sm(t),\hat{\Sm}(t)) ) \label{eq:gaussian-input-3}\\ &= \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}} \max_{\substack{p(\hat{\Sm}(t))\\ \Km_u(t) \preceq \Cm}} \E_{\Sm(t),\hat{\Sm}(t)} (\log \det (\mathbf I + \Sm(t) \Km_u(t) \Sm^{\H}(t) ) - 2 \log(1+\hv_{12}^\H(t) \Km_u(t) \hv_{12}(t))) \label{eq:expectation-bound1}\\ &\le \E_{\hat{\Sm}(t)} \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}}\max_{\substack{p(\hat{\Sm}(t))\\ \Km_u(t) \preceq \Cm}} \E_{\Sm(t)|\hat{\Sm}(t)} (\log \det (\mathbf I + \Sm(t) \Km_u(t) \Sm^{\H}(t) ) - 2 \log(1+\hv_{12}^\H(t) \Km_u(t) \hv_{12}(t))) \label{eq:expectation-bound3}\\ &\le \E_{\hat{\Sm}(t)} \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}}\max_{\substack{p(\hat{\Sm}(t))\\ \Km_u(t) \preceq \Cm}} \E_{\Sm(t)|\hat{\Sm}(t)} (\log (1+\hv_{22}^\H(t) \Km_u(t) \hv_{22}(t)) - \log(1+\hv_{12}^\H(t) \Km_u(t) \hv_{12}(t))) \label{eq:expectation-bound4}\\ &\le \E_{\hat{\Sm}(t)} \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}}\max_{\substack{p(\hat{\Sm}(t))\\ \Km_u(t) \preceq \Cm}} \left(\log (1+\Norm{\hat{\hv}_{22}(t)} \lambda_1(\Km_u(t))) - \log \left(1+ e^{-\gamma} \sigma^2 \lambda_1(\Km_u(t))\right)\right) + O(1)\\ & \le \alpha \log P + O(1) \label{eq:diff-bound} \end{align} } where in (\ref{eq:gaussian-input-1}) $\Hm(t)$ is replaced by $\Sm(t)$ because $\{y'(t),z'(t)\}$ depends on $\Hm(t)$ only through $\Sm(t)$; (\ref{eq:gaussian-input-2}) is obtained because a Gaussian distributed vector $\uv(t)$ maximizes the weighted difference of the two differential entropies over all conditional distributions of $\xv_2(t)$ with the same covariance matrix constraint, where $\Km_u(t) \defeq \E\{\uv(t)\uv^{\H}(t)\} = \max_{p(\Uc(t))} \Km(t) $~\cite{Liu:2007}; (\ref{eq:gaussian-input-3}) is because $\uv(t)$, $\Sm(t)$ and $\wv(t)$ are independent of $\Uc(t)$ except for $\hat{\Sm}(t)$; (\ref{eq:expectation-bound1}) is obtained because $\uv(t)$ is Gaussian distributed and independent of $\{\Hm(t)\}_{t=1}^{n}$, $\{\hat{\Hm}(t)\}_{t=1}^{n}$ as well as the noise terms; (\ref{eq:expectation-bound3}) follows from the fact that moving the expectation outside the maximization increases the value; (\ref{eq:expectation-bound4}) follows from the inequality $\det(\Id+\Am) \le \prod_{i=1}^m \left(1+a_{ii}\right)$, where $\Am$ is an $m \times m$ positive semidefinite matrix with entries $a_{ij}$; the last two inequalities follow from Lemma~1 and the
quality of current CSIT ($\sigma^2 \sim P^{-\alpha}$). Accordingly, we obtain an upper bound on $2R_1+R_2$ from (\ref{eq:R1-2R2}) and (\ref{eq:diff-bound}), i.e., \begin{align} &n(2R_1+R_2) \le n (2+\alpha) \log P + n \cdot O(1) + n \epsilon_n \end{align} as $n\to \infty$, from which (\ref{eq:dof-bound-3}) is obtained according to the definition of DoF. By exchanging the roles of Tx-1/Rx-1 and Tx-2/Rx-2, another inequality (\ref{eq:dof-bound-4}) can be similarly obtained by assuming Rx-1 has instantaneous knowledge of $z(t)$, where the weighted rate is bounded as \begin{align} &n(R_1+2R_2) \le n (2+\alpha) \log P + n \cdot O(1) + n \epsilon_n. \end{align} Together with the first two bounds~(\ref{eq:dof-bound-1}) and (\ref{eq:dof-bound-2}), which are obtained from the constraint of the antenna configuration, the DoF region is completely characterized. \section{Achievability} With perfect delayed CSIT, the authors in~\cite{Vaze:2011} and~\cite{Ghasemi:2011} characterized the DoF region of the two-user MIMO interference channel, bridging between the case with no CSIT~\cite{Huang:2012,Zhu:2012,Vaze:2009} and that with perfect instantaneous CSIT~\cite{Jafar:2007}. In particular, for the two-user MISO case, the DoF pair $(\frac{2}{3},\frac{2}{3})$ is achievable with delayed CSIT, strictly larger than $(\frac{1}{2},\frac{1}{2})$ achieved with no CSIT and dominated by $(1,1)$ with perfect CSIT. The technique exploits interference alignment across both the time and space domains by utilizing the delayed CSIT, and is referred to as MAT alignment~\cite{Maddah-Ali:10,Ghasemi:2011,Vaze:2011}. We first briefly review its application in the interference channel. \subsection{MAT in the Interference Channel} The MAT alignment in the interference channel is an extension from the broadcast channel, taking into account the distributed and non-cooperative nature of the transmitters~\cite{Ghasemi:2011}. The two-phase protocol, which consumes three time slots, is described as follows: \subsubsection*{Phase-I} In this phase, each Tx transmits two independent encoded symbols to its intended receiver without precoding during a single time slot, i.e., \begin{subequations} \begin{align} \xv_1(1) &= \uv\\ \xv_2(1) &= \vv \end{align} \end{subequations} and the received signals at both receivers are \begin{subequations} \begin{align} y(1) &= \hv_{11}^\H(1) \uv + \underbrace{\hv_{12}^\H(1) \vv}_{\eta_1} + e(1) \\ z(1) &= \underbrace{\hv_{21}^\H(1) \uv}_{\eta_2} + \hv_{22}^\H(1) \vv + b(1), \end{align} \end{subequations} where $\eta_1$ and $\eta_2$ are interference terms overheard at Rx-1 and Rx-2, respectively. \subsubsection*{Phase-II} At the end of phase-I, the delayed CSIT $\{\hv_{21}(1)\}$ is available at the Tx-1, while $\{\hv_{12}(1)\}$ is accessible at the Tx-2. Together with the transmitted symbols, the overheard interference terms are reconstructible at both Txs.
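To make this reconstruction concrete, the following minimal numpy sketch (the two-antenna dimension, random symbols, and the omission of noise are illustrative assumptions, not part of the protocol) replays phase-I and checks that, once $\eta_1$ and $\eta_2$ are made available to Rx-1, the symbols $\uv$ are recoverable:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
cplx = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)

M = 2                                   # transmit antennas (illustrative)
u, v = cplx(M, 1), cplx(M, 1)           # symbols of Tx-1 and Tx-2
h11, h12, h21, h22 = (cplx(M, 1) for _ in range(4))

y1 = h11.conj().T @ u + h12.conj().T @ v  # phase-I signal at Rx-1 (no noise)
eta1 = h12.conj().T @ v                   # interference overheard at Rx-1
eta2 = h21.conj().T @ u                   # interference overheard at Rx-2

# With delayed CSIT, Tx-1 rebuilds eta2 from h21 and its own u (Tx-2: eta1).
# Once eta1 and eta2 reach Rx-1 in phase-II, u follows from two equations:
A = np.vstack([h11.conj().T, h21.conj().T])  # 2x2, full rank almost surely
u_hat = np.linalg.solve(A, np.vstack([y1 - eta1, eta2]))
assert np.allclose(u_hat, u)
\end{verbatim}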
By retransmitting the overheard interference terms $\eta_2=\hv_{21}^\H(1)\uv$ at the Tx-1 and $\eta_1=\hv_{12}^\H(1)\vv$ at the Tx-2 with time division, i.e., \begin{subequations} \begin{align} \xv_1(2) &= \Bmatrix{\eta_2 \\ 0} \\ \xv_2(2) &= \mathbf 0 \end{align} \end{subequations} and \begin{subequations} \begin{align} \xv_1(3)&=\mathbf 0 \\ \xv_2(3)&=\Bmatrix{\eta_1\\0} \end{align} \end{subequations} where two entire time slots are consumed, we cancel the interference terms $\eta_1$ and $\eta_2$ at Rx-1 and Rx-2, and importantly provide another linear combination of $\uv$ (from $\eta_2$) and $\vv$ (from $\eta_1$) to Rx-1 and Rx-2, respectively. By the end of phase-II, both receivers are able to recover their own symbols with high probability. The key idea behind this scheme is interference repetition and alignment in both the space and time domains. At each receiver, the mutual interference aligns in one dimension, while the desired signal spans a two-dimensional space. This enables each receiver to retrieve the desired signal from a three-dimensional space. \subsection{Integrating the Imperfect Current CSIT} The MAT alignment takes into account the completely outdated CSIT, regardless of the correlation between current and previous channel states. As a matter of fact, such an assumption on the delayed CSIT is over-pessimistic, since the current CSI can be predicted from the past states if the underlying channel exhibits some temporal correlation. Recent results demonstrate that the DoF region of the broadcast channel can be enlarged by using estimated current CSIT, even if it is imperfect~\cite{Kobayashi:2012,Yang:2012}. In the following, two schemes are proposed, demonstrating that a larger DoF region can be achieved by utilizing the estimated current CSIT extracted from the temporal correlation model. Instead of forwarding the interference terms in an analog fashion~\cite{Ghasemi:2011,Vaze:2011}, we first quantize the interference and then retransmit the quantized version. By utilizing the imperfect current CSIT for precoding, the interference terms are efficiently compressed with quantization. In the following two schemes, we demonstrate that the vertices $\left(\frac{2+\alpha}{3},\frac{2+\alpha}{3}\right)$ and $(1,\alpha)$ are all achievable. Note that, with a slight abuse of notation, we simply use $\hat{\hv}_{ji}(t)$ for the range space and $\hat{\hv}_{ji}^{\perp}(t)$ for the null space of $\hat{\hv}_{ji}^{\H}(t)$. The precoder design to improve the achievable rate is out of the scope of this paper. \subsubsection{\textbf{Achievability of $\left(\frac{2+\alpha}{3},\frac{2+\alpha}{3}\right)$}} Inspired by the enhanced scheme for the two-user MISO broadcast channel~\cite{Yang:2012}, a 3-time-slotted protocol, which achieves the vertex $\left(\frac{2+\alpha}{3},\frac{2+\alpha}{3}\right)$ of the DoF region for the two-user MISO interference channel, is detailed as follows. \subsubsection*{Slot-1} In the first time slot, the symbol vectors $\uv(1)$ and $\vv(1)$ are respectively sent from the two transmitters with precoding, each intended for its corresponding receiver: \begin{subequations} \begin{align} \xv_1(1) &= [\hat{\hv}_{21}(1) \ \hat{\hv}_{21}^{\perp}(1)] \uv(1)\\ \xv_2(1) &= [\hat{\hv}_{12}(1) \ \hat{\hv}_{12}^{\perp}(1)] \vv(1) \end{align} \end{subequations} where $\uv(1)=[u_1(1) \ u_2(1)]^{\T}$, $\vv(1)=[v_1(1) \ v_2(1)]^{\T}$ satisfy $\E(\Norm{\uv(1)})=\E(\Norm{\vv(1)}) \le P$.
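The precoders $[\hat{\hv}_{ji}(1) \ \hat{\hv}_{ji}^{\perp}(1)]$ only require a unit vector along the estimate and one orthogonal to it. A minimal sketch for the two-antenna case follows; the helper name and test values are ours, not from the scheme:

\begin{verbatim}
import numpy as np

def zf_directions(h_hat):
    """Unit vector along the 2x1 estimate h_hat, and a unit vector
    orthogonal to it (the one-dimensional null space of h_hat^H)."""
    along = h_hat / np.linalg.norm(h_hat)
    perp = np.array([[-np.conj(h_hat[1, 0])], [np.conj(h_hat[0, 0])]])
    return along, perp / np.linalg.norm(perp)

h_hat = np.array([[1.0 + 1.0j], [0.5 - 0.2j]])
along, perp = zf_directions(h_hat)
assert abs(h_hat.conj().T @ perp) < 1e-12   # h_hat^H perp == 0
\end{verbatim}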
The received signals at both receivers are then given as: \begin{subequations} \begin{align} y(1) &= \hv_{11}^\H(1) \xv_1(1) + {\eta}_1 + e(1) \\ z(1) &= \hv_{22}^\H(1) \xv_2(1) + {\eta}_2 + b(1), \end{align} \end{subequations} where ${\eta}_1$ and ${\eta}_2$ are the interference terms overheard at Rx-1 and Rx-2 respectively, i.e., \begin{subequations} \begin{align} {\eta}_1 &= {\hv}_{12}^{\H}(1) \hat{\hv}_{12}(1) v_1(1)+{\hv}_{12}^{\H}(1) \hat{\hv}_{12}^{\perp}(1) v_2(1)\\ {\eta}_2 &= {\hv}_{21}^{\H}(1) \hat{\hv}_{21}(1) u_1(1)+{\hv}_{21}^{\H}(1) \hat{\hv}_{21}^{\perp}(1) u_2(1). \end{align} \end{subequations} According to (\ref{eq:P-to-alpha}), i.e., $\E (\Abs{{\hv}_{ji}^{\H}(1) \hat{\hv}_{ji}^{\perp}(1)}) \sim P^{-\alpha}$, we can make $\E(|{\eta}_1|^2)=\E(|{\eta}_2|^2) \sim P^{1-\alpha}$ by allocating $\E(|u_1(1)|^2)=\E(|v_1(1)|^2) = P^{1-\alpha}$ whereas $\E(|u_2(1)|^2)=\E(|v_2(1)|^2) = P-P^{1-\alpha} \sim P$. At the end of slot-1, Tx-1 can reconstruct $\eta_2 ={\hv}_{21}^\H(1)\xv_1(1)$ while Tx-2 can reconstruct $\eta_1 = {\hv}_{12}^{\H}(1) \xv_2(1)$. Instead of forwarding the interferences in an analog fashion, we first quantize the interference term $\eta_i$ into $\hat{\eta}_i$ with $(1-\alpha) \log P$ bits each, then encode the index of $\hat{\eta}_i$ into a codeword $c_i$ using a Gaussian channel codebook, and forward $c_i$ as a common message to both receivers in the ensuing two time slots. To ease our presentation, the process of the encoding and decoding of $c_i$ is omitted hereafter, making it look as if the codeword $\hat{\eta}_i$ itself is conveyed to the receivers\footnote{The simplification does not affect our results as long as we consider the DoF region. For the rate region, the general scheme and a more rigorous proof can be straightforwardly extended from~\cite{Yang:2012}.}. The source codebook $\Xc_1$ (resp.~$\Xc_2$) is generated for ${\eta}_2$ (resp.~${\eta}_1$) and maintained at the Tx-1 (resp.~Tx-2). The quantized codeword $\hat{\eta}_i$ satisfies \begin{align} {\eta}_i=\hat{\eta}_i+\Delta_i \end{align} where $\Delta_i$ is the quantization error, with distortion $\E(|\Delta_i|^2) \sim \sigma^2_{{\eta}_i} D$, independent of $\hat{\eta}_i$. According to rate distortion theory~\cite{Cover:2006}, we let the normalized distortion $D$ decay as $P^{-(1-\alpha)}$ (in turn $\E(|\Delta_i|^2) \sim P^{0}$) so that each receiver can decode it successfully and the quantization error is drowned in the noise. \subsubsection*{Slot-2} During the second time slot, the index corresponding to $\hat{\eta}_2$ is encoded into $c_2$ and sent from Tx-1 as a common message together with a new symbol $u(2)$ with ZF precoding, while a new symbol $v(2)$ intended for Rx-2 is instantaneously sent from Tx-2 with ZF precoding as well. By omitting the encoding and decoding process of $c_2$, the equivalent transmitted signals can be written as \begin{subequations} \begin{align} \xv_1(2) &= \Bmatrix{P^{\alpha/2} \hat{\eta}_2 \\ 0} + \hat{\hv}_{21}^{\perp}(2) u(2)\\ \xv_2(2) &= \hat{\hv}_{12}^{\perp}(2) v(2) \end{align} \end{subequations} where the codeword $\hat{\eta}_2$ is power scaled by $P^{\alpha/2}$ to ensure that it can be recovered from the noisy observations. To avoid interference from the other transmitter, we assume the new symbols $u(2)$ and $v(2)$ satisfy the power constraint $\E (|u(2)|^2)=\E (|v(2)|^2) \le P^{\alpha}$.
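Before writing the received signals, a quick numeric check of the quantization bookkeeping may be useful; the values of $P$ and $\alpha$ are illustrative, and base-2 logarithms are used for concreteness:

\begin{verbatim}
import math

P, alpha = 1e4, 0.5             # illustrative SNR and current-CSIT quality
bits = (1 - alpha) * math.log2(P)   # quantizer rate per interference term
D = P ** (-(1 - alpha))             # normalized distortion ~ 2**(-bits)
var_eta = P ** (1 - alpha)          # interference power after precoding
var_delta = var_eta * D             # quantization-error power
print(bits, var_delta)              # var_delta ~ P**0: at the noise level
\end{verbatim}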
The received signals at both receivers are given as: \begin{subequations} \begin{align} y(2) &= \underbrace{h_{11,1}^{*}(2) {P^{\alpha/2} \hat{\eta}_2 }}_{t_{11}} + \underbrace{\hv_{11}^\H(2) \hat{\hv}_{21}^{\perp}(2) u(2)}_{t_{12}} + \underbrace{\hv_{12}^\H(2) \hat{\hv}_{12}^{\perp}(2) v(2)}_{t_{13}} + e(2) \\ z(2) &= {h_{21,1}^{*}(2) {P^{\alpha/2} \hat{\eta}_2 }}+ {\hv_{22}^\H(2) \hat{\hv}_{12}^{\perp}(2) v(2)}+ {\hv_{21}^\H(2) \hat{\hv}_{21}^{\perp}(2) u(2)} + b(2). \end{align} \end{subequations} Note that in the received signal $y(2)$, $\E (|t_{11}|^2) \sim P$, $\E (|t_{12}|^2) \sim P^{\alpha}$, while $\E (|t_{13}|^2) \sim P^0$ is at the noise level. With distortion $D \sim P^{-(1-\alpha)}$, both receivers can retrieve $\hat{\eta}_2$ with high probability by treating $t_{12}$ and $t_{13}$ (and the corresponding terms in $z(2)$) as noise~\cite{Yang:2012}. By removing $\hat{\eta}_2$ from the received signals, $u(2)$ and $v(2)$ can be recovered with high probability as long as their power constraints are satisfied. \subsubsection*{Slot-3} The transmission in the third time slot is similar to that in slot-2, where the index corresponding to $\hat{\eta}_1$ chosen from $\Xc_2$ is encoded into $c_1$ and transmitted as a common message together with another new symbol $v(3)$ from Tx-2, while only one new symbol $u(3)$ intended for Rx-1 is sent from Tx-1. By omitting the encoding and decoding process of $c_1$, the equivalent transmitted signals are \begin{subequations} \begin{align} \xv_1(3) &= \hat{\hv}_{21}^{\perp}(3) u(3)\\ \xv_2(3) &= \Bmatrix{P^{\alpha/2} \hat{\eta}_1 \\ 0} + \hat{\hv}_{12}^{\perp}(3) v(3) \end{align} \end{subequations} where the new symbols $u(3)$ and $v(3)$ satisfy the power constraint $\E (|u(3)|^2)=\E (|v(3)|^2) \le P^{\alpha}$. The received signals at both receivers are given as \begin{subequations} \begin{align} y(3) &= {h_{12,1}^{*}(3) {P^{\alpha/2} \hat{\eta}_1 }} + {\hv_{11}^\H(3) \hat{\hv}_{21}^{\perp}(3) u(3)} + {\hv_{12}^\H(3) \hat{\hv}_{12}^{\perp}(3) v(3)} + e(3) \\ z(3) &= {h_{22,1}^{*}(3) {P^{\alpha/2} \hat{\eta}_1 }} + {\hv_{22}^\H(3) \hat{\hv}_{12}^{\perp}(3) v(3)} + {\hv_{21}^\H(3) \hat{\hv}_{21}^{\perp}(3) u(3)} + b(3). \end{align} \end{subequations} Similarly to slot-2, $\hat{\eta}_1$ is retrievable at both receivers by treating the other terms as noise, and $u(3)$ and $v(3)$ can also be recovered, respectively, by subtracting $\hat{\eta}_1$ from the received signals at both receivers. At the end of the third slot, $u(2)$, $u(3)$, $\hat{\eta}_1$ and $\hat{\eta}_2$ can be successfully recovered at the Rx-1. As was modeled in~\cite{Kobayashi:2012,Yang:2012}, an equivalent MIMO system can be formulated to recover the symbols $\uv(1)$: \begin{align} \Bmatrix{y(1)-\hat{\eta}_1 \\ \hat{\eta}_2} = \Bmatrix{\hv_{11}^\H(1) \\ \hv_{21}^\H(1)} \xv_1(1) + \Bmatrix{e(1)+\Delta_1 \\ -\Delta_2} \end{align} for the Rx-1, and similarly for the Rx-2. \begin{lemma} The vertex $\left(\frac{2+\alpha}{3},\frac{2+\alpha}{3}\right)$ of the DoF region is achievable by the above scheme. \end{lemma} \begin{proof} We outline the main idea of the proof here; please refer to the Appendix for details. At the Rx-1 for instance, we transform the original signal model into an equivalent $2 \times 2$ point-to-point MIMO system model for $\uv(1)$ (resp.~$\vv(1)$ at the Rx-2), together with two parallel SISO signal models respectively for $u(2)$ and $u(3)$ (resp.~$v(2)$ and $v(3)$ at the Rx-2).
For the MIMO model we obtain a DoF of $2-\alpha$, while each parallel SISO model yields $\alpha$ DoF, so that $\frac{2-\alpha+2 \alpha}{3} = \frac{2+\alpha}{3}$ DoF is achieved per user. \end{proof} \subsubsection{\textbf{Achievability of $(1,\alpha)$}} In the following, we extend the Han-Kobayashi scheme~\cite{Han:81} to achieve the vertex $(1,\alpha)$. The signal sent from the Tx-1 consists of two parts $u_c$ and $u_p$, where only $u_p$ is precoded by using the imperfect current CSIT. Simultaneously, one symbol $v_p$ intended for Rx-2 is sent from the Tx-2 with ZF precoding. The transmitted signals can be written as \begin{subequations} \begin{align} \xv_1 &= \Bmatrix{u_c\\0} + \hat{\hv}_{21}^{\perp} u_p \\ \xv_2 &= \hat{\hv}_{12}^{\perp} v_p \end{align} \end{subequations} where the transmitted symbols are assumed to satisfy the power constraints $\E(\Abs{u_c}) \le P$, $\E(\Abs{u_p}) = \E(\Abs{v_p}) \le P^{\alpha}$. Although the symbol $u_c$ is decodable by both receivers and hence referred to as a common message, it is only desired by Rx-1. On the other hand, we refer to $u_p$, $v_p$ as the private messages, which can only be seen and decoded by their corresponding receivers. At the receiver side, we have \begin{subequations} \begin{align} y &= h_{11,1}^{*} u_c + \underbrace{\hv_{11}^\H \hat{\hv}_{21}^{\perp} u_p}_{\eta_{11}} + \underbrace{{\hv}_{12}^{\H} \hat{\hv}_{12}^{\perp} v_p}_{{\eta}_{12}} + e \\ z &= h_{21,1}^{*} u_c + \underbrace{\hv_{22}^\H \hat{\hv}_{12}^{\perp} v_p}_{\eta_{22}} + \underbrace{{\hv}_{21}^{\H}\hat{\hv}_{21}^{\perp} u_p}_{{\eta}_{21}} + b, \end{align} \end{subequations} where the terms carrying the common message have power scaling as $P$, the terms carrying the private messages satisfy $\E(|{\eta}_{11}|^2)=\E(|{\eta}_{22}|^2) \sim P^{\alpha}$, and the interference terms with $\E(|{\eta}_{12}|^2)=\E(|{\eta}_{21}|^2) \sim P^{0}$ are at the noise level according to (\ref{eq:P-to-alpha}). By first treating ${\eta}_{i1}, {\eta}_{i2}$ as noise, Rx-$i$ can recover the common message $u_c$ with high probability. Then, the private messages $u_p$ and $v_p$ can be retrieved from the received signals after $u_c$ is subtracted at the Rx-1 and Rx-2, respectively. \begin{lemma} The vertices $(1,\alpha)$ and $(\alpha,1)$ of the DoF region are achievable. \end{lemma} \begin{proof} Define \begin{subequations} \begin{align} y' &\defeq \eta_{11} + \eta_{12} + e \\ z' &\defeq \eta_{22} + \eta_{21} + b. \end{align} \end{subequations} For both receivers, the achievable rate can be given by \begin{align} I(u_c,u_p;y|\Tc) &= I(u_c;y|\Tc)+I(u_p;y|\Tc,u_c)\\ &=I(u_c;y|\Tc)+I(u_p;y'|\Tc)\\ &= \E \log \left( 1+\frac{\Abs{h_{11,1}^{*} u_c}}{\Abs{\eta_{11}}+\Abs{{\eta}_{12}}+\Abs{e}} \right) + \E \log \left( 1+\frac{\Abs{\eta_{11}}}{\Abs{{\eta}_{12}}+\Abs{e}} \right)\\ &= (1-\alpha) \log P + \alpha \log P + O(1)\\ &= \log P + O(1) \end{align} for the Rx-1, and \begin{align} I(v_p;z|\Tc) &= I(v_p;z|\Tc,u_c) + I(v_p;u_c|\Tc) - I(v_p;u_c|\Tc,z)\\ &= I(v_p;z|\Tc,u_c) = I(v_p;z'|\Tc) \label{eq:scheme-2-pr}\\ &= \E \log \left( 1+\frac{\Abs{\eta_{22}}}{\Abs{{\eta}_{21}}+\Abs{b}} \right)\\ &= \alpha \log P + O(1) \end{align} for the Rx-2, where (\ref{eq:scheme-2-pr}) holds because $u_c$ and $v_p$ are independent. The DoF for both receivers can then be simply obtained by definition. The other vertex $(\alpha,1)$ can be achieved by swapping the roles of Tx-1 and Tx-2. This completes the proof.
\end{proof} Note that the vertices $(1,0)$ and $(0,1)$ are achievable by letting one pair communicate while keeping the other one silent. In conclusion, all vertices of the DoF region for the two-user MISO interference channel are achievable, and in turn the entire region can be achieved by time sharing. \section{Extension to MIMO Case} Here, we extend the aforementioned MISO case to a class of MIMO settings with antenna configuration $(M,M,N,N)$, i.e., with $M$ antennas at each transmitter and $N$ antennas at each receiver, satisfying $M \ge 2N$. This includes a generalized MISO setting with more than 2 antennas at each transmitter. The discrete-time baseband signal model is given by \begin{subequations} \begin{align} \yv(t) &= \Hm_{11}(t) \xv_1(t) + \Hm_{12}(t) \xv_2(t) + \ev(t) \\ \zv(t) &= \Hm_{21}(t) \xv_1(t) + \Hm_{22}(t) \xv_2(t) + \bv(t), \end{align} \end{subequations} for any time instant $t$, where $\Hm_{ji}(t) \in \CC^{N\times M}$ is the channel matrix from Tx-$i$ to Rx-$j$; $\ev(t), \bv(t) \sim \CN(0,\Id_N)$ are normalized AWGN vectors at the respective receivers; and the coded input signal $\xv_i(t) \in \CC^{M\times 1}$ is subject to the power constraint $\E( \norm{\xv_i(t)}^2 ) \le P$, $\forall\,t$. In analogy to the MISO case, we have the following optimal DoF region of the two-user time-correlated $(M,M,N,N)$ MIMO interference channel. \begin{theorem} In the two-user $(M,M,N,N)$ MIMO interference channel ($M \ge 2N$) with perfect delayed CSIT and imperfect current CSIT, the optimal DoF region can be characterized by \begin{subequations} \begin{align} d_1 &\le N \\ d_2 &\le N \\ d_1+2d_2 &\le N(2+\alpha)\\ 2d_1+d_2 &\le N(2+\alpha). \end{align} \end{subequations} \end{theorem} \textbf{Remark}: The DoF region does not depend on the number of transmit antennas as long as $M \ge 2N$. For the $M \times 1$ MISO case ($M \ge 2$), the DoF region is identical to that when $M=2$, coinciding with the region for the two-user MISO broadcast channel. Following the same strategy as in the MISO case, we first provide the outer bound and then show that the region confined by the outer bound is achievable. \subsection{Outer Bound} The outer bound can be simply extended from the MISO case. To avoid redundancy, we outline the main differences but omit the similar parts. By similarly defining $\yv'(t)$, $\zv'(t)$, $\Uc(t)$, i.e., \begin{align} \yv'(t) &\defeq \Hm_{12}(t) \xv_2(t) + \ev(t)\\ \zv'(t) &\defeq \Hm_{22}(t) \xv_2(t) + \bv(t)\\ \Uc(t) &\defeq \left\{\{ \yv'(i) \}_{i=1}^{t-1},\{ \zv'(i) \}_{i=1}^{t-1},\{\Hm(i)\}_{i=1}^{t-1} ,\{\hat{\Hm}(i)\}_{i=1}^{t} \right\}, \end{align} we have \begin{align} nR_1 &\le n N \log P - \sum_{t=1}^n h( \yv'(t)|\Uc(t),\Hm(t)) + n \cdot O(1) + n \epsilon_n \\ nR_2 &\le \sum_{t=1}^n h(\yv'(t),\zv'(t)|\Uc(t),\Hm(t)) + n \epsilon_n.
\end{align} Define \begin{align} \Sm(t) \defeq \Bmatrix{\Hm_{12}(t) \\ \Hm_{22}(t)} \quad \quad \hat{\Sm}(t) \defeq \Bmatrix{\hat{\Hm}_{12}(t) \\ \hat{\Hm}_{22}(t)} \end{align} with which we upper-bound the weighted difference of the two conditional differential entropies by {\small \begin{align} &h(\yv'(t),\zv'(t)|\Uc(t),\Hm(t)) - 2h( \yv'(t)|\Uc(t),\Hm(t))\\ & \le \E_{\hat{\Sm}(t)} \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}}\max_{\substack{p(\hat{\Sm}(t))\\ \Km_u(t) \preceq \Cm}} \E_{\Sm(t)|\hat{\Sm}(t)} (\log \det (\Id + \Sm(t) \Km_u(t) \Sm^{\H}(t) ) - 2 \log \det(\Id+\Hm_{12}(t) \Km_u(t) \Hm_{12}^\H(t))) \\ & \le \E_{\hat{\Sm}(t)} \max_{\substack{\Cm \succeq 0,\\ \trace(\Cm) \le P}}\max_{\substack{p(\hat{\Sm}(t))\\ \Km_u(t) \preceq \Cm}} \E_{\Sm(t)|\hat{\Sm}(t)} (\log \det (\Id + \Hm_{22}(t) \Km_u(t) \Hm_{22}^{\H}(t) ) - \log \det (\Id+\Hm_{12}(t) \Km_u(t) \Hm_{12}^\H(t))) \\ & \le N \alpha \log P + O(1) \end{align} } where $\Km_u(t)$ has the same definition as in the MISO case and the last inequality follows from the lemma below. \begin{lemma} For an $N \times M$ ($M\ge N$) random matrix $\Hm = \hat{\Hm}+\tilde{\Hm}$, where $\tilde{\Hm}$ is independent of $\hat{\Hm}$ and has entries $\tilde{h}_{ij} \sim \CN(0,\sigma^2)$, given any $\Km \succeq 0$ with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_{M}$, we have the following upper and lower bounds: \begin{align} \E_{\tilde{\Hm}} \log \det (\Id+\Hm \Km \Hm^\H) &\le \sum_{i=1}^{N}\log (1+\norm{\hat{\Hm}}^2 \lambda_i) + O(1)\\ \E_{\tilde{\Hm}} \log \det (\Id+\Hm \Km \Hm^\H) &\ge \sum_{i=1}^{N} \log (1+ \lambda_i \sigma^2 e^{\zeta}) + O(1). \end{align} The difference of the upper and lower bounds can be further bounded, for each $i$, by \begin{align} &\log (1+\norm{\hat{\Hm}}^2 \lambda_i) - \log (1+ \lambda_i \sigma^2 e^{\zeta}) \le - \log (\sigma^2) + O(1) \end{align} where $\zeta \defeq \frac{1}{N}\sum_{i=1}^N \psi(N-i+1)$ and $\psi(x)$ is the digamma function, which for integer $x$ is given by~\cite{Goodman:1963,Oyman:03} \begin{align} \psi(x) = - \gamma + \sum_{p=1}^{x-1} \frac{1}{p} \le \ln x \end{align} where $\gamma$ is Euler's constant. \end{lemma} \begin{proof} Please refer to the Appendix for details. \end{proof} Hence, we can outer-bound the weighted sum rate as \begin{align} n (R_1 + 2R_2) &\le nN(2+\alpha) \log P + n \cdot O(1) + n \epsilon_n \end{align} and similarly obtain another outer bound by exchanging the roles of Rx-1 and Rx-2, i.e., \begin{align} n (2R_1 + R_2) &\le nN(2+\alpha) \log P + n \cdot O(1) + n \epsilon_n \end{align} and therefore the outer bound of the DoF region is obtained by definition. \subsection{Achievability} For the achievability, the vertices $(N,N\alpha)$, $(N\alpha,N)$, and $\left(\frac{N(2 + \alpha)}{3},\frac{N (2 + \alpha)}{3}\right)$ are all achievable, and the corresponding schemes can be simply extended from the MISO case. Here, we only detail the scheme for the vertex $\left(\frac{N(2 + \alpha)}{3},\frac{N (2 + \alpha)}{3}\right)$; for $(N,N\alpha)$ and $(N\alpha,N)$, the extension is similar and straightforward.
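As a quick sanity check of the theorem, the sketch below verifies that the claimed vertices satisfy the four inequalities, with the weighted-sum bounds met with equality at the corner points; $N$, $\alpha$ and the helper name are illustrative:

\begin{verbatim}
def in_region(d1, d2, N, alpha):
    """The four inequalities of the (M,M,N,N) DoF region, M >= 2N."""
    return (d1 <= N and d2 <= N
            and d1 + 2 * d2 <= N * (2 + alpha)
            and 2 * d1 + d2 <= N * (2 + alpha))

N, alpha = 2, 0.5
vertices = [(N, 0), (0, N), (N, N * alpha), (N * alpha, N),
            (N * (2 + alpha) / 3, N * (2 + alpha) / 3)]
assert all(in_region(d1, d2, N, alpha) for d1, d2 in vertices)
# e.g. (N, N*alpha) meets 2*d1 + d2 <= N*(2+alpha) with equality:
assert abs(2 * N + N * alpha - N * (2 + alpha)) < 1e-12
\end{verbatim}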
In the extended scheme, three time slots are consumed, where the transmitted signals are detailed as follows: \subsubsection*{Slot-1} The transmitted signals from both transmitters are given by \begin{subequations} \begin{align} \xv_1(1) &= \Bmatrix{\Qm_{21}(1) & \Qm_{21}^{\bot}(1) } \uv(1)\\ \xv_2(1) &= \Bmatrix{\Qm_{12}(1) & \Qm_{12}^{\bot}(1) } \vv(1) \end{align} \end{subequations} where $\uv(1) \in \CC^{2N \times 1}$, $\vv(1) \in \CC^{2N \times 1}$ are assumed to satisfy $\E(\Norm{\uv(1)})=\E(\Norm{\vv(1)})\le P$, and $\Qm_{ji}(t) \in \CC^{M \times N}$, $\Qm_{ji}^{\bot}(t) \in \CC^{M \times N}$ satisfy \begin{subequations} \begin{align} \Qm_{21}(t) \subseteq \Rc\{\hat{\Hm}_{21}^{\H}(t)\} \quad & \quad \Qm_{12}(t) \subseteq \Rc\{\hat{\Hm}_{12}^{\H}(t)\}\\ \Qm_{21}^{\bot}(t) \subseteq \Nc \{\hat{\Hm}_{21}(t)\} \quad & \quad \Qm_{12}^{\bot}(t) \subseteq \Nc \{\hat{\Hm}_{12}(t)\} \end{align} \end{subequations} where $\Rc\{\cdot\}$ and $\Nc\{\cdot\}$ represent the range and null spaces, respectively. Note that the range space has dimension $N$ whereas the null space has dimension $M-N$. Similarly to the MISO case, the estimation error satisfies $\E[\Norm{\Hm_{ji}(t)\Qm_{ji}^{\bot}(t)}] \sim P^{-\alpha}$. At both receivers, we have \begin{subequations} \begin{align} \yv(1) &= \Hm_{11}(1) \xv_1(1) + {\etav}_{1} + \ev(1) \\ \zv(1) &= \Hm_{22}(1) \xv_2(1) + {\etav}_{2} + \bv(1), \end{align} \end{subequations} where the interference vectors overheard at both receivers are \begin{subequations} \begin{align} {\etav}_1 &= \Hm_{12}(1) \xv_2(1) \in \CC^{N \times 1} \\ &=\Hm_{12}(1) \Qm_{12}(1) \vv_1(1) + \Hm_{12}(1) \Qm_{12}^{\bot}(1) \vv_2(1)\\ {\etav}_2 &= \Hm_{21}(1) \xv_1(1) \in \CC^{N \times 1}\\ &=\Hm_{21}(1) \Qm_{21}(1) \uv_1(1) + \Hm_{21}(1) \Qm_{21}^{\bot}(1) \uv_2(1) \end{align} \end{subequations} where $\uv_1(1)$, $\uv_2(1)$, $\vv_1(1)$ and $\vv_2(1)$ are all $N \times 1$ vectors. By balancing the allocated power among those vectors, i.e., \begin{align} \E(\norm{\uv_1(1)}^2) = P^{1-\alpha} \quad & \quad \E(\norm{\uv_2(1)}^2) = P-P^{1-\alpha}\\ \E(\norm{\vv_1(1)}^2) = P^{1-\alpha} \quad & \quad \E(\norm{\vv_2(1)}^2) = P-P^{1-\alpha} \end{align} we can make the total power of the interference vectors scale as $\E(\norm{{\etav}_1}^2) \sim P^{1-\alpha}$ and $\E(\norm{{\etav}_2}^2) \sim P^{1-\alpha}$. A set of source codebooks $\{\Xc_{1i},\Xc_{2i},i=1,\cdots,N\}$ with size $(1-\alpha) \log P$ bits each is generated to represent the quantized elements of the interference vectors ${\etav}_2$ and ${\etav}_1$ at the Tx-1 and Tx-2, respectively\footnote{Here, the quantization is performed on each element of the vector $\etav_i$ separately, regardless of the mutual correlation of its entries.}. The codewords representing the elements of ${{\etav}}_2$ and ${{\etav}}_1$ are chosen from $\{\Xc_{1i}\}$ and $\{\Xc_{2i}\}$ and concatenated as $\hat{\etav}_{2}$ and $\hat{\etav}_{1}$, respectively. As in the MISO case, the indices of $\hat{\etav}_{i}$ are encoded to $\cv_i$ using a Gaussian channel codebook and then forwarded as common messages to both receivers in the following two slots. For the sake of simplicity, we omit the channel encoding and decoding process of $\cv_i$, so that it looks as if $\hat{\etav}_{i}$ itself is conveyed. \subsubsection*{Slot-2} The objective of slot-2 is to convey the codeword vector $\hat{\etav}_{2}$, whose information is carried by the coded common message $\cv_2$, together with a new symbol vector from Tx-1, while only a new symbol vector is sent from Tx-2.
By omitting the encoding and decoding process of $\cv_2$, the equivalent transmitted signals are \begin{subequations} \begin{align} \xv_1(2) &= P^{\alpha/2} \Qm_{21}(2) \hat{\etav}_{2} + {\Qm}_{21}^{\perp}(2) \uv(2)\\ \xv_2(2) &= {\Qm}_{12}^{\perp}(2) \vv(2) \end{align} \end{subequations} where $\uv(2) \in \CC^{N \times 1}$ and $\vv(2) \in \CC^{N \times 1}$. We assume $\E(\norm{\uv(2)}^2) \le P^{\alpha}$ and $\E(\norm{\vv(2)}^2) \le P^{\alpha}$ to ensure they are recoverable. By treating $\uv(2)$ and $\vv(2)$ as noise, the $N \times 1$ vector $\hat{\etav}_{2}$ is retrievable with high probability, provided $N$ linearly independent equations are available at both receivers. After that, $\uv(2)$ and $\vv(2)$ are also recoverable from $N$ linear equations at the Rx-1 and Rx-2 by subtracting $\hat{\etav}_{2}$ from the received signals. \subsubsection*{Slot-3} The objective of slot-3 is the same as that of slot-2 but with the roles of Tx-1 and Tx-2 exchanged. The equivalent transmitted signals are given as \begin{subequations} \begin{align} \xv_1(3) &= {\Qm}_{21}^{\perp}(3) \uv(3)\\ \xv_2(3) &= P^{\alpha/2} \Qm_{12}(3) \hat{\etav}_{1} + {\Qm}_{12}^{\perp}(3) \vv(3) \end{align} \end{subequations} where $\uv(3) \in \CC^{N \times 1}$, $\vv(3) \in \CC^{N \times 1}$ are assumed to satisfy the power constraints $\E(\norm{\uv(3)}^2) \le P^{\alpha}$ and $\E(\norm{\vv(3)}^2) \le P^{\alpha}$. By first treating $\uv(3)$ and $\vv(3)$ as noise, the $N \times 1$ vector $\hat{\etav}_{1}$ can be recovered with high probability given $N$ linearly independent equations at both receivers. Similarly to slot-2, $\uv(3)$ and $\vv(3)$ can also be recovered from the received signals after subtracting $\hat{\etav}_{1}$. At the end of slot-3, the $N \times 1$ vectors $\hat{\etav}_{1}$ and $\hat{\etav}_{2}$ can all be recovered at both receivers, serving to cancel the overheard interference as well as to provide additional linearly independent equations for $\vv(1)$ and $\uv(1)$, respectively. With $2N$ linearly independent equations, the $2N \times 1$ vectors $\uv(1)$ and $\vv(1)$ are both recoverable with high probability at their respective receivers. The proof of the achievable DoF pair $\left(\frac{N(2 + \alpha)}{3},\frac{N (2 + \alpha)}{3}\right)$ is similar to that in the MISO case. Take the Tx-1/Rx-1 pair for example. The original channel model can be transformed into an equivalent $2N \times 2N$ point-to-point MIMO channel which conveys the symbol vector $\uv(1)$, yielding $N(2-\alpha)$ DoF, and two parallel $N \times N$ MIMO channels which carry $\uv(2)$ and $\uv(3)$ respectively, yielding $N\alpha$ DoF each. Hence, a total of $N(2+\alpha)$ DoF is achieved within three time slots, and in turn the DoF pair $\left(\frac{N(2 + \alpha)}{3},\frac{N (2 + \alpha)}{3}\right)$ is achievable by symmetry. The detailed proof can be derived analogously to the MISO case and is hence omitted here. \section{Conclusion} We characterized the DoF region of the two-user MISO and of a class of MIMO interference channels where the transmitters have access to both delayed CSI and an estimate of the current CSI. In particular, these results are suited to time-correlated fading channels for which a latency-prone feedback channel provides the transmitters with delayed channel samples, from which a prediction mechanism can be applied to obtain an imperfect estimate of the current CSI. Our DoF region covers a family of CSIT settings, coinciding with previously reported results for extreme situations such as pure delayed CSIT and pure current CSIT.
For intermediate regimes, the DoF-achieving scheme relies on forwarding to the users a suitably quantized version of the prior interference obtained under imperfect linear ZF precoding at the two transmitters.
\begin{document} \title{Learning dynamics explains human behavior in Prisoner's Dilemma on networks} \author{Giulio Cimini} \email{[email protected]} \affiliation{Grupo Interdisciplinar de Sistemas Complejos (GISC), Departamento de Matem\'{a}ticas, Universidad Carlos III de Madrid, 28911 Legan\'{e}s, Madrid, Spain} \author{Angel S\'{a}nchez} \affiliation{Grupo Interdisciplinar de Sistemas Complejos (GISC), Departamento de Matem\'{a}ticas, Universidad Carlos III de Madrid, 28911 Legan\'{e}s, Madrid, Spain} \affiliation{Instituto de Biocomputaci\'{o}n y F\'{i}sica de Sistemas Complejos (BIFI), Universidad de Zaragoza, 50018 Zaragoza, Spain} \begin{abstract} Cooperative behavior lies at the very basis of human societies, yet its evolutionary origin remains a key unsolved puzzle. Whereas reciprocity or conditional cooperation is one of the most prominent mechanisms proposed to explain the emergence of cooperation in social dilemmas, recent experimental findings on networked Prisoner's Dilemma games suggest that conditional cooperation also depends on the previous action of the player---namely on the `mood' in which the player currently is. Roughly, a majority of people behaves as conditional cooperators if they cooperated in the past, while they ignore the context and free-ride with high probability if they did not. However, the ultimate origin of this behavior represents a conundrum itself. Here we aim specifically at providing an evolutionary explanation of moody conditional cooperation. To this end, we perform an extensive analysis of different evolutionary dynamics for players' behavioral traits---ranging from standard processes used in game theory based on payoff comparison to others that include non-economic or social factors. Our results show that only a dynamic built upon reinforcement learning is able to give rise to evolutionarily stable moody conditional cooperation, and ultimately to reproduce the human behaviors observed in the experiments. \end{abstract} \keywords{evolutionary game theory, prisoner's dilemma, social networks, moody conditional cooperation, reinforcement learning} \maketitle Cooperation and defection are at the heart of every social dilemma \cite{Dawes1980}. While cooperative individuals contribute to the collective welfare at a personal cost, defectors choose not to. Due to the lower individual fitness of cooperators arising from that cost of contribution, selection pressure acts in favor of defectors, thus making the emergence of cooperation a difficult puzzle. Evolutionary game theory \cite{MaynardSmith1973} provides an appropriate theoretical framework to address the issue of cooperation among selfish and unrelated individuals. At the most elementary level, many social dilemmas can be formalized as two-person games where each player can either cooperate (C) or defect (D). The {\it Prisoner's Dilemma} game (PD) \cite{Axelrod1984} has been widely used to model a situation in which mutual cooperation leads to the best outcome in social terms, but defectors can benefit the most individually.
In mathematical terms, this is described by a payoff matrix (entries correspond to the row player's payoffs) $$\begin{array}{c|cc} & \mbox{C} & \mbox{D} \\ \hline \mbox{C} & R & S \\ \mbox{D} & T & P \end{array}$$ where mutual cooperation yields the reward $R$, mutual defection leads to punishment $P$, and the mixed choice gives the cooperator the sucker's payoff $S$ and the defector the temptation $T$. The essence of the dilemma is captured by $T>R>P>S$: both players prefer any outcome in which the opponent cooperates, but the best option for both is to defect. In particular, the temptation to cheat ($T>R$) and the fear of being cheated ($S<P$) can put cooperation at risk, and according to the principles of Darwinian selection, the extinction of cooperation is inevitable \cite{Hofbauer1998}. Despite the conclusion above, cooperation is indeed observed in biological and social systems alike \cite{MaynardSmith1995}. The evolutionary origin of such cooperation hence remains a key unsolved issue, particularly because the manner in which individuals adapt their behavior---which is usually referred to as evolutionary dynamics or strategy update---is unknown a priori. Traditionally, most of the theoretical studies in this field have built on update rules based on payoff comparison \cite{Hofbauer2003,Szabo2007,Roca2009a} \cite{foot0}. While such rules fit in the framework of biological evolution, where payoff is understood as fitness or reproductive success, they are also questionable, especially from an economic perspective, as it is often the case that individuals perceive the others' actions but not how much they benefit from them. Indeed, experimental observations \cite{Fischbacher2001,Grujic2010,Gracia2012b} (with some exceptions \cite{Traulsen2010}, but see also the reanalysis of those data in \cite{Grujic2013}) point out that human subjects playing PD or Public Goods games do not seem to take payoffs into consideration. Instead, they respond to the cooperation that they observe in a reciprocal manner, being more prone to contribute the more their partners do. Reciprocity \cite{Trivers1971} has been studied in 2-player games through the concept of reactive strategies \cite{Sigmund2010}, the most famous of which is {\it Tit-For-Tat} \cite{Axelrod1981} (given by playing what the opponent played in the previous round). Reactive strategies generalize this idea by considering that players choose their action with probabilities that depend on the opponent's previous action. A further development was to consider memory-one reactive strategies \cite{Sigmund2010}, in which the probabilities depend on the previous action of both the focal player and her opponent. In multiplayer games, conditional cooperation, {\it i.e.}, the dependence of the chosen strategy on the amount of cooperation received, had been reported in related experiments \cite{Fischbacher2001} and observed also for the spatial iterated PD \cite{Traulsen2010} (often along with a large percentage of free-riders). The analysis of the two largest-scale experiments to date with humans playing an iterated multiplayer PD game on a network \cite{Grujic2010,Gracia2012b} extended this idea by including the dependence on the focal player's previous action, giving rise to the so-called {\it moody conditional cooperation} (MCC).
The MCC strategy can be described as follows \cite{Grujic2012}: if in the previous round the player defected, she will cooperate with probability $p_D=q$ (approximately independently of the observed cooperation), whereas, if she cooperated, she will cooperate again with a probability $p_C(x)=p\,x+r$ (subject to the constraint $p_C(x)\leq 1$), where $x$ is the fraction of cooperative neighbors in the previous round. There is ample evidence supporting this aggregate behavior, as it has been observed in at least five independent experiments: the two already quoted \cite{Grujic2010,Gracia2012b}; another one on multiplayer PD \cite{Grujic2012b}; a lab-in-the-field experiment with people attending a fair in Barcelona, where participants in the age range 17-87 behaved consistently according to the MCC strategy \cite{Gutierrez-Roig2013}; and finally, in \cite{Traulsen2010}, as revealed by a recent meta-analysis of those experimental results \cite{Grujic2013}. On the other hand, it could be argued that MCC behavior arises from learning processes experienced by the players. In this respect, it is true that when a number of iterations of the PD is regarded as a single 'supergame', repetitions of such a supergame show changes in behavior \cite{DalBo2011}. This is in agreement with the observations in \cite{Grujic2010}, where two repetitions of the supergame were carried out with the same players (Experiments 1 and 2 in the reference), and it was found that the initial behavior was indeed different in both. However, analyses that exclude the first few rounds of those experiments show clear evidence for MCC behavior which, if anything, becomes even more marked in the second one. Similar analyses were carried out in all other experiments, precisely to check for the effects of learning, finding in all cases strong evidence in support of the MCC strategy, even in \cite{Grujic2012b}, where 100 iterations of the PD were played. Therefore, we are confident that the observation of MCC behavior is reproducible and correctly interpreted, and we believe it is a good framework to study the problem as we propose here. However, from the viewpoint of ultimate origins and evolutionary stability of this kind of behavior, conditional cooperation and its moody version are a puzzle themselves. For instance, theoretical results based on replicator dynamics show that the coexistence of moody conditional cooperators with free-riders is not possible beyond very small groups \cite{Grujic2012}. Additionally, whereas the strategies reported in \cite{Grujic2010,Gracia2012b} are aggregate behaviors, it is not clear how individual MCC behavioral profiles $\{q,p,r\}$ evolve in time and how many evolutionarily stable profiles can exist among the players. Here we aim precisely at addressing these issues by developing and studying a model for the evolutionary dynamics of MCC behavioral traits. To this end, we perform agent-based simulations of a population consisting of $N$ differently-parameterized moody conditional cooperators, either on a well-mixed population or placed on the nodes of a network, who play an iterated PD game with their neighbors (which is the same setting used in recent experiments \cite{Traulsen2010,Grujic2010,Gracia2012b}) and whose behavioral parameters $\{q,p,r\}$ are subject to a strategy update process.
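In code, the MCC decision rule described above reduces to a couple of lines. The sketch below is only an illustration in our own notation; the clamp enforces the constraint $p_C(x)\leq 1$.

\begin{verbatim}
import random

def mcc_action(prev_action, x, q, p, r):
    """One MCC decision. prev_action is 'C' or 'D' in the previous
    round; x is the fraction of cooperating neighbors observed then."""
    prob_c = min(p * x + r, 1.0) if prev_action == 'C' else q
    return 'C' if random.random() < prob_c else 'D'
\end{verbatim}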
Specifically, during each round $t$ of the game each player selects which action to take (C or D) according to her MCC traits, then plays a PD game with her neighbors---the chosen action being the same with all of them---and collects the resulting payoff $\pi^t$. Subsequently, every $\tau$ rounds players may update their MCC parameters according to a given evolutionary rule. The key and novel point in this study is that we explore a large set of possible update rules for the MCC parameters, whose details are given in SI Materials and Methods. To begin with, the first set of rules that we consider is of imitative nature, in which players simply copy the parameters from a selected counterpart. Imitation has been related to bounded rationality or to a lack of information that forces players to copy the strategies of others \cite{Schlag1998}. The rules that we consider here cover different aspects of imitation. Thus, we study the classical imitative dynamics that are based on payoff comparison: stochastic rules such as {\it Proportional Imitation} \cite{Helbing1992} (equivalent, for a large and well-mixed population, to the replicator dynamics \cite{Hofbauer2003}), the {\it Fermi rule} \cite{Szabo1998} (featuring a parameter $\beta$ that controls the intensity of selection, and that can be understood as the inverse of temperature or noise in the update rule \cite{Blume1993,Traulsen2006}) and the {\it Death-Birth rule} (inspired by Moran dynamics \cite{Moran1962}), as well as the deterministic dynamics given by {\it Unconditional Imitation} (also called ``Imitate the Best'') \cite{Nowak1992}. In all these cases, players decide to copy one of their neighbors with a probability (that may be 1, {\it i.e.}, with certainty) that depends in a specific manner on the payoffs that they and their partners obtained in the previous round of the game. To widen the scope of our analysis, we also analyze another imitative mechanism that is not based on payoff comparison, namely the {\it Voter model} \cite{Holley1975}, in which players simply follow the social context without any strategic consideration \cite{Fehr2000}. Finally, in order to go beyond pure imitation, we also consider two further evolutionary dynamics which are innovative, meaning that they allow extinct strategies to be reintroduced in the population (whereas imitative dynamics cannot do that). The first one is {\it Best Response} \cite{Matsui1992,Blume1993}, a rule that has received a lot of attention in the literature, especially in economics, and that represents a situation in which each player has enough cognitive abilities to compute an optimum strategy given what her neighbors did in the previous round. The second one is {\it Reinforcement Learning} \cite{Bush1955,Macy2002,Izquierdo2008}, which instead embodies the condition of a player that uses her experience to choose or avoid certain actions based on their consequences: actions that met or exceeded aspirations in the past tend to be repeated in the future, whereas choices that led to unsatisfactory experiences are avoided. Note that neither of these two last rules relies on the use of information on others' payoffs. With the different update schemes that we have summarized above, we have an ample spectrum of update rules representing most of the alternatives that have been proposed to implement evolutionary dynamics.
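As a concrete illustration of this setup, the following sketch implements one round of the networked game and, as one representative of the imitative family, a Fermi update that copies a neighbor's whole $\{q,p,r\}$ profile. It is a simplified reading under our own assumptions (the per-rule details are in SI Materials and Methods); the payoff values are those listed in Materials and Methods.

\begin{verbatim}
import math, random

R, S, T, P = 1.0, -0.5, 1.5, 0.0   # PD payoffs (Materials and Methods)

def pd_payoff(a, b):
    return {('C','C'): R, ('C','D'): S, ('D','C'): T, ('D','D'): P}[(a, b)]

def one_round(neigh, params, prev):
    """neigh: node -> list of neighbors; params: node -> (q, p, r);
    prev: node -> action taken in the previous round."""
    acts, pay = {}, {}
    for i in neigh:
        x = sum(prev[j] == 'C' for j in neigh[i]) / len(neigh[i])
        q, p, r = params[i]
        pc = min(p * x + r, 1.0) if prev[i] == 'C' else q
        acts[i] = 'C' if random.random() < pc else 'D'
    for i in neigh:                 # one action against all neighbors
        pay[i] = sum(pd_payoff(acts[i], acts[j]) for j in neigh[i])
    return acts, pay

def fermi_update(i, neigh, params, pay, beta=10.0):
    """Copy a random neighbor's whole MCC profile with Fermi probability."""
    j = random.choice(neigh[i])
    if random.random() < 1.0 / (1.0 + math.exp(-beta * (pay[j] - pay[i]))):
        params[i] = params[j]
\end{verbatim}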
The point of considering such a comprehensive set is directly related to our aim: finding how evolution, in a broad sense, can give rise to situations that are compatible with the ones seen in the experiments \cite{Grujic2010,Gracia2012b}, in terms of values and stationarity of the MCC parameters, as well as of the final level of cooperation achieved. Additionally, we study different spatial structures determining the interactions among the players: the simple setup of a well-mixed population (modeled by a random graph of average degree $\bar{k}=m$, rewired after each round of the game), as well as more complex structures---such as the Barab\'{a}si-Albert scale-free network \cite{Barabasi1999} (with degree distribution $P(k)\sim 2\,m^2/k^3$) and regular lattices with periodic boundary conditions (where each node is connected to its $k\equiv m$ nearest neighbors) as used in the available experimental results. In so doing, we add another goal to our research, namely to check whether evolution can also explain the observed lack of {\it network reciprocity} \cite{Nowak2006}, which is another important experimental outcome \cite{Grujic2010,Gracia2012b}. Indeed, experimental results show very clearly that, when it comes to human behavior, the existence of an underlying network of contacts does not have any influence on the final level of cooperation. Therefore, any evolutionary proposal to explain the way subjects behave in the experiments must also be consistent with this additional observation. \section*{Results and Discussion} We have carried out an extensive simulation program on the set of update rules and underlying networks that we have introduced above. In what follows, we separate the discussion of the corresponding results into two main groups: imitative and non-imitative strategies. Additional aspects of our numerical approach are described in SI Results. \subsection*{Imitative updates} The five topmost sets of plots of Fig.\ \ref{fig.C} show the evolution of the level of cooperation $c$ (defined as the percentage of players who cooperate in each round of the game), as well as the stationary probability distribution of the individual MCC parameters among the population, when different evolutionary dynamics are employed to update players' behavioral traits. Note that all the plots refer to the case $\tau=1$ (meaning that the update takes place after each round). We will show only results for this choice below, because we have observed that the value of $\tau$ basically influences only the convergence rate of the system to its stationary state, but not its characteristic features. As can be seen from the plots, the final level of cooperation here is, generally, highly dependent on the population structure, and often the final outcome is a fully defective state (especially for a well-mixed population) \cite{foot1}. Then, as expected from non-innovative strategies, the number of profiles $\{q,p,r\}$ that survive at the end of the evolution is always very low and, in general, only one profile is left for every individual realization of the system. Notwithstanding, the surviving profiles are very different among independent realizations (except when the final outcome is full defection, where $q\rightarrow0$ irrespective of $p$ and $r$), indicating the absence of a stationary distribution for MCC parameters, {\it i.e.}, the lack of evolutionarily stable profiles.
The only case in which the parameters $q$ and $r$ tend to concentrate around some stationary non-trivial values is given by games played on lattices and with Unconditional Imitation updating. Finally, we note that, when the update rule is the Voter model, the surviving profile is just picked randomly among the population (as expected from a rule that is not meant to improve payoffs), and hence the cooperation level remains close on average to the value set by the initial distribution of MCC parameters. A similar behavior is observed with the Fermi rule for low $\beta$, where $\beta$ is the parameter that controls the intensity of selection. Whereas for high $\beta$ (low temperature) errors are unlikely to occur and players always choose the parameters that enhance their payoffs, resulting in full defection as the final outcome, for low $\beta$ (high temperature) errors are frequent, so that MCC parameters basically change randomly and $c$ remains close to its initial value. It is also worth noting that Proportional Imitation and the Fermi rule lead to very similar results, except for the parameter $q$, which makes sense given that the two rules are very similar unless $\beta$ is very small. The fact that both the Fermi rule and the Death-Birth update also lead to similar outcomes is probably related to those two dynamics being both error-prone, with specific features of either one showing, for instance, in the different results on lattices. Nonetheless, beyond all these peculiarities of each imitative dynamics, the main conclusion of our simulation program is that this type of update schemes is not compatible with the experimental observations. Note that it is not our goal to explain in detail the effects of a particular updating rule on a given population structure. However, it is possible to gain qualitative insights into the behavior of the system from rather naive considerations. Take for instance scale-free networks, which feature hubs (players with high degree) that thus get higher payoff than average players do. If the dynamics is of imitative nature, the hubs' strategy is stable and tends to spread over the network: there is the possibility for a stable subset of cooperators to form around hubs \cite{GomezGardenes2007}. This behavior (which cannot occur in random or regular graphs, where the degree distribution is more homogeneous) is clearly visible when the updating rule is Proportional Imitation. Notably, the stability of the subset of cooperators is destroyed when mistakes are possible (as with the Fermi rule); on the other hand, it is enhanced when the updating selects preferentially individuals with high payoffs (as with the Death-Birth rule or Unconditional Imitation). In these two latter cases cooperation becomes sustainable also in lattices, as these structures naturally allow clusters of mutually connected cooperators to emerge. Instead, the independence of the network observed---as we shall see---in the case of Reinforcement Learning is easily explained by players not even looking at each other, which makes the actual population structure irrelevant. \subsection*{Non-imitative updates} A first general finding about this type of evolutionary rules is that, because of their own nature, they allow for a very large number of surviving MCC profiles ($\sim N$), even when the parameters tend to concentrate around specific values.
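Before discussing them, it may help to fix ideas with a deliberately simplified reading of the Best Response rule analyzed next: a local search over $\delta$-shifted parameters against the neighbors' last observed actions. The actual rule is specified in SI Materials and Methods; this sketch (which, for brevity, ignores the constraint $p+r\le 1$) is only meant to convey the mechanism.

\begin{verbatim}
import itertools

R, S, T, P = 1.0, -0.5, 1.5, 0.0   # PD payoffs (Materials and Methods)

def best_response_update(i, neigh, params, prev, delta=0.05):
    """Shift each MCC parameter by -delta, 0 or +delta (clipped to [0,1])
    toward the largest expected payoff against the last actions."""
    x = sum(prev[j] == 'C' for j in neigh[i]) / len(neigh[i])
    pay_c = x * R + (1 - x) * S    # expected payoff per neighbor when C
    pay_d = x * T + (1 - x) * P    # ... and when D

    def expected(q, p, r):
        pc = min(p * x + r, 1.0) if prev[i] == 'C' else q
        return pc * pay_c + (1 - pc) * pay_d

    grids = [(max(0.0, v - delta), v, min(1.0, v + delta))
             for v in params[i]]
    params[i] = max(itertools.product(*grids), key=lambda c: expected(*c))
    # Since T > R and P > S, pay_d > pay_c for every x, so the relevant
    # cooperation probability is dragged down toward full defection.
\end{verbatim}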
The bottom set of plots of Fig.\ \ref{fig.C} summarizes our results for the Best Response dynamics, which is the most ``rational'' of the ones that we are studying here. For this choice, the system always ends up in a fully defective state, irrespective of the network structure, which is the outcome that would be obtained by global maximization of the individual payoffs. In this sense, the amount $\delta$ by which parameters are shifted at each update influences only the convergence rate of the system: a higher $\delta$ leads faster to full defection ($q=r=0$). We then see that evolution by Best Response fails completely to explain any of the main experimental results. Our other rule of choice in this type is Reinforcement Learning. We will begin by assuming that aspiration levels $A$ remain fixed in time. Our results regarding this rule are presented in Fig.\ \ref{fig.C_RL}. When $A$ is midway between the punishment and reward payoffs ($P<A<R$) we observe a stationary, non-vanishing level of cooperation around 30\% that does not depend on the population structure. This behavior, which is robust with respect to the learning rate $\lambda$, is in good qualitative agreement with the experimental observations \cite{Grujic2010,Gracia2012b}. However, the most remarkable outcome of this dynamic is that, contrary to all other update procedures that we have discussed so far, the values of the MCC parameters $\{q,p,r\}$ concentrate around some stationary, non-trivial values which are independent of the population structure and of the initial conditions of the system. Indeed, we have checked that the stationary values of $\{q,p,r\}$ do not depend on the initial form of their distributions, and also that fixing one of these three parameters does not influence the stationary distributions of the others. More importantly, these values are compatible with the ones obtained by linear fits of the aggregate MCC behavior extracted from the experiments \cite{Grujic2010,Gracia2012b}. Reinforcement learning thus represents the only mechanism (among those considered here) which is able to give rise to evolutionarily stable moody conditional cooperators, while at the same time reproducing the cooperation level and the lack of network reciprocity (note that, as we already said, the type of network on which the population sits does not affect the cooperation level). It is worth mentioning two other features of this dynamics. First, we have checked that the value of $\lambda$ influences only the convergence rate of the system; however, if players learn too rapidly ($\lambda\sim1$) then the parameters change too quickly and too much to reach stationary values---a phenomenon typical of this kind of learning algorithms. Second, if we introduce into the system a fraction $d$ of players who always defect (recall that full defectors coexist with moody conditional cooperators in the experiments), what happens is that the final cooperation level changes---it drops to 25\% for $d=0.2$ and to 20\% for $d=0.4$---but the stationary distributions of MCC parameters are not affected. This means that Reinforcement Learning is able to account for the heterogeneity of the behaviors observed in the experimental populations, which is consistent with the fact that this update rule does not take into account either the payoffs or the actions of the rest of the players.
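For concreteness, a Bush--Mosteller-style sketch of such a Reinforcement Learning step is given below, together with the habituation update used in the adaptive-aspiration variant discussed next. The stimulus normalization and the choice of which parameter to nudge are our own assumptions made for illustration; the precise rule is given in SI Materials and Methods.

\begin{verbatim}
R, S, T, P = 1.0, -0.5, 1.5, 0.0   # PD payoffs (Materials and Methods)

def reinforcement_update(i, neigh, params, mood, act, pay, A=0.5, lam=0.1):
    """Nudge the parameter that generated the last action: up if the
    per-neighbor payoff beat the aspiration A, down otherwise."""
    k = len(neigh[i])
    stimulus = (pay[i] / k - A) / max(T - A, A - S)   # lies in [-1, 1]
    q, p, r = params[i]
    step = lam * stimulus if act == 'C' else -lam * stimulus
    if mood == 'C':          # last action was drawn from p_C(x)
        r = min(max(r + step, 0.0), 1.0 - p)          # keep p + r <= 1
    else:                    # last action was drawn from q
        q = min(max(q + step, 0.0), 1.0)
    params[i] = (q, p, r)

def update_aspiration(A, pay_i, k, h=0.2):
    """Habituation variant: A(t+1) = (1 - h) * A(t) + h * pi(t) / k."""
    return (1 - h) * A + h * pay_i / k
\end{verbatim}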
Further evidence for the robustness of the Reinforcement Learning evolutionary dynamics arises from extending our study to other aspiration levels, including dynamic ones. In general, what we observe is that the higher $A$, the higher the final level of cooperation achieved. When $R<A<T$ players are not satisfied with the reward of mutual cooperation; however, an outcome of mutual defection leads to a great stimulus towards cooperation in the next round. This is why players' parameters tend to concentrate around values that allow for a strategy which alternates cooperation and defection, and leads to stationary cooperation levels around 50\%. Instead, if $S<A<P$, then defection-defection is a satisfactory outcome for each pair of players. In this case cooperation may thrive only on stationary networks (where clusters of cooperators may form). However, for a well-mixed population the final state is necessarily fully defective ($q\rightarrow 0$). Hence we observe in this case a dependence on the network structure which is not observed in the experiments; nonetheless, setting an aspiration level below punishment is at least questionable. Therefore, unless players make very strange decisions on their expectations from the game, we find behaviors that agree qualitatively with the experiments. Finally, we consider the case in which players adapt their aspiration level after each round: $A^{t+1}\leftarrow (1-h)A^t+h\pi^t/k$, where $h$ is the adaptation (or habituation) rate and $P<A^0<R$. What we observe now is that the stationary level of cooperation lies around 20\%, the absence of network reciprocity is recovered, and players' average aspiration levels remain in the range $P<\bar{A}<R$. Thus this case is again compatible with experimental observations, and the fact that aspiration levels of an intermediate character are selected (corresponding to the case that best describes the experiments) provides a clear rationale for the choice made in the preceding paragraph. A final important validation of Reinforcement Learning comes from studying the \emph{EWA} (experience-weighted attraction) updating \cite{Camerer1999}, an evolutionary dynamics that combines aspects of Belief Learning models (to which Best Response belongs) and of Reinforcement Learning. Results for this choice of the updating scheme (which are reported in SI EWA) confirm that Reinforcement Learning is the determinant contribution that allows the system to reach situations matching the empirical outcomes. \section*{Conclusion} Understanding cooperation is crucial because all major transitions in evolution involve the spreading of some sort of cooperative behavior \cite{MaynardSmith1995}. In addition, the archetypical tensions that generate social dilemmas are present in fundamental problems of the modern world: resource depletion, pollution, overpopulation, and climate change. This work, inspired by experiments \cite{Grujic2010,Gracia2012b}, aimed at finding an evolutionary framework capable of modeling and justifying the behavior of real people in an important class of social dilemmas---namely Prisoner's Dilemma games. To this end, we have studied the evolution of a population of differently-parameterized MCC players whose parameters can evolve. We have considered several rules for parameter changes---both imitative and innovative mechanisms, as well as rules based on payoff comparison and others based on non-economic or social factors.
Our research shows that Reinforcement Learning with a wide range of learning rates is the only mechanism able to explain the evolutionary stability of moody conditional cooperation, leading to situations that are in agreement with the experimental observations in terms of the stationary level of cooperation achieved, average values and stationary distributions of the MCC parameters, and absence of network reciprocity. Note that we have considered only PD games; however, given that in our setup players have to play the same action with all their neighbors, it is clear that our results should be related to Public Goods experiments (where conditional cooperation was first observed \cite{Fischbacher2001}). Our findings thus suggest that MCC can also arise and be explained through reinforcement learning dynamics in repeated Public Goods games. We stress that this is a very relevant result, as for the first time to our knowledge we are providing a self-consistent picture of how people behave in PD games on networks. Indeed, starting from the observation that players do not take others' payoffs into account, we find that if this behavior is to be explained in an evolutionary manner, it has to be because people learn from what they experience, and not from the information they may gather on their neighbors. Such a learning process is in turn very sensible in the heavily social framework in which we as humans are embedded, and compatible with the knowledge that we have on the effects of our choices on others. On the other hand, the evolutionary dynamics that our work eliminates as possibly responsible for how we behave are, in fact, difficult to justify in the same social context, either because they would require a larger cognitive effort (Best Response) or, on the contrary, because they assume a very limited rationality that only allows players to imitate without reflecting on how we have been affected by our choices. Our work thus provides independent evidence that, at least in the context of human subjects interacting in PD, the observed behaviors arise mostly from learning. Of course, this does not mean that other ways to update one's strategy are not possible: indeed, a large fraction of people have been observed to be full defectors, a choice they may have arrived at by considering the PD game from a purely rational viewpoint. In addition, specific individuals may behave in idiosyncratic manners that are not described within our framework here. Still, as we have seen, our main result, namely that Reinforcement Learning explains the behavior of a majority of people and its macro-consequences (level of cooperation, lack of network reciprocity), would still hold true in the presence of these other people. Although a generalization of our results to other classes of social dilemmas beyond PD and Public Goods is not straightforward, our conclusions here should guide further research on games on networks. We believe that the experimental results, to which the present work provides a firm theoretical support, allow us to conclude that many of the evolutionary dynamics used in theory and in simulations simply do not apply to the behavior of human subjects and, therefore, their use should be avoided. As a matter of fact, much of the research published in the last decade by using all these update schemes is only adding confusion to an already very complicated problem. Even so, our findings do not exclude the plausibility of other strategy update rules in different contexts.
For instance, analytical results with imitative dynamics \cite{Wu2010} display an agreement with experimental outcomes on dynamical networks \cite{Fehl2011}, where it was also shown that selection intensity (which can be thought of as a measure of players' rationality) can dramatically alter the evolutionary outcome \cite{VanSegbroeck2011}. It is also important to stress that our findings here relate to human behavior, and other species could behave differently; for instance, it has been recently reported that bacteria improve their cooperation on a spatial structure \cite{Hol2013}, and this could arise because of more imitative `strategies'. Finally, a promising line of research could be to compare the distribution of values for the MCC parameters that we have obtained here with the observations on single individuals, thus going beyond the check against aggregate data to address the issue of reproducing whole histograms. Unfortunately, the data that we currently have are not good in terms of individual behavior, as observations are noisy and the statistics are insufficient to assign significant values of the parameters to single participants. In this respect, experiments specifically designed to overcome this difficulty could be a very relevant contribution to further verifying our claims. Another important suggestion arising from our research is the relevance of theoretical concepts derived within Reinforcement Learning to the study of games on networks. Indeed, it is very interesting to recall that a theoretical line of work based on Reinforcement Learning models for 2-player repeated games has received quite some attention recently \cite{Erev2001,Bendor2001b}. In this context, a generalized equilibrium concept, called self-correcting equilibrium, has been introduced in order to explain the findings in simulations of 2-player PD \cite{Macy2002,Izquierdo2008}: it obtains when the expected change of the parameters is zero but there is a positive probability of incurring a negative as well as a positive stimulus. The extension of the Reinforcement Learning dynamics to multiplayer PD that we have presented here points to the explanatory power of such equilibrium concepts in the framework of network games, as the level of cooperation observed in experiments is in close agreement with the predicted equilibrium. Importantly, it has recently been shown that behavioral rules with intermediate aspiration levels, such as the ones we find here to be relevant, are the most successful ones among all possible reactive strategies in a wide range of 2-player games \cite{Vaquero2012}. This suggests that this type of evolutionary dynamics may indeed be relevant in general. It would therefore be important to study whether or not the associated equilibrium concept is also the most important one when other types of games are played on an underlying network. If that is the case, we would have a very powerful tool to understand and predict human behavior in those situations. \newline\newline \textbf{Materials and Methods} --- Agent-based simulations of the model were carried out using the following parameters: $c_0=0.5$ (initial fraction of cooperators) \cite{foot2}, $R=1$, $P=0$, $S=-1/2$, $T=3/2$ (entries of the PD's payoff matrix, such that $T>R$, $S<P$ and $2R>T+S$) \cite{foot3}, $N=1000$ and $m=10$ (network parameters) \cite{foot4}.
The MCC behavioral parameters $\{q,p,r\}$ are all drawn for each player before the first round of the game from a uniform distribution $\mathcal{U}[0,1]$, with the additional constraint $p+r\le1$ to have $0\le p_C(x)\le1$. Note that the particular form of the initial distribution, as well as the presence of the constraint, does not influence the outcome of our experiments. \newline\newline \begin{acknowledgments} This work was supported by the Swiss National Science Foundation through grant PBFRP2\_145872, by Ministerio de Econom\'\i a y Competitividad (Spain) through grant PRODIEVO, by the ERA-Net on Complexity through grant RESINEE, and by Comunidad de Madrid (Spain) through grant MODELICO-CM. \end{acknowledgments}
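For concreteness, the initialization just described can be sketched in a few lines of Python (a minimal illustration with our own naming conventions; this is not the code used to produce the results above):
\begin{verbatim}
import numpy as np

# Minimal sketch of the MCC parameter initialization described in
# Materials and Methods; names (N, draw_mcc_parameters) are ours.
N = 1000  # population size

def draw_mcc_parameters(rng):
    # q, p, r ~ U[0,1], with rejection sampling enforcing p + r <= 1,
    # so that the conditional cooperation probability p_C(x) stays in [0,1].
    q = rng.uniform()
    while True:
        p, r = rng.uniform(size=2)
        if p + r <= 1.0:
            return q, p, r

rng = np.random.default_rng(0)
population = np.array([draw_mcc_parameters(rng) for _ in range(N)])
\end{verbatim}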
\subsection{Derivation of PMR-RGLRT-UK when the signal format information is employed}\label{pmr_glrt_with_sig_info} \noindent Consider hypothesis $\mathcal{H}_1$ in (\ref{Hypotheses_test_linear}). We have \begin{flalign*} l_1(\bm{\mu}_s, \bm{\mu}_r, \bm{b}, \sigma^2|\bm{s}) & = & \end{flalign*} \begin{eqnarray}\label{LLu1} & & -2N_tN_rN\ln(\pi\sigma^2) -\frac{1}{\sigma^2}\sum_{i=1}^{N_t}\sum_{j=1}^{N_r} \big( ||\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i||^2 \nonumber \\ & & + ||\bm{s}_r^{ij} - \mu_r^{ij}\bm{G}^i\bm{b}^i||^2 \big). \end{eqnarray} From Appendix \ref{pmr_glrt_with_sig_noise_info}, the relaxed MLEs of $\mu_{(s,r)}^{ij}$ and $\bm{b}^i$ are given by \begin{eqnarray} \hat{\mu}_{(s,r)}^{ij} & = & \frac{(\bm{G}^i\bm{b}^i)^H\bm{s}_{(s,r)}^{ij}}{||\bm{G}^i\bm{b}^i||^2}, \end{eqnarray} and \begin{eqnarray} \hat{\bm{b}}^i & = & \bm{v}_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right). \end{eqnarray} Substituting these values in (\ref{LLu1}), we obtain \begin{flalign*} l_1(\hat{\bm{\mu}}_s, \hat{\bm{\mu}}_r, \hat{\bm{b}}, \sigma^2|\bm{s}) & = -2N_tN_rN\ln(\pi\sigma^2)& \end{flalign*} \begin{eqnarray}\label{LL_glrt_u} -\frac{1}{\sigma^2}\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big], \end{eqnarray} where $E_{sr}^i = ||\bm{s}_s^i||^2 + ||\bm{s}_r^i||^2$. The MLE of $\sigma^2$, denoted by $\hat{\sigma}^2$, can be obtained from the derivative of (\ref{LL_glrt_u}) and is given by \begin{eqnarray} \hat{\sigma}^2 = \frac{1}{c_1}\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big], \end{eqnarray} where $c_1 = 2N_tN_rN$. Substituting the obtained MLE in $l_1(\hat{\bm{\mu}}_s, \hat{\bm{\mu}}_r, \hat{\bm{b}}, \sigma^2|\bm{s})$ and simplifying, we have (ignoring the additive constant) \begin{flalign*} l_1(\hat{\bm{\mu}}_s, \hat{\bm{\mu}}_r, \hat{\bm{b}}, \hat{\sigma}^2|\bm{s}) &= & \end{flalign*} \begin{eqnarray}\label{GLRTU_H1} -c_1\ln \Bigg(\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big] \Bigg). \nonumber \end{eqnarray} By a similar procedure, it can be shown under hypothesis $\mathcal{H}_0$ that \begin{flalign*} l_0(\hat{\bm{\mu}}_r, \hat{\bm{b}}, \hat{\sigma}^2|\bm{s}) & = & \end{flalign*} \begin{eqnarray}\label{GLRTU_H0} -c_1\ln \Bigg(\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_r^i (\bm{\phi}_r^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big] \Bigg). \nonumber \end{eqnarray} Using $l_1(\hat{\bm{\mu}}_s, \hat{\bm{\mu}}_r, \hat{\bm{b}}, \hat{\sigma}^2|\bm{s})$ and $l_0(\hat{\bm{\mu}}_r, \hat{\bm{b}}, \hat{\sigma}^2|\bm{s})$, the \emph{PMR-RGLRT-UK} for the hypothesis testing problem in (\ref{Hypotheses_test_linear}) is given by \begin{eqnarray} \xi_{uk} & = & \frac{\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_r^i (\bm{\phi}_r^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big]}{\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big]} \nonumber \\ & & \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{uk}. \end{eqnarray} \subsection{Derivation of PMR-RGLRT-K when the signal format information is employed}\label{pmr_glrt_with_sig_noise_info} \noindent Consider hypothesis $\mathcal{H}_1$ in (\ref{Hypotheses_test_linear}).
We have \begin{eqnarray}\label{LL_appendix1} l_1(\bm{\mu}_s, \bm{\mu}_r, \bm{b}|\bm{s}) & = & \sum_{i=1}^{N_t} l_1^i(\bm{\mu}_s^i, \bm{\mu}_r^i, \bm{b}^i|\bm{s}^i), \end{eqnarray} where (ignoring the additive constants) we have \begin{flalign*} l_1^i(\bm{\mu}_s^i, \bm{\mu}_r^i, \bm{b}^i|\bm{s}^i) & & \end{flalign*} \begin{eqnarray}\label{LL1_i_appendix1} = -\frac{1}{\sigma^2}\sum_{j=1}^{N_r} \big( ||\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i||^2 + ||\bm{s}_r^{ij} - \mu_r^{ij}\bm{G}^i\bm{b}^i||^2 \big). \end{eqnarray} The MLE of $\mu_{(s,r)}^{ij}$ obtained from a derivative of (\ref{LL1_i_appendix1}) is given by \begin{eqnarray}\label{mu_mle_appendix_1} \hat{\mu}_{(s,r)}^{ij} & = & \frac{(\bm{G}^i\bm{b}^i)^H \bm{s}_{(s,r)}^{ij}}{(\bm{G}^i\bm{b}^i)^H \bm{G}^i\bm{b}^i}. \end{eqnarray} Substituting (\ref{mu_mle_appendix_1}) into (\ref{LL1_i_appendix1}), we obtain \begin{flalign*} l_1^i(\hat{\bm{\mu}}_s^i, \hat{\bm{\mu}}_r^i, \bm{b}^i|\bm{s}^i) & & \end{flalign*} \begin{eqnarray} = \frac{-1}{\sigma^2}\sum_{j=1}^{N_r} \Bigg[ ||\bm{s}_s^{ij}||^2 + ||\bm{s}_r^{ij}||^2 - \frac{(\bm{G}^i\bm{b}^i)^H\bm{s}_s^{ij} (\bm{s}_s^{ij})^H\bm{G}^i\bm{b}^i}{(\bm{G}^i\bm{b}^i)^H\bm{G}^i\bm{b}^i} \nonumber \\ - \frac{(\bm{G}^i\bm{b}^i)^H\bm{s}_r^{ij} (\bm{s}_r^{ij})^H\bm{G}^i\bm{b}^i}{(\bm{G}^i\bm{b}^i)^H\bm{G}^i\bm{b}^i} \Bigg]. \end{eqnarray} After simplifying, we obtain \begin{eqnarray}\label{MaximEq_appendix1} l_1^i(\hat{\bm{\mu}}_s^i, \hat{\bm{\mu}}_r^i, \bm{b}^i|\bm{s}^i) = \frac{-1}{\sigma^2}\Bigg[ E_{sr}^i - \frac{(\bm{G}^i\bm{b}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i\bm{b}^i}{(\bm{G}^i\bm{b}^i)^H\bm{G}^i\bm{b}^i} \Bigg], \end{eqnarray} where $\bm{\phi}_1^i = [\bm{\phi}_s^i, \bm{\phi}_r^i]$, and the matrices $\bm{\phi}_s^i$ and $\bm{\phi}_r^i$ are defined as \begin{eqnarray} \bm{\phi}^i_{(s, r)} & = & \left[ \bm{s}_{(s, r)}^{i1}, \bm{s}_{(s, r)}^{i2}, \cdots, \bm{s}_{(s, r)}^{iN_r} \right] \nonumber \end{eqnarray} and the scalar $E_{sr}^i = ||\bm{s}_s^i||^2 + ||\bm{s}_r^i||^2$. Using the discussion below (\ref{Generalized_RayleighQuotient}), the complex value of $\bm{b}^i$ that maximizes (\ref{MaximEq_appendix1}) is given by $\hat{\bm{b}}^i = \bm{v}_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right)$. Substituting $\hat{\bm{b}}^i$ in (\ref{MaximEq_appendix1}), we have \begin{flalign*} l_1^i(\hat{\bm{\mu}}_s^i, \hat{\bm{\mu}}_r^i, \hat{\bm{b}}^i|\bm{s}^i) & = & \end{flalign*} \begin{eqnarray} \frac{-1}{\sigma^2}\left[E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right)\right]. \nonumber \end{eqnarray} From (\ref{LL_appendix1}), we then have \begin{flalign*} l_1(\hat{\bm{\mu}}_s, \hat{\bm{\mu}}_r, \hat{\bm{b}}|\bm{s})& = & \end{flalign*} \begin{eqnarray}\label{GLRT_H1_pmr_knownsignal} \frac{-1}{\sigma^2}\sum_{i=1}^{N_t} \left(E_{sr}^i - \lambda_1\left((\bm{G}^i)^H \bm{\phi}_1^i (\bm{\phi}_1^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right)\right). \end{eqnarray} Following a similar procedure, it can be shown under $\mathcal{H}_0$ that \begin{flalign*} l_0(\hat{\bm{\mu}}_r, \hat{\bm{b}}|\bm{s})& = & \end{flalign*} \begin{eqnarray}\label{GLRT_H0_pmr_knownsignal} \frac{-1}{\sigma^2}\sum_{i=1}^{N_t} \left(E_{sr}^i - \lambda_1\left((\bm{G}^i)^H \bm{\phi}_r^i (\bm{\phi}_r^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right)\right).
\end{eqnarray} Using $l_1(\hat{\bm{\mu}}_s, \hat{\bm{\mu}}_r, \hat{\bm{b}}|\bm{s})$ and $l_0(\hat{\bm{\mu}}_r, \hat{\bm{b}}|\bm{s})$, the \emph{PMR-RGLRT-K} for the hypothesis testing problem in (\ref{Hypotheses_test_linear}) is given by \begin{eqnarray} \xi_{ksf} & = &\frac{1}{\sigma^2}\sum_{i=1}^{N_t} \Big[ \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right) \nonumber \\ &&- \lambda_1\left((\bm{G}^i)^H\bm{\phi}_r^i (\bm{\phi}_r^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right) \Big] \nonumber \\ &&\LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{ksf}. \end{eqnarray} \subsection{Derivation of PSL-RGLRT-K when the signal format information is employed}\label{psl_glrt_sig_noise_info} \noindent The conditional probability density function (pdf) of $\bm{s}$ under $\mathcal{H}_1$ for the hypothesis test of (\ref{Hypotheses_test_PSL}) is given by \begin{eqnarray} p_1(\bm{s}|\bm{\mu}_s, \bm{b}) & = & \prod_{i=1}^{N_t} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{b}^i), \end{eqnarray} where \begin{eqnarray} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{b}^i) & \propto & \exp\bigg\{\frac{-1}{\sigma^2}\sum_{j=1}^{N_r} ||\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i||^2 \bigg\}. \end{eqnarray} The conditional pdf of $\bm{s}$ under $\mathcal{H}_0$, $p_0(\bm{s})$, is similarly defined. Let $l_1(\bm{\mu}_s, \bm{b}|\bm{s}) = \log p_1(\bm{s}|\bm{\mu}_s, \bm{b})$ and $l_0(\bm{s}) = \log p_0(\bm{s})$ denote the log-likelihood functions under the hypotheses $\mathcal{H}_1$ and $\mathcal{H}_0$. The relaxed GLRT can now be written as \begin{eqnarray} \max_{\{\bm{\mu}_s, \bm{b}\} \in \mathbb{C}^{N_rN_t} \times \mathbb{C}^{\mathcal{B}} } l_1(\bm{\mu}_s, \bm{b}|\bm{s}) - l_0( \bm{s}) \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{psk}. \end{eqnarray} Consider hypothesis $\mathcal{H}_1$. We have \begin{eqnarray}\label{LL_appendix3} l_1(\bm{\mu}_s, \bm{b}|\bm{s}) & = & \sum_{i=1}^{N_t} l_1^i(\bm{\mu}_s^i, \bm{b}^i|\bm{s}^i), \end{eqnarray} where (ignoring the additive constants) we have \begin{eqnarray}\label{LL1_i_appendix3} l_1^i(\bm{\mu}_s^i, \bm{b}^i|\bm{s}^i) & = & \frac{-1}{\sigma^2}\sum_{j=1}^{N_r} ||\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i||^2. \end{eqnarray} The relaxed MLEs of $\mu_{s}^{ij}$ and $\bm{b}^i$ are obtained from a derivative of (\ref{LL1_i_appendix3}) and are given by \begin{eqnarray}\label{mu_mle_appendix1} \hat{\mu}_{s}^{ij} & = & \frac{(\bm{G}^i\bm{b}^i)^H \bm{s}_{s}^{ij}}{(\bm{G}^i\bm{b}^i)^H \bm{G}^i\bm{b}^i}, \end{eqnarray} and \begin{eqnarray} \hat{\bm{b}}^i & = & \bm{v}_1\left((\bm{G}^i)^H\bm{\phi}_s^i (\bm{\phi}_s^i)^H\bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right). \end{eqnarray} Substituting the obtained relaxed MLEs in (\ref{LL1_i_appendix3}) and simplifying, we obtain \begin{eqnarray} l_1^i(\hat{\bm{\mu}}_s^i, \hat{\bm{b}}^i|\bm{s}^i) = \frac{-1}{\sigma^2}\left[E_{s}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right)\right], \nonumber \end{eqnarray} where $E_s^i = \sum_{j=1}^{N_r}||\bm{s}_s^{ij}||^2$ (only the surveillance channel enters the PSL likelihood). From (\ref{LL_appendix3}), we then have \begin{flalign*} l_1(\hat{\bm{\mu}}_s, \hat{\bm{b}}|\bm{s})& = & \end{flalign*} \begin{eqnarray}\label{GLRT_H1_psl_knownsignal} \frac{-1}{\sigma^2}\sum_{i=1}^{N_t} \left(E_{s}^i - \lambda_1\left((\bm{G}^i)^H \bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right)\right).
\end{eqnarray} By a similar procedure, it can be shown under $\mathcal{H}_0$ that \begin{eqnarray}\label{GLRT_H0_psl_knownsignal} l_0(\bm{s})& = & -\frac{1}{\sigma^2}\sum_{i=1}^{N_t} E_{s}^i. \end{eqnarray} Using $l_1(\hat{\bm{\mu}}_s, \hat{\bm{b}}|\bm{s})$ and $l_0(\bm{s})$, the \emph{PSL-RGLRT-K} for the hypothesis testing problem in (\ref{Hypotheses_test_PSL}) is given by \begin{eqnarray} \xi_{psk} = \frac{1}{\sigma^2}\sum_{i=1}^{N_t} \lambda_1\left((\bm{G}^i)^H \bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right) \nonumber \\ \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{psk}. \end{eqnarray} \subsection{Derivation of PSL-GLRT-K when the signal format information is employed}\label{AppendixSec4} \noindent The conditional probability density function (pdf) of $\bm{s}$ under $\mathcal{H}_1$ is given by \begin{eqnarray} p_1(\bm{s}|\bm{\mu}_s, \bm{b}) & = & \prod_{i=1}^{N_t} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{b}^i), \end{eqnarray} where \begin{flalign*} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{b}^i) & \propto & \end{flalign*} \begin{eqnarray} && \exp\bigg\{-\sum_{j=1}^{N_r} (\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i)^H\bm{\Gamma}^{-1}(\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i)\bigg\}. \nonumber \end{eqnarray} The conditional pdf of $\bm{s}$ under $\mathcal{H}_0$, $p_0(\bm{s})$, is similarly defined. Consider hypothesis $\mathcal{H}_1$. We have \begin{eqnarray}\label{LL4} l_1(\bm{\mu}_s, \bm{b}|\bm{s}) & = & \sum_{i=1}^{N_t} l_1^i(\bm{\mu}_s^i, \bm{b}^i|\bm{s}^i), \end{eqnarray} where \begin{flalign*} l_1^i(\bm{\mu}_s^i, \bm{b}^i|\bm{s}^i) & = & \end{flalign*} \begin{eqnarray}\label{LL1_i_appendix4} -\sum_{j=1}^{N_r} (\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i)^H\bm{\Gamma}^{-1}(\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i). \end{eqnarray} The MLEs of $\mu_{s}^{ij}$ and $\bm{b}^i$ obtained from a derivative of (\ref{LL1_i_appendix4}) are given by \begin{eqnarray}\label{mu_mle_appendix4} \hat{\mu}_{s}^{ij} & = & \frac{(\bm{G}^i\bm{b}^i)^H \bm{\Gamma}^{-1}\bm{s}_{s}^{ij}}{(\bm{G}^i\bm{b}^i)^H \bm{\Gamma}^{-1}\bm{G}^i\bm{b}^i}, \end{eqnarray} and \begin{eqnarray} \hat{\bm{b}}^i = \bm{v}_1\left((\bm{G}^i)^H\bm{\Gamma}^{-1}\bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{\Gamma}^{-1}\bm{G}^i, (\bm{G}^i)^H \bm{\Gamma}^{-1}\bm{G}^i \right). \end{eqnarray} Substituting the obtained MLEs in (\ref{LL1_i_appendix4}), we obtain \begin{flalign*} l_1^i(\hat{\bm{\mu}}_s^i, \hat{\bm{b}}^i|\bm{s}^i) & = & \end{flalign*} \begin{eqnarray} -\left[E_s^i - \lambda_1\left((\bm{G}^i)^H\bm{\Gamma}^{-1}\bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{\Gamma}^{-1}\bm{G}^i, (\bm{G}^i)^H \bm{\Gamma}^{-1}\bm{G}^i \right)\right], \nonumber \end{eqnarray} where $E_s^i = \sum_{j=1}^{N_r}(\bm{s}_s^{ij})^H\bm{\Gamma}^{-1}\bm{s}_s^{ij}$. From (\ref{LL4}), we then have \begin{flalign*} l_1(\hat{\bm{\mu}}_s, \hat{\bm{b}}|\bm{s})& = & \end{flalign*} \begin{eqnarray}\label{GLRT_H1_pslk_knownsignal} -\sum_{i=1}^{N_t}\left(E_{s}^i - \lambda_1\left((\bm{G}^i)^H\bm{\Gamma}^{-1}\bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{\Gamma}^{-1}\bm{G}^i, (\bm{G}^i)^H \bm{\Gamma}^{-1}\bm{G}^i \right)\right).
\nonumber \end{eqnarray} Following a similar procedure, it can be shown under $\mathcal{H}_0$ that \begin{eqnarray}\label{GLRT_H0_pslk_knownsignal} l_0(\bm{s})& = & -\sum_{i=1}^{N_t}E_{s}^i. \end{eqnarray} Using $l_1(\hat{\bm{\mu}}_s, \hat{\bm{b}}|\bm{s})$ and $l_0(\bm{s})$, the GLRT for the hypothesis testing problem in (\ref{Hypotheses_test_PSL}), termed the \emph{PSL-GLRT-K}, is given by \begin{eqnarray} \xi_{pslk} & = &\sum_{i=1}^{N_t} \lambda_1\left((\bm{G}^i)^H\bm{\Gamma}^{-1}\bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{\Gamma}^{-1}\bm{G}^i, (\bm{G}^i)^H \bm{\Gamma}^{-1}\bm{G}^i \right) \nonumber \\ &&\LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{pslk}. \end{eqnarray} \section{Introduction}\label{section1} \input{section1} \section{Signal Model and Problem Statement}\label{section2} \input{section2} \section{Target Detection in PMR networks} \input{section3} \section{Simulation Results} \input{section4} \section{Conclusion} \input{section5} \subsection{Linear Digital Modulations} \noindent The complex baseband structure of a linear digital modulation scheme can be represented as \cite{proakis2008digital} \begin{eqnarray}\label{LinearMod1} u^i(t + nT_{sym}) & = & \sum_{k=0}^{M_i-1} g^i(t + kT_{sym}) b_{n - k}^i \end{eqnarray} for $0 \le t < T_{sym}$. In (\ref{LinearMod1}), $i$ denotes the index of the transmit station, $n$ denotes the symbol index, $b_{k}^i$ denotes the transmitted complex baseband symbol, $T_{sym}$ is the symbol period of the digital modulation scheme, and $g^i(.)$ denotes a pulse function of duration $M_iT_{sym}$ used at the $i^{th}$ transmit station. Popular pulse functions include the raised cosine and the root-raised cosine pulse shape \cite{Stinco}. After sampling, (\ref{LinearMod1}) can be rewritten as \begin{eqnarray}\label{LinearMod2} u^i(pT_s + nT_{sym}) & = & \sum_{k=0}^{M_i-1} g^i(pT_s + kT_{sym})b_{n-k}^i \end{eqnarray} for $p = 0, 1, \cdots, P-1$, where $P$ denotes the number of samples per symbol. In (\ref{LinearMod2}), $T_s = T_{sym}/P$ denotes the sampling interval. Collecting $N = LP$ samples from $L$ consecutive symbols indexed by $(n-L+1), (n-L+2), \cdots, n$, the transmitted signal samples can be expressed as \begin{eqnarray}\label{LinearRep} \bm{u}^i & = & \bm{G}^i\bm{b}^i, \end{eqnarray} where $\bm{u}^i = [(\bm{u}^i_{n})^T, (\bm{u}^i_{n-1})^T, \cdots, (\bm{u}^i_{n-L+1})^T]^T$ with $\bm{u}_k^i = [u^i(kT_{sym}), u^i(T_s + kT_{sym}), \cdots, u^i((P-1)T_s + kT_{sym})]^T$ for $k = (n-L+1), \cdots, n$ and $\bm{b}^{i} = \left[ b^i_{n}, b^i_{n-1}, \cdots, b^i_{n-L-M_i+2} \right]^T$. In (\ref{LinearRep}), $\bm{G}^i$ is the $LP \times (L + M_i - 1)$ matrix defined as \begin{eqnarray}\label{TransMatrix} \bm{G}^i = \begin{bmatrix} \bm{g}^i_{0} & \cdots & \bm{g}^i_{(M_i-1)} & \bm{0}_{P \times 1} & \cdots & \bm{0}_{P \times 1} \\ \bm{0}_{P \times 1} & \bm{g}^i_0 & \cdots & \bm{g}^i_{(M_i-1)} & \cdots & \bm{0}_{P \times 1} \\ \vdots & \ddots & \ddots & \ddots & \bm{0}_{P \times 1} & \vdots \\ \bm{0}_{P \times 1} & \bm{0}_{P \times 1} & \cdots & \bm{g}^i_0 & \cdots & \bm{g}^i_{(M_i-1)} \end{bmatrix} \end{eqnarray} where $\bm{g}^i_k = [ g^i(kT_{sym}), g^i(T_s + kT_{sym}), \cdots, g^i((P-1)T_s + kT_{sym})]^T$ for $k = 0, 1, \cdots, M_i-1$. \subsection{Orthogonal Frequency-Division Multiplexing Signals} \noindent The complex baseband structure of an OFDM signal can be represented as \cite{Palmer} \begin{eqnarray}\label{OFDM_sym} u^i(t + nT_{sym}) & = & \sum_{l=0}^{N_s-1} e^{j2\pi \frac{l}{T_u}(t - T_g)}b^i_{nl}, \end{eqnarray} for $0 \le t < T_{sym}$.
In (\ref{OFDM_sym}), $i$ denotes the index of the transmit station, $n$ denotes the OFDM symbol number, $N_s$ is the number of subcarriers used in the OFDM signal, $b^i_{nl}$ are complex valued modulation symbols, $T_u$ is the duration of the useful part of the OFDM symbol (excluding the guard interval), $T_g$ is the guard interval duration, and $T_{sym} = (T_u + T_g)$ is the total OFDM symbol duration. Let $T_s$ be the sampling interval, equal to $T_{sym}/(N_sP)$, where $P$ is the number of samples per complex symbol. Collecting $N = LN_sP$ samples from $L$ consecutive OFDM symbols indexed by $0, 1, \cdots, (L-1)$, the transmitted signal samples can be expressed as (similar to (\ref{LinearRep})) \begin{eqnarray}\label{OFDMsym} \bm{u}^i & = & (\bm{I}_L \otimes \bm{H})\bm{b}^i, \end{eqnarray} where $\bm{u}^i = [(\bm{u}^i_{0})^T, (\bm{u}^i_{1})^T, \cdots, (\bm{u}^i_{L - 1})^T]^T$ with $\bm{u}_k^i = [u^i(kT_{sym}), u^i(T_s + kT_{sym}), \cdots, u^i((N_sP-1)T_s + kT_{sym})]^T$ for $k = 0, 1, \cdots, L-1$ and $\bm{b}^i = [(\bm{b}_0^i)^T, (\bm{b}_1^i)^T, \cdots, (\bm{b}_{L-1}^i)^T]^T$ with $\bm{b}_k^i = [b_{k0}^i, b_{k1}^i, \cdots, b^i_{k(N_s-1)} ]^T$ for $k = 0, 1, \cdots, L-1$. In (\ref{OFDMsym}), $\bm{H}$ is an $N_sP \times N_s$ matrix whose $(m,l)^{th}$ element is given by \begin{eqnarray} h_{ml} & = & e^{\frac{j2\pi l(mT_s - T_g)}{T_u}} \end{eqnarray} for $m = 0, 1, \cdots, N_sP-1$ and $l = 0, 1, \cdots, N_s-1$. \subsection{Problem Statement} \noindent Under the stated assumptions, the PMR target detection problem in (\ref{Hypotheses_test_PMR}) can now be written as \begin{eqnarray}\label{Hypotheses_test_linear} \mathcal{H}_1: & \bm{s}_s^{ij} & = \mu^{ij}_s\bm{G}^i\bm{b}^i + \bm{n}_s^{ij}, \nonumber \\ & \bm{s}_r^{ij} & = \mu^{ij}_r\bm{G}^i\bm{b}^i + \bm{n}_r^{ij}, \nonumber \\ \mathcal{H}_0 : & \bm{s}_s^{ij} & = \bm{n}_s^{ij}, \nonumber \\ & \bm{s}_r^{ij} & = \mu^{ij}_r\bm{G}^i\bm{b}^i + \bm{n}_r^{ij}, \end{eqnarray} for $i = 1, 2, \cdots, N_t$ and $j = 1, 2, \cdots, N_r$. In this paper, we derive a low-complexity approximate GLRT for target detection in PMR networks that uses the available information regarding the signal format of the transmitted signal. We show significant detection performance improvement over the GLRT that ignores the signal format information. \subsection{Relaxed GLRT for PMR Networks When the Signal Format Information is Employed and $\sigma^2$ is Known}\label{sec3_ssec1} \noindent We consider the hypothesis testing problem given in (\ref{Hypotheses_test_linear}). The conditional probability density function (pdf) of $\bm{s}$ under $\mathcal{H}_1$ is given by \begin{eqnarray} p_1(\bm{s}|\bm{\mu}_s, \bm{\mu}_r, \bm{b}) & = & \prod_{i=1}^{N_t} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{\mu}_r^i, \bm{b}^i), \end{eqnarray} where \begin{eqnarray} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{\mu}_r^i, \bm{b}^i) & \propto & \exp\bigg\{\frac{-1}{\sigma^2}\sum_{j=1}^{N_r} \bigg( ||\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i||^2 \nonumber \\ & & + ||\bm{s}_r^{ij} - \mu_r^{ij}\bm{G}^i\bm{b}^i||^2 \bigg) \bigg\}. \end{eqnarray} Similarly, the conditional pdf of $\bm{s}$ under $\mathcal{H}_0$ is given by \begin{eqnarray} p_0(\bm{s}|\bm{\mu}_r, \bm{b}) & = & \prod_{i=1}^{N_t} p_0^i (\bm{s}^i| \bm{\mu}_r^i, \bm{b}^i), \end{eqnarray} where \begin{eqnarray} p_0^i (\bm{s}^i| \bm{\mu}_r^i, \bm{b}^i) \propto \exp\bigg\{\frac{-1}{\sigma^2}\sum_{j=1}^{N_r} ||\bm{s}_r^{ij} - \mu_r^{ij}\bm{G}^i\bm{b}^i||^2 \bigg\}.
\end{eqnarray} Let $l_1(\bm{\mu}_s, \bm{\mu}_r, \bm{b}|\bm{s}) = \log p_1(\bm{s}|\bm{\mu}_s, \bm{\mu}_r, \bm{b})$ and $l_0(\bm{\mu}_r, \bm{b}|\bm{s}) = \log p_0(\bm{s}| \bm{\mu}_r, \bm{b})$ denote the log-likelihood functions under the hypotheses $\mathcal{H}_1$ and $\mathcal{H}_0$. The relaxed GLRT can now be written as \begin{eqnarray}\label{GLRT_PMR_R} \max_{\{\bm{\mu}_s, \bm{\mu}_r, \bm{b}\} \in \mathbb{C}^{N_rN_t} \times \mathbb{C}^{N_rN_t} \times \mathbb{C}^{\mathcal{B}} } l_1(\bm{\mu}_s, \bm{\mu}_r, \bm{b}|\bm{s}) \nonumber \\ - \max_{\{\bm{\mu}_r, \bm{b}\} \in \mathbb{C}^{N_rN_t} \times \mathbb{C}^{\mathcal{B}} } l_0( \bm{\mu}_r, \bm{b}|\bm{s}) \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{ksf}, \end{eqnarray} where $\kappa_{ksf}$ denotes a threshold corresponding to a desired value of false alarm probability. It is shown in Appendix \ref{pmr_glrt_with_sig_noise_info} that the GLRT-based target detector in (\ref{GLRT_PMR_R}), termed the \emph{Passive MIMO Radar Relaxed GLRT with Known signal format and known noise variance (PMR-RGLRT-K)}, is given by \begin{eqnarray} \xi_{ksf} & = &\frac{1}{\sigma^2}\sum_{i=1}^{N_t} \Big[ \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right) \nonumber \\ &&- \lambda_1\left((\bm{G}^i)^H\bm{\phi}_r^i (\bm{\phi}_r^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right) \Big] \nonumber \\ &&\LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{ksf}, \end{eqnarray} where $\bm{\phi}_1^i = [\bm{\phi}_s^i, \bm{\phi}_r^i]$, and the matrices $\bm{\phi}_s^i$ and $\bm{\phi}_r^i$ are defined as \begin{eqnarray} \bm{\phi}^i_{(s, r)} & = & \left[ \bm{s}_{(s, r)}^{i1}, \bm{s}_{(s, r)}^{i2}, \cdots, \bm{s}_{(s, r)}^{iN_r} \right] \in \mathbb{C}^{N \times N_r}. \end{eqnarray} In specific scenarios discussed in \cite{Hack_noRef, Liu}, the direct-path reference channel signals might not be available in PMR networks. The target detection problem in such scenarios, termed Passive Source Localization (PSL) networks, can be formulated as \begin{eqnarray}\label{Hypotheses_test_PSL} \mathcal{H}_0 : & \bm{s}_s^{ij} & = \bm{n}_s^{ij} \nonumber \\ \mathcal{H}_1 : & \bm{s}_s^{ij} & = \mu^{ij}_s\bm{G}^i\bm{b}^i + \bm{n}_s^{ij}, \end{eqnarray} for $i = 1, 2, \cdots, N_t$ and $j = 1, 2, \cdots, N_r$. The PSL and PMR hypothesis tests are equivalent if the PMR system ignores the direct-path reference channel signals $\bm{s}_r^{ij}$ \cite{Hack_PMR}. It is shown in Appendix \ref{psl_glrt_sig_noise_info} that the relaxed GLRT-based target detector that uses the signal structure information, termed the \emph{Passive Source Localization Relaxed GLRT with Known signal format and known noise variance (PSL-RGLRT-K)}, for the hypothesis testing problem in (\ref{Hypotheses_test_PSL}), is given by \begin{eqnarray} \xi_{psk} = \frac{1}{\sigma^2}\sum_{i=1}^{N_t} \lambda_1\left((\bm{G}^i)^H \bm{\phi}_s^i (\bm{\phi}_s^i)^H \bm{G}^i, (\bm{G}^i)^H \bm{G}^i \right) \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{psk}, \end{eqnarray} where $\kappa_{psk}$ denotes a threshold corresponding to a desired value of false alarm probability.
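As an aside for numerical experimentation, the statistic $\xi_{psk}$ can be sketched in a few lines of Python. This is a minimal illustration of our own (not the authors' code), where $\lambda_1(\bm{A}, \bm{B})$ is computed as the largest generalized eigenvalue via \texttt{scipy.linalg.eigh}, assuming $(\bm{G}^i)^H\bm{G}^i$ has full rank:
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def lambda1(A, B):
    # Largest generalized eigenvalue of A v = lambda B v
    # (A Hermitian PSD, B Hermitian positive definite).
    return eigh(A, B, eigvals_only=True)[-1]

def psl_rglrt_k(phi_s, G, sigma2):
    # phi_s: list of N x N_r surveillance matrices Phi_s^i, one per transmitter;
    # G: list of the corresponding signal-format matrices G^i.
    stat = 0.0
    for Phi, Gi in zip(phi_s, G):
        A = Gi.conj().T @ Phi @ Phi.conj().T @ Gi
        B = Gi.conj().T @ Gi
        stat += lambda1(A, B)
    return stat / sigma2
\end{verbatim}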
\subsection{GLRT When the Signal Format Information is Employed and $\sigma^2$ is Unknown}\label{sec3_ssec2} \noindent When $\sigma^2$ is unknown, the conditional pdf of $\bm{s}$ under $\mathcal{H}_1$ is given by \begin{eqnarray} p_1(\bm{s}|\bm{\mu}_s, \bm{\mu}_r, \bm{b}, \sigma^2) & = & \prod_{i=1}^{N_t} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{\mu}_r^i, \bm{b}^i, \sigma^2) \end{eqnarray} where \begin{flalign*} p_1^i (\bm{s}^i| \bm{\mu}_s^i, \bm{\mu}_r^i, \bm{b}^i, \sigma^2) & & \end{flalign*} \begin{eqnarray} = & \frac{1}{(\pi\sigma^2)^{2N_rN}} \exp\bigg\{\frac{-1}{\sigma^2} \sum_{j=1}^{N_r} \big( ||\bm{s}_s^{ij} - \mu_s^{ij}\bm{G}^i\bm{b}^i||^2 \nonumber \\ & + ||\bm{s}_r^{ij} - \mu_r^{ij}\bm{G}^i\bm{b}^i||^2\big) \bigg\}. \end{eqnarray} The conditional pdf of $\bm{s}$ under $\mathcal{H}_0$, $p_0(\bm{s}|\bm{\mu}_r, \bm{b}, \sigma^2)$, is similarly defined. Let $l_1(\bm{\mu}_s, \bm{\mu}_r, \bm{b}, \sigma^2|\bm{s}) = \log p_1(\bm{s}|\bm{\mu}_s, \bm{\mu}_r, \bm{b}, \sigma^2)$ and $l_0(\bm{\mu}_r, \bm{b}, \sigma^2|\bm{s}) = \log p_0(\bm{s}| \bm{\mu}_r, \bm{b}, \sigma^2)$ denote the log-likelihood functions under the hypotheses $\mathcal{H}_1$ and $\mathcal{H}_0$. The relaxed GLRT is given by \begin{eqnarray}\label{GLRT_PMR_UK} \max_{\{\bm{\mu}_s, \bm{\mu}_r, \bm{b}, \sigma^2\} \in \mathbb{C}^{N_rN_t} \times \mathbb{C}^{N_rN_t} \times \mathbb{C}^{\mathcal{B}} \times \mathbb{R}^+} l_1(\bm{\mu}_s, \bm{\mu}_r, \bm{b}, \sigma^2|\bm{s}) \nonumber \\ - \max_{\{\bm{\mu}_r, \bm{b}, \sigma^2\} \in \mathbb{C}^{N_rN_t} \times \mathbb{C}^{\mathcal{B}} \times \mathbb{R}^+} l_0( \bm{\mu}_r, \bm{b}, \sigma^2|\bm{s}) \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{uk}, \end{eqnarray} where $\kappa_{uk}$ denotes a threshold corresponding to a desired value of false alarm probability. It is shown in Appendix \ref{pmr_glrt_with_sig_info} that the GLRT-based target detector in (\ref{GLRT_PMR_UK}), termed the \emph{Passive MIMO Radar Relaxed GLRT with unknown noise variance and Known signal format (PMR-RGLRT-UK)}, is given by \begin{eqnarray} \xi_{uk} & = & \frac{\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_r^i (\bm{\phi}_r^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big]}{\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big]} \nonumber \\ & & \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{uk}, \end{eqnarray} where the scalar $E_{sr}^i = ||\bm{s}_s^i||^2 + ||\bm{s}_r^i||^2$. \subsubsection{Active (known signal) MIMO Radar GLRT (AMR-GLRT)} The binary hypothesis test between the target-absent hypothesis ($\mathcal{H}_0$) and the target-present hypothesis ($\mathcal{H}_1$) in an active radar network (where the transmitted signals are known) can be formulated as \begin{eqnarray}\label{Hypotheses_test_AMR} \mathcal{H}_0 : & \bm{s}_s^{ij} & = \bm{n}_s^{ij} \nonumber \\ \mathcal{H}_1 : & \bm{s}_s^{ij} & = \mu_s^{ij}\bm{u}^i + \bm{n}_s^{ij}, \end{eqnarray} for $i = 1, 2, \cdots, N_t$ and $j = 1, 2, \cdots, N_r$, where the transmitted signal $\bm{u}^i$ is assumed known and the channel coefficients ${\mu}_s^{ij}$ are deterministic unknowns. The GLRT for (\ref{Hypotheses_test_AMR}) is given by \cite{AMRGLRT} \begin{eqnarray} \xi_{amr} & = & \frac{1}{\sigma^2}\sum_{i=1}^{N_t}\sum_{j=1}^{N_r} |(\bm{u}^i)^H\bm{s}_s^{ij}|^2 \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{amr}, \end{eqnarray} where $\kappa_{amr}$ denotes a threshold corresponding to a desired false alarm probability.
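Continuing the sketch above (and reusing the \texttt{lambda1} helper), the ratio statistic $\xi_{uk}$ and, for comparison, the matched-filter statistic $\xi_{amr}$ could be computed as follows; again, this is our own illustrative code, not an implementation from the literature:
\begin{verbatim}
import numpy as np

def pmr_rglrt_uk(phi_1, phi_r, G, E_sr):
    # phi_1[i] = [Phi_s^i, Phi_r^i] (N x 2N_r), phi_r[i] (N x N_r),
    # E_sr[i] = ||s_s^i||^2 + ||s_r^i||^2 (a scalar per transmitter).
    num = den = 0.0
    for P1, Pr, Gi, Ei in zip(phi_1, phi_r, G, E_sr):
        B = Gi.conj().T @ Gi
        num += Ei - lambda1(Gi.conj().T @ Pr @ Pr.conj().T @ Gi, B)
        den += Ei - lambda1(Gi.conj().T @ P1 @ P1.conj().T @ Gi, B)
    return num / den

def amr_glrt(u, s_s, sigma2):
    # u[i]: known transmitted samples; s_s[i][j]: surveillance snapshots.
    return sum(abs(np.vdot(ui, sij)) ** 2
               for ui, s_i in zip(u, s_s) for sij in s_i) / sigma2
\end{verbatim}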
\subsubsection{Passive MIMO Radar GLRT without using the signal format information (PMR-GLRT)} The GLRT for target detection in PMR networks that does not employ knowledge of the signal format for the hypothesis testing problem given in (\ref{Hypotheses_test_PMR}) was derived in \cite{Hack_PMR} and is given by \begin{eqnarray} \xi_{pmr} & = & \frac{1}{\sigma^2}\sum_{i=1}^{N_t} \bigg[\lambda_1^*\left(\bm{\phi}_1^i (\bm{\phi}_1^i)^H\right) - \lambda_1^*\left(\bm{\phi}_r^i (\bm{\phi}_r^i)^H \right) \bigg] \nonumber \\ & & \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{pmr}, \end{eqnarray} where $\kappa_{pmr}$ denotes a threshold corresponding to a desired false alarm probability and $\lambda_1^*(\bm{A})$ denotes the largest eigenvalue of matrix $\bm{A}$. \subsubsection{Passive Source Localization GLRT without using the signal format information (PSL-GLRT)} The GLRT for target detection in PSL networks that does not employ knowledge of the signal format for the hypothesis testing problem given in (\ref{Hypotheses_test_PSL}) was derived in \cite{Hack_noRef} and is given by \begin{eqnarray} \xi_{psl} & = & \frac{1}{\sigma^2}\sum_{i=1}^{N_t} \lambda_1^*\left(\bm{\phi}_s^i (\bm{\phi}_s^i)^H\right) \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{psl}, \end{eqnarray} where $\kappa_{psl}$ denotes a threshold corresponding to a desired false alarm probability. Table \ref{sec4_table1} provides the test statistics of the various considered GLRT-based detectors. \begin{table*}[t] \begin{center} \begin{tabular}{|c|c|c|} \hline {\bf Abbreviation} & {\bf Test Statistic} & {\bf Corresponding References} \\ \hline \emph{AMR-GLRT} & $\frac{1}{\sigma^2}\sum_{i=1}^{N_t}\sum_{j=1}^{N_r} |(\bm{u}^i)^H\bm{s}_s^{ij}|^2 $ & \cite{AMRGLRT} \\ \hline \emph{PMR-GLRT} & $\frac{1}{\sigma^2}\sum_{i=1}^{N_t} \bigg[\lambda_1^*\left(\bm{\phi}_1^i (\bm{\phi}_1^i)^H\right) - \lambda_1^*\left(\bm{\phi}_r^i (\bm{\phi}_r^i)^H \right) \bigg]$ & \cite{Hack_PMR, Hack_PMR_conf} \\ \hline \emph{PSL-GLRT} & $\frac{1}{\sigma^2}\sum_{i=1}^{N_t} \lambda_1^*\left(\bm{\phi}_s^i (\bm{\phi}_s^i)^H\right)$ & \cite{Hack_noRef} \\ \hline \emph{PMR-RGLRT-K} & $\frac{1}{\sigma^2}\sum_{i=1}^{N_t} \bigg[\lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_r^i (\bm{\phi}_r^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \bigg]$ & Proposed in this paper \\ \hline \emph{PSL-RGLRT-K} & $\frac{1}{\sigma^2}\sum_{i=1}^{N_t} \lambda_1\left((\bm{G}^i)^H\bm{\phi}_s^i (\bm{\phi}_s^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right)$ & Proposed in this paper \\ \hline \emph{PMR-RGLRT-UK} & $ \frac{\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_r^i (\bm{\phi}_r^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big]}{\sum_{i=1}^{N_t} \Big[ E_{sr}^i - \lambda_1\left((\bm{G}^i)^H\bm{\phi}_1^i (\bm{\phi}_1^i)^H\bm{G}^i, (\bm{G}^i)^H\bm{G}^i \right) \Big]} $ & Proposed in this paper \\ \hline \end{tabular} \caption{Test statistics of various GLRT target detectors.}\label{sec4_table1} \end{center} \end{table*} \subsection{Simulation scenario}\label{sim_setup} \noindent For a fair comparison, we follow the simulation setup of \cite{Hack_PMR}. We consider a PMR network with $N_t = 2$ transmit stations and $N_r = 3$ receive stations. Following \cite{Hack_PMR}, we fix $||\bm{u}^i||^2 = N$. The transmitted signal samples $\bm{u}^i$ are generated according to the chosen signal format ((\ref{LinearRep}) for linear modulations, (\ref{OFDMsym}) for OFDM) across all transmit stations.
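As one possible illustration of this signal generation, the following Python sketch builds a banded matrix $\bm{G}$ of raised-cosine pulse samples and forms $\bm{u} = \bm{G}\bm{b}$ for random BPSK symbols. The pulse centering and normalization here are our own assumptions for the sketch, not specifications taken from the paper:
\begin{verbatim}
import numpy as np

def raised_cosine(t, beta, Tsym):
    # Raised-cosine impulse response; assumes 0 < beta <= 1.
    denom = 1.0 - (2.0 * beta * t / Tsym) ** 2
    safe = np.where(denom == 0.0, np.nan, denom)
    out = np.sinc(t / Tsym) * np.cos(np.pi * beta * t / Tsym) / safe
    # L'Hopital limit at the two singular points |t| = Tsym/(2 beta)
    return np.where(np.isnan(out),
                    (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta)), out)

def build_G(P, L, M, beta=0.22):
    # LP x (L+M-1) banded matrix of pulse samples, mirroring (TransMatrix);
    # the pulse is (by assumption) centered in its M*Tsym support.
    Tsym = 1.0
    Ts = Tsym / P
    g = [raised_cosine(np.arange(P) * Ts + k * Tsym - M * Tsym / 2.0,
                       beta, Tsym) for k in range(M)]
    G = np.zeros((L * P, L + M - 1))
    for row in range(L):
        for k in range(M):
            G[row * P:(row + 1) * P, row + k] = g[k]
    return G

P, L, M = 4, 10, 8                 # samples/symbol, symbols, pulse length
G = build_G(P, L, M)
rng = np.random.default_rng(1)
b = rng.choice([-1.0, 1.0], size=L + M - 1)   # BPSK symbols
u = G @ b                                     # samples, as in u^i = G^i b^i
\end{verbatim}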
The reference and surveillance signal samples are generated on each Monte Carlo trial according to the signal model given in (\ref{basic_signal}). As in \cite{Hack_PMR}, the reference channel coefficients, $\bm{\mu}_r^i$, are randomly drawn from a $\mathcal{CN}(\bm{0}_{N_r}, \bm{I}_{N_r})$ distribution on each trial under $\mathcal{H}_0$ and $\mathcal{H}_1$, and then scaled to achieve a desired direct-path signal-to-noise ratio ($\mbox{DNR}^i_{{avg}}$) according to \begin{eqnarray} \mbox{DNR}^i_{{avg}} & = & \frac{||\bm{\mu}_r^i||^2}{N_r\sigma^2} \end{eqnarray} on each trial, where $\bm{\mu}_r^i = [\mu_r^{i1}, \cdots, \mu_r^{iN_r}]^T$ and $|\mu_r^{ij}|^2/\sigma^2$ is the DNR of the $ij^{th}$ reference channel. Surveillance channel coefficients are similarly drawn from a $\mathcal{CN}(\bm{0}_{N_r}, \bm{I}_{N_r})$ distribution and scaled to achieve a desired surveillance signal-to-noise ratio ($\mbox{SNR}^i_{{avg}}$) according to \begin{eqnarray} \mbox{SNR}^i_{{avg}} & = & \frac{||\bm{\mu}_s^i||^2}{N_r\sigma^2} \end{eqnarray} on each trial, where $\bm{\mu}_s^i = [\mu_s^{i1}, \cdots, \mu_s^{iN_r}]^T$ and $|\mu_s^{ij}|^2/\sigma^2$ is the SNR of the $ij^{th}$ surveillance channel. For simplicity, we assume that $\mbox{SNR}^i_{{avg}} = \mbox{SNR}_{{avg}}$ for all $i$, i.e., the average surveillance channel target-path SNR across receivers is the same for each transmit channel. Similarly, we assume $\mbox{DNR}^i_{{avg}} = \mbox{DNR}_{{avg}}$ and $\bm{G}^i = \bm{G}$ for all $i$. In our simulations, we consider cases where the transmitted signal is either linearly modulated or follows the OFDM modulation scheme. The transmitted signal is generated according to (\ref{LinearMod1}) in the case of linear modulation. The complex baseband symbols are chosen from a Binary Phase Shift Keying (BPSK) constellation, and $g^i(.)$ is a raised cosine pulse of roll-off factor $0.22$ and duration $8T_{sym}$. When the transmitted signal uses OFDM modulation, it is generated according to (\ref{OFDM_sym}). The number of sub-carriers in the OFDM symbol is $16$, the guard-interval duration $T_g$ is $0$ $\mu s$, and BPSK symbols are modulated on each sub-carrier of the OFDM symbol. For all the considered target detectors, the detection threshold that achieves a probability of false alarm $(P_f)$ of $10^{-3}$ is determined empirically using $10^4$ trials under $\mathcal{H}_0$, and the probability of detection ($P_d$) is estimated using $10^4$ trials under $\mathcal{H}_1$. The number of symbols used for target detection in the case of the linearly modulated transmitted signal is $10$ (total of $10P$ samples), while in the case of the OFDM modulated transmitted signal, we use $1$ OFDM symbol (total of $16P$ samples). The BPSK symbols used in the generation of the transmitted signal are randomly generated for each Monte Carlo simulation run. \subsection{Numerical results} \noindent \subsubsection{Dependence on $\mbox{SNR}_{avg}$, $\mbox{DNR}_{avg}$ and $P$ } Figures \ref{Pd_bpsk_results_dnr_ne10dB}--\ref{Pd_ofdm_results_dnr_ne5dB} show the $P_d$ curves as a function of $\mbox{SNR}_{avg}$ for $\mbox{DNR}_{avg} = \{-10, -5\}$ dB and for different values of samples per symbol, $P$. As we can see from the numerical results, the proposed target detectors significantly outperform the GLRT-based target detectors that do not use the available signal format information under the considered values of $\mbox{DNR}_{avg}$ for both the PMR and PSL networks.
We also see that the GLRTs for target detection in the PMR networks offer better performance than the GLRT for target detection in PSL networks. This performance gain in the PMR networks is mainly due to the availability of the direct-path reference channel signals. The direct-path reference channel signals provide some knowledge about the transmitted signal depending on the received strength of these signals \cite{Hack_PMR}. As we see in Figures \ref{Pd_bpsk_results_dnr_ne10dB}--\ref{Pd_ofdm_results_dnr_ne5dB}, the detection performance of relaxed GLRT-based target detectors improves significantly with increasing $P$ when compared to \emph{PMR-GLRT} and \emph{PSL-GLRT}\footnote{The target detection performance of \emph{PMR-GLRT} and \emph{PSL-GLRT} improves with an increasing number of samples. However, it improves at a much slower rate when compared to the proposed relaxed GLRT-based target detectors.}. This performance gain is primarily due to the lower number of parameters that need to be estimated for the GLRT in the known signal format case. For a sufficiently large value of $P$, we can also see that the performance of the proposed target detectors is close to that of an active radar, which has complete knowledge of the transmitted signal. Also, at higher values of $\mbox{DNR}_{avg}$, the proposed target detectors achieve near \emph{AMR-GLRT}-level performance for smaller values of $P$. Finally, we observe no significant loss in the detection performance from not knowing the noise variance in the proposed target detectors for all the considered cases. \subsubsection{Performance comparison with unrelaxed GLRT (PMR-GLRT-K)} In our work, we introduced a relaxation on the complex symbols $\bm{b}^i$ to make the search for the MLE tractable. We now compare the performance of the relaxed GLRT to the exact unrelaxed GLRT to study the performance loss caused by using the relaxation. The exact GLRT that uses the signal format information is obtained by searching across all possible sequences of $\bm{b}^i$ and finding the sequence that maximizes the likelihood. The \emph{Passive MIMO Radar GLRT using the signal format information} (abbreviated as \emph{PMR-GLRT-K}) is given by \begin{eqnarray} \max_{\{\bm{\mu}_s, \bm{\mu}_r, \bm{b}\} \in \mathbb{C}^{N_rN_t} \times \mathbb{C}^{N_rN_t} \times \mathbb{A}^{\mathcal{B}} } l_1(\bm{\mu}_s, \bm{\mu}_r, \bm{b}|\bm{s}) \nonumber \\ - \max_{\{ \bm{\mu}_r, \bm{b}\} \in \mathbb{C}^{N_rN_t} \times \mathbb{A}^{\mathcal{B}} } l_0( \bm{\mu}_r, \bm{b}|\bm{s}) \LRT{\mathcal{H}_1}{\mathcal{H}_0} \kappa_{pmrk}, \end{eqnarray} where $\kappa_{pmrk}$ denotes a threshold corresponding to a desired false alarm probability and $\mathbb{A}$ is the finite set of complex symbols from which the symbols in $\bm{b}^i$ are taken. For this comparison, the transmitted signal is assumed to be an OFDM signal and is generated according to (\ref{OFDM_sym}). The number of sub-carriers in the OFDM symbol is $8$, $T_g$ is $0$ $\mu s$, and BPSK symbols are modulated on each sub-carrier of the OFDM symbol. We use $1$ OFDM symbol (total of $8P$ samples) for target detection. The reference and surveillance signal samples are generated on each Monte Carlo trial according to the approach described in Section \ref{sim_setup}. The direct-path signal-to-noise ratio, $\mbox{DNR}_{avg}$, is $-10$ dB. The detection threshold that achieves a $P_f$ of $10^{-3}$ is determined empirically using $10^4$ trials under $\mathcal{H}_0$, and $P_d$ is estimated using $10^4$ trials under $\mathcal{H}_1$.
Since $\bm{b}^i \in \mathbb{A}^{8}$, we search across all $2^8$ possible sequences to get the MLE of $\bm{b}^i$. Figure \ref{Pd_bpsk_comp_results_dnr_ne10dB} shows the performance loss incurred by the relaxation for different values of $P$. We can see from the results that the performance loss in target detection due to the relaxation is relatively small, and with an increasing number of samples per symbol, there appears to be no performance loss from using the relaxation.
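For reference, the exhaustive maximization over symbol vectors used by the unrelaxed detector can be sketched as follows (our own illustration; after concentrating the likelihood over the channel coefficients, each transmitter's contribution reduces to maximizing a Rayleigh-quotient-like ratio over the $2^8$ candidate BPSK vectors):
\begin{verbatim}
import numpy as np
from itertools import product

def best_b_exhaustive(Phi, G):
    # Maximize ||Phi^H G b||^2 / ||G b||^2 over b in {-1,+1}^K,
    # where K = G.shape[1] (K = 8 in the comparison above).
    best_val, best_b = -np.inf, None
    for bits in product([-1.0, 1.0], repeat=G.shape[1]):
        b = np.array(bits)
        Gb = G @ b
        val = (np.linalg.norm(Phi.conj().T @ Gb) ** 2
               / np.linalg.norm(Gb) ** 2)
        if val > best_val:
            best_val, best_b = val, b
    return best_b, best_val
\end{verbatim}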
\section{Introduction} Quantum walks are a universal model of quantum computation \cite{Childs2009} that have been used to develop a variety of quantum algorithms, including algorithms for searching \cite{SKW2003}, element distinctness \cite{Ambainis2004}, and boolean formula evaluation \cite{FGG2008}. In these algorithms, the quantum walk occurs on unweighted graphs, or equivalently on graphs where each edge has unit weight. Despite this success of quantum walks on unweighted graphs, several papers have identified improvements for quantum walks when using weighted graphs. For continuous-time quantum walks, this includes state transfer \cite{Christandl2004,Kendon2011}, universal mixing \cite{Carlson2007}, and searching \cite{Wong16,Wong22}. For discrete-time quantum walks, Szegedy's quantum walk \cite{Szegedy2004} is naturally defined on weighted graphs by quantizing Markov chains with arbitrary transition amplitudes. Although Szegedy's quantum walk is equivalent to a coined quantum walk under certain conditions (\cite{Magniez2012} in the context of searching, and \cite{Wong26} for undirected and unweighted Markov chains), this equivalence has received less attention than it deserves. As a result, coined quantum walks are usually not understood, investigated, or interpreted as walks on weighted graphs. In this paper, we remedy this by giving an explicit definition of a coined quantum walk on a weighted graph that is inspired by Szegedy's quantum walk. When this coined quantum walk on a weighted graph uses the flip-flop shift to hop between vertices, it is exactly equivalent to Szegedy's quantum walk. When it uses the moving shift, however, it differs and is a new type of walk. In the next section, we discuss this in detail by reviewing the definition of the coined quantum walk, Szegedy's quantum walk, and their precise equivalence \cite{Wong26}. Then we define the coined quantum walk on weighted graphs by taking inspiration from Szegedy's quantum walk. Then in section 3, we analyze the coin operator for the coined quantum walk on weighted graphs, showing that it no longer performs the inversion about the average that the unweighted coin does. In section 4, we prove that if multiple edges of an unweighted graph evolve identically, they can be replaced with a single weighted edge. This naturally leads to a generalization of the lackadaisical quantum walk \cite{Wong10}. The lackadaisical quantum walk is a quantum analogue of a lazy random walk, where each vertex is given an integer number $l$ of self-loops, so the particle has some amplitude of staying put. Typically, the $l$ self-loops at each vertex of a lackadaisical quantum walk evolve identically, so they can be replaced by a single self-loop of weight $l$ at each vertex. Then when $l$ is an integer, it reproduces the behavior of the original lackadaisical quantum walk. But now $l$ can also take non-integer real values, thus defining a generalization of the lackadaisical quantum walk to real-valued weights. We analyze this generalized lackadaisical quantum walk for two problems. In section 5, we consider the walk on the line or one-dimensional (1D) lattice. For the standard lackadaisical quantum walk with an integer number of self-loops per vertex, this was investigated by \cite{Wang2016}, who showed that the ballistic dispersion can be faster than, slower than, or the same as that of the loopless walk.
Replacing the $l$ integer self-loops with a single self-loop of weight $l$, we show that the generalized lackadaisical quantum walk on the line is exactly equivalent to a continuous deformation of the three-state Grover walk proposed by {\v{S}}tefa{\v{n}}{\'a}k \textit{et al} \cite{Stefanak2012}. In section 6, we investigate quantum search on the complete graph, which is the quantum walk-formulation of Grover's unstructured search algorithm \cite{Grover1996}. This was the problem that introduced lackadaisical quantum walks \cite{Wong10}, and with $l$ integer self-loops per vertex, the success probability is improved over the loopless value when $l \le 5$ \cite{Wong10}. Generalizing the lackadaisical quantum walk so that $l$ can take non-integer values, we determine the runtime and success probability of the algorithm for all real $l \ge 0$, and this includes qualitatively new behavior when $l < 1/3$. This reveals that the success probability is improved over the loopless case when $l < 3 + 2\sqrt{2} \approx 5.828$. Finally, we end in section 7 with concluding remarks. \section{Definition} The coined quantum walk was first introduced by Meyer \cite{Meyer1996a} in the context of quantum cellular automata, who showed that an internal degree of freedom allowed the particle to evolve nontrivially. This was later framed in the language of quantum walks by Aharonov \textit{et al}~\cite{Aharonov2001}. The particle hops on the $N$ vertices of a graph, and the internal coin state identifies the directions in which the particle can hop. We write the states as $\ket{v} \otimes \ket{u} = \ket{vu}$ to denote a particle at vertex $v$ pointing towards vertex $u$. The quantum walk is effected by a coin flip $C$ and a shift $S$, so one step of the walk is \begin{equation} \label{eq:U} U = SC. \end{equation} The coin operator $C$ applies a coin $C_v$ to each vertex $v$: \begin{equation} \label{eq:C} C = \sum_v \ketbra{v}{v} \otimes C_v, \end{equation} where $C_v$ is typically the Grover diffusion coin \cite{Moore2002} \begin{equation} \label{eq:Cv} C_v = 2 \ketbra{s_v}{s_v} - I \end{equation} that reflects across \begin{equation} \label{eq:sv_unweighted} \ket{s_v} = \frac{1}{\sqrt{\deg(v)}} \sum_{u \sim v} \ket{u} \quad {\rm (unweighted).} \end{equation} This corresponds to a quantum walk on an unweighted graph because $\ket{s_v}$ is the uniform, unweighted superposition of directions at $v$. For the shift $S$, two different operators are often used. The first is the ``moving shift'' on a lattice, where a particle hops and keeps pointing in the same direction \cite{Meyer1996a,Ambainis2001}. For example, on the line, a particle pointing right will hop to the right and continue pointing right, so $S\ket{0,1} = \ket{1,2}$. The second is the ``flip-flop shift,'' where the particle hops and turns around \cite{AKR2005}, so $S\ket{0,1} = \ket{1,0}$. The flip-flop shift is more straightforward than the moving shift on nonlattice graphs, and it is needed for fast quantum search on lattices \cite{AKR2005}. To define a coined quantum walk on weighted graphs, we will need to generalize $\ket{s_v}$ \eref{eq:sv_unweighted} to weighted graphs, which in turn changes the coin operator \eref{eq:Cv}. The shift does not need to be changed. We can determine $\ket{s_v}$ for a weighted graph using Szegedy's quantum walk. 
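For readers who wish to experiment numerically, one step of the unweighted walk can be sketched in a few lines of Python (a minimal illustration of our own; the data layout and function names are not from any prior implementation):
\begin{verbatim}
import numpy as np

def grover_coin_step(adj, state):
    # adj: dict vertex -> list of neighbors; state: dict (v,u) -> amplitude,
    # where (v,u) encodes a particle at v pointing towards u.
    new = {}
    for v, nbrs in adj.items():
        amps = np.array([state.get((v, u), 0.0) for u in nbrs])
        mean = amps.sum() / len(nbrs)
        # Grover coin 2|s_v><s_v| - I: invert each amplitude about the mean
        for u, a in zip(nbrs, 2.0 * mean - amps):
            new[(v, u)] = a
    # flip-flop shift: S|v,u> = |u,v>
    return {(u, v): a for (v, u), a in new.items()}

# Example: one step on the 4-cycle, starting at vertex 0 pointing to vertex 1
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
state = grover_coin_step(adj, {(0, 1): 1.0})
\end{verbatim}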
\begin{figure} \begin{center} \subfloat[]{ \includegraphics{graph} \label{fig:graph} } \quad \quad \quad \subfloat[]{ \includegraphics{graph_szegedy} \label{fig:graph_szegedy} } \caption{\label{fig:graph_together} (a) A weighted graph and (b) its bipartite double cover.} \end{center} \end{figure} Szegedy's quantum walk \cite{Szegedy2004} is a quantization of a classical random walk or Markov chain. For example, consider the weighted graph in \fref{fig:graph}. Its classical transition probabilities are \begin{eqnarray*} P_{01} = \frac{w}{w+3}, \quad P_{02} = P_{03} = P_{04} = \frac{1}{w+3}, \\ P_{10} = P_{20} = P_{30} = P_{40} = 1, \end{eqnarray*} where $P_{ij}$ is the probability of transitioning from vertex $i$ to $j$, and the remaining transitions have probability 0. Szegedy's quantum walk occurs on the edges of the bipartite double cover of the original graph, which is shown in \fref{fig:graph_szegedy}. If we label the partite sets $X$ and $Y$, the walk evolves by repeated applications of \[ W = R_2 R_1, \] where the operators \begin{eqnarray} R_1 = 2 \sum_{x \in X} \ketbra{\phi_x}{\phi_x} - I, \label{eq:R1} \\ R_2 = 2 \sum_{y \in Y} \ketbra{\psi_y}{\psi_y} - I, \nonumber \end{eqnarray} are reflections across the states \begin{eqnarray*} \ket{\phi_x} = \ket{x} \otimes \sum_{y \in Y} \sqrt{P_{xy}} \ket{y}, \\ \ket{\psi_y} = \sum_{x \in X} \sqrt{P_{yx}} \ket{x} \otimes \ket{y}, \end{eqnarray*} where $\ket{x} \otimes \ket{y} = \ket{xy}$ denotes the edge connecting vertices $x \in X$ and $y \in Y$. For example, for the weighted graph in \fref{fig:graph_together}, $R_1$ with $x = 0$ reflects across the state \[ \ket{\phi_0} = \ket{0} \otimes \frac{1}{\sqrt{w+3}} \left( \sqrt{w} \ket{1} + \ket{2} + \ket{3} + \ket{4} \right). \] For unweighted graphs, it is known that two steps of the coined quantum walk with the flip-flop shift are equivalent to one step of Szegedy's quantum walk \cite{Magniez2012,Wong26}. More precisely, using the bijection that an edge connecting $x \in X$ to $y \in Y$ (in the bipartite double cover) in Szegedy's quantum walk corresponds to a coined particle at vertex $x$ pointing to vertex $y$ (in the original graph), the relationships between the operators are $C = R_1$ and $SCS = R_2$ \cite{Wong26}. Then $U^2 = (SC)(SC) = (SCS)C = R_2 R_1 = W$, so two steps of the coined quantum walk are equal to one step of Szegedy's. To preserve this relationship for weighted graphs, we use $C = R_1$ to equate \eref{eq:C} and \eref{eq:R1}. Doing this, we identify \begin{equation} \label{eq:sv} \ket{s_v} = \sum_{u \sim v} \sqrt{P_{vu}} \ket{u} = \frac{1}{\sqrt{\sum_t w_{vt}}} \sum_{u \sim v} \sqrt{w_{vu}} \ket{u} \quad {\rm (weighted)}, \end{equation} where $w_{vu}$ is the weight of the edge connecting vertices $v$ and $u$. For example, for the weighted graph in \fref{fig:graph}, \[ \ket{s_0} = \frac{1}{\sqrt{w+3}} \left( \sqrt{w} \ket{1} + \ket{2} + \ket{3} + \ket{4} \right). \] When the graph is unweighted (i.e., $w_{vu} = 1$ for all $v \sim u$), \eref{eq:sv} reduces to the uniform superposition \eref{eq:sv_unweighted}. This yields our definition of a coined quantum walk on a weighted graph: We evolve by repeatedly applying \eref{eq:U} with the Grover diffusion coin $C_v$ \eref{eq:Cv} that reflects across the weighted state \eref{eq:sv}. 
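To make this concrete, the following short Python check (our own illustration, with an arbitrary weight $w$) builds $\ket{s_0}$ and $C_0 = 2\ketbra{s_0}{s_0} - I$ for the example graph and verifies that the weighted coin is a reflection that fixes $\ket{s_0}$:
\begin{verbatim}
import numpy as np

# Weighted Grover coin at vertex 0 of the example graph: neighbors 1,2,3,4
# with edge weights (w, 1, 1, 1). The value of w here is arbitrary.
w = 2.5
s0 = np.array([np.sqrt(w), 1.0, 1.0, 1.0]) / np.sqrt(w + 3.0)
C0 = 2.0 * np.outer(s0, s0) - np.eye(4)

assert np.allclose(C0 @ C0, np.eye(4))  # a reflection squares to the identity
assert np.allclose(C0 @ s0, s0)         # |s_0> is left invariant
\end{verbatim}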
This definition of a coined quantum walk on a weighted graph preserves the relationships $C = R_1$ and $SCS = R_2$ when $S$ is the flip-flop shift, so two steps of the coined quantum walk with the flip-flop shift are equal to one step of Szegedy's, even for weighted graphs. If the moving shift is used, however, it is a different walk. \section{Weighted Coin and Inversions} Now let us examine in detail how the weighted coin operator reflects across \eref{eq:sv} and how it differs from the unweighted coin. As a concrete example, let us again consider the graph in \fref{fig:graph}, and say the particle is at vertex $0$ and points to vertices $1$, $2$, $3$, and $4$ with respective amplitudes $\alpha_1$, $\alpha_2$, $\alpha_3$, and $\alpha_4$. That is, the particle is in the state $\ket{\psi} = \ket{0} \otimes \ket{\psi_0}$ with \[ \ket{\psi_0} = \alpha_1 \ket{1} + \alpha_2 \ket{2} + \alpha_3 \ket{3} + \alpha_4 \ket{4}. \] The weighted coin acts on this state by \begin{eqnarray*} C \ket{\psi} &= \ket{0} \otimes C_0 \ket{\psi_0} \\ &= \ket{0} \otimes \left( 2 \ket{s_0} \braket{s_0}{\psi_0} - \ket{\psi_0} \right) \\ &= \ket{0} \otimes \left( 2 \ket{s_0} \frac{\sqrt{w} \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4}{\sqrt{w+3}} - \ket{\psi_0} \right) \\ &= \ket{0} \otimes \left( 2 \bar{\alpha} \sqrt{w+3} \ket{s_0} - \ket{\psi_0} \right) \\ &= \ket{0} \otimes \Big[ (2\bar{\alpha}\sqrt{w} - \alpha_1) \ket{1} + (2\bar{\alpha} - \alpha_2) \ket{2} + (2\bar{\alpha} - \alpha_3) \ket{3} \\ &\quad\quad\quad\quad + (2\bar{\alpha} - \alpha_4) \ket{4} \Big], \end{eqnarray*} where \[ \bar{\alpha} = \frac{\sqrt{w} \alpha_1 + \alpha_2 + \alpha_3 + \alpha_4}{w+3}. \] Since $2\bar{\alpha}\sqrt{w} - \alpha_1$ is an inversion of $\alpha_1$ about $\bar{\alpha}\sqrt{w}$, we see that the amplitude pointing towards $\ket{1}$ is inverted about $\bar{\alpha}\sqrt{w}$, and the other amplitudes are inverted about $\bar{\alpha}$. Generalizing this to an arbitrary weighted graph, if the state of the system is \[ \ket{\psi} = \sum_v \ket{v} \otimes \sum_{u \sim v} \alpha_{vu} \ket{u}, \] then the coin inverts the amplitudes to yield \begin{equation} \label{eq:invert} C \ket{\psi} = \sum_v \ket{v} \otimes \sum_{u \sim v} ( 2\bar\alpha_v \sqrt{w_{vu}} - \alpha_{vu} ) \ket{u}, \end{equation} where \begin{equation} \label{eq:bar} \bar\alpha_v = \frac{\sum_{u \sim v} \sqrt{w_{vu}} \alpha_{vu}}{\sum_{u \sim v} w_{vu}}. \end{equation} If the graph is unweighted, so $w_{vu} = 1$ for all $u \sim v$, then $\bar{\alpha}_v$ is the average of the amplitudes pointing from $v$ to its neighbors. Then the amplitudes are inverted about their mean (see Lemma 2 of \cite{Wong23}), and this is akin to the ``inversion about the mean'' of Grover's algorithm \cite{Grover1996}. With weights, $\bar{\alpha}_v$ is no longer the mean, so the coin is no longer an inversion about the mean. \section{\label{sec:reduction} Identically-Evolving Edges to a Weighted Edge} In this section, we show that some quantum walks on unweighted graphs can be reinterpreted as quantum walks on weighted graphs by combining identically-evolving amplitudes. When we apply this result to lackadaisical quantum walks, it allows us to replace the $l$ self-loops at a vertex with one self-loop of weight $l$. Say we have an unweighted graph where a vertex $v$ has degree $d$, so a particle at vertex $v$ can point in $d$ different directions. 
Then the state of a particle at vertex $v$ can be written as \begin{equation} \label{eq:psi_full} \ket{\psi} = \ket{v} \otimes \left( \alpha_1 \ket{1} + \alpha_2 \ket{2} + \dots + \alpha_{d} \ket{d} \right). \end{equation} The Grover diffusion coin transforms this to \begin{equation} \label{eq:Cpsi_full} C \ket{\psi} = \ket{v} \otimes \left[ (2 \bar{\alpha} - \alpha_1) \ket{1} + \dots + (2 \bar{\alpha} - \alpha_d) \ket{d} \right], \end{equation} where \begin{equation} \label{eq:bar_full} \bar{\alpha} = \frac{\alpha_1 + \alpha_2 + \dots + \alpha_d}{d} \end{equation} is the average of the amplitudes \cite{Wong23}. So it inverts each amplitude about the average. Now say $k$ of the amplitudes evolve identically due to the symmetry of the graph. For example, for a lackadaisical quantum walk, the amplitudes along the $l$ self-loops at a vertex often evolve identically. Without loss of generality, say this corresponds to the first $k$ basis states, i.e., \[ \alpha_1 = \alpha_2 = \dots = \alpha_k. \] Using this, let us reduce \eref{eq:psi_full}, \eref{eq:Cpsi_full}, and \eref{eq:bar_full} to a subspace. The state of the system \eref{eq:psi_full} can be written as \begin{eqnarray*} \ket{\psi} &= \ket{v} \otimes \left[ \alpha_1 ( \ket{1} + \dots + \ket{k} ) + \alpha_{k+1} \ket{k+1} + \dots + \alpha_d \ket{d} \right] \\ &= \ket{v} \otimes \left[ \alpha_1 \sqrt{k} \ket{\sigma} + \alpha_{k+1} \ket{k+1} + \dots + \alpha_d \ket{d} \right], \\ &= \ket{v} \otimes \left[ \alpha_\sigma \ket{\sigma} + \alpha_{k+1} \ket{k+1} + \dots + \alpha_d \ket{d} \right], \end{eqnarray*} where \[ \ket{\sigma} = \frac{1}{\sqrt{k}} \left( \ket{1} + \dots + \ket{k} \right) \] is the uniform superposition of the identically-evolving states, and \[ \alpha_\sigma = \alpha_1 \sqrt{k} \] is its corresponding amplitude. So we have written the state $\ket{\psi}$ in a $(d-k+1)$-dimensional subspace. Now for \eref{eq:Cpsi_full}, we can reduce it to the same subspace: \begin{eqnarray*} C \ket{\psi} &= \ket{v} \otimes \Big[ (2\bar{\alpha} - \alpha_1) ( \ket{1} + \dots + \ket{k} ) + (2\bar{\alpha} - \alpha_{k+1}) \ket{k+1} + \dots \\ &\quad\quad\quad\quad + (2\bar{\alpha} - \alpha_d) \ket{d} \Big] \\ &= \ket{v} \otimes \Big[ (2\bar{\alpha} - \alpha_1) \sqrt{k} \ket{\sigma} + (2\bar{\alpha} - \alpha_{k+1}) \ket{k+1} + \dots \\ &\quad\quad\quad\quad + (2\bar{\alpha} - \alpha_d) \ket{d} \Big], \\ &= \ket{v} \otimes \Big[ (2\bar{\alpha}\sqrt{k} - \alpha_\sigma) \ket{\sigma} + (2\bar{\alpha} - \alpha_{k+1}) \ket{k+1} + \dots \\ &\quad\quad\quad\quad + (2\bar{\alpha} - \alpha_d) \ket{d} \Big]. \end{eqnarray*} Finally, we can rewrite \eref{eq:bar_full} as \[ \bar{\alpha} = \frac{\alpha_1 k + \alpha_{k+1} + \dots + \alpha_d}{d} = \frac{\alpha_\sigma \sqrt{k} + \alpha_{k+1} + \dots + \alpha_d}{d}. \] Comparing this with the inversion of a weighted graph in \eref{eq:invert} and \eref{eq:bar}, we see that the uniform superposition of identically-evolving edges $\ket{\sigma}$ evolves exactly like an edge of weight $k$, while the remaining edges continue to be unweighted. Thus $k$ identically-evolving, unweighted edges can be replaced by a single edge of weight $k$. Hence for a lackadaisical quantum walk, if the $l$ self-loops at a vertex evolve identically, they can be replaced by a single self-loop of weight $l$. This generalizes the values that $l$ can take from the integers to the reals. In the next two sections, we apply this reduction and generalization of the lackadaisical quantum walk to two examples.
The first is the quantum walk on the line, whose generalization is exactly equivalent to another type of quantum walk. The second is quantum search on the complete graph, where additional analysis is needed to get the runtime and success probability for all values of $l$.

\section{Walk on the Line}

\begin{figure}
\begin{center}
\subfloat[]{ \includegraphics{line_loops} \label{fig:line_loops} }
\subfloat[]{ \includegraphics{line_loop} \label{fig:line_loop} }
\caption{A one-dimensional line or lattice with (a) $l$ self-loops per vertex and (b) a self-loop of weight $l$ at each vertex.}
\end{center}
\end{figure}

For our first example, consider a quantum walk on the line. The lackadaisical case of $l$ self-loops per vertex, as depicted in \fref{fig:line_loops}, was explored by \cite{Wang2016} with the moving shift. Their initial state was a particle localized at vertex $0$ with amplitude in the left- and right-moving coin states only, so there was no initial amplitude along the self-loops. They showed that the more self-loops, the faster the ballistic dispersion, with the velocities of the peaks equal to $\pm \sqrt{l/(l+2)}$. This is illustrated in \fref{fig:line_T100}, where the dashed red curve with $l = 10$ spreads more quickly than the loopless solid black curve.

\begin{figure}
\begin{center}
\includegraphics{line_T100}
\caption{\label{fig:line_T100} Probability distribution for a quantum walk on the line after 100 steps with the initial state $\ket{0} \otimes \left( \ket{-1} + i\ket{1} \right) / \sqrt{2}$. The solid black curve is without self-loops and only includes even locations since the probability at odd locations is zero. The dashed red curve is a lackadaisical quantum walk with $l = 10$ with the moving shift, and the dot-dashed green curve is with the flip-flop shift.}
\end{center}
\end{figure}

Due to the symmetry of the evolution, the amplitudes along the $l$ self-loops at a vertex evolve identically. Then from \sref{sec:reduction}, we can replace them with a single self-loop of weight $l$, as shown in \fref{fig:line_loop}. The results from \cite{Wang2016} carry over to this generalized case, such as the peak velocities being $\pm \sqrt{l/(l+2)}$, except $l$ can now take non-integer values.

Let us analyze this generalized lackadaisical quantum walk more closely so that we can connect it to other work. At each vertex, a particle can point to the left, to itself through the weighted self-loop (i.e., stay), or to the right. Let us call these coin states $\ket{L}$, $\ket{S}$, and $\ket{R}$. Then $\ket{s_v}$ \eref{eq:sv} for each vertex $v$ is
\[ \ket{s_v} = \frac{1}{\sqrt{l+2}} \left( \ket{L} + \sqrt{l} \ket{S} + \ket{R} \right). \]
Then in the $\{ \ket{L}, \ket{S}, \ket{R} \}$ basis, $C_v$ \eref{eq:Cv} can be written as a $3 \times 3$ matrix
\[ C_v = \frac{1}{l + 2} \left( \begin{array}{ccc} -l & 2\sqrt{l} & 2 \\ 2\sqrt{l} & l-2 & 2\sqrt{l} \\ 2 & 2\sqrt{l} & -l \\ \end{array} \right). \]
After applying $C_v$ at each vertex $v$, the moving shift is applied, completing a step of the quantum walk $U = SC$ \eref{eq:U}.

Now let us consider another type of quantum walk called a three-state quantum walk \cite{Inui2005}, which we will later prove is equivalent to the above lackadaisical quantum walk on the line. The three-state quantum walk is a quantum walk on the line with three internal coin states, one pointing left, one to stay, and one pointing right, corresponding to the coin basis states $\ket{L}$, $\ket{S}$, and $\ket{R}$.
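As a quick numerical illustration, the weighted walk just described can be simulated directly. The sketch below (assuming NumPy; the values of $l$ and the number of steps are illustrative) applies the $3 \times 3$ coin above and the moving shift, and estimates the velocity of the right-moving peak, which should be close to $\sqrt{l/(l+2)}$.
\begin{verbatim}
# Simulation sketch of the weighted-loop line walk (assumes NumPy).
import numpy as np

l, T = 10.0, 100                   # loop weight and number of steps
n = 2 * T + 1                      # positions -T..T
L, S, R = 0, 1, 2                  # coin basis |L>, |S>, |R>

Cv = np.array([[-l,            2*np.sqrt(l),  2           ],
               [ 2*np.sqrt(l), l - 2,         2*np.sqrt(l)],
               [ 2,            2*np.sqrt(l), -l           ]]) / (l + 2)

psi = np.zeros((n, 3), dtype=complex)
psi[T, L], psi[T, R] = 1/np.sqrt(2), 1j/np.sqrt(2)  # (|L> + i|R>)/sqrt(2)

for _ in range(T):
    psi = psi @ Cv.T                # coin at every vertex
    new = np.zeros_like(psi)
    new[:-1, L] = psi[1:, L]        # |L> moves one site to the left
    new[:, S]   = psi[:, S]         # |S> stays via the weighted loop
    new[1:, R]  = psi[:-1, R]       # |R> moves one site to the right
    psi = new

prob = (np.abs(psi)**2).sum(axis=1)
# Locate the right ballistic peak, away from the origin region.
x = np.argmax(prob[T + T//2:]) + T//2
print(x / T, np.sqrt(l / (l + 2)))  # both close to 0.91 for l = 10
\end{verbatim}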
The original three-state quantum walk \cite{Inui2005} simply applies the unweighted Grover diffusion coin \eref{eq:sv_unweighted} followed by the moving shift. {\v{S}}tefa{\v{n}}{\'a}k \textit{et al.}~\cite{Stefanak2012}, however, considered the 1D walk with deformations of the Grover coin, first by deforming its eigenvalues and second by deforming its eigenvectors. The second deformation uses the coin operator given by Eq.~(14) of \cite{Stefanak2012}:
\[ C_v = \left( \begin{array}{ccc} -\rho^2 & \rho\sqrt{2(1-\rho^2)} & 1-\rho^2 \\ \rho\sqrt{2(1-\rho^2)} & 2\rho^2-1 & \rho\sqrt{2(1-\rho^2)} \\ 1-\rho^2 & \rho\sqrt{2(1-\rho^2)} & -\rho^2 \\ \end{array} \right), \]
where $\rho \in [0,1]$ is a continuous parameter with the unweighted Grover diffusion coin corresponding to $\rho = 1/\sqrt{3}$. The coin is followed by the moving shift, and they showed that the velocity at which the peaks travel is $\pm \rho$. So as $\rho$ increases, the speed of the ballistic dispersion also increases. {\v{S}}tefa{\v{n}}{\'a}k \textit{et al.}'s deformed coin operator equals the 1D lackadaisical quantum walk's coin with
\[ \rho = \sqrt{\frac{l}{l+2}}. \]
So their walk is exactly a generalized lackadaisical quantum walk with a weighted self-loop, and all their results carry over. This gives a new interpretation of {\v{S}}tefa{\v{n}}{\'a}k \textit{et al.}'s result, not as an eigenvector deformation, but as a coined quantum walk on the line with a weighted self-loop at each vertex.

The improved dispersion of the lackadaisical quantum walk on the line necessitates the moving shift. In the loopless case, both the moving and flip-flop shifts effect the same evolution for the initially unbiased state \cite{Wong17}. With self-loops, however, their evolutions are significantly different, as shown in \fref{fig:line_T100}, where the dashed red curve corresponds to the moving shift and the dot-dashed green curve to the flip-flop shift, both with $l = 10$. The moving shift disperses faster than the loopless walk, whereas the flip-flop shift disperses more slowly.

\section{Search on the Complete Graph}

\begin{figure}
\begin{center}
\subfloat[]{ \includegraphics{complete_loops} \label{fig:complete_loops} }
\quad \quad
\subfloat[]{ \includegraphics{complete_loop} \label{fig:complete_loop} }
\caption{The complete graph of $N = 6$ vertices with (a) $l$ self-loops per vertex and (b) a self-loop of weight $l$ at each vertex. A marked vertex is indicated by a double circle. Identically evolving vertices are identically colored and labeled.}
\end{center}
\end{figure}

For our second example, consider search on the complete graph by lackadaisical quantum walk, which corresponds to Grover's unstructured search problem. This was first explored in \cite{Wong10} with $l$ self-loops per vertex, as illustrated in \fref{fig:complete_loops}. Due to the symmetry of the problem, at each vertex, the amplitudes along the $l$ self-loops evolve identically. Then from \sref{sec:reduction}, we can replace them with a single self-loop of weight $l$, as shown in \fref{fig:complete_loop}. The analysis from \cite{Wong10} carries over to our generalized walk with real $l$ when $l \ge 1/3$. We will give new analysis for $l < 1/3$, thus completely characterizing the runtime and success probability of the algorithm for all real $l \ge 0$.

First let us carry over the results from \cite{Wong10}. As shown there, from the symmetry of the problem, there are only two types of vertices, the marked $a$ vertex and the unmarked $b$ vertices, depicted in \fref{fig:complete_loop}.
A particle at the $a$ vertex can either point to itself or to $b$ vertices, and a particle at a $b$ vertex can either point to the $a$ vertex or to $b$ vertices (including itself). Then the system evolves in a 4D subspace spanned by
\begin{eqnarray*}
\ket{aa} = \ket{a} \otimes \ket{a}, \\
\ket{ab} = \ket{a} \otimes \frac{1}{\sqrt{N-1}} \sum_{b} \ket{b}, \\
\ket{ba} = \frac{1}{\sqrt{N-1}} \sum_b \ket{b} \otimes \ket{a}, \\
\ket{bb} = \frac{1}{\sqrt{N-1}} \sum_b \ket{b} \otimes \frac{1}{\sqrt{N+l-2}} \left( \sum_{b' \ne b} \ket{b'} + \sqrt{l} \ket{b} \right).
\end{eqnarray*}
In this basis, the initial state of the system is
\begin{eqnarray*}
\ket{\psi_0} = \frac{1}{\sqrt{N(N+l-1)}} \Big[ &\sqrt{l} \ket{aa} +\sqrt{N-1} \ket{ab} + \sqrt{N-1} \ket{ba} \\
&+ \sqrt{(N-1)(N+l-2)} \ket{bb} \Big].
\end{eqnarray*}
The system evolves by repeatedly applying
\[ U' = SCQ, \]
where $Q$ is an oracle query that flips the sign of the amplitudes at the marked vertex, $C$ is the Grover diffusion coin as before \eref{eq:C}, and $S$ is the flip-flop shift. So $U'$ performs an oracle query followed by a step of the quantum walk. In the 4D subspace, it is
\begin{equation*}
U' = \left( \!\! \begin{array}{cccc} \cos\theta & -\sin\theta & 0 & 0 \\ 0 & 0 & -\cos\phi & \sin\phi \\ -\sin\theta & -\cos\theta & 0 & 0 \\ 0 & 0 & \sin\phi & \cos\phi \\ \end{array} \!\! \right),
\end{equation*}
where
\[ \cos\theta = \frac{N-l-1}{N+l-1}, \quad {\rm and} \quad \sin\theta = \frac{2\sqrt{l(N-1)}}{N+l-1}, \]
and
\[ \cos\phi = \frac{N+l-3}{N+l-1}, \quad {\rm and} \quad \sin\phi = \frac{2\sqrt{N+l-2}}{N+l-1}. \]

\begin{figure}
\begin{center}
\subfloat[]{ \includegraphics{complete_N1024_less} \label{fig:complete_N1024_less} }
\quad
\subfloat[]{ \includegraphics{complete_N1024_greater} \label{fig:complete_N1024_greater} }
\caption{\label{fig:complete_N1024} Success probability for search on the complete graph of $N = 1024$ vertices and a self-loop of weight $l$ at each vertex. (a) The solid black curve is $l = 0$, the dashed red curve is $l = 0.1$, the dotted green curve is $l = 0.2$, the dot-dashed blue curve is $l = 0.4$, and the dot-dot-dashed orange curve is $l = 0.8$. (b) The solid black curve is $l = 1$, the dashed red curve is $l = 2.5$, the dotted green curve is $l = 5$, the dot-dashed blue curve is $l = 7.5$, and the dot-dot-dashed orange curve is $l = 10$.}
\end{center}
\end{figure}

The probability of finding the particle at the marked vertex $a$ after $t$ steps is given by $p(t) = \left| \langle aa | U'^t | \psi_0 \rangle \right|^2 + \left| \langle ab | U'^t | \psi_0 \rangle \right|^2 $. This is plotted in \fref{fig:complete_N1024} as the system evolves for various values of $l$. In the loopless case, the success probability reaches $1/2$ after $\pi\sqrt{N}/(2\sqrt{2})$ applications of $U'$. As $l$ increases, the maximum success probability increases until it reaches $1$ at $l = 1$. Further increasing $l$ causes the success probability to decrease. We also see in \fref{fig:complete_N1024_less} that when $l < 1/3$, the peak contains two humps, while when $l \ge 1/3$, the peak only has one hump. This transition at $l = 1/3$ will be proved rigorously below, and it foreshadows that the analysis from \cite{Wong10}, while applying to $l \ge 1/3$, does not apply to $l < 1/3$.

Since $U'$ is a $4 \times 4$ matrix, it has four eigenvectors and corresponding eigenvalues, given in \cite{Wong10}. The initial state can be expressed in terms of these eigenvectors, and then it is straightforward to determine the state after $t$ applications of $U'$.
For large $N$, it is
\[ U'^t \ket{\psi_0} \approx \left( \begin{array}{c} \frac{[1-\cos(\alpha t)] \sqrt{l(N-1)}}{(l+1)\sqrt{N+l-2}} \\ \frac{2l + (l-1) \cos(\alpha t) + \sqrt{(2N+l-3)(l+1)} \sin(\alpha t)}{2(l+1)\sqrt{N+l-2}} \\ \frac{2l + (l-1) \cos(\alpha t) - \sqrt{(2N+l-3)(l+1)} \sin(\alpha t)}{2(l+1)\sqrt{N+l-2}} \\ \frac{1 + \cos(\alpha t)}{l + 1} \\ \end{array} \right), \]
where
\[ \cos\alpha = \frac{N-2}{N+l-1}, \quad {\rm and} \quad \sin\alpha = \frac{\sqrt{(2N+l-3)(l+1)}}{N+l-1}. \]
Then the success probability $p(t)$ is asymptotically given by the sum of the squares of the first two terms:
\begin{eqnarray*}
p(t) &\approx \left[ \frac{[1-\cos(\alpha t)] \sqrt{l(N-1)}}{(l+1)\sqrt{N+l-2}} \right]^2 \\
&\quad+ \left[ \frac{2l + (l-1) \cos(\alpha t) + \sqrt{(2N+l-3)(l+1)} \sin(\alpha t)}{2(l+1)\sqrt{N+l-2}} \right]^2.
\end{eqnarray*}
For large $N$, this further simplifies to
\begin{eqnarray*}
p(t) &\approx \left[ \frac{[1-\cos(\alpha t)] \sqrt{l(N-1)}}{(l+1)\sqrt{N+l-2}} \right]^2 \\
&\quad+ \left[ \frac{\sqrt{(2N+l-3)(l+1)} \sin(\alpha t)}{2(l+1)\sqrt{N+l-2}} \right]^2.
\end{eqnarray*}
To find the maximum success probability and the time at which it occurs, we take the first derivative of this, yielding
\[ \frac{dp}{dt} = \frac{\alpha \sin(\alpha t)}{2(l+1)^2(N+l-2)} \left[ 4l(N-1) + (2N-l-3)(1-l) \cos(\alpha t) \right]. \]
Setting this equal to zero, we get two solutions for the runtime $t_*$:
\[ t_* = \frac{\pi}{\alpha}, \quad t_* = \frac{1}{\alpha} \cos^{-1} \left( \frac{4l(N-1)}{(2N-l-3)(l-1)} \right). \]
The first solution $t_* = \pi/\alpha$ was explored in \cite{Wong10}. For the second solution, the argument of the inverse cosine is $2l/(l-1)$ for large $N$. This argument has magnitude less than $1$ when $l < 1/3$ and greater than $1$ when $l > 1/3$, resulting in real and complex runtimes, respectively. Thus we use this solution when $l < 1/3$ and the first solution $\pi/\alpha$ when $l \ge 1/3$. Approximating $\alpha \approx \sqrt{2(l+1)/N}$ for large $N$ and $l = o(N)$, we get a runtime of
\[ t_* \approx \frac{\cos^{-1} \left( \frac{2l}{l-1} \right)}{\sqrt{2(l+1)}} \sqrt{N} \]
when $l < 1/3$. Plugging this into $p(t)$ and keeping the dominant term for large $N$, the corresponding success probability is
\[ p_* \approx \frac{1}{2(1-l)}. \]
Putting these $l < 1/3$ results together with the $l \ge 1/3$ results from \cite{Wong10}, we get
\[ t_* \approx \left\{ \begin{array}{ll} {\cos^{-1} \left( \frac{2l}{l-1} \right) \over \sqrt{2(l+1)}} \sqrt{N} & l < 1/3 \\ {\pi \over \sqrt{2(l+1)}} \sqrt{N} & l \ge 1/3,\ l = o(N) \\ \pi / \sin^{-1} \left( {\sqrt{c(c+2)} \over {c+1}} \right) & l = cN \\ 2 & l = \omega(N) \\ \end{array} \right. \]
and
\[ p_* \approx \left\{ \begin{array}{ll} {1 \over 2(1-l)} & l < 1/3 \\ {4l \over (l+1)^2} & l \ge 1/3,\ l = o(N) \\ {16+9c \over 4c(c+1)} {1 \over N} & l = cN \\ {9 \over 4l} & l = \omega(N) \\ \end{array} \right. . \]
Thus we have completely characterized the runtime and success probability of the algorithm for real $l \ge 0$. Finally, note that for $l \ge 1/3$, the success probability is greater than the loopless value of $1/2$ when
\[ \frac{4l}{(l+1)^2} > \frac{1}{2} \quad \Rightarrow \quad l < 3 + 2\sqrt{2} \approx 5.828, \]
while for $l < 1/3$, the success probability $p_* = 1/[2(1-l)]$ exceeds $1/2$ for any $l > 0$. Recall that the standard lackadaisical quantum walk with integer $l$ self-loops per vertex had $l \le 5$, so the generalized walk has a larger range of values of $l$ that boost the success probability of a single iteration of the search algorithm.
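These asymptotic expressions can be checked against the exact $4 \times 4$ evolution. The sketch below (assuming NumPy; $N$ and $l$ are illustrative) iterates $U'$ and compares the first peak of the success probability with the $l < 1/3$ formulas for $t_*$ and $p_*$:
\begin{verbatim}
# Check t_* and p_* for l < 1/3 against the exact 4x4 U' (assumes NumPy).
import numpy as np

N, l = 1024, 0.2                     # l < 1/3 regime; values illustrative
ct = (N - l - 1) / (N + l - 1); st = 2*np.sqrt(l*(N - 1)) / (N + l - 1)
cp = (N + l - 3) / (N + l - 1); sp = 2*np.sqrt(N + l - 2) / (N + l - 1)
U = np.array([[ ct, -st,  0.,  0.],
              [ 0.,  0., -cp,  sp],
              [-st, -ct,  0.,  0.],
              [ 0.,  0.,  sp,  cp]])
psi = np.array([np.sqrt(l), np.sqrt(N - 1), np.sqrt(N - 1),
                np.sqrt((N - 1)*(N + l - 2))]) / np.sqrt(N*(N + l - 1))

probs = []
for _ in range(60):                  # enough steps to cover the first hump
    psi = U @ psi
    probs.append(psi[0]**2 + psi[1]**2)   # |<aa|.>|^2 + |<ab|.>|^2

t_num = int(np.argmax(probs)) + 1
t_th = np.arccos(2*l/(l - 1)) / np.sqrt(2*(l + 1)) * np.sqrt(N)
print(t_num, t_th)                   # ~43 in both cases
print(max(probs), 1/(2*(1 - l)))     # ~0.625 in both cases
\end{verbatim}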
\section{Conclusion}
Quantum walks are one of the primary means of developing quantum algorithms. The utility of walking on weighted graphs has been explored in several contexts, and we defined a discrete-time coined quantum walk on weighted graphs. The resulting coin operation is no longer an inversion about the average. With the flip-flop shift, two applications of this coined quantum walk are exactly equivalent to one application of Szegedy's quantum walk. With the moving shift, however, it is a new type of walk.

This allows lackadaisical quantum walks to be reduced and generalized by replacing, at each vertex, the $l$ identically-evolving self-loops with a single self-loop of weight $l$. When $l$ is an integer, the resulting walks are identical, but now $l$ can also take non-integer values. We explored this for a walk on the line, showing that {\v{S}}tefa{\v{n}}{\'a}k \textit{et al.}'s \cite{Stefanak2012} deformation of the three-state Grover walk is precisely the generalized lackadaisical quantum walk. This utilizes the moving shift, and an analytical treatment of the evolution with the flip-flop shift remains an open question. We also explored search on the complete graph, which is equivalent to Grover's unstructured search problem. The results for a regular lackadaisical quantum walk with $l$ self-loops per vertex carry over directly to the generalized case when $l \ge 1/3$. We provided new analysis, however, for when $l < 1/3$. This completely characterizes the generalized lackadaisical quantum walk for this search problem. Further research includes applications of the generalized lackadaisical quantum walk to other quantum walk-based algorithms or spatial search problems.

\ack
This work was supported by the U.S.~Department of Defense Vannevar Bush Faculty Fellowship of Scott Aaronson.

\section*{References}
\bibliographystyle{iop}
\section{Introduction}
Mathematical modelling of physical phenomena often requires representations for isotropic functions \cite{pipkin58,rivlin55}. In view of this, much has been published on this subject (see, for example, reference \cite{penninsi87} and references therein). However, the number of isotropic functions in an irreducible basis (see the definition of an irreducible basis in \cite{spencer71}) is still an open problem, as noted by Pennisi and Trovato \cite{penninsi87}, who state: \\
``{\it Among all irreducible complete representations previously published in the literature (2.1)-(2.4) is that with fewer elements; but it is still an open problem to find, among all {\bf possible} irreducible complete representations, that (if it exists) with fewer elements}''.
In this paper, we address this open problem and prove that only a few elements are required in irreducible bases. The proofs given here are simple (compared to the proofs given in the literature) and they are based on a spectral approach associated with the author's work \cite{shariff13,sharbusta15,shariff17,shariff21a}. This substantial reduction in the number of elements in irreducible bases could radically reduce modelling complexity.

\section{Preliminaries}
Let $V$ be a $3$-dimensional vector space. We define $Lin$ to be the space of all linear transformations (second-order tensors) on $V$ with the inner product $\tA:\tB=tr(\tA\tB^T)$, where $\tA, \tB \in Lin$ and $\tB^T$ is the transpose of $\tB$. We define
\be\label{pra} Sym=\{\tA\in Lin | \tA=\tA^T\} \, , \spcc Orth= \{ \tQ \in Lin | \tQ = \tQ^{-T} \} \, . \ee
The vectors considered here belong to the $3$-dimensional Euclidean space $\mathbb{E}^3$, i.e., the vector space $V$ furnished by the scalar product $\ta\cdot\tb$, where $\ta,\tb \in V$. The summation convention is not used here, and all subscripts $i$, $j$ and $k$ take the values $1,2,3$ unless stated otherwise.

\section{Symmetric Tensors and Vectors}
\subsection{Scalar}\label{sec-scalar-1}
The scalar function $W(\tA_r,\ta_s)$, $(r=1,2,\ldots , N ; s=1,2,\ldots , P)$, where $\tA_r \in Sym $ and $\ta_s \in \mathbb{E}^3$ are, respectively, symmetric tensors and vectors, is said to be a scalar-valued isotropic function if
\be W(\tA_r,\ta_s) = W(\tQ\tA_r\tQ^T,\tQ\ta_s) \, \ee
for all rotation tensors $\tQ \in Orth$. Boehler \cite{boehler77} has shown that every scalar-valued isotropic function can be written as a function of invariants given in the following list:
\[ \ta_\alpha\cdot\ta_\alpha \, , \spcc \ta_\alpha\cdot\ta_\beta \, , \]
\[ \tr \tA_i\, , \spcc \tr \tA_i^2 \, , \spcc \tr \tA_i^3 \, , \spcc \tr \tA_i\tA_j \, , \spcc \tr \tA_i^2\tA_j \, , \spcc \tr \tA_i\tA_j^2 \, , \spcc \tr \tA_i^2\tA_j^2 \, , \spcc \tr \tA_i\tA_j\tA_k \, , \]
\[ \ta_\alpha\cdot\tA_i\ta_\alpha \, , \spcc \ta_\alpha\cdot\tA_i^2\ta_\alpha \, , \spcc \ta_\alpha\cdot\tA_i\tA_j\ta_\alpha \, , \]
\be\label{sm-1}\ta_\alpha\cdot\tA_i\ta_\beta \, , \spcc \ta_\alpha\cdot\tA_i^2\ta_\beta \, , \spcc \ta_\alpha\cdot(\tA_i\tA_j - \tA_j\tA_i) \ta_\beta \, , \ee
$i,j,k=1,2,\ldots , N$ with $i<j<k$ and $\alpha,\beta= 1,2,\ldots , P$ with $\alpha < \beta$. However, Shariff \cite{shariff21a} has shown that, for unit vectors $\ta_\alpha$, only $2P+6N-3$ of the invariants in \rr{sm-1} are independent and that the number of invariants in the irreducible functional basis is at most $2P+6N-3$, far lower than the number of invariants given in \rr{sm-1}.
In the case when the $\ta_\alpha$ are not unit vectors, it can easily be shown that only $3P+6N-3$ of the invariants in \rr{sm-1} are independent. Below, for the sake of easy reading, we prove (similar to the work of Shariff \cite{shariff21a}) that every scalar-valued isotropic function can be written as a function of at most $3P+6N-3$ invariants. This significant reduction in the number of scalar invariants (when compared to the list in \rr{sm-1}) could greatly assist in reducing modelling complexity (see, for example, references \cite{shariff13a,shariff14,shariff16,shariff17b,shariff20a,shariff20b,shariff21b,shariff22a,shariff22b,shariff22c}).\\
{\it Proof}\\
For $N \ge 1$, let us express (say)
\be\label{scalar-1} \tA_1 = \sum_{i=1}^3 \ld_i \tv_i\ot\tv_i \, , \ee
where $\ld_i$ and $\tv_i$ are eigenvalues and (unit) eigenvectors of $\tA_1$, respectively, and $\ot$ denotes the dyadic product. Using $\{\tv_1,\tv_2,\tv_3 \}$ as a basis, we can express
\be \tA_r = \sum_{i,j=1}^3 \ru{A}{r}_{ij} \tv_i\ot\tv_j \, , \spcc \ta_s = \sum_{i=1}^3 \ru{a}{s}_i \tv_i \, , \spcc r=2,3, \ldots N \, , \spcc s=1,2,\ldots , P \, . \ee
It is clear that the components $\ru{A}{r}_{ij}$ and $\ru{a}{s}_i$ are invariants, since
\be \ru{A}{r}_{ij} = \tv_i\cdot\tA_r\tv_j = \tQ\tv_i\cdot\tQ\tA_r\tQ^T\tQ\tv_j \, , \spcc \ru{a}{r}_i = \ta_r\cdot\tv_i = \tQ\ta_r \cdot \tQ\tv_i \, . \ee
Since
\be\label{ire-1} \ld_i \, , \spcc \ru{A}{r}_{ij} \, , \spcc \ru{a}{s}_i \, , \spcc r \ge 2 \, , \spcc i,j=1,2,3 \, \ee
are ``component'' invariants, we can express
\be W(\tA_r,\ta_s) = W(\tQ\tA_r\tQ^T,\tQ\ta_s) = \hat{W} (\ld_i,\ru{A}{r}_{ij},\ru{a}{s}_i ) \, , \spcc r \ge 2 \, , \spcc i,j=1,2,3 \, . \ee
All invariant functions in \rr{sm-1} can be explicitly expressed in terms of the spectral invariants in \rr{ire-1}; for example, we can express the function
\be \ta_\alpha\cdot\tA_i^2\ta_\beta = \sum_{p,q,m=1}^3 \ru{a}{\alpha}_p\ru{A}{i}_{pq}\ru{A}{i}_{qm}\ru{a}{\beta}_m \, , \spcc i \ne 1 \, . \ee
Hence, the set of invariants in \rr{ire-1} is a complete representation for the scalar-valued isotropic function and, since the terms in \rr{ire-1} are independent (invariant) components, the set is irreducible, i.e., incapable of being reduced. Hence, every scalar-valued isotropic function can be written as a function of at most $3P+6N-3$ invariants, far fewer than the number of invariants given in \rr{sm-1}. The spectral invariants in \rr{ire-1} have been used in continuum modelling \cite{shariff13a,shariff14,shariff16,shariff17b,shariff20a,shariff20b,shariff21b,shariff22a,shariff22b,shariff22c}, and the spectral derivatives associated with these spectral invariants are given in \cite{shariff17a,shariff20}. Since all of Boehler's invariants \rr{sm-1} can be explicitly expressed in terms of the spectral invariants \rr{ire-1}, this further validates our claim that the irreducible basis contains only $3P+6N-3$ invariants.

{\bf Word of caution:} The function
\be \hat{W}(\ld_i,\ru{A}{r}_{ij},\ru{a}{s}_i ) \ee
must satisfy the $P$-property given in \cite{shariff16} and (for the benefit of the readers) in Appendix A. In this paper, we call a scalar-valued isotropic function that satisfies the $P$-property a $P$-scalar-valued isotropic function. In general, the invariants appearing (as they are) in \rr{ire-1} are not $P$-scalar-valued isotropic functions.
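As a small numerical check of the spectral expansion used in the proof above (a minimal sketch assuming NumPy; the random tensors and vectors are arbitrary), the Boehler invariant $\ta_\alpha\cdot\tA_i^2\ta_\beta$ is recovered from the spectral components computed in the eigenbasis of $\tA_1$:
\begin{verbatim}
# Recover a Boehler invariant from spectral components (assumes NumPy).
import numpy as np

rng = np.random.default_rng(0)
sym = lambda M: (M + M.T) / 2
A1, A2 = sym(rng.normal(size=(3, 3))), sym(rng.normal(size=(3, 3)))
a, b = rng.normal(size=3), rng.normal(size=3)

lam, V = np.linalg.eigh(A1)       # columns of V are the eigenvectors v_i

A2c = V.T @ A2 @ V                # spectral components A^(2)_ij = v_i.A2 v_j
ac, bc = V.T @ a, V.T @ b         # spectral components a_i = a.v_i, etc.

direct = a @ A2 @ A2 @ b          # the invariant a.(A2)^2 b, computed directly
spectral = sum(ac[p] * A2c[p, q] * A2c[q, m] * bc[m]
               for p in range(3) for q in range(3) for m in range(3))
print(np.isclose(direct, spectral))   # True
\end{verbatim}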
In the case when $N=0$, $W$ depends on the vectors $\ta_s$ only. In this case, we select the vector $\ta_1$ (say) and spectrally express
\be\label{vec-a1} \ta_1 \ot\ta_1 = \ld\tv_1\ot\tv_1 + 0 \tv_2\ot\tv_2 + 0 \tv_3\ot\tv_3 \, , \spcc \ld = \ta_1\cdot\ta_1 \, , \spcc \tv_1 = \frd{\ta_1}{\sqrt{\ld}} \, , \ee
where $\tv_2$ and $\tv_3$ are any two (non-unique) orthonormal vectors perpendicular to $\ta_1$. Hence, for $N=0$, we have $3P-2$ irreducible invariants, i.e.,
\be \ld \, , \spcc \ru{a}{s}_i \, , \spcc s=2,3,\ldots , P \, , \spcc i=1,2,3 \, . \ee
In the case where all of the vectors $\ta_s$ are unit vectors, we have only $2P-2$ irreducible spectral invariants.\\
{\bf Example 1:} Consider the strain energy function $W$ of a transversely isotropic elastic solid. We then have
\be\label{tr-1} W(\tU,\ta\ot\ta) = \tilde{W}(\tU,\ta) = \hat{W}(\ld_i, a_i) \, , \spcc a_i =\tv_i\cdot\ta \, , \ee
where $\ta_1=\ta$ is the preferred direction unit vector, $\tA_1=\tU$ is the right stretch tensor and
\be\label{tr-2} \sum_{i=1}^3 a_i^2 =1 \, . \ee
It is clear from \rr{tr-1} and \rr{tr-2} that, if we treat the positive and negative values of $a_i$ as distinct single-valued functions, the number of invariants in the irreducible functional basis is $5$.\\
{\bf Example 2:} If we consider in Example 1, $\tA_1=\ta\ot\ta$ and $\tA_2=\tU$, we have
\be \ld_1 = 1 \, , \spcc \ld_2=\ld_3=0 \, , \spcc \tv_1=\ta \, , \ee
where $\tv_2$ and $\tv_3$ are any two (non-unique) orthonormal vectors perpendicular to $\ta$, and we then have
\be\label{tr-3} W(\ta\ot\ta,\tU) = \hat{W}(U_{ij}) \, , \spcc U_{ij} = \tv_i\cdot\tU \tv_j \, . \ee
We note that there are $6$ (instead of $5$) spectral invariants in \rr{tr-3}. However, since $\hat{W}$ must satisfy the $P$-property, we can express $\hat{W}$ in terms of $5$ independent invariants that satisfy the $P$-property. For example, we can express $\hat{W}$ in terms of the 5 independent invariants
\be I_1 = \sum_{i=1}^3 U_{ii} \, , \spcc I_2 = \sum_{i,j=1}^3 U_{ij}U_{ji} \, , \spcc I_3 = \sum_{i,j,k=1}^3 U_{ij}U_{jk}U_{ki} \, , \spcc I_4= U_{11} \, , \spcc I_5 = \sum_{i=1}^3 U_{1i}U_{i1} \, . \ee
\subsection{Vector}\label{sec-vector-1}
The vector function $\tg(\tA_r,\ta_s)$ is said to be a vector-valued isotropic function if
\be\label{vect-1} \tQ\tg(\tA_r,\ta_s) = \tg(\tQ\tA_r\tQ^T,\tQ\ta_s) \, \ee
for all rotation tensors $\tQ$. Smith \cite{smith71} has shown that every vector-valued isotropic function can be written as a linear combination of the following vectors
\be\label{sm-2} \ta_m \, , \spcc \tA_i\ta_m \, , \spcc \tA_i^2\ta_m \, , \spcc (\tA_i\tA_j - \tA_j\tA_i)\ta_m \, , \spcc i,j=1,2,\ldots , N \, : i<j \, , \spcc m=1,2, \ldots , P \, . \ee
It is understood that the coefficients in these linear combinations are $P$-scalar-valued isotropic functions. Smith \cite{smith71} and Pennisi and Trovato \cite{penninsi87} claimed that the set of vectors in \rr{sm-2} is irreducible; we claim that the irreducible set contains only three linearly independent vectors. Below, we show via a theorem that every vector-valued isotropic function can be written as a linear combination of at most three linearly independent spectral vectors. \\
\begin{thrmc} $\tg$ is an isotropic vector-valued function if and only if it has the representation
\be\label{apc-2} \tg(\tA_r,\ta_s) = \sum_{i=1}^3 g_i \tv_i \, , \ee
where $\tv_i$ is an eigenvector of $\tA_1$ and $g_i$ are isotropic invariants of the set
\be\label{apc-2a} S = \{\tA_1,\tA_2, \ldots , \tA_N, \ta_1,\ta_2 , \ldots, \ta_P \} \, .
\ee
\end{thrmc}
{\it Proof:} \\
(a) If \rr{apc-2} holds, $\tg$ is clearly a vector-valued isotropic function, since the coefficients $g_i$ are isotropic invariants of the set $S$ \rr{apc-2a}. \\
(b) For $N \ge 1$ and $P\ge 0$, let $\tv_i$ be the unit eigenvectors of the symmetric tensor $\tA_1$ (see \rr{scalar-1}). Hence we can write
\be\label{vect-2} \tg(\tA_r,\ta_s) = \sum_{i=1}^3 [\tg(\tA_r,\ta_s) \cdot\tv_i]\tv_i \, , \spcc r=1,2,\ldots, N, \spcc s=1,2,\ldots , P \ee
and
\be\label{vect-3} \tg(\tQ\tA_r\tQ^T,\tQ\ta_s) = \sum_{i=1}^3 [\tg(\tQ\tA_r\tQ^T,\tQ\ta_s)\cdot\tQ\tv_i] \tQ\tv_i \, . \ee
Define the scalar functions
\be g_i(\tA_r,\ta_s) = \tg(\tA_r,\ta_s) \cdot\tv_i \, . \ee
We then have
\be g_i(\tQ\tA_r\tQ^T,\tQ\ta_s) = \tg(\tQ\tA_r\tQ^T,\tQ\ta_s)\cdot\tQ\tv_i \, . \ee
In view of \rr{vect-1}, \rr{vect-2} and \rr{vect-3}, and since $\tQ$ is arbitrary, we must have
\be g_i(\tA_r,\ta_s) = g_i(\tQ\tA_r\tQ^T,\tQ\ta_s) \, , \ee
which implies that the $g_i$ are functions of isotropic invariants of the vector and tensor set $S$ given in \rr{apc-2a}. Note that, in view of the $P$-property, the functions $g_i$ must also be $P$-scalar-valued isotropic functions. In the case when $N=0$, we consider the vectors $\tv_i$ obtained as in \rr{vec-a1} and express
\be \tg(\ta_r) = \sum_{i=1}^3 g_i \tv_i \, , \spcc g_i = \tg\cdot\tv_i \, . \ee
All of Smith's vectors given in \rr{sm-2} can be expressed in terms of the unit vectors $\tv_1,\tv_2$ and $\tv_3$. For example, the vector
\be \tA_i\ta_m = \sum_{r=1}^3 (\sum_{s=1}^3\ru{A}{i}_{rs}\ru{a}{m}_s)\tv_r \, . \ee
Hence, when a vector-valued function is expressed as a linear combination of Smith's functions given in \rr{sm-2}, it can then be expressed as a linear combination of the spectral vectors $\tv_1,\tv_2$ and $\tv_3$; this further validates our claim that the irreducible basis contains only three vectors.
\subsection{Symmetric Tensor}\label{sec-tensor-1}
The symmetric tensor function $\tG(\tA_r,\ta_s)$ is said to be a tensor-valued isotropic function if
\be\label{tensor-1} \tQ\tG(\tA_r,\ta_s)\tQ^T = \tG(\tQ\tA_r\tQ^T,\tQ\ta_s) \, \ee
for all rotation tensors $\tQ$. Smith \cite{smith71} has shown that every symmetric tensor-valued isotropic function can be written as a linear combination of the following symmetric tensors
\[ \tI \, , \spcc \tA_i \, , \spcc \tA^2_i \, , \spcc \tA_i\tA_j+ \tA_j\tA_i \, , \spcc \tA_i^2\tA_j+ \tA_j\tA_i^2 \, , \spcc \tA_i\tA_j^2+ \tA_j^2\tA_i \]
\[ \ta_m\ot\ta_m \, , \spcc \ta_m\ot\ta_n +\ta_n\ot\ta_m \, , \spcc \ta_m\ot\tA_i\ta_m + \tA_i\ta_m\ot\ta_m \, , \spcc \ta_m\ot\tA_i^2\ta_m + \tA_i^2\ta_m\ot\ta_m \, , \]
\be\label{sm-3} \tA_i(\ta_m\ot\ta_n - \ta_n\ot\ta_m) - (\ta_m\ot\ta_n - \ta_n\ot\ta_m)\tA_i \, , \ee
where $(i,j=1,2, \ldots , N; i<j)$, $(m,n =1,2, \ldots , P; m< n)$ and $\tI$ is the identity tensor. Smith \cite{smith71} and Pennisi and Trovato \cite{penninsi87} claimed that the set of symmetric tensors in \rr{sm-3} is irreducible; we, however, claim via Theorem \ref{thrm-2} below that the irreducible set contains only six linearly independent symmetric tensors.
\begin{thrmc}\label{thrm-2} $\tG$ is an isotropic tensor function if and only if it has the representation
\be\label{apc-2ten} \tG(\tA_r,\ta_s) = \sum_{i,j=1}^3 t_{ij} \tv_i\ot \tv_j \, , \ee
where $\tv_i$ is an eigenvector of $\tA_1$ and $t_{ij}$ are functions of $P$-scalar-valued isotropic functions of the vector and tensor set given in \rr{apc-2a}.
\end{thrmc}
{\it Proof} \\
(a) If \rr{apc-2ten} holds then, since $t_{ij}$ are scalar invariants of the set $S$, $\tG$ is clearly an isotropic tensor function. \\
(b) Using the basis $\{ \tv_1,\tv_2,\tv_3 \}$ obtained from \rr{scalar-1}, we can express
\be \tG(\tA_r,\ta_s) = \sum_{i,j=1}^3 t_{ij} \tv_i\ot\tv_j \, , \ee
where
\be t_{ij} = g_{ij}(\tA_r,\ta_s) = \tv_i\cdot\tG(\tA_r,\ta_s)\tv_j \, . \ee
Similarly, we can express
\be \tG(\tQ\tA_r\tQ^T, \tQ\ta_s) =\sum_{i,j=1}^3 \bar{t}_{ij} \tQ\tv_i\ot\tQ\tv_j \, , \ee
where
\be &&\bar{t}_{ij} = \tQ\tv_i\cdot\tG(\tQ\tA_r\tQ^T, \tQ\ta_s)\tQ\tv_j \nn \\ && =g_{ij}(\tQ\tA_r\tQ^T,\tQ\ta_s) \, . \ee
If \rr{tensor-1} holds then
\be \sum_{i,j=1}^3 \bar{t}_{ij} \tQ\tv_i\ot\tQ\tv_j = \sum_{i,j=1}^3 t_{ij} \tQ\tv_i\ot\tQ\tv_j \, . \ee
Since $\tQ$ is arbitrary, we have
\be g_{ij}(\tA_r,\ta_s) = g_{ij}(\tQ\tA_r\tQ^T, \tQ\ta_s) \, \ee
which implies that the functions $t_{ij}=g_{ij}$ must depend on $P$-scalar-valued isotropic functions of $S$. Since $\tG$ is symmetric, $g_{ij}=g_{ji}$, and hence all symmetric tensor-valued isotropic functions can be written as a linear combination of only six symmetric tensors
\be\label{gv-sm3} \tv_i\ot\tv_i \, \spcc (i=1,2,3) \, , \spcc \tv_i\ot\tv_j+\tv_j\ot\tv_i \, \spcc( i=1,2; j=2,3, i<j) \, . \ee
Hence, we can express
\be\label{sym-ga} \tG(\tA_r, \ta_s) =\sum_{i=1}^3 g_{ii} \tv_i\ot\tv_i + \sum_{i<j} g_{ij}(\tv_i\ot\tv_j+\tv_j\ot\tv_i) \, \spcc (i=1,2;j=2,3) \, . \ee
All symmetric tensors in \rr{sm-3} generated by Smith \cite{smith71} can be expressed in terms of the six symmetric tensors given in \rr{gv-sm3}; for example, the symmetric tensor
\be \tA_i\tA_j+ \tA_j\tA_i = \sum_{p=1}^3 g_{pp} \tv_p\ot\tv_p + \sum_{p<q} g_{pq}(\tv_p\ot\tv_q+\tv_q\ot\tv_p) \, \spcc (p=1,2;q=2,3) \, , \ee
where
\be g_{pp} = 2\sum_{m=1}^3 \ru{A}{i}_{pm}\ru{A}{j}_{mp} \, , \spcc g_{pq} = \sum_{m=1}^3 (\ru{A}{i}_{pm}\ru{A}{j}_{mq}+ \ru{A}{i}_{qm}\ru{A}{j}_{mp} ) \, . \ee
Hence, when a tensor-valued function is expressed as a linear combination of Smith's functions given in \rr{sm-3}, it can then be expressed as a linear combination of the six symmetric spectral tensors given in \rr{gv-sm3}; this further validates our claim that the irreducible basis contains only six symmetric tensors.

The above theorem proves that the irreducible set contains only six linearly independent symmetric tensors. This drastically reduces the complexity of physical modelling. For example, Merodio and Rajagopal \cite{mer07} modelled viscoelastic solids, where the Cauchy stress $\tT$ depends on $\tA_1=\tB$ (the left Cauchy-Green tensor), $\tA_2=\tD$ (the symmetric part of the velocity gradient), $\ta_1=\tm$ and $\ta_2=\tn$ (preferred directions). Using the Smith tensors \rr{sm-3}, the Cauchy stress $\tT$ is described using $36$ tensors obtained from \rr{sm-3} and, due to this large number of $36$ tensors and $37$ scalar invariants, the model is complicated; there is a dire need to simplify it. Sometimes this is done by omission of invariants and tensors. However, the discrimination in the selection of invariants and tensors is often debated, and neglecting the influence of some invariants and tensors may result in an incomplete representation of the full range of mechanical responses exhibited by a continuum. Using the results obtained here, however, modelling viscoelastic solids is greatly simplified: we require only $15$ scalar invariants and $6$ symmetric tensors to fully describe the Cauchy stress $\tT$.
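The expansion of Smith's tensors into the six spectral tensors is straightforward to verify numerically. The sketch below (assuming NumPy; the random tensors are arbitrary) rebuilds $\tA_1\tA_2 + \tA_2\tA_1$ from the coefficients $g_{pp}$ and $g_{pq}$ given above:
\begin{verbatim}
# Rebuild A1 A2 + A2 A1 from the six spectral tensors (assumes NumPy).
import numpy as np

rng = np.random.default_rng(2)
sym = lambda M: (M + M.T) / 2
A1, A2 = sym(rng.normal(size=(3, 3))), sym(rng.normal(size=(3, 3)))

lam, V = np.linalg.eigh(A1)                  # eigenbasis of A_1
A1c, A2c = V.T @ A1 @ V, V.T @ A2 @ V        # spectral components

G = np.zeros((3, 3))
for p in range(3):                           # diagonal basis tensors
    gpp = 2 * sum(A1c[p, m] * A2c[m, p] for m in range(3))
    G += gpp * np.outer(V[:, p], V[:, p])
for p in range(3):                           # off-diagonal basis tensors
    for q in range(p + 1, 3):
        gpq = sum(A1c[p, m] * A2c[m, q] + A1c[q, m] * A2c[m, p]
                  for m in range(3))
        G += gpq * (np.outer(V[:, p], V[:, q]) + np.outer(V[:, q], V[:, p]))

print(np.allclose(G, A1 @ A2 + A2 @ A1))     # True
\end{verbatim}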
{\bf Remark:}\\
{\it Since both the scalars $g_i$ and $g_{ij}$ are, respectively, vector and tensor components, the vector $\tg$ and tensor $\tG$ are uniquely expressed in terms of the basis $\{\tv_1,\tv_2,\tv_3 \}$ even though two or three of the vectors $\tv_1$, $\tv_2$ and $\tv_3$ may not be unique due to coalescence of eigenvalues.}

The theorem below has been proven in the literature (see, for example, Itskov \cite{itskov2013} and Ogden \cite{ogden84}); however, for the benefit of the reader, we prove it again here.
\begin{thrmc} If $\tG(\tV)$ is an isotropic tensor function then $\tG(\tV)$ is coaxial with $\tV$ and hence
\be \tV\tG = \tG\tV \, . \ee
\end{thrmc}
{\it Proof} \\
Let $\tv_1$ be an eigenvector of $\tV$ and choose
\be \tQ = 2\tv_1\ot\tv_1 - \tI = \tQ^T \, . \ee
In view of $\tV\tv_1 = \ld_1 \tv_1$, we have
\be \tQ\tV = \tV\tQ \rightarrow \tQ\tV\tQ^T = \tV \, . \ee
From \rr{tensor-1} we get
\be \tQ\tG(\tV)\tQ^T = \tG(\tV) \, . \ee
Hence
\be \tG(\tV)\tv_1 = \left( \sum_{i,j=1}^3 t_{ij} \tQ\tv_i\ot\tQ\tv_j \right) \tv_1 \, . \ee
Note that $\tQ^T\tv_1 = \tv_1$ and hence we have
\be \tG(\tV)\tv_1 = \sum_{i=1}^3 t_{i1} \tQ\tv_i = \sum_{i=1}^3 t_{i1} (2\tv_1\ot\tv_1 - \tI)\tv_i = 2t_{11}\tv_1 - \tG(\tV)\tv_1 \, . \ee
Hence
\be \tG(\tV)\tv_1 = t_{11}\tv_1 \, , \ee
which implies that $\tv_1$ is an eigenvector of $\tG(\tV)$ and $t_{11}$ is an eigenvalue of $\tG$. In a similar fashion, choosing $\tQ = 2\tv_r\ot\tv_r - \tI = \tQ^T$, $r=2,3$, we can easily derive that
\be \tG(\tV) = \sum_{i=1}^3 t_{ii} \tv_i\ot\tv_i \, ,\ee
and the theorem is proved.

Below is a theorem, which we believe is not found in the literature.
\begin{thrmc} Let $\ld_i$ be the eigenvalues of $\tV$ and let
\be \tG(\tV) = \sum_{i=1}^3 t_i (\ld_1,\ld_2,\ld_3) \tv_i\ot\tv_i \, , \ee
be a symmetric isotropic tensor function, where $\tv_i$ is an eigenvector of $\tV$. \\
(a) If $\ld_i=\ld_j \ne \ld_k$, $(i \ne j \ne k \ne i)$, then
\be t_i=t_j \, \ee
and we can uniquely express
\be \tG(\tV) = t_i\tI + (t_k-t_i)\tv_k\ot\tv_k \, . \ee
(b) If $\ld_1=\ld_2 =\ld_3$ then
\be t_1=t_2=t_3 \, \ee
and we can uniquely express
\be \tG(\tV) = t_1\tI \, . \ee
\end{thrmc}
{\it Proof} \\
Consider the case $\ld_1=\ld_2 = \ld \ne \ld_3$. In view of this, $\tv_1$ and $\tv_2$ are not unique and have infinitely many values. In view of the relation
\be \tv_1\ot\tv_1 + \tv_2\ot\tv_2+\tv_3\ot\tv_3 = \tI \, , \ee
we can write
\be\label{GV-1} \tG(\tV) = t_1\tI + (t_2-t_1)\tv_2\ot\tv_2 + (t_3-t_1)\tv_3\ot\tv_3 \, . \ee
Since $\tv_2$ is not unique, we must have $t_1=t_2$ to give $\tG(\tV)$ a unique value. In a similar fashion, we can treat the cases $\ld_1=\ld_3$ and $\ld_2=\ld_3$. Hence, part (a) is proved. In the case when $\ld_1=\ld_2 =\ld_3$, $\tv_3$ is also arbitrary, hence from \rr{GV-1} we must have $t_1=t_2=t_3$, and part (b) is proved.

We can see that, in the case when the classical invariants $I_1=\tr \tV$, $I_2=\tr \tV^2 $ and $I_3 = \tr \tV^3$ are used, we have \cite{ogden84}
\be\label{GV-2} \tG(\tV) = \phi_0 \tI + \phi_1 \tV + \phi_2\tV^2 = \sum_{i=1}^3 (\phi_0 + \phi_1\ld_i + \phi_2 \ld_i^2) \tv_i\ot\tv_i = \sum_{i=1}^3 t_i \tv_i\ot\tv_i \, , \ee
\be\label{GV-3} t_i= \phi_0 + \phi_1\ld_i + \phi_2 \ld_i^2 \, , \ee
where $\phi_0$, $\phi_1$ and $\phi_2$ depend on the $P$-scalar-valued isotropic functions $I_1$, $I_2$ and $I_3$. It is clear from \rr{GV-3} that $t_i=t_j$ when $\ld_i=\ld_j$.
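The coaxiality theorem is also easy to illustrate numerically. In the sketch below (assuming NumPy; the choice of $t_i$ as a symmetric function of the eigenvalues is arbitrary), a tensor of the spectral form above commutes with $\tV$:
\begin{verbatim}
# Coaxiality check: G(V) = sum_i t_i v_i x v_i commutes with V (NumPy).
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))
V = (M + M.T) / 2                         # a random symmetric tensor V
lam, E = np.linalg.eigh(V)                # columns of E are the v_i

t = lam**2 + lam.sum()                    # illustrative t_i(lam_1,lam_2,lam_3)
G = sum(t[i] * np.outer(E[:, i], E[:, i]) for i in range(3))

print(np.allclose(G @ V, V @ G))          # True: G(V) is coaxial with V
\end{verbatim}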
\section{Isotropic Functions of Non-symmetric Tensors}
\subsection{Scalar}
The scalar function $W(\tH_t, \tA_r,\ta_s)$, $(r=1,2,\ldots , N ; t=1,2,\ldots M ; s=1,2,\ldots , P )$ is said to be a scalar-valued isotropic function if
\be W(\tH_t, \tA_r,\ta_s) = W(\tQ\tH_t\tQ^T, \tQ\tA_r\tQ^T,\tQ\ta_s) \, \ee
for all rotation tensors $\tQ \in Orth$, where $\tH_t \in Lin$ $(t=1,2,\ldots M)$ are non-symmetric second-order tensors. In the case when $M, N \ge 1$, we can easily prove, based on Section \ref{sec-scalar-1}, that
\be W(\tH_t, \tA_r,\ta_s) = W(\tQ\tH_t\tQ^T, \tQ\tA_r\tQ^T,\tQ\ta_s) = \hat{W}(\ld_i, \ru{H}{t}_{ij},\ru{A}{r}_{ij}, \ru{a}{s}_i) \, , \spcc r=2,3,\ldots N \, ,\ee
where the invariants
\be\label{nsym-1a} \ld_i,\ru{A}{r}_{ij}, \ru{a}{s}_i \ee
are given in \rr{ire-1} and the invariants
\be\label{nsym-1b} \ru{H}{t}_{ij} = \tv_i \cdot \tH_t \tv_j =\tQ\tv_i \cdot\tQ\tH_t\tQ^T \tQ\tv_j\, , \spcc i,j=1,2,3 \, . \ee
Since the above invariants are independent components, the irreducible basis consists of at most $3P+9M+6N-3$ invariants. Note that Boehler \cite{boehler77} considers the isotropic function
\be\label{nsym-2} W(\tW_t, \tA_r,\ta_s)\, , \ee
where $\tW_t$ is a skew-symmetric tensor. He claimed that the irreducible set contains the ``complicated'' set of invariants
\[ \ta_\alpha\cdot\ta_\alpha \, , \spcc \ta_\alpha\cdot\ta_\beta \, , \spcc \tr \tA_i\, , \spcc \tr \tA_i^2 \, , \spcc \tr \tA_i^3 \, , \spcc \tr \tA_i\tA_j \, , \spcc \tr \tA_i^2\tA_j \, , \spcc \tr \tA_i\tA_j^2 \, , \spcc \tr \tA_i^2\tA_j^2 \, ,\]
\[ \tr \tA_i\tA_j\tA_k \, , \spcc \tr \tW_p^2 \, , \spcc \tr \tW_p\tW_q \, , \spcc \tr \tW_p\tW_q\tW_r \, , \spcc \ta_\alpha\cdot\tA_i \ta_\alpha \, , \spcc \ta_\alpha\cdot\tA_i^2 \ta_\alpha \, , \]
\[ \ta_\alpha\cdot\tA_i\tA_j \ta_\alpha \, , \spcc \ta_\alpha\cdot\tA_i \ta_\beta \, , \spcc \ta_\alpha\cdot\tA_i^2 \ta_\beta \, , \]
\[ \ta_\alpha\cdot (\tA_i\tA_j-\tA_j\tA_i)\ta_\beta \, , \spcc \ta_\alpha\cdot\tW_p^2\ta_\alpha \, , \spcc \ta_\alpha\cdot\tW_p\tW_q\ta_\alpha \, , \spcc \ta_\alpha\cdot\tW_p^2\tW_q\ta_\alpha \, ,\]
\[ \ta_\alpha\cdot\tW_p\tW_q^2\ta_\alpha \, , \spcc \ta_\alpha\cdot\tW_p\ta_\beta \, , \spcc \ta_\alpha\cdot\tW_p^2 \ta_\beta \, , \]
\[ \ta_\alpha\cdot (\tW_p\tW_q-\tW_q\tW_p)\ta_\beta \, , \spcc \tr \tA_i\tW_p^2 \, , \spcc \tr \tA_i^2\tW_p^2 \, , \spcc \tr \tA_i^2\tW_p^2\tA_i\tW_p \, , \spcc \tr \tA_i\tW_p\tW_q \, , \spcc \tr \tA_i\tW_p\tW_q^2 \, , \]
\[ \tr \tA_i\tW_p^2\tW_q \, , \spcc \tr \tA_i\tA_j\tW_p \, , \spcc \tr \tA_i\tW_p^2\tA_j\tW_p \, , \spcc \tr \tA_i\tA_j^2\tW_p \, , \spcc\tr \tA_i^2\tA_j\tW_p \, ,\]
\be\label{nysm-1} \ta_\alpha\cdot\tA_i\tW_p \ta_\alpha \, , \spcc \ta_\alpha\cdot\tW_p\tA_i\tW_p^2 \ta_\alpha \, , \spcc \ta_\alpha\cdot\tA_i^2\tW_p \ta_\alpha \, , \spcc \ta_\alpha\cdot (\tA_i\tW_p-\tW_p\tA_i)\ta_\beta \, , \ee
where $i,j,k=1,2, \ldots , N$ with $i<j<k; p,q,r =1,2,\ldots , M$ with $p <q < r$ and $\alpha,\beta =1,2,\ldots , P$ with $\alpha < \beta$. However, we prove that the irreducible set contains only $3P+3M+6N-3$ invariants, namely
\be\label{nsym-3} \ld_i\, , \spcc \ru{A}{r}_{ij}\, , \spcc \ru{a}{s}_i \, , \spcc \ru{W}{t}_{kl} = \tv_k\cdot\tW_t\tv_l \, , \spcc i,j,k,l=1,2,3 \, , \spcc k<l \, , \spcc r \ge 2 \, . \ee
The invariants in \rr{nsym-3} are obtained from \rr{nsym-1a} and \rr{nsym-1b}, by replacing $\tH_t$ with $\tW_t$ and taking note that
\be \tv_i\cdot\tW_t\tv_i = 0 \, , \spcc \tv_i\cdot\tW_t\tv_j = -\tv_j\cdot\tW_t\tv_i \, , \spcc i\ne j \, , \spcc i,j=1,2,3 \,
\ee
In the case when $N=0$, we have
\be W(\tH_t,\ta_s) \, . \ee
In this case, we take the orthonormal vectors $\tv_i$ to be the eigenvectors of the symmetric tensor $\tH_1\tH_1^T$ (or alternatively $\tH_1^T\tH_1$), i.e.,
\be \tH_1\tH_1^T = \sum_{i=1}^3 \ld_i \tv_i\ot\tv_i \, , \spcc \ld_i \ge 0 \, . \ee
The irreducible set contains at most $9M+3P$ invariants
\be\label{scalarb-1} \ru{H}{t}_{ij} \, , \spcc \ru{a}{s}_i \, , \spcc i,j=1,2,3 \, . \ee
\subsection{Vector}
For a vector-valued isotropic function, it can easily be proved, following Section \ref{sec-vector-1}, that
\be \tg(\tH_t, \tA_r,\ta_s) = \sum_{i=1}^3 g_i \tv_i \, , \ee
where $g_i$ are functions of the invariants in \rr{nsym-1a} and \rr{nsym-1b} or \rr{scalarb-1}, as appropriate. Hence, the irreducible basis for $\tg$ contains only the three vectors $\tv_i$. Note that for $\tH_t=\tW_t$, Smith \cite{smith71} claimed that the irreducible basis for $\tg$ contains the vectors
\[ \ta_m \, , \spcc \tA_i\ta_m \, , \spcc \tA_i^2\ta_m \, , \spcc (\tA_i\tA_j - \tA_j\tA_i)\ta_m \, , \spcc \tW_p\ta_m \, , \spcc \tW_p^2\ta_m \, , \]
\be\label{smvec-2} (\tW_p\tW_q -\tW_q\tW_p)\ta_m \, , \spcc (\tA_i\tW_p - \tW_p\tA_i)\ta_m \, , \ee
where $i,j=1,2,\ldots , N \, ; i<j \, , \spcc p,q=1,2,\ldots , M; p<q \, , \spcc m=1,2, \ldots , P$. This claim is incorrect, since all the vectors in \rr{smvec-2} can be written in terms of the vectors $\tv_1,\tv_2$ and $\tv_3$.
\subsection{Tensor}
Following the method in Section \ref{sec-tensor-1}, we can easily prove that, for any tensor in $Lin$ with $M,N\ge 1$,
\be \tH(\tH_t, \tA_r,\ta_s) = \sum_{i,j=1}^3 h_{ij} \tv_i\ot\tv_j \, , \ee
where $\tv_i$ is an eigenvector of $\tA_1$ and, in general, $h_{ij}=\tv_i\cdot\tH\tv_j \ne h_{ji}$ are functions of the invariants in \rr{nsym-1a} and \rr{nsym-1b}. Hence, the irreducible basis for $\tH$ contains, at most, $9$ tensors, $\tv_i\ot\tv_j$. In the case when $\tH$ is symmetric, the irreducible basis contains at most the $6$ symmetric tensors given in \rr{gv-sm3}. In the case when $\tH$ is a skew-symmetric tensor, the irreducible basis contains at most $3$ skew-symmetric tensors, i.e.,
\be \tv_i\ot\tv_j - \tv_j\ot\tv_i \, \spcc( i=1,2; j=2,3, i<j) \, . \ee
Alternatively, for $M\ge 1$ and $N \ge 0$, using the singular value decomposition
\be \tH_1 = \sum_{i=1}^3 \ld_i \tv_i\ot\tu_i \, , \ee
we can easily prove that
\be \tH(\tH_t, \tA_r,\ta_s) = \sum_{i,j=1}^3 \hat{h}_{ij} \tv_i\ot\tu_j \, , \ee
where $\tu_j$ are the unit eigenvectors of $\tH_1^T\tH_1$, $\tv_i$ are the unit eigenvectors of $\tH_1\tH_1^T$ and the invariants
\be \hat{h}_{ij} = \tv_i \cdot \tH \tu_j \ne \hat{h}_{ji} \, \ee
are functions of the $9M+6N+3P-3$ invariants
\be \ld_i \, , \spcc \tu_i\cdot\tv_i \, , \spcc \tv_i\cdot\tH_t\tu_j \, \spcc (t\ge 2) \, , \spcc \tv_i\cdot\tA_r\tu_j \, , \spcc \ta_s \cdot\tv_i \,
\ee
Smith \cite{smith71} claimed that, for a symmetric tensor $\tH$ and skew-symmetric tensors $\tH_t=\tW_t$, the irreducible basis for symmetric $\tH$ contains the set of symmetric tensors
\[ \tI \, , \spcc \tA_i \, , \spcc \tA^2_i \, , \spcc \tA_i\tA_j+ \tA_j\tA_i \, , \spcc \tA_i^2\tA_j+ \tA_j\tA_i^2 \, , \spcc \tA_i\tA_j^2+ \tA_j^2\tA_i \]
\[ \ta_m\ot\ta_m \, , \spcc \ta_m\ot\ta_n +\ta_n\ot\ta_m \, , \spcc \ta_m\ot\tA_i\ta_m + \tA_i\ta_m\ot\ta_m \, , \spcc \ta_m\ot\tA_i^2\ta_m + \tA_i^2\ta_m\ot\ta_m \, , \]
\[ \tA_i(\ta_m\ot\ta_n - \ta_n\ot\ta_m) - (\ta_m\ot\ta_n - \ta_n\ot\ta_m)\tA_i \, , \]
\[ \tW_p^2 \, , \spcc \tW_p\tW_q+\tW_q\tW_p \, , \spcc \tW_p\tW_q^2-\tW_q^2\tW_p \, , \spcc \tW_p^2\tW_q-\tW_q\tW_p^2 \, , \]
\[ \tA_i\tW_p - \tW_p\tA_i \, , \spcc \tW_p\tA_i\tW_p \, , \spcc \tA_i^2\tW_p - \tW_p\tA_i^2 \, , \spcc \tW_p\tA_i\tW_p^2 - \tW_p^2\tA_i\tW_p \, , \]
\[ \tW_p\ta_m \ot\tW_p\ta_m \, , \spcc \ta_m\ot \tW_p\ta_m + \tW_p\ta_m\ot \ta_m \, , \spcc \tW_p\ta_m \ot \tW_p^2\ta_m + \tW_p^2\ta_m \ot \tW_p\ta_m \, , \]
\be\label{nsym-5} \tW_p(\ta_m\ot\ta_n - \ta_n\ot\ta_m) + (\ta_m\ot\ta_n - \ta_n\ot\ta_m)\tW_p \, , \ee
where $(i,j=1,2, \ldots , N; i<j)$, $(p,q = 1,2, \ldots , M ; p < q)$ and $(m,n =1,2, \ldots , P; m< n)$. In \rr{nsym-5}, it is clear that there is a large number of ``complicated'' symmetric tensors in the Smith \cite{smith71} irreducible basis, and this number is far greater than $6$, the number of symmetric tensors in our irreducible basis. We note that all of Smith's symmetric tensors in \rr{nsym-5} can be expressed in terms of the six symmetric tensors given in \rr{gv-sm3}.
\section{Potential Vectors and Tensors}\label{sec-der-1}
In this section, we consider vectors and tensors that can be obtained by differentiating a scalar-valued isotropic function $W$, i.e.,
\be \tg = \pdf{W}{\ta} \, , \spcc \tG = \pdf{W}{\tV} \, , \spcc \tH = \pdf{W}{\tF} \, , \ee
where $\ta$ is a vector, $\tV$ is a symmetric tensor and $\tF$ is a non-symmetric tensor. We call these potential vectors/tensors. For example, in non-linear hyper-elasticity, the potential nominal stress ${\dis \tS=\pdf{\rp{W}{e}}{\tF} }$, where $\tF$ is the deformation gradient tensor and $\rp{W}{e}$ is the strain energy function.
\subsection{Vector}
Let $W(\tH_t, \tA_r,\ta_s)$ be a scalar-valued isotropic function and let $\ta=\ta_1$. From Appendix B and following the work of Shariff \cite{shariff17a}, we obtain the relation
\[ \tg(\tH_t, \tA_r,\ta_s) = \pdf{W}{\ta} = \pdf{W}{\ld}\tv_1 + \left(\frd{1}{\ld} \pdf{W}{\tv_1} \cdot\tv_2 \right)\tv_2 + \left( \frd{1}{\ld} \pdf{W}{\tv_1} \cdot\tv_3\right) \tv_3 \]
\be\label{vect-qq1} = \pdf{W}{\ld}\tv_1 + \frd{1}{\ld} \left[ (\tI - \tv_1\ot\tv_1)^T \pdf{W}{\tv_1} \right] \, , \ee
where $\ld = \sqrt{\ta\cdot\ta}$. It is clear from \rr{vect-qq1} that, since the coefficients of $\tv_i$ are scalar-valued isotropic functions, $\tg$ is a vector-valued isotropic function.
\subsection{Symmetric Tensor-Valued Isotropic Function $\tG$}
In this case, we let ${\dis \tV=\tA_1 = \sum_{i=1}^3 \ld_i \tv_i\ot\tv_i}$. Shariff \cite{shariff17a} has shown that the tensor-valued isotropic function
\[ \tG(\tH_t, \tA_r,\ta_s) = \pdf{W}{\tV} \]
\be\label{sym-da} =\sum_{i=1}^3 \pdf{W}{\ld_i} \tv_i\ot\tv_i + \sum_{i,j=1 \, , i<j }^3 \frd{1}{2(\ld_i-\ld_j)} (\pdf{W}{\tv_i}\cdot\tv_j - \pdf{W}{\tv_j}\cdot\tv_i)(\tv_i\ot\tv_j + \tv_j\ot\tv_i) \, .
\ee
\subsection{Non-symmetric Tensor-Valued Isotropic Function $\tH$}
In this case, in view of the singular value decomposition, we have ${\dis \tH_1=\tF =\sum_{i=1}^3 \ld_i \tv_i\ot\tu_i }$, where $\ld_i$ are the square roots of the eigenvalues of $\tF\tF^T$, $\tv_i$ is a unit eigenvector of $\tF\tF^T$ and $\tu_i$ is a unit eigenvector of $\tF^T\tF$. Shariff \cite{shariff17a} (using a derivative convention used in Itskov \cite{itskov2013}) has shown that the tensor-valued isotropic function
\[ \tH(\tH_t, \tA_r,\ta_s) = \pdf{W}{\tF} \]
\be = \sum_{i=1}^3 \pdf{W}{\ld_i}\tv_i\ot\tu_i + \sum_{i,j=1, i\ne j}^3 \frd{\left(\ld_i (\pdf{W}{\tu_i}\cdot\tu_j - \pdf{W}{\tu_j}\cdot\tu_i) + \ld_j(\pdf{W}{\tv_i}\cdot\tv_j - \pdf{W}{\tv_j}\cdot\tv_i) \right) \tv_i\ot\tu_j}{\ld_i^2-\ld_j^2} \, .\ee
\section{Remark}
In this communication we have shown that we need only $3$ linearly independent vectors to represent both potential and non-potential vectors, and a maximum of only $9$ linearly independent tensors to represent both potential and non-potential tensors. However, the number of functions in a Smith \cite{smith71} or Boehler \cite{boehler77} irreducible basis required to represent a potential vector/tensor is generally not the same as that required to represent a non-potential vector/tensor. For example, consider finite strain transversely isotropic elasticity with the preferred direction $\ta$ in the undeformed configuration. Let $\tS(\tC,\tL)$ be the second Piola-Kirchhoff stress tensor, where $\tC$ is the right Cauchy-Green tensor and $\tL=\ta\ot\ta$. Using Smith \cite{smith71} and Boehler \cite{boehler77} tensor functions, we have
\be\label{rem-Sa} \tS = \alpha_0 \tI + \alpha_1\tL + \alpha_2 \tC + \alpha_3\tC^2 + \alpha_4(\tC\tL + \tL\tC) + \alpha_5(\tC^2\tL + \tL\tC^2) \, , \ee
where $\alpha_0, \ldots , \alpha_5$ are isotropic invariants of the set $\{\tC,\tL\}$. For a hyperelastic material, there exists a strain energy function
\be W(\tC,\tL) = \hat{W}(I_1,I_2,I_3,I_4,I_5) \, , \ee
where the invariants
\be I_1 =\tr \tC \, , \spcc I_2=\tr \tC^2 \, , \spcc I_3=\tr \tC^3 \, , \spcc I_4=\tr (\tC\tL) \, , \spcc I_5=\tr (\tC^2\tL) \, . \ee
The second (potential) Piola-Kirchhoff stress tensor then has the relation
\be\label{rem-S} \tS = \pdf{W}{\tE} = 2\pdf{\hat{W}}{I_1} \tI + 4\pdf{\hat{W}}{I_2}\tC + 6\pdf{\hat{W}}{I_3}\tC^2+ 2\pdf{\hat{W}}{I_4}\tL + 2\pdf{\hat{W}}{I_5}(\tC\tL + \tL\tC) \, , \spcc \tE=\frd{1}{2}(\tC-\tI) \, . \ee
Comparing \rr{rem-Sa} and \rr{rem-S}, we observe that the representation for the hyperelastic material does not include the last term in \rr{rem-Sa}, i.e., $\tC^2\tL + \tL\tC^2$. It seems, {\it at first sight}, that if we use Smith \cite{smith71} and Boehler \cite{boehler77} irreducible functions, the constitutive equation \rr{rem-Sa} {\it cannot} be described by a strain energy function (see comments made in Itskov \cite{itskov2013}, page 144). However, if we express the tensors
\be \tI \, , \spcc \tL \, , \spcc \tC \, , \spcc \tC^2 \, , \spcc \tC\tL + \tL\tC \, , \spcc \tC^2\tL + \tL\tC^2 \, \ee
in terms of the tensors $\tv_i\ot\tv_j$ ($\tv_i$ is an eigenvector of $\tC$), where their scalar coefficients are isotropic invariants of the set $\{\tC,\tL \}$, we can easily equate \rr{rem-Sa} with \rr{rem-S}, which suggests that, when expressed in terms of the basis tensors $\tv_i\ot\tv_j$, the constitutive equation \rr{rem-Sa} {\it can} be described by a strain energy function.
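The last step of this argument is simple to verify numerically. The sketch below (assuming NumPy; the deformation and preferred direction are random) expands $\tC^2\tL + \tL\tC^2$ in the eigenbasis of $\tC$ and confirms that the six spectral tensors of \rr{gv-sm3} suffice:
\begin{verbatim}
# Expand C^2 L + L C^2 in the eigenbasis of C (assumes NumPy).
import numpy as np

rng = np.random.default_rng(4)
F = rng.normal(size=(3, 3))
C = F.T @ F                               # a right Cauchy-Green tensor
a = rng.normal(size=3); a /= np.linalg.norm(a)
L = np.outer(a, a)                        # L = a x a

lam, V = np.linalg.eigh(C)                # eigenbasis of C
T = C @ C @ L + L @ C @ C                 # the "extra" Smith tensor
Tc = V.T @ T @ V                          # its invariant components t_ij

rebuilt = sum(Tc[i, j] * np.outer(V[:, i], V[:, j])
              for i in range(3) for j in range(3))
# Tc is symmetric, so only the six tensors of (gv-sm3) actually appear.
print(np.allclose(rebuilt, T), np.allclose(Tc, Tc.T))   # True True
\end{verbatim}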
In general, following the above example, it can be easily shown that a non-potential vector/tensor can always be represented by a potential vector/tensor.
\section*{Appendix A: $P$-property}
\def\theequation{A\arabic{equation}}
\setcounter{equation}{0}
The description of the $P$-property uses the eigenvalues $\ld_i$ and eigenvectors $\tv_i$ of the symmetric tensor $\tA_1$. A general anisotropic invariant, whose arguments are expressed in terms of spectral invariants with respect to the basis $\{ \tv_1,\tv_2, \tv_3 \}$, can be written in the form
\be\label{pa1}\Phi &=& \bar{W}(\ld_i, \tv_i\cdot\tA_r\tv_j, \tv_i\cdot\ta_s) \nn \\ &=& \tilde{W}(\lambda_1,\lambda_2,\lambda_3,\tv_1,\tv_2,\tv_3) \, , \ee
where
\be r=2,\ldots, N \, , \spcc s=1,2,\ldots, P \, , \ee
and, in Eqn. \rr{pa1}$_2$, the appearance of $\tA_r$ and $\ta_s$ is suppressed to facilitate the description of the $P$-property. $\tilde{W}$ must satisfy the symmetry property
\be\label{pa1a} \tilde{W}(\lambda_1,\lambda_2,\lambda_3,\tv_1,\tv_2,\tv_3) = \tilde{W}(\lambda_2,\lambda_1,\lambda_3,\tv_2,\tv_1,\tv_3) = \tilde{W}(\lambda_3,\lambda_2,\lambda_1,\tv_3,\tv_2,\tv_1) \, . \ee
In view of the non-unique values of $\tv_i$ and $\tv_j$ when $\lambda_i=\lambda_j$, a function $\tilde{W}$ should be independent of $\tv_i$ and $\tv_j$ when $\lambda_i=\lambda_j$, and $\tilde{W}$ should be independent of $\tv_1$, $\tv_2$ and $\tv_3$ when $\lambda_1=\lambda_2=\lambda_3$. Hence, when two or three of the eigenvalues are equal, the scalar function $\Phi$ must take one of the following forms
\[ \Phi = \left\{ \begin{array}{cc} \rp{W}{a}(\ld,\ld_k,\tv_k) \, , & \ld_i=\ld_j=\ld \, , i\ne j \ne k \ne i \\ \rp{W}{b}(\ld) \, , & \ld_1=\ld_2=\ld_3=\ld \hspace*{\fill} \end{array}\right. \]
For example, consider
\be\label{apa-11} \Phi=\ta\cdot\tA_1\ta =\sum_{i=1}^3 \ld_i(\ta\bl\tv_i)^2 \, , \ee
where $\ta$ is a fixed unit vector and
\be \sum_{i=1}^3 (\ta\bl\tv_i)^2 = 1 \, . \ee
If
\be \ld_1=\ld_2=\ld \, , \ee
we have
\be \Phi= \rp{W}{a}(\ld,\ld_3,\tv_3) =\ld + (\ld_3 - \ld)(\ta\bl\tv_3)^2 \, \ee
and in the case of $\ld_1=\ld_2=\ld_3=\ld$,
\be \Phi=\rp{W}{b}(\ld) = \ld \, . \ee
Hence, the invariant \rr{apa-11} satisfies the $P$-property, and we note that all the classical invariants described in Spencer \cite{spencer71} satisfy the $P$-property. In reference \cite{shariff20}, the $P$-property described here is extended to non-symmetric tensors such as the two-point deformation tensor $\tF$.
\section*{Appendix B}
\def\theequation{B\arabic{equation}}
\setcounter{equation}{0}
A dyadic product $\ta\ot\ta$ has the spectral representation
\be\label{apb-1} \ta\ot\ta = \ld^2 \tv_1\ot\tv_1 \, , \spcc \ld = \sqrt{\ta\cdot\ta} \, , \spcc \tv_1 = \frd{1}{\ld}\ta \, . \ee
The unit eigenvectors $\tv_2$ and $\tv_3$, associated with zero eigenvalues, are non-unique. In view of \rr{apb-1}, we have $\ta = \ld\tv_1$ and hence
\be\label{apb-2} d\ta = d\ld \tv_1 + \ld d\tv_1 = d\ld \tv_1 + \ld ( da_2 \tv_2 + da_3\tv_3) \, . \ee
Note that the above expression uses the relation
\be\label{apb-4} d\tv_1 = da_2 \tv_2 + da_3\tv_3 \, , \ee
where $da_2$ and $da_3$ are arbitrary (since $\tv_1$ is a unit vector, $d\tv_1$ is perpendicular to $\tv_1$). We can write
\be\label{apb-3} d\ta = \sum_{i=1}^3 (d\ta)_i \tv_i \, , \spcc (d\ta)_1 = d\ld \, , \spcc (d\ta)_2= \ld da_2 \, , \spcc (d\ta)_3=\ld da_3 \, . \ee
For a scalar isotropic function, write $W=\rp{W}{a}(\ta) =\rp{W}{s}(\ld,\tv_1)$.
Express \be\label{apb-5} \pdf{\rp{W}{a}}{\ta} = \sum_{i=1}^3 \left( \pdf{\rp{W}{a}}{\ta} \right)_i \tv_i \, , \spcc \left( \pdf{\rp{W}{a}}{\ta} \right)_i = \pdf{\rp{W}{a}}{\ta}\cdot\tv_i \, . \ee We then have \be dW = \sum_{i=1}^3 \left( \pdf{\rp{W}{a}}{\ta} \right)_i (d\ta)_i = \pdf{\rp{W}{s}}{\ld} d\ld + \pdf{\rp{W}{s}}{\tv_1}\cdot d\tv_1 \, . \ee Using \rr{apb-3} to \rr{apb-5} and since $d\ld , da_2$ and $da_3$ are arbitrary, we obtain the relations \be \left( \pdf{\rp{W}{a}}{\ta} \right)_1 = \pdf{\rp{W}{s}}{\ld} \, , \spcc \left( \pdf{\rp{W}{a}}{\ta} \right)_2 = \frd{1}{\ld} \pdf{\rp{W}{s}}{\tv_1} \cdot\tv_2 \, , \spcc \left( \pdf{\rp{W}{a}}{\ta} \right)_3 = \frd{1}{\ld} \pdf{\rp{W}{s}}{\tv_1} \cdot\tv_3 \, . \ee
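Since the relations above follow from the chain rule alone, they can be exercised on any smooth function of $\ta$. The sketch below (assuming NumPy; the choice $W = \ta\cdot\tM\ta$ with a fixed symmetric $\tM$ is purely illustrative) compares the spectral formula with a central finite difference:
\begin{verbatim}
# Finite-difference check of dW/da in spectral form (assumes NumPy).
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(3, 3)); M = (M + M.T) / 2
a = rng.normal(size=3)

lam = np.sqrt(a @ a)                 # lambda = sqrt(a.a)
v1 = a / lam

# For W_s(lambda, v1) = lambda^2 (v1 . M v1):
dW_dlam = 2 * lam * (v1 @ M @ v1)    # dW_s / dlambda
dW_dv1 = 2 * lam**2 * (M @ v1)       # dW_s / dv_1
grad = dW_dlam * v1 + (np.eye(3) - np.outer(v1, v1)) @ dW_dv1 / lam

h, fd = 1e-6, np.zeros(3)            # central finite difference of a.Ma
for i in range(3):
    e = np.zeros(3); e[i] = h
    fd[i] = ((a + e) @ M @ (a + e) - (a - e) @ M @ (a - e)) / (2 * h)

print(np.allclose(grad, fd), np.allclose(grad, 2 * M @ a))   # True True
\end{verbatim}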
\section{Neutron Lifetime Discrepancy}
Although the neutron has been known for almost a century, the latest experimental results suggest that it may still be hiding a deep secret. In the currently established framework of particle physics, the Standard Model, the neutron decays almost exclusively through beta decays, involving
\begin{eqnarray}
n \rightarrow p + e^- \!+ \bar\nu_e
\end{eqnarray}
and radiative corrections to this process. A calculation of the neutron lifetime in the Standard Model yields \cite{Marciano:2005eca}
\begin{eqnarray}\label{life3}
\tau_n^{\rm SM} = \frac{4908.7(1.9)\,{\rm s}}{|V_{ud}|^2(1+3g_A^2)} \ ,
\end{eqnarray}
where $g_A$ is the axial-vector coefficient in beta decay, i.e., $\mathcal{M} = \tfrac{1}{\sqrt2}\,{G_F V_{ud}\, g_V}\big[\bar{p} \,\gamma_\mu n - g_A \bar{p}\, \gamma_5\gamma_\mu n\big] \left[ \bar{e} \,\gamma^\mu (1-\gamma_5) \nu \right]$. By using the average values of $V_{ud}$ and $g_A$ extracted from experiments and adopted by the Particle Data Group (PDG) \cite{Tanabashi:2018oca}, one arrives at the neutron lifetime in the range $875.3 \ {\rm s}< \tau_n < 891.2 \ {\rm s}$ within $3\,\sigma$. In turn, a recent lattice QCD calculation of $g_A$ \cite{Chang:2018uxx,Berkowitz:2018gqe} gave $\tau_n = 885 \pm 15 \ {\rm s}$.

There are two qualitatively different approaches to measuring the neutron lifetime: the bottle experiments and the beam experiments. The bottle method relies on trapping neutrons in a container and counting them at several points in time. The decaying exponential
\begin{eqnarray}
N_n(t) = N_n(0) \,\exp\left({-{t}/{\tau_n}}\right)
\end{eqnarray}
is then fit to the data points $N_n(t)$, and $\tau_n^{\rm bottle}$ is read off. Such a measurement yields the total neutron lifetime and is independent of the actual decay channels. The average bottle result quoted by the PDG and based on five experiments \cite{Mampe,Serebrov:2004zf,Pichlmaier:2010zz,Steyerl:2012zz,Arzumanov:2015tea} is
\begin{eqnarray}
\tau_n^{\rm bottle} = 879.6 \pm 0.6 \ {\rm s} \ .
\end{eqnarray}
The two most recent bottle experiments \cite{Serebrov:2017bzo,Pattie:2017vsj} provided values for $\tau_n$ within $2\, \sigma$ of this average.

A different approach has been implemented in beam experiments, where the neutron lifetime is determined by counting the protons ($N_p$) resulting from neutron decays. Estimating also the number of neutrons in the beam ($N_n$) that those protons originate from, $\tau_n^{\rm beam}$ is given by
\begin{eqnarray}\label{onee3}
{\tau^{\rm beam}_n} = \frac{N_n}{{d N_p}/{dt}} = \frac{\tau_n}{{\rm Br}(n\rightarrow p + {\rm anything})} \ .
\end{eqnarray}
In the Standard Model $\,{\rm Br}(n\rightarrow p \,+\, {\rm anything}) = 100\%$, implying the two lifetimes are the same, $\tau^{\rm beam}_n= \tau^{\rm bottle}_n$. This equality no longer holds if other, beyond Standard Model neutron decay channels not involving a proton in the final state are allowed. In such a case, the branching fraction ${\rm Br}(n\rightarrow p + {\rm anything}) < 100\%$ and, given Eq.\,(\ref{onee3}),
\begin{eqnarray}\label{ineq3}
\tau^{\rm beam}_n > \tau^{\rm bottle}_n \ .
\end{eqnarray}
The average based on two beam experiments \cite{Byrne:1996zz,Yue:2013qrc} (see also Ref.\,\cite{Nico:2004ie} for the original data used in Ref.\,\cite{Yue:2013qrc}) and adopted by the PDG is
\begin{eqnarray}
\tau_n^{\rm beam} = 888.0 \pm 2.0 \ {\rm s} \ .
\end{eqnarray} This represents a $4.0 \, \sigma$ discrepancy with $\tau_n^{\rm bottle}$ and hints that the inequality in Eq.~(\ref{ineq3}) might actually hold \cite{Green}. The tension between the two types of experiments might arise from underestimated systematic errors, but it may also be an actual sign of new physics. We focus on the latter case. Assuming that the discrepancy between the experimental results originates from an incomplete understanding of the physics behind neutron decay, the results of the two types of experiments can be reconciled if \begin{eqnarray} {{\rm Br}(n\rightarrow p + {\rm anything})} \approx 99\% \ , \end{eqnarray} while the remaining 1\% arises from \textbf{\emph{neutron dark decays}}, involving at least one dark sector particle in the final state. \section{Neutron Dark Decay} To investigate how such decays could have gone unnoticed in other experiments, let us consider a general scenario of a neutron decaying to a final state $f$ with the sum of final state particle masses equal to $M_f$. Of course, for the neutron to undergo a dark decay, $M_f$ has to be smaller than the neutron mass, i.e., $M_f < m_n$. The lower bound on $M_f$ is provided by experiments looking for neutron disappearance inside a nucleus. A neutron dark decay inside a nucleus $(Z, A)$ could produce a daughter nucleus in an excited state $(Z, A\!-\!1)^*$, leading to its subsequent de-excitation with the emission of secondary particles, e.g.~gamma rays. A search for such signatures has been conducted by the SNO experiment \cite{Ahmed:2003sy} and the KamLAND experiment \cite{Araki:2005jt}, placing a constraint of $\tau_{n\to {\rm invisible}}> 5.8\times 10^{29}$ years, adopted by the PDG as the bound on the neutron invisible decay channel. However, if the condition $M_f > m_n - S_n$ is fulfilled, with $S_n$ being the neutron separation energy in a given nucleus, then the decay $(Z,A) \to (Z,A\!-\!1) + f$ is kinematically forbidden, while the neutron dark decay $n \to f$ is still allowed. Among all stable nuclei, the nucleus with the smallest neutron separation energy is $^9{\rm Be}$, with $S_n(^9{\rm Be}) = 1.664 \ {\rm MeV}$. Thus, the requirement of $^9{\rm Be}$ stability enforces $M_f > m_n - 1.664 \ {\rm MeV}$, which leads to the condition \begin{eqnarray}\label{con3} 937.900 \ {\rm MeV} < M_f < 939.565 \ {\rm MeV} \ . \end{eqnarray} Since $937.9 \ {\rm MeV} > m_p - m_e$, the requirement in Eq.\,(\ref{con3}) also ensures that the proton cannot undergo a dark decay. This opens the way to a whole new class of possible neutron decay channels: \begin{equation*} n \to \chi\,\gamma \ , \ \ \ \ n \to \chi\,\phi \ , \ \ \ \ n \to \chi\,e^+e^- \ , \ \ \ .\,.\,. \ \ \ , \end{equation*} where $\chi$ is a dark fermion, $\phi$ is a dark scalar or a dark vector, and the ellipsis denotes other final states involving additional dark particles, photons and neutrinos. We now analyze the first two cases in more detail. \subsection{${\boldsymbol {\rm Neutron \to dark \ particle +photon}}$} This simplest case involves only one dark fermion $\chi$ and a monochromatic photon in the final state. The allowed range of masses for $\chi$, governed by Eq.~(\ref{con3}), is \begin{eqnarray}\label{range3} 937.900 \ {\rm MeV} < m_\chi < 939.565 \ {\rm MeV} \, . \end{eqnarray} The energy of the corresponding monochromatic photon therefore falls within the range \begin{eqnarray}\label{phE} 0 < E_\gamma < 1.664 \ {\rm MeV} \, . \end{eqnarray} In the limit $m_\chi \to m_n$, the photon energy $E_\gamma \to 0$.
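As a quick numerical cross-check of the figures quoted above, the following snippet (our illustration; it uses only the PDG averages and masses cited in the text) reproduces the size of the bottle--beam tension, the implied branching fraction, and the photon energy window:
\begin{verbatim}
# Cross-check of the quoted numbers (illustrative only; inputs are
# the PDG averages and masses cited in the text).
tau_bottle, dtau_bottle = 879.6, 0.6   # s, bottle average
tau_beam, dtau_beam = 888.0, 2.0       # s, beam average

# Significance of the bottle/beam discrepancy.
tension = (tau_beam - tau_bottle) / (dtau_bottle**2 + dtau_beam**2)**0.5
print("tension: %.1f sigma" % tension)                 # ~4.0 sigma

# Branching fraction implied by Eq. (onee3) if tau_n = tau_bottle.
print("Br(n -> p + anything): %.3f" % (tau_bottle / tau_beam))  # ~0.991

# Maximal photon energy, m_n - 937.900 MeV (quoted as 1.664 MeV in
# the text; the small difference comes from rounding of S_n(9Be)).
print("max E_gamma: %.3f MeV" % (939.565 - 937.900))
\end{verbatim}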
The dark fermion $\chi$ could be a dark matter particle, in which case its stability would require $m_\chi < m_p+m_e$, so that $\chi$ does not undergo beta decay through an off-shell neutron. In this dark matter case, the allowed energy range for the photon reduces to $ 0.782 \ {\rm MeV} <E_\gamma < 1.664 \ {\rm MeV} $. \noindent An effective Lagrangian for the decay $n \to \chi\,\gamma$ is \begin{eqnarray}\label{lageff113} \mathcal{L}^{\rm eff}_1 \!\!\!&=&\!\!\! \bar{n}\,\big(i\slashed\partial-m_n +\tfrac{g_ne}{2 m_n}\sigma^{\,\mu\nu}F_{\mu\nu}\big) \,n\nonumber\\ &+& \!\!\! \bar{\chi}\,\big(i\slashed\partial-m_\chi\big) \,\chi + \varepsilon \left(\bar{n}\,\chi + \bar{\chi}\,n\right) \ , \end{eqnarray} where $g_n$ is the $g$-factor of the neutron and $\varepsilon$ is a model-dependent parameter with mass dimension one that governs the mixing between $\chi$ and $n$. The Lagrangian in Eq.\,(\ref{lageff113}) gives a neutron dark decay rate of \begin{eqnarray}\label{eff1} \Delta\Gamma_{n\rightarrow \chi\gamma} = \frac{g_n^2e^2}{8\pi}\left(1-\frac{m_\chi^2}{m_n^2}\right)^3 \frac{m_n\,\varepsilon^2}{(m_n-m_\chi)^2} \ . \end{eqnarray} To explain the discrepancy between bottle and beam neutron lifetime experiments, $\Delta\Gamma_{n\rightarrow \chi\gamma} \approx \Gamma_n/100$, where $\Gamma_n$ is the total neutron decay rate in the Standard Model. A phenomenologically viable particle physics model for the case $n \to \chi\,\gamma$ is discussed in Sec.~\ref{mod1} (Model 1). \vspace{1mm} \subsection{${\boldsymbol {{\rm Neutron \to two \ dark \ particles} }}$ } A neutron dark decay with the final state consisting of only dark particles is realized by $n \to \tilde\chi^* \to \chi\,\phi$, where $\chi$ and $\tilde\chi$ are dark fermions and $\phi$ is a dark scalar ($\phi$ could also be a dark vector). In this case the requirement in Eq.\,(\ref{con3}) takes the form \begin{eqnarray} 937.900 \ {\rm MeV} < m_\chi + m_\phi < 939.565 \ {\rm MeV} \ . \end{eqnarray} Since this condition involves only the sum of the $\chi$ and $\phi$ masses, $m_\chi$ does not need to be close to $m_n$, e.g.\ a scenario where $m_\chi \approx m_\phi \approx m_n/2$ is allowed. However, nuclear stability requires that the mass of the intermediate $\tilde\chi$ satisfy \begin{eqnarray} m_{\tilde\chi} > 937.9 \ {\rm MeV} \end{eqnarray} to prevent $^9{\rm Be} \to \!\,^8{\rm Be} + \tilde\chi$. If, in addition, $|m_\chi - m_\phi| < m_p + m_e$, then neither $\chi$ nor $\phi$ can undergo beta decay. \vspace{1mm} \noindent An effective Lagrangian describing $n\to \chi\,\phi$ is \begin{eqnarray}\label{efflag2} \mathcal{L}^{\rm eff}_{2} \!\!\!&=& \!\!\! \mathcal{L}^{\rm eff}_{1}(\chi \rightarrow \tilde\chi) + \big(\lambda_\phi \,\bar{\tilde{\chi}}\, \chi\, \phi + {\rm h.c.}\big)\nonumber\\ &+&\!\!\! \bar{\chi}\,\big(i\slashed\partial-m_{\chi}\big) \,\chi + \partial_\mu \phi^* \partial^\mu \phi - m_\phi^2\, |\phi|^2 \ , \ \ \ \end{eqnarray} resulting in the neutron dark decay rate \begin{eqnarray}\label{rateb3} \Delta\Gamma_{n\rightarrow \chi\phi} = \frac{|\lambda_\phi|^2}{16\pi}\sqrt{f(x, y)}\, \frac{m_n\,\varepsilon^2}{(m_n-m_{{\tilde\chi}})^2} \ , \end{eqnarray} where $f(x, y) =[(1-x)^2-y^2] \, [(1+x)^2-y^2]^3$, $x=m_\chi/m_n$ and $y=m_\phi/m_n$. For $m_{\tilde\chi} > m_n$ the only available neutron dark decay channel is $n\to \chi\,\phi$ and $ \Delta\Gamma_{n\rightarrow \chi\phi} \approx \Gamma_n/100$ is needed to explain the neutron lifetime discrepancy.
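To get a feel for the scales involved, one can invert Eq.\,(\ref{eff1}) numerically. The sketch below is our illustration rather than part of the original analysis; it assumes the standard values $g_n \simeq -3.826$ and $e^2 = 4\pi\alpha$, and uses $\hbar = 6.582\times 10^{-25}\ {\rm GeV\,s}$ to convert the bottle lifetime into the total width $\Gamma_n$:
\begin{verbatim}
# Size of the mixing parameter epsilon required for
# Delta Gamma(n -> chi gamma) = Gamma_n / 100  (illustrative sketch;
# g_n, alpha and hbar are standard values, not taken from the text).
import math

hbar = 6.582e-25                  # GeV s
m_n, m_chi = 0.939565, 0.9379     # GeV, benchmark point
g_n = -3.826                      # neutron g-factor
e2 = 4 * math.pi / 137.036        # e^2 = 4 pi alpha

Gamma_n = hbar / 879.6            # total neutron width in GeV
dGamma = Gamma_n / 100            # required dark decay rate

x2 = (m_chi / m_n)**2
pref = g_n**2 * e2 / (8 * math.pi) * (1 - x2)**3 * m_n / (m_n - m_chi)**2
print("epsilon ~ %.1e GeV" % math.sqrt(dGamma / pref))   # ~1e-13 GeV
\end{verbatim}
The resulting $\varepsilon \sim 10^{-13}\ {\rm GeV}$ illustrates how tiny the required $n$--$\chi$ mixing is.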
In the case $m_{\tilde\chi} < m_n$, the decay channel $n\to \tilde\chi\,\gamma$ is also allowed. The ratio of the corresponding dark decay rates is \begin{eqnarray}\label{rate333} \frac{\Delta\Gamma_{n\rightarrow \tilde\chi\gamma}}{\Delta\Gamma_{n\rightarrow \chi\phi}} = \frac{2g_n^2e^2}{|\lambda_\phi|^2} \frac{(1-\tilde{x}^2)^3 }{\sqrt{f(x, y)}}\ , \end{eqnarray} where $\tilde{x} = m_{\tilde\chi}/m_n$. To account for the experimental discrepancy, $ \Delta\Gamma_{n\rightarrow \chi\phi} + \Delta\Gamma_{n\rightarrow \tilde\chi\gamma} \approx \Gamma_n/100$. A viable model for the decay $n\to \chi\,\phi$ is provided in Sec.~\ref{mod2} (Model 2). \section{Particle Physics Models}\label{sec3} We emphasize that our neutron dark decay proposal is very general and the models presented below serve only as an illustration of the simplest scenarios. Theories with a more complex dark sector remain to be explored and, as discussed in Sec.~\ref{four3}, their minimal realizations can already solve several outstanding problems in astrophysics. \subsection{Model 1 (${\boldsymbol{n\to \chi\,\gamma}}$)}\label{mod1} The minimal model for the neutron dark decay requires only two new particles: a Standard Model singlet Dirac fermion $\chi$ and a scalar $\Phi$, chosen to be an ${\rm SU}(3)_c$ triplet and ${\rm SU}(2)_L$ doublet carrying hypercharge $Y = -1/3$. The Lagrangian of such a model is \begin{eqnarray}\label{L13} \mathcal{L}_{1} &\!\!\!=\!\!\!& \Big[ \lambda_q \,\epsilon^{ijk}\, \overline{u^c_L}_{i}\, d_{Rj} \Phi_k + \lambda_\chi\Phi^{*i}\bar\chi \,d_{Ri} + {\rm h.c.}\Big] \nonumber\\ &\!\!\!-\!\!\!& M_\Phi^2 \hspace{0.2mm}|\Phi|^2 - m_\chi \,\bar\chi\,\chi \ , \ \ \ \ \ \ \ \ \end{eqnarray} where $u^c_L$ is the charge conjugate of $u_R$. Assigning $B_\chi = 1$ and $B_\Phi=-2/3$, the theory conserves baryon number. A diagram for $n\to \chi\,\gamma$ in this model is presented in Fig.~\ref{fig:1}.\\ \vspace{2mm} \begin{figure}[h!] \centering \includegraphics[height=1.2in]{figure_1} \vspace{-2mm} \caption{\small{Neutron dark decay $n \to \chi \, \gamma$ in Model 1.}} \vspace{1mm} \label{fig:1} \end{figure} \vspace{2mm} The neutron dark decay rate is obtained by matching the Lagrangian in Eq.\,(\ref{lageff113}) with that in Eq.\,(\ref{L13}). The result is given by Eq.\,(\ref{eff1}) with $\varepsilon = {\beta\,\lambda_q\lambda_\chi }/{M_{\Phi}^2}$, where $\beta$ is defined through $\langle 0| \,\epsilon^{ijk} (\overline{u^c_L}_{i} d_{Rj}) \,d_{Rk}^\rho |n\rangle = \beta \, \left({1+\gamma_5}\right)^{\rho}_{\, \sigma} u^\sigma/2 $, with $u$ being the neutron spinor. Lattice calculations give $ \beta \approx 0.014 \ {\rm GeV}^3$ \cite{Aoki:2017puj}.\vspace{1mm} There is a large parameter space available for which $\Delta\Gamma_{n\rightarrow \chi\gamma} \approx \Gamma_n/100$. For example, if one takes the mass of $\chi$ to be at the lower end of the allowed range specified in Eq.\,(\ref{range3}), i.e., $m_\chi = 937.9 \ {\rm MeV}$, then the mass of $\Phi$ and the couplings in the model need to satisfy the relation \begin{eqnarray}\label{cco3} \frac{M_\Phi}{\sqrt{|\lambda_q\lambda_\chi|}} \approx 400 \ {\rm TeV} \ . \end{eqnarray} Therefore, $\Phi$ easily avoids all collider bounds provided that ${M_\Phi} \!\!\gtrsim \!\!1 \ {\rm TeV}$. In addition, since $\chi$ is a Dirac fermion, it escapes the stringent constraints arising from neutron-antineutron oscillation \cite{Abe:2011ky} and dinucleon decay \cite{Gustafson:2015qyo} searches.
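As a consistency check of Eq.\,(\ref{cco3}) (again our illustration), one can combine $\varepsilon = \beta\,\lambda_q\lambda_\chi/M_{\Phi}^2$ with the lattice value $\beta \approx 0.014\ {\rm GeV}^3$ and the estimate $\varepsilon \sim 10^{-13}\ {\rm GeV}$ obtained above:
\begin{verbatim}
# Scale implied by epsilon = beta * lambda_q * lambda_chi / M_Phi^2
# (illustrative sketch; epsilon taken from the previous estimate).
beta = 0.014                # GeV^3, lattice value
eps = 9.6e-14               # GeV
scale = (beta / eps)**0.5   # = M_Phi / sqrt(lambda_q * lambda_chi), in GeV
print("M_Phi / sqrt(|lq * lchi|) ~ %.0f TeV" % (scale / 1e3))  # ~380 TeV
\end{verbatim}
which indeed reproduces the quoted $\sim\!400$ TeV scale.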
\subsection{Model 2 (${\boldsymbol{n\to \chi\,\phi}}$)}\label{mod2} The entirely dark decay of the neutron, involving two dark particles in the final state, requires adding four fields to the Standard Model: the Dirac fermions $\chi$ and $\tilde\chi$, a scalar $\phi$ and the colored heavy scalar $\Phi$ introduced in the previous case. The Lagrangian of the model resembles the one for Model 1 with $\chi$ substituted by $\tilde\chi$ and an additional interaction term between $\tilde\chi$, ${\chi}$ and $\phi$, i.e., \begin{eqnarray}\label{333} \mathcal{L}_{2} &\!\!\!=\!\!\!& \mathcal{L}_{1}(\chi \rightarrow \tilde\chi) + ( \lambda_\phi \,\bar{\tilde\chi}\, \chi \,\phi + {\rm h.c.}) \nonumber\\ &\!\!\!-\!\!\!& m_\phi^2\, |\phi|^2 - m_\chi \,\bar\chi\,\chi \ . \end{eqnarray} Baryon number is conserved upon assigning $B_{\tilde\chi} = B_\phi=1$ and $B_\chi \!\!=\!\! 0$. The diagram for the neutron dark decay $n \to \chi \, \phi$ in this model is shown schematically in Fig.~\ref{fig:2}.\\ \begin{figure}[h!] \centering \includegraphics[height=1.3in]{figure_2} \vspace{-8mm} \caption{\small{Neutron dark decay $n \to \chi \, \phi$ in Model 2.}} \vspace{1mm} \label{fig:2} \end{figure} After matching the Lagrangians in Eqs. (\ref{efflag2}) and (\ref{333}), the rate for the neutron dark decay $n \to \chi \, \phi$ is given by Eq.~(\ref{rateb3}) with $\varepsilon = {\beta\,\lambda_q\lambda_{\tilde\chi}}/{M_{\Phi}^2}$. The condition $\Delta\Gamma_{n\rightarrow \chi\phi} \approx \Gamma_n/100$, required to explain the neutron lifetime discrepancy when $m_{\tilde\chi} > m_n$, is satisfied for a wide range of parameters. In particular, adopting $m_\chi = 937.9 \ {\rm MeV}$, $m_\phi \approx 0$ and $m_{\tilde\chi} =2 m_n$, the mass of $\Phi$ and the couplings of the model have to satisfy \begin{eqnarray}\label{cco23} \frac{M_\Phi}{\sqrt{|\lambda_q\lambda_{\tilde\chi} \lambda_\phi|}} \approx 300 \ {\rm TeV} \ , \end{eqnarray} again consistent with collider, neutron-antineutron oscillation and dinucleon decay constraints. In the case $m_{\tilde\chi}<m_n$, the additional neutron decay channel $n \to \tilde\chi \,\gamma$ is also available and the sum of the two rates should add up to $\approx\Gamma_n/100$, with their ratio governed by Eq.\,(\ref{rate333}). \section{Theoretical Developments}\label{four3} Our work inspired several theoretical efforts to explore further implications of neutron dark decays. This involved studying the physics of neutron stars in the presence of the new neutron decay channels, constructing neutron dark decay models with a more complex dark sector including self-interactions, building models with dark decays of mesons, as well as inventing alternative, although related, ways to explain the neutron lifetime discrepancy. We discuss those theoretical ideas below. \subsection{Neutron star constraints} The impact of neutron dark decays on neutron stars was considered in Refs.\,\cite{McKeen:2018xwc,Baym:2018ljz,Motta:2018rxp}. The resulting production of dark particles changes the energy density and pressure inside a neutron star, modifying its equation of state. This in turn changes the predictions for the maximum allowed neutron star masses, since they are derived from integrating the Tolman-Oppenheimer-Volkoff equation that explicitly depends on the equation of state. It was shown that the observed neutron star masses ($2M_\odot$ for the heaviest neutron stars discovered) are allowed if strong repulsive self-interactions are present in the dark sector of our models. 
Such interactions are easily introduced in the representative Models 1 and 2 discussed in Sec.~\ref{sec3} by simply adding a dark vector boson coupled strongly to the dark particle $\chi$. Interestingly, a strongly self-interacting dark sector lies along the lines of the self-interacting dark matter paradigm, which was introduced two decades ago \cite{Spergel:1999mh} to solve the core-cusp and missing-satellite problems of the $\Lambda {\rm CDM}$ model. \subsection{Models with a self-interacting dark sector} A model of this type was constructed in Ref.\,\cite{Cline:2018ami}, where a neutron dark decay involving a dark fermion and a dark photon in the final state was considered, i.e., $n \to \chi\, A'$. The effective Lagrangian is \begin{eqnarray}\label{neweff3} \mathcal{L}^{\rm eff} \!\!\!&=&\!\!\! \bar{n}\,\big(i\slashed{D}-m_n +\tfrac{g_ne}{2 m_n}\sigma^{\,\mu\nu}F_{\mu\nu}\big) \,n\nonumber\\ &+& \!\!\! \bar{\chi}\,\big(i\slashed{D}-m_\chi\big) \,\chi + \varepsilon \left(\bar{n}\,\chi + \bar{\chi}\,n\right) \nonumber\\[2pt] &-&\!\!\!\!\tfrac14{F'}_{\!\!\!\mu\nu} {F'}^{\mu\nu} - \tfrac{\delta}{2}{F}_{\!\mu\nu} {F'}^{\mu\nu} - \tfrac{1}{2}m_{A'}^2 {A'}_{\!\!\mu}{A'}^\mu \ , \end{eqnarray} where the covariant derivative $D_\mu = \partial_\mu - i\,g'{A'}_{\!\!\mu}$. It was shown that the strength of the dark photon coupling to the dark particle $\chi$, governed by the parameter $g'$ and resulting in repulsive interactions between the $\chi$ particles, can be chosen such that the neutron lifetime discrepancy is explained and, at the same time, all astrophysical bounds are satisfied, including constraints from neutron stars, galaxy clusters, cosmic microwave background, Big Bang nucleosynthesis and supernovae. If the dark particle $\chi$ in this model is stable, it can contribute to the dark matter in the universe, but cannot account for all of the dark matter. Many of the astrophysical constraints are alleviated if one assumes non-thermal dark matter production. This was shown in Ref.\,\cite{Karananas:2018goc}, where a model for the neutron dark decay $\,n \to \chi \, \phi\,$ was constructed, based on our Model 2, but with a dark boson introduced to mediate large self-interactions of $\chi$. The Lagrangian for the dark sector is \begin{eqnarray} \mathcal{L}_D&\!\!\!=\!\!\!& g\,\bar{\chi}\,\slashed{\hspace{0mm}Z}_D \,\chi + ( \lambda_\phi \,\bar{\tilde\chi}\, \chi \,\phi + {\rm h.c.}) \nonumber\\ &\!\!\!-\!\!\!& i\, g\,Z_D^{\,\mu} \,\big(\phi^*\partial_\mu \,\phi - \phi\,\partial_\mu \phi^* \big) \ . \end{eqnarray} There exists a choice of parameters for which this model satisfies neutron star constraints, remains consistent with all other astrophysical bounds, and $\chi$ makes up all of the dark matter in the universe. In addition, due to the self-interactions of $\chi$, the model is shown to solve the small-scale structure problems of the $\Lambda {\rm CDM}$ model.
\subsection{Baryogenesis} It has recently been shown that the model addressing the neutron lifetime puzzle based on the Lagrangian in Eq.\,(\ref{neweff3}) provides a successful framework for low-scale baryogenesis \cite{Bringmann:2018sbs}. In addition, a model very similar to our Model 2, with couplings of $\tilde\chi$ to other quark flavors and a Majorana (instead of Dirac) fermion $\chi$, has been proposed in the context of low-scale baryogenesis as well \cite{Elor:2018twp}. \subsection{Related solutions} Taking into consideration only the experimental data for $g_A$ from experiments performed after the year 2002, the bottle neutron lifetime is favored \cite{Czarnecki:2018okw}. Based on this observation, explanations of the neutron lifetime discrepancy have been put forward in which it is the bottle lifetime that agrees with the Standard Model prediction for $\tau_n$. The difference in outcomes of the bottle and beam measurements is explained via neutron-mirror neutron oscillations resonantly enhanced in large magnetic fields, thus affecting only beam measurements \cite{Berezhiani:2018eds}, or by invoking a sizable Fierz interference term canceling the dark decay contribution to the neutron decay rate \cite{Ivanov:2018vit}. \section{Experimental Searches} Several experimental efforts were undertaken directly after our results were announced, searching specifically for the signatures we proposed. \subsection{${\boldsymbol {\rm Neutron \to dark \ matter +photon}}$} Within the first few weeks after our results became public, a dedicated experiment was performed at the Los Alamos UCN facility looking for the monochromatic photon in the neutron dark decay $n \to \chi\,\gamma$ \cite{Tang:2018eln}. The search was sensitive to final state photons with energies $0.782 \ {\rm MeV} < E_\gamma < 1.664 \ {\rm MeV}$ and challenged the case ${\rm Br}(n\to \chi\,\gamma) \approx 1 \%$ at a significance level of $2.2 \, \sigma$. The remaining photon energy range, i.e., $E_\gamma < 0.782 \ {\rm MeV}$, is left to be explored. \subsection{${\boldsymbol {{\rm Neutron \to dark \ particle} +{e^+e^-}}}$} Another dedicated experiment, also performed at the Los Alamos UCN facility, looked for $e^+e^-$ pairs from the neutron dark decay $n\to \chi\,e^+e^-$ \cite{Sun:2018yaw}. This search excluded the case with ${\rm Br}(n\to \chi\,e^+e^-) \approx 1 \%$ for the electron-positron energy range $E_{e^+e^-} \gtrsim 2\,m_e + 100 \ {\rm keV}$ with a confidence of nearly $100\%$. The remaining $100 \ {\rm keV}$ energy window was beyond experimental sensitivity. \subsection{\bf Nuclear dark decays} There exist a number of unstable nuclei for which the neutron separation energy is smaller than for $^9{\rm Be}$, i.e., $S_n < 1.664 \ {\rm MeV}$. Those include $^7{\rm H}$, $^{11}{\rm Li}$, $^{11}{\rm Be}$, $^{13}{\rm Li}$, $^{14}{\rm B}$, $^{15}{\rm C}$, $^{16}{\rm Be}$, $^{17}{\rm B}$, $^{17}{\rm C}$, $^{19}{\rm B}$, $^{19}{\rm C}$, $^{22}{\rm C}$, $^{22}{\rm N}$, as well as heavier ones. For these particular nuclei a neutron dark decay can lead to nuclear dark decays if the final state dark particle mass $m_\chi$ falls within the range \begin{eqnarray} 937.9 \ {\rm MeV} < m_\chi < m_n - S_n \ . \end{eqnarray} We proposed to search for such nuclear dark decays in our original paper \cite{Fornal:2018eol}, focusing on the corresponding signatures for $^{11}{\rm Li}$, for which $S_n(^{11}{\rm Li}) = 0.396\ {\rm MeV}$.
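The allowed windows can be made explicit (our illustrative computation; the $S_n$ values are those quoted in this paper, with $S_n(^{11}{\rm Be})$ taken from the discussion below):
\begin{verbatim}
# Nuclear dark decay windows 937.9 MeV < m_chi < m_n - S_n
# (illustrative; S_n values in MeV as quoted in the text).
m_n = 939.565
for nucleus, S_n in [("9Be", 1.664), ("11Li", 0.396), ("11Be", 0.502)]:
    print("%s: 937.9 MeV < m_chi < %.3f MeV" % (nucleus, m_n - S_n))
\end{verbatim}
For $^9{\rm Be}$ the window closes, which is just the stability requirement that fixes the lower bound, while for $^{11}{\rm Li}$ and $^{11}{\rm Be}$ it remains open.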
In that case the decay chain $^{11}{\rm Li} \to \!\,^{10}{\rm Li} +\chi \to \!\,^{9\,}{\rm Li} +n+\chi $ is allowed and the long lifetime of $^9{\rm Li}$ could be used to discriminate against the background from $^{11}{\rm Li}$ beta decays. However, $^9{\rm Li}$ can also be produced in beta-delayed deuteron emission \cite{KELLEY201288,Raabe:2008rj} and the distinction between this and the dark channel would be extremely difficult. It was argued in Ref.\,\cite{Pfutzner:2018ieu} that, from an experimental point of view, there is a much better candidate: $^{11}{\rm Be}$, for which $S_n(^{11}{\rm Be}) = 0.502 \ {\rm MeV}$. It was also suggested that the presence of an unexpectedly high number of $^{10}{\rm Be}$ in $^{11}{\rm Be}$ decays described in Ref.\,\cite{Riisager:2014gia} might in fact be a sign of the neutron dark decay $n \to \chi\,\phi$, as in our Model 2, leading to the nuclear dark decay \begin{eqnarray}\label{cons34} ^{11}{\rm Be}\to \!\,^{10}{\rm Be} + \tilde\chi^* \to \!\,^{10}{\rm Be} + \chi + \phi\ , \end{eqnarray} and not necessarily, as initially conjectured, due to an enhanced $\beta p$ channel resulting from an unknown resonance. In addition, it was shown in Ref.\,\cite{Ejiri:2018dun} that the nuclear dark decay in Eq.\,(\ref{cons34}) is consistent with the observed Standard Model decay rates of $\,^{11}{\rm Be}$ as long as $m_{\tilde\chi} > m_n - S_n(^{11}{\rm Be})$, i.e., \begin{eqnarray}\label{Eji} m_{\tilde\chi} > 939.064 \ {\rm MeV} \ . \end{eqnarray} This condition is obviously satisfied in the model with a self-interacting dark sector of Ref.\,\cite{Karananas:2018goc}, where the $\tilde\chi$ mass was chosen to be $m_{\tilde\chi} = 800 \ {\rm GeV}$. Very recently, an experiment at the CERN-ISOLDE laboratory was performed \cite{Pfutzner:2018ieu,ISOLDE} with the goal of determining whether or not the final state of $^{11}{\rm Be}$ decays contains protons. The results have not yet been published. \subsection{Ongoing beam measurements} There are currently two operating beam experiments measuring the neutron lifetime, the first one at the National Institute of Standards and Technology (NIST) \cite{NIST2009,NIST} and the second one at the Japan Proton Accelerator Research Complex (J-PARC) \cite{Nagakura:2017xmv,Japan}. If those experiments provide results consistent with the current beam average, the tension between bottle and beam measurements will increase, supporting the viability of the models presented here. \subsection{Expanding the scope of bottle experiments} Perhaps the most straightforward, although technically challenging, way to tackle the neutron lifetime puzzle would be to modify the existing bottle experimental setup. Including a proton detection system in bottle experiments would enable measuring the branching fraction ${\rm Br}(n\to p + {\rm anything})$ independently of the beam experiment. Such a modification would enable a direct test of the premise that the difference of outcomes between the bottle and beam measurements is due to neutron decays that do not produce a proton, without any dependence on the specific model realization of the non-proton final state. \section{Final Remarks} Given the theoretical and experimental developments related to our proposal, Model 2 with a self-interacting dark sector seems like a very promising candidate theory for explaining the neutron lifetime discrepancy.
This model is not only consistent with all current experimental constraints, but it is also interesting from a theoretical perspective, with its solution to the small-scale structure problem and perhaps a novel mechanism for baryogenesis. Even if the neutron lifetime puzzle gets resolved by future higher precision bottle and beam measurements, dark decays of the neutron at a smaller rate will still be allowed and certainly interesting to consider. It would be incredible if the good old neutron became the key to unraveling the mystery of the dark side of our universe. \section*{Acknowledgements} This research was supported in part by the DOE Grant No.~${\rm DE}$-${\rm SC0009919}$.
\section{Introduction}\label{sec:intro} Multifunctions and their fixed point theory have been widely studied, see the books~\cite{Be,Gor} for example, where fairly general classes of multifunctions and spaces are considered. In all of what follows, $X$ and $Y$ will be topological spaces. Let $\phi\colon\thinspace X \multimap Y$ be an $n$-valued function \emph{i.e.}\ a function that to each $x\in X$ associates an unordered subset $\phi(x)$ of $Y$ of cardinality $n$. Recall that such an $n$-valued function $\phi$ is \emph{continuous} if for all $x\in X$, $\phi(x)$ is closed, and for any open set $V$ in $Y$, the sets $\set{x\in X}{\phi(x)\subset V}$ and $\set{x \in X}{\phi(x)\cap V \neq \varnothing}$ are open in $X$. We will refer to a continuous $n$-valued function as an \emph{$n$-valued map}. The class of $n$-valued maps is of particular interest, and more information about their fixed point theory on finite complexes may be found in~\cite{Bet1,Bet2,Brr1,Brr2,Brr3,Brr4,Brr5,Brr6,Sch0,Sch1,Sch2}. A \emph{homotopy} between two $n$-valued maps $\phi_1,\phi_2\colon\thinspace X \multimap Y$ is an $n$-valued map $H\colon\thinspace X\times I \multimap Y$ such that $\phi_1=H ( \cdot , 0)$ and $\phi_2=H ( \cdot , 1)$. Following~\cite{Sch0}, an $n$-valued function $\phi\colon\thinspace X \multimap Y$ is said to be a \emph{split $n$-valued map} if there exist single-valued maps $f_1, f_2, \ldots, f_n\colon\thinspace X \to Y$ such that $\phi(x)=\brak{f_1(x),\ldots,f_n(x)}$ for all $x\in X$. This being the case, we shall write $\phi=\brak{f_1,\ldots,f_n}$. Let $\splitmap{X}{Y}{n}$ denote the set of split $n$-valued maps between $X$ and $Y$. \emph{A priori}, $\phi\colon\thinspace X \multimap Y$ is just an $n$-valued function, but if it is split then it is continuous by \cite[Proposition~42]{GG15}, which justifies the use of the word `map' in the definition. Partly for this reason, split $n$-valued maps play an important r\^ole in the theory. If $\phi\colon\thinspace X \multimap X$ is an $n$-valued map from $X$ to itself, we say that $x\in X$ is a \emph{fixed point} of $\phi$ if $x\in \phi(x)$, and we denote the set of fixed points of $\phi$ by $\operatorname{\text{Fix}}(\phi)$. In~\cite[Section~5]{Sch1}, Schirmer defined the notion of Nielsen number of $n$-valued maps of finite complexes. Her definition is similar to that for single-valued maps, although it is a little more elaborate. As for the case of single-valued maps, for appropriate spaces, the Nielsen number $N(\phi)$ of an $n$-valued map $\phi\colon\thinspace X \multimap X$ provides a lower bound for the number of fixed points among all $n$-valued maps homotopic to $\phi$. The computation of the Nielsen number of a self- or $n$-valued map is an important problem in fixed point theory, and is not easy in general. In the split case, we have the following formula for the Nielsen number of $n$-valued maps of polyhedra in terms of the constituent single-valued maps. \begin{thm}\label{th:helgath0}\cite[Corollary~7.2]{Sch1} Let $\phi=\brak{f_1,f_2,\ldots, f_n}\colon\thinspace X \multimap X$ be a split $n$-valued map, where $X$ is a compact polyhedron. Then $N(\phi)=N(f_1)+\cdots+N(f_n)$. \end{thm} A second fundamental problem in fixed point theory is to decide whether a space $X$ has the Wecken property. 
Recall that the homotopy class of an $n$-valued map $\phi\colon\thinspace X \multimap X$ is said to have the \emph{Wecken property} if there exists an $n$-valued map $\psi\colon\thinspace X \multimap X$ homotopic to $\phi$ that has exactly $N(\phi)$ fixed points, and that a space $X$ has the \emph{Wecken property for $n$-valued maps} if every homotopy class of $n$-valued maps of $X$ has the Wecken property. For single-valued maps, many complexes of dimension at least three have the Wecken property~\cite{Ke,Wec1,Wec2,Wec3}. In the case of surfaces, the $2$-sphere $\St$ and the real projective plane $\ensuremath{{\mathbb R}P^2}$ have the Wecken property~\cite{BGZ,Ji}, as do the $2$-torus $\ensuremath{\mathbb{T}^{2}}$ and the Klein bottle (see~\cite{GK} for example). However, Jiang showed that no other compact surface without boundary has the Wecken property~\cite{Ji0,Ji1}. For $n$-valued maps, substantial progress has been made in the study of the Wecken property. The following result mirrors that for single-valued maps. \begin{thm}\label{th:helgath1}\cite[Theorem~5.2]{Sch2} Let $M$ be a compact triangulable manifold (with or without boundary) of dimension greater than or equal to $3$, and let $n\in \ensuremath{\mathbb N}$. Then every $n$-valued map $\phi \colon\thinspace M \multimap M$ is homotopic to an $n$-valued map that has $N(\phi)$ fixed points. In particular, $M$ has the Wecken property for $n$-valued maps. \end{thm} \reth{helgath1} has been extended to a larger class of spaces, including some $2$-dimensional complexes, as follows. \begin{thm}\label{th:bete}\cite[Theorem~1]{Bet2} Let $X$ be a compact polyhedron without local cut points and such that no connected component of $X$ is a surface. Then $X$ has the Wecken property for $n$-valued maps. \end{thm} In a recent paper~\cite{GG15}, we studied some aspects of fixed point theory of $n$-valued maps from $X$ to $Y$ by introducing an equivalent and natural formulation in terms of single-valued maps from $X$ to the $n\up{th}$ unordered configuration space $D_n(Y)$ of $Y$, where $D_n(Y)$ is the quotient of the $n\up{th}$ (ordered) configuration space $F_{n}(Y)$ of $Y$, defined by: \begin{equation*} F_n(Y)=\setr{(y_1,\ldots,y_n)}{\text{$y_i\in Y$, and $y_i\neq y_j$ if $i\neq j$}}, \end{equation*} by the free action of the symmetric group $S_{n}$ given by permuting coordinates. It is well known that $\pi_{1}(D_{n}(Y))$ (resp.\ $\pi_{1}(F_{n}(Y))$) is the braid group $B_{n}(Y)$ (resp.\ pure braid group $P_{n}(Y)$) of $Y$ on $n$ strings, that the quotient map $\pi\colon\thinspace F_{n}(Y) \to D_{n}(Y)$ is a regular $n!$-fold covering, and that $P_{n}(Y)$ is the kernel of the surjective homomorphism $\tau\colon\thinspace B_n(Y) \to S_n$ that to a braid associates its induced permutation. Configuration spaces play an important r\^ole in several branches of mathematics and have been extensively studied, see~\cite{CG,FH} for example. As in~\cite{GG15}, a map $\Phi\colon\thinspace X \to D_n(Y)$ will be called an \emph{$n$-unordered map}, and a map $\Psi\colon\thinspace X \to F_n(Y)$ will be called an \emph{$n$-ordered map}. For such an $n$-ordered map, for $i=1,\ldots,n$, there exist maps $f_i\colon\thinspace X \to Y$ such that $\Psi(x)=(f_1(x),\ldots, f_n(x))$ for all $x\in X$, and for which $f_i(x)\neq f_j(x)$ for all $1\leq i,j\leq n$, $i\neq j$, and all $x\in X$. In this case, we will often write $\Psi=(f_{1},\ldots,f_{n})$.
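As a simple illustration of these notions (a classical example, not specific to~\cite{GG15}), identify $\St[1]$ with the unit circle in $\ensuremath{\mathbb C}$. The $2$-valued map $\phi(z)=\brak{z,-z}$ is split, with lift $(f_{1},f_{2})$ given by $f_{1}(z)=z$ and $f_{2}(z)=-z$. By contrast, the $2$-valued map $\psi(z)=\setr{w\in \St[1]}{w^{2}=z}$ that sends $z$ to its two square roots is continuous but non-split: a lift would provide a map $f_{1}\colon\thinspace \St[1] \to \St[1]$ satisfying $f_{1}(z)^{2}=z$ for all $z\in \St[1]$, which is impossible since on the level of degree it would imply that $2\deg(f_{1})=1$.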
There is an obvious bijection between the set of $n$-point subsets of $Y$ and $D_{n}(Y)$ that induces a bijection between the set of $n$-valued functions from $X$ to $Y$ and the set of functions from $X$ to $D_{n}(Y)$. If we suppose in addition that $X$ and $Y$ are metric spaces, then $D_{n}(Y)$ may be equipped with a certain Hausdorff metric~\cite[Appendix]{GG15}, in which case this bijection restricts to a bijection between the set of $n$-valued maps from $X$ to $Y$ and the set of (continuous) maps from $X$ to $D_{n}(Y)$~\cite[Theorem~8]{GG15}. If $\phi\colon\thinspace X \multimap Y$ is an $n$-valued map then we shall refer to the map $\Phi\colon\thinspace X \to D_{n}(Y)$ obtained by this bijection as the map associated to $\phi$. If a map $\Phi\colon\thinspace X \to D_n(Y)$ admits a lift $\widehat{\Phi}\colon\thinspace X \to F_n(Y)$ via the covering map $\pi$ then we say that $\widehat{\Phi}$ is a \emph{lift} of $\phi$. By~\cite[Section~2.1]{GG15}, $\phi$ is split if and only if it admits a lift. In~\cite{GG15}, we showed that spheres and real and complex projective spaces of even dimension have the fixed point property for $n$-valued maps, and we expressed the problem of deforming an $n$-valued map to a fixed point free $n$-valued map in terms of an algebraic criterion involving the braid groups of $X$, which we then used to determine an infinite number of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that contain a fixed point free representative. In this paper, we explore some other aspects of fixed point theory of $n$-valued maps using the above formulation in terms of configuration spaces. We establish an equality for the Nielsen number in the non-split case, and we study the Wecken property for some surfaces. We now describe the contents of this paper in more detail. In \resec{pres}, we recall some fundamental results from~\cite{GG15}, and in \resec{wecdisc}, we show that the $2$-disc has the Wecken property for $n$-valued maps for all $n\in \ensuremath{\mathbb N}$. In Sections~\ref{sec:nvalS2} and~\ref{sec:proj}, we analyse the Wecken problem for $n$-valued maps of $\St$ and $\ensuremath{{\mathbb R}P^2}$ respectively. In the first case, one important step is to determine the number of homotopy classes of $n$-valued maps of $\St$. In \relem{sphWec}, we show that there is just one such class if $n\geq 3$, while if $n=2$, we show that the set of such homotopy classes is in bijection with $\ensuremath{\mathbb N}$. If $\phi\colon\thinspace \St \multimap \St$ is a $2$-valued map then we refer to the non-negative integer given by this correspondence as the \emph{degree} of $\phi$ (or of its homotopy class). Let $A\colon\thinspace \St \to \St$ denote the antipodal map. In the case of $\ensuremath{{\mathbb R}P^2}$, in \repr{classmap}, we show that there are precisely two homotopy classes of $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$ for all $n\geq 2$, and we give explicit representatives of these homotopy classes. To prove the Wecken property for $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$, it then suffices to check that each of these homotopy classes has the Wecken property. We summarise the main results of Sections~\ref{sec:pres} and~\ref{sec:proj} as follows. \begin{prop}\label{prop:propscst}\mbox{} \begin{enumerate}[(a)] \item\label{it:propscsta} Let $f_0\colon\thinspace\St \to \St$ be the constant map at a point $x_{0}\in \St$. Then $f_{0}$ has degree $0$, and $f_{0}$ and $A\circ f_0$ each have precisely one fixed point.
\item\label{it:propscstb} There exists a map $ f_1\colon\thinspace \St \to \St$ of degree $1$ that has a single fixed point and such that the map $A\circ f_1$ is fixed point free. \item\label{it:propscstc} There exists a map $ f_2\colon\thinspace \St \to \St$ of degree $2$ such that $f_{2}$ and the map $A\circ f_2$ each possess a single fixed point. \item \label{it:propscstd} The Wecken property holds for: \begin{enumerate}[(i)] \item\label{it:propscstb1} $n$-valued maps of $\St$ for all $n\geq 3$. \item\label{it:propscstb2} the homotopy classes of $2$-valued maps of degree $0$, $1$ or $2$. \end{enumerate} \end{enumerate} \end{prop} \repr{propscst}(\ref{it:propscstd})(\ref{it:propscstb2}) has been obtained independently in~\cite{Brr0}. Our proof is different from that given in~\cite{Brr0}, and is perhaps more direct. The paper~\cite{Brr0} contains other results about $n$-valued maps of $\St$, concerning for example the minimal number of fixed points in a given homotopy class of a $2$-valued map (see~\cite[Theorem 4.1]{Brr0}). Apart from the three cases given in~\repr{propscst}(\ref{it:propscstd})(\ref{it:propscstb2}), we believe that this minimal number is not known. We do not know either whether the Wecken property holds for $2$-valued maps of $\St$ of degree greater than or equal to $3$. \begin{thm} \label{th:wecproj} The projective plane $\ensuremath{{\mathbb R}P^2}$ has the Wecken property for $n$-valued maps for all $n\in \ensuremath{\mathbb N}$. \end{thm} In \resec{calcu}, we consider the problem of the computation of the Nielsen number of $n$-valued maps of a compact, orientable manifold $X$ without boundary. To do this, we shall use~\cite[Proposition~16]{GG15}, which given an $n$-valued map $\phi\colon \thinspace X \multimap X$, states that there exists a finite covering $q\colon\thinspace \widehat{X} \to X$ such that the composition $\phi\circ q \colon \thinspace \widehat{X} \multimap X$ is split, and~\cite[Proposition~17]{GG15}, which describes the fixed points of $\phi$ in terms of the coincidences of $q$ with the coordinate maps of a lift of the split $n$-valued map $\phi\circ q$. We analyse the behaviour of this description with respect to the essential Nielsen classes (\emph{i.e.}\ the Nielsen classes of non-zero index). This yields a partial analogue of \reth{helgath0} in the non-split case as follows. \begin{thm} \label{th:form} Let $X$ be an orientable, compact manifold without boundary, and let $\phi\colon\thinspace X \multimap X$ be a non-split $n$-valued map. Let $\Phi\colon\thinspace X \to D_{n}(X)$ be the map associated to $\phi$, let $H$ be the kernel of the composition $\tau\circ \Phi_{\#} \colon\thinspace \pi_{1}(X)\to S_{n}$, let $L'=\im{\tau\circ \Phi_{\#}}$, and for $i=1,\ldots,n$, let $L_{i,i}'=\setl{\alpha\in L'}{\alpha(i)=i}$. Let $q\colon\thinspace \widehat{X} \to X$ be the finite regular covering of $X$ that corresponds to $H$. Let $\widehat{\Phi}_{1}=(f_{1},\ldots,f_{n}) \colon\thinspace \widehat{X} \to F_{n}(X)$ be a lift of the split $n$-valued map $\phi\circ q \colon \thinspace \widehat{X} \multimap X$, and suppose that the action of $L_{i,i}'$ on $\{1,\dots,n\}$ is free for all $i\in \{1,\dots,n\}$. Then the Nielsen number $N(\phi)$ of $\phi$ is equal to $\sum_{j\in I_0} N(q,f_{i_j})$, where $i_j$ runs over a set $I_{0}$ of representatives of the orbits of this action. \end{thm} If $n=2$, the condition that the action be free is necessarily satisfied, and in this case, we shall prove in \reco{nielsenphi} that $N(\phi)=N(q, f_1)=N(q, f_2)$. 
The study of the Wecken property for multivalued maps of the torus constitutes work in progress by the authors, and some preliminary results that are closely related to this problem may be found in~\cite{GG15}. \begin{comment} Finally, in \resec{toroII}, we obtain the following classification of the homotopy classes of non-split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$. This completes the analysis given in~\cite[Section~5.3]{GG15} of the split case. \begin{prop}\label{prop:classif} Let $\phi:\ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a non-split $2$-valued map. Then there exist $\lambda\in \mathbb{F}_2(u,v)$ and $a,b,c,d,l,m\in \ensuremath{\mathbb Z}$ such that $\phi$ is defined by a pair of braids $(\alpha, \beta)$ as follows: \begin{enumerate}[(a)] \item\label{it:classifa} if $\beta =\beta' \sigma$ if $\alpha\in P_{2}(\ensuremath{\mathbb{T}^{2}})$ and $\beta \in B_{2}(\ensuremath{\mathbb{T}^{2}})\setminus P_{2}(\ensuremath{\mathbb{T}^{2}})$, $\alpha=((\lambda \widehat{\lambda})^m,(a,b))$, where $\beta'=(\mu^{-l}\lambda vu^{-1},(c,d))$, \item\label{it:classifb} if $\alpha \in B_{2}(\ensuremath{\mathbb{T}^{2}})\setminus P_{2}(\ensuremath{\mathbb{T}^{2}})$ and $\beta\in P_{2}(\ensuremath{\mathbb{T}^{2}})$, $\alpha=\alpha'\sigma$ and $\beta=((\lambda \widehat{\lambda})^m,(c,d))$, where $\alpha'=(\mu^{-l}\lambda vu^{-1},(a,b))$, \item\label{it:classifc} if $\alpha,\beta \in B_{2}(\ensuremath{\mathbb{T}^{2}})\setminus P_{2}(\ensuremath{\mathbb{T}^{2}})$, $\alpha=\alpha'\sigma$ and $\beta =\beta' \sigma$, where $\alpha'=( \mu^{-l}\lambda vu^{-1},(a,b))$ and $\beta'=(\mu^{-l}\lambda(\widehat{\lambda} \lambda)^mvu^{-1},(c,d))$, \end{enumerate} where $\mu$ is a generator of the centraliser of $\lambda\widehat{\lambda}$ if $\lambda\widehat{\lambda}$ is non trivial, and is arbitrary otherwise. \end{prop} Using the framework of \resec{calcu}, and notably the set-up of \reth{form}, the description of the fixed point set of a non-split $2$-valued map $\phi$ of $\ensuremath{\mathbb{T}^{2}}$ in terms of the set of coincidences of the covering map $q\colon\thinspace \widehat{\ensuremath{\mathbb{T}^{2}}} \to \ensuremath{\mathbb{T}^{2}}$ with $f_{1}$ and $f_{2}$, where $\phi\circ q=\brak{f_{1},f_{2}}$ and \reco{nielsenphi}, \repr{classif} enables us to give formul\ae\ for the Lefschetz coincidence numbers of $f_{1}$ and $f_{2}$ with $q$, and the Nielsen number of $\phi$. We also obtain necessary and sufficient conditions for the Nielsen number of $\phi$ to be zero. Let $(e_{1},e_{2})$ denote a basis of $\pi_{1}(\ensuremath{\mathbb{T}^{2}})$, where $e_{1}$ (resp.\ $e_{2}$) is the homotopy class of the meridian (resp.\ longitude) of $\ensuremath{\mathbb{T}^{2}}$. \begin{thm}\label{th:nielsencases} Let $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a non-split $2$-valued map, let $\Phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \to D_2(\ensuremath{\mathbb{T}^{2}})$ be the associated $2$-unordered map that corresponds to $\phi$, let $H$ be the subgroup $\Phi_{\#}^{-1}(P_{2}(\ensuremath{\mathbb{T}^{2}}))$ of $\pi_1(\ensuremath{\mathbb{T}^{2}})$ of index~$2$, let $q\colon\thinspace \ensuremath{\mathbb{T}^{2}} \to \ensuremath{\mathbb{T}^{2}}$ be the double covering of $\ensuremath{\mathbb{T}^{2}}$ that corresponds to the subgroup $H$ of $\pi_{1}(\ensuremath{\mathbb{T}^{2}})$, and let $(f_1, f_2)\colon \thinspace \ensuremath{\mathbb{T}^{2}} \to F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $\phi$. 
For $i=1,2$, let $M_{i}$ denote the matrix of $(f_i)_{\#}\colon \thinspace \pi_{1}(\ensuremath{\mathbb{T}^{2}}) \to \pi_{1}(\ensuremath{\mathbb{T}^{2}})$ with respect to the basis $(e_1, e_2)$. Then $M_{1}=M_{2}$, and for $i=1,2$, $M_{i}$ and the Lefschetz coincidence numbers of $f_{i}$ with $q$ are given as follows: \begin{enumerate}[(a)] \item\label{it:nielsencasesa1} if $\alpha \in P_{2}(\ensuremath{\mathbb{T}^{2}})$ and $\beta \in B_{2}(\ensuremath{\mathbb{T}^{2}}) \setminus P_{2}(\ensuremath{\mathbb{T}^{2}})$, then: \begin{equation*} M_{i} =\begin{cases} \begin{pmatrix} a & 2c+\lvert \lambda \rvert_{u}-1 \\ b & 2d+\lvert \lambda\rvert_{v}+1 \end{pmatrix} & \text{if $\lambda\widehat\lambda\ne 1$}\\[1em] \begin{pmatrix} a & 2c-l\lvert \mu\rvert_{u}+\lvert \lambda\rvert_{u}-1\\ b & 2d-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v}+1 \end{pmatrix} & \text{if $\lambda\widehat\lambda=1$} \end{cases} \end{equation*} and \begin{equation*} L(q,f_i) =\begin{cases} \begin{vmatrix} a-1 & 2c+\lvert \lambda \rvert_{u}-1 \\ b & 2d+\lvert \lambda\rvert_{v}-1 \end{vmatrix} & \text{if $\lambda\widehat\lambda\ne 1$}\\[1em] \begin{vmatrix} a-1 & 2c-l\lvert \mu\rvert_{u}+\lvert \lambda\rvert_{u}-1\\ b & 2d-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v}-1 \end{vmatrix} & \text{if $\lambda\widehat\lambda=1$,} \end{cases} \end{equation*} \item\label{it:nielsencasesb} if $\alpha \in B_{2}(\ensuremath{\mathbb{T}^{2}}) \setminus P_{2}(\ensuremath{\mathbb{T}^{2}})$ and $\beta \in P_{2}(\ensuremath{\mathbb{T}^{2}})$, then \begin{equation*} M_{i} =\begin{cases} \begin{pmatrix} 2a+\lvert \lambda \rvert_{u}-1 & c \\ 2b+\lvert \lambda\rvert_{v}+1 & d \end{pmatrix} & \text{if $\lambda\widehat\lambda\ne 1$}\\[1em] \begin{pmatrix} 2a-l\lvert \mu\rvert_{u}+\lvert \lambda \rvert_{u}-1 & c \\ 2b-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v}+1 & d \end{pmatrix} & \text{if $\lambda\widehat\lambda=1$} \end{cases} \end{equation*} and \begin{equation*} L(q,f_i) =\begin{cases} \begin{vmatrix} 2a+\lvert \lambda \rvert_{u}-3 & c \\ 2b+\lvert \lambda\rvert_{v}+1 & d-1 \end{vmatrix} & \text{if $\lambda\widehat\lambda\ne 1$}\\[1em] \begin{vmatrix} 2a-l\lvert \mu\rvert_{u}+\lvert \lambda \rvert_{u}-3 & c \\ 2b-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v}+1 & d-1 \end{vmatrix} & \text{if $\lambda\widehat\lambda=1$,} \end{cases} \end{equation*} \item\label{it:nielsencasesc} if $\alpha,\beta \in B_{2}(\ensuremath{\mathbb{T}^{2}}) \setminus P_{2}(\ensuremath{\mathbb{T}^{2}})$, then \begin{equation*} M_{i} =\begin{cases} \begin{pmatrix} 2a+\lvert \lambda \rvert_{u}-1 & a+c+\lvert \lambda \rvert_{u}-1 \\ 2b+\lvert \lambda\rvert_{v}+1 & b+d+\lvert \lambda \rvert_{v}+1 \end{pmatrix} & \text{if $\lambda\widehat\lambda\ne 1$}\\[1em] \begin{pmatrix} 2a-l\lvert \mu\rvert_{u}+\lvert \lambda\rvert_{u}-1 & a+c-l\lvert \mu\rvert_{u}+\lvert \lambda\rvert_{u}-1\\ 2b-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v}+1 & b+d-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v}+1 \end{pmatrix} & \text{if $\lambda\widehat\lambda=1$} \end{cases} \end{equation*} and \begin{equation*} L(q,f_i) =\begin{cases} \begin{vmatrix} 2a+\lvert \lambda \rvert_{u}-3 & a+c+\lvert \lambda \rvert_{u}-2 \\ 2b+\lvert \lambda\rvert_{v}+1 & b+d+\lvert \lambda \rvert_{v} \end{vmatrix} & \text{if $\lambda\widehat\lambda\ne 1$}\\[1em] \begin{vmatrix} 2a-l\lvert \mu\rvert_{u}+\lvert \lambda\rvert_{u}-3 & a+c-l\lvert \mu\rvert_{u}+\lvert \lambda\rvert_{u}-2\\ 2b-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v}+1 & b+d-l\lvert \mu\rvert_{v}+\lvert \lambda\rvert_{v} \end{vmatrix} & \text{if 
$\lambda\widehat\lambda=1$,} \end{cases} \end{equation*} \end{enumerate} where in all cases, $a,b,c,d,l,\lambda$ and $\mu$ are given by \repr{classif}, and $N(\phi)=\lvert L(q,f_i)\rvert$. If $\phi$ can be deformed to a fixed point free $2$-valued map then $N(\phi)=0$. Moreover, if $N(\phi)=0$ then both pairs of maps $(q,f_{1})$ and $(q,f_{2})$ can be deformed to coincidence free pairs. \end{thm} \end{comment} \subsection*{Acknowledgements} The first-named author was partially supported by FAPESP-Funda\c c\~ao de Amparo a Pesquisa do Estado de S\~ao Paulo, Projeto Tem\'atico Topologia Alg\'ebrica, Geom\'etrica e Diferencial 2012/24454-8. The second-named author was also partially supported by the same project as well as the CNRS/FAPESP PRC project n\up{o}~275209 during his visit to the Instituto de Matem\'atica e Estat\'istica, Universidade de S\~ao Paulo, from the 4\textsuperscript{th} to the 22\textsuperscript{nd} of February 2017. \section{Preliminaries and the Wecken property for $n$-valued maps of the $2$-disc and $2$-sphere}\label{sec:pres} Let $n\in \ensuremath{\mathbb N}$. Given a topological space $X$, we are interested in understanding the fixed point theory of its $n$-valued maps. In this section, we will show that if $n\in \ensuremath{\mathbb N}$ (resp.\ $n\geq 3$), the $2$-disc $\ensuremath{\mathbb D}^{\,2}$ (resp.\ the $2$-sphere $\St$) has the Wecken property for $n$-valued maps. We first recall some definitions and results from~\cite{GG15}. Given an $n$-valued map $\phi\colon\thinspace X \multimap X$ as defined in \resec{intro}, let $\Phi \colon\thinspace X \to D_n(X)$ denote the associated $n$-unordered map, and if $\phi$ is split, let $\widehat{\Phi}\colon\thinspace X \to F_n(X)$ be an $n$-ordered map that is a lift of $\phi$. By a single-valued map (or simply a map) $f\colon\thinspace X \to Y$ between two topological spaces, we shall always mean a continuous function from $X$ to $Y$, and $[X,Y]$ will denote the set of unbased homotopy classes of maps between $X$ and $Y$. We recall the following result, which shows that for a large class of spaces, (homotopy classes of) $n$-valued maps may be identified with (homotopy classes of) maps whose target is an unordered configuration space. Let $I$ denote the unit interval $[0,1]$. \begin{thm}\label{th:metriccont} Let $X$ and $Y$ be metric spaces, and let $n\in \ensuremath{\mathbb N}$. \begin{enumerate}[(a)] \item\label{it:metricconta} An $n$-valued function $\phi\colon\thinspace X \multimap Y$ is continuous if and only if the corresponding function $\Phi\colon\thinspace X \to D_n(Y)$ is continuous. \item\label{it:metriccontb} The set of homotopy classes of $n$-valued maps from $X$ to $Y$ is in one-to-one correspondence with the set $[X,D_{n}(Y)]$ of homotopy classes of maps from $X$ to $D_n(Y)$. \end{enumerate} \end{thm} \begin{proof} Part~(\ref{it:metricconta}) is a direct consequence of~\cite[Theorem~8]{GG15}. For part~(\ref{it:metriccontb}), since $X\times I$ is also a metric space, we may apply~\cite[Theorem~8]{GG15} to $n$-valued maps between $X\times I$ and $Y$. It follows that two $n$-valued maps $\phi_1, \phi_2\colon\thinspace X\multimap Y$ are homotopic if and only if the corresponding maps $\Phi_1, \Phi_2\colon\thinspace X \to D_{n}(Y)$ are homotopic, and the result follows. \end{proof} \begin{rem}\label{corresp} \reth{metriccont} also holds under weaker hypotheses.
\begin{enumerate}[(a)] \item If we assume just that $Y$ is a metric space, then the statement of \reth{metriccont}(\ref{it:metricconta}) is~\cite[Corollary~4.1]{Brr7}, and that of \reth{metriccont}(\ref{it:metriccontb}) follows by applying this corollary to the spaces $X\times I$ and $Y$. \item If $X$ is just locally path-connected and semi-locally simply connected then so is $X\times I$. The statement of \reth{metriccont}(\ref{it:metricconta}) is~\cite[Corollary~3.1]{Brr7}, and that of \reth{metriccont}(\ref{it:metriccontb}) follows by applying this corollary to the spaces $X\times I$ and $Y$. \end{enumerate} \end{rem} From now on, we will assume that our spaces are metric spaces, so that we may interpret an $n$-valued map from $X$ to $Y$ as a map from $X$ to $D_n(Y)$ using \reth{metriccont}(\ref{it:metricconta}). \subsection{The case of $n$-valued maps of the $2$-disc}\label{sec:wecdisc} \begin{prop}\label{prop:fpp} The $2$-disc $\ensuremath{\mathbb D}^{\,2}$ has the Wecken property for $n$-valued maps for all $n\in \ensuremath{\mathbb N}$. \end{prop} \begin{proof} Let $n\in \ensuremath{\mathbb N}$, and let $(x_{1},\ldots,x_{n})\in F_{n}(\ensuremath{\mathbb D}^{\,2})$. By Theorem \ref{th:metriccont}(\ref{it:metriccontb}), the set of homotopy classes of $n$-valued maps of the $2$-disc $\ensuremath{\mathbb D}^{\,2}$ may be identified with the set $[\ensuremath{\mathbb D}^{\,2}, D_n(\ensuremath{\mathbb D}^{\,2})]$ of homotopy classes of maps between $\ensuremath{\mathbb D}^{\,2}$ and $D_{n}(\ensuremath{\mathbb D}^{\,2})$. Since the domain $\ensuremath{\mathbb D}^{\,2}$ is contractible, this set has only one element, which is the class of the constant map $c=\brak{c_{1},\ldots,c_{n}} \colon\thinspace \ensuremath{\mathbb D}^{\,2} \to D_n(\ensuremath{\mathbb D}^{\,2})$, where for all $i=1,\ldots,n$, the map $c_i\colon\thinspace \ensuremath{\mathbb D}^{\,2} \to \ensuremath{\mathbb D}^{\,2}$ is the constant map at $x_i$. Then the map $c$ has exactly $n$ fixed points, and $N(c)=n$ by \reth{helgath0}, which proves the proposition. \end{proof} \subsection{The case of $n$-valued maps of the $2$-sphere}\label{sec:nvalS2} We now show that the Wecken property holds for $n$-valued maps of $\St$ for all $n\neq 2$. The case $n=2$ remains open, although we are able to provide some partial results (see also~\cite{Brr0}). Since the case $n=1$ is well known, see for example~\cite[Proposition~2.2]{BGZ}, we will assume that $n\geq 2$. \begin{lem}\label{lem:sphWec} If $n>2$ (resp.\ $n=2$), the set of homotopy classes of $n$-valued maps of $\St$ possesses exactly one element (resp.\ is in one-to-one correspondence with $\ensuremath{\mathbb N}$). \end{lem} \begin{proof} \reth{metriccont}(\ref{it:metriccontb}) implies that it suffices to determine the set $[\St, D_n(\St)]$, which in turn is the orbit space of $\pi_{2}(D_{n}(\St))$ under the action of $\pi_1(D_{n}(\St))$. Now $F_{n}(\St)$ is a regular $n!$-fold covering of $D_{n}(\St)$, and if $n=2$ (resp.\ $n\geq 3$), the universal covering of $F_{n}(\St)$ has the homotopy type of $\St$ (resp.\ of $\St[3]$) by~\cite{BCP,FZ} or~\cite[pp.~43--44]{GG10}. Using standard results about homotopy groups and covering spaces, it follows that $\pi_{2}(D_{n}(\St))$ is isomorphic to $\ensuremath{\mathbb Z}$ (resp.\ is trivial) if $n=2$ (resp.\ $n>2$) (see also~\cite[Corollary, page~211]{FvB} for the case $n\geq 3$). This proves the result if $n\geq 3$. 
If $n=2$, observe that the map $\St \to F_2(\St)$ given by $x \longmapsto (x,-x)$ is a homotopy equivalence that is $\ensuremath{\mathbb Z}_2$-equivariant with respect to the action of the antipodal map on $\St$ and the action on $F_2(\St)$ given by $s\colon\thinspace F_2(\St) \to F_2(\St)$ defined by $s(x,y)=(y,x)$. This gives rise to a homotopy equivalence between the corresponding orbit spaces, namely $\ensuremath{{\mathbb R}P^2}$ and $D_2(\St)$. Since the action of $\pi_1(\ensuremath{{\mathbb R}P^2})\cong \ensuremath{\mathbb Z}_2$ on $\pi_2(\ensuremath{{\mathbb R}P^2}) \cong \ensuremath{\mathbb Z}$ is multiplication by $-1$, the same is true for the action of $\pi_1(D_2(\St))$ on $\pi_2(D_2(\St))$, and the orbits are the subsets of the form $\brak{m,-m}$, where $m\in \ensuremath{\mathbb Z}$. It follows that the set $[\St, D_2(\St)]$ is in bijection with $\ensuremath{\mathbb N}$. \end{proof} We now study the Wecken property for $n$-valued maps of $\St$. Recall that if $\phi\colon\thinspace \St \multimap \St$ is a $2$-valued map, the integer given by the correspondence with $\ensuremath{\mathbb N}$ of \relem{sphWec} is the \emph{degree} of $\phi$ (or of its homotopy class). \begin{comment} \begin{prop}\label{prop:propscst}\mbox{} \begin{enumerate}[(a)] \item\label{it:propscsta} Let $f_0\colon\thinspace \St \to \St$ be the constant map at a point $c\in \St$. Then $f_{0}$ has degree $0$, and $f_{0}$ and $A\circ f_0$ each have precisely one fixed point. \item\label{it:propscstb} There exists a map $ f_1\colon\thinspace \St \to \St$ of degree $1$ that has one fixed point and such that the map $A\circ f_1$ is fixed point free. \end{enumerate} Hence the Wecken property holds for the homotopy classes of $2$-valued maps that correspond to the integers $0,\pm 1$ under the correspondence given by \relem{sphWec}. \end{prop} \end{comment} \begin{proof}[Proof of \repr{propscst}]\mbox{} \begin{enumerate}[(a)] \item Part~(\ref{it:propscsta}) is clear since the constant map $f_{0}\colon\thinspace\St\to \St$ at a point $x_{0}\in \St$ has precisely one fixed point, namely $x_{0}$, and $-x_{0}$ is the unique fixed point of $A\circ f_{0}$. \item Let $f_1$ be a self-map of the unit $2$-sphere $\St$ that is a small deformation of the identity, \emph{i.e.}\ $f_1$ satisfies $\left\lvert x-f_1(x)\right\rvert<\pi/2$ for all $x\in \St$, and that has exactly one fixed point $x_{0}$. Such a map may be constructed using a vector field on $\St$ that possesses just one singular point, and in this way, the degree of $f_{1}$ is equal to $1$. Then $f_1(x)\neq -x$ for all $x\in \St$, and thus $A\circ f_1$ is fixed point free, which concludes the proof of part~(\ref{it:propscstb}). \item Consider the self-map $\rho\colon\thinspace \St[1]\to \St[1]$ given by $z\longmapsto z^2$. This map has one fixed point, which is $\{1\}$, and the map $A\circ \rho$ has a single fixed point, which is $\{-1\}$. Now consider the reduced suspension of $\St[1]$, \emph{i.e.}\ $(\St[1]\times [0,1])/(\{1\}\times [0,1]\cup \St[1]\times \{0,1\})=\St$, and let $f_2$ be the suspension of the map $\rho$. Then $f_2$ has only one fixed point, which is the equivalence class of the point $\{1\}\times \{0\}$, and the map $A\circ f_2$ has a single fixed point, which is the equivalence class of $\{-1\}\times \{1/2\}$, so it has the desired property. \item To prove part~(\ref{it:propscstb1}), by \relem{sphWec}, there is only one homotopy class of $n$-valued maps of $\St$ if $n\geq 3$, which is that of the constant $n$-valued map.
The result then follows from part~(\ref{it:propscsta}). To prove part~(\ref{it:propscstb2}), the case of the homotopy class of degree $0$ follows as in part~(\ref{it:propscstb1}). Now let $i\in \brak{1,2}$. For the homotopy class of degree $i$, consider the split $2$-valued map $\phi_i=\brak{f_i,A\circ f_i}\colon\thinspace \St \multimap \St$. Then $L(f_i)=1+i$, so $N(f_i)\neq 0$, and thus $N(f_i)=1$ because $\St$ is simply connected. Hence by \reth{helgath0}, $N(\phi_i)=N(f_i)+N(A\circ f_i)$, and so $N(\phi_1)=1+0=1$ and $N(\phi_2)=1+1=2$. Since $\operatorname{\text{Fix}}(\phi_1)=\brak{x_{0}}$, where $x_{0}$ is the fixed point of $f_{1}$ given in the proof of part~(\ref{it:propscstb}), and $\operatorname{\text{Fix}}(\phi_2)=\brak{1, -1}$, it follows that the map $\phi_i$ has the Wecken property.\qedhere \end{enumerate} \end{proof} \section{The case of $n$-valued maps of the projective plane}\label{sec:proj} The aim of this section is to prove that the projective plane $\ensuremath{{\mathbb R}P^2}$ has the Wecken property for $n$-valued maps for all $n\in \ensuremath{\mathbb N}$. Jiang showed that $\ensuremath{{\mathbb R}P^2}$ has the Wecken property for single-valued maps~\cite{Ji}, so it will suffice to study the case $n\geq 2$. We start by computing the Nielsen number of an $n$-valued map of $\ensuremath{{\mathbb R}P^2}$. \begin{lem}\label{lem:rp2split} Let $n\geq 1$, and let $\phi\colon\thinspace \ensuremath{{\mathbb R}P^2}\multimap \ensuremath{{\mathbb R}P^2}$ be an $n$-valued map. Then $N(\phi)=n$. \end{lem} \begin{proof} From~\cite[Lemma~14]{GG15}, $\phi$ is split, so for $i=1,\ldots,n$, there exist self-maps $f_i\colon\thinspace\ensuremath{{\mathbb R}P^2} \to \ensuremath{{\mathbb R}P^2}$ such that $\phi=\brak{f_1,\ldots,f_n}$. Applying \reth{helgath0}, $N(\phi)=n$ since $N(f_i)=1$ for all $i=1,\ldots,n$ by~\cite{Ji}. \end{proof} We now classify the homotopy classes of $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$. If $n=1$, it is well known that the set $[\ensuremath{{\mathbb R}P^2}, \ensuremath{{\mathbb R}P^2}]$ of homotopy classes of self-maps of $\ensuremath{{\mathbb R}P^2}$ that induce the trivial homomorphism on the level of fundamental groups has two elements, see~\cite[Proposition~2.1]{GS} for example. One of these two homotopy classes is that of the constant map. We will describe the second homotopy class in terms of a representative map $W_{P}\colon\thinspace \ensuremath{{\mathbb R}P^2}\to \ensuremath{{\mathbb R}P^2}$ that we shall now define, where $P\in \St$, and $\St\subset \ensuremath{\mathbb R}^3$ is the unit sphere in $\ensuremath{\mathbb R}^3$, which we equip with spherical coordinates $(\theta,\varphi)$, where $\theta\in [0,2\pi)$ and $\varphi\in [-\pi/2,\pi/2]$, so that $P=(\theta,\pi/2)$. With respect to the Cartesian coordinate system for which $P=(0,0,1)$ and the point $(1,0,0)$ has spherical coordinates $(0,0)$, the point with spherical coordinates $(\theta,\varphi)$ has Cartesian coordinates $(\cos \varphi \cos\theta, \cos \varphi \sin\theta,\sin \varphi)$. From this, one may see that with respect to the spherical coordinate system, $(\theta,\varphi-\frac{\pi}{2})$ may be identified with $(\theta+\pi, -\varphi-\frac{\pi}{2})$. We regard $\ensuremath{{\mathbb R}P^2}$ as the quotient of $\St$ by the free action of the group generated by the antipodal map $A$. Let $p\colon\thinspace \St \to \ensuremath{{\mathbb R}P^2}$ be the usual covering map, for all $x\in \St$, let $\overline{x}=p(x)$, and let $H_P^{+}$ be the hemisphere of $\St$ whose pole is $P$. 
Let $U_{P}\colon\thinspace \St\to \St$ be the map defined by $U_{P}(\theta,\varphi)=(\theta,2\varphi-\frac{\pi}{2})$. The restriction $U_{P}\bigl\lvert_{H_{P}^{+}}\bigr. \colon\thinspace H_P^{+} \to \St$ sends each semi-meridian lying in $H_P^{+}$ that starts at $P$ and ends at the equator linearly to the meridian of $\St$ that starts at $P$, ends at $-P$ and contains the original semi-meridian. Since $U_{P}$ sends the whole of the equator to the point $-P$, $U_{P}\bigl\lvert_{H_{P}^{+}}\bigr.$ induces a map $W_P'\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \St$ defined by $W_{P}'(\overline{x})= U_{P}(x)$ for all $x\in H_{P}^{+}$. Let $W_{P}\colon\thinspace \ensuremath{{\mathbb R}P^2}\to \ensuremath{{\mathbb R}P^2}$ be defined by $W_{P}=p\circ W_{P}'$. Now $A(\theta,\varphi)=(\theta+\pi,-\varphi)$, and up to the above-mentioned identification within the spherical coordinate system, for all $(\theta,\varphi)\in \St$, we have: \begin{align*} U_{P}\circ A(\theta,\varphi)&= \textstyle U_{P}(\theta+\pi,-\varphi)=(\theta+\pi,-2\varphi-\frac{\pi}{2})= (\theta,2\varphi-\frac{\pi}{2}) = U_{P}(\theta,\varphi), \end{align*} and it follows that $U_{P}$ is a lift of $W_{P}$. We thus have the following commutative diagram: \begin{equation*} \begin{tikzcd} H_{P}^{+} \ar[hookrightarrow]{r} \ar[swap]{rd}{U_{P}\bigl\lvert_{H_{P}^{+}}\bigr.} & \St \ar{d}{U_{P}} \ar{r}{p} & \ensuremath{{\mathbb R}P^2} \ar{d}{W_{P}}\\ & \St \ar{r}{p} & \ensuremath{{\mathbb R}P^2}, \end{tikzcd} \end{equation*} for which $W_{P}=p\circ W_{P}'$ also. The following lemma summarises various properties of $W_{P}$. \begin{lem}\label{lem:prinWe} Let $P\in \St$. The map $W_{P}\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \ensuremath{{\mathbb R}P^2}$ satisfies the following properties: \begin{enumerate}[(a)] \item\label{it:prinWea} $W_{P}=W_{-P}$, and $\operatorname{\text{Fix}}(W_{P})=\brak{p(P)}$. \item\label{it:prinWeb} The map $W_{P}$ is non null-homotopic, so it belongs to the non-constant homotopy class of $[\ensuremath{{\mathbb R}P^2}, \ensuremath{{\mathbb R}P^2}]$ that induces the trivial homomorphism on the fundamental group. \item\label{it:prinWed} Let $P_{1},P_{2}\in \St$. If $P_1\neq \pm P_2$, then $\operatorname{\text{Coin}}(W_{P_1}, W_{P_2})$ is empty. \item\label{it:prinCoin} If $c_0\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \ensuremath{{\mathbb R}P^2}$ is a constant map then the pair $(W_P, c_0)$ cannot be deformed to a coincidence-free pair. \end{enumerate} \end{lem} \begin{proof}\mbox{} \begin{enumerate}[(a)] \item Let $\overline{x}\in \ensuremath{{\mathbb R}P^2}$, where $\overline{x}=p(x)$ and we take $x$ to belong to $H_{P}^{+}$. Thus $-x\in H_{-P}^{+}$, $p(-x)=\overline{x}$, and so $W_{-P}'(\overline{x})=U_{-P}(-x)$. With respect to the same spherical coordinate system that was used to define $U_{P}$, one may see that $U_{-P}(\theta,\varphi)= (\theta,2\varphi+\frac{\pi}{2})$ for all $(\theta,\varphi) \in \St$. From the above definitions, if $x=(\theta,\varphi)$, we have: \begin{equation*} W_{P}(\overline{x}) \textstyle=p\circ W_{P}'(\overline{x})= p\circ U_{P}(x)= \textstyle p(\theta,2\varphi-\frac{\pi}{2}), \end{equation*} from which it follows that: \begin{align*} W_{-P}(\overline{x}) & \textstyle=p\circ W_{-P}'(\overline{x})= p\circ U_{-P}(-x)= p\circ U_{-P}(\theta+\pi,-\varphi)= p(\theta+\pi, -2\varphi+\frac{\pi}{2})\\ & \textstyle=p\circ A(\theta,2\varphi-\frac{\pi}{2})=p(\theta,2\varphi-\frac{\pi}{2})=W_{P}(\overline{x}), \end{align*} hence $W_{P}=W_{-P}$ as required. 
\item Since $W_P$ factors through $\St$, it induces the trivial homomorphism on the fundamental group of $\ensuremath{{\mathbb R}P^2}$. Further, the map $W_P'$ is non null-homotopic (it represents the non-trivial element of $[\ensuremath{{\mathbb R}P^2}, \St]$) because its absolute degree is congruent to $1 \bmod{2}$, and since $W_{P}'$ is a lift of $W_{P}$, it follows that $W_{P}$ is non null-homotopic. \item Let $P_{1},P_{2}\in \St$ be such that $P_1\neq \pm P_2$, let $\mathcal{C}$ be the (unique) great circle that passes through $P_{1}$ and $P_{2}$, and suppose on the contrary that there exists $x\in \St$ such that $\overline{x}\in \operatorname{\text{Coin}}(W_{P_1}, W_{P_2})$. Then either $x\in \operatorname{\text{Coin}}(U_{P_1}, U_{P_2})$ or $x\in \operatorname{\text{Coin}}(U_{P_1}, A\circ U_{P_2})$. Let $i\in \brak{1,2}$. Observe that any two great circles of $\St$ either coincide, or intersect in exactly two (antipodal) points, and that $U_{P_{i}}$ maps every great circle that passes through $P_{i}$ to itself. Suppose first that $x\notin \mathcal{C}$, and let $\mathcal{C}_{i}$ be the great circle that passes through $P_{i}$ and $x$. Then $\mathcal{C}_{1}\cap \mathcal{C}_{2}=\brak{x,A(x)}$. Since $U_{P_{i}}(x)\in \mathcal{C}_{i}$ and $x\in \operatorname{\text{Coin}}(U_{P_1}, U_{P_2}) \cup \operatorname{\text{Coin}}(U_{P_1}, A\circ U_{P_2})$, it follows that $U_{P_1}(x)\in \brak{x,A(x)}$, and so $W_{P_{1}}(\overline{x})=p\circ U_{P_{1}}(x)=\overline{x}$. By part~(\ref{it:prinWea}), this implies that $\overline{x}=\overline{P}_{1}$, which yields a contradiction since $P_{1}\in \mathcal{C}$ but $x\notin \mathcal{C}$. So assume that $x\in \mathcal{C}$. We write the elements of $\mathcal{C}$ in exponential form, taking $P_{1}$ to be $e^{i\pi/2}$. Let $\rho$ be the oriented angle $\widehat{P_{1}P_{2}}$, and let $x=e^{i\varphi}$. Then $U_{P_{1}}(x)=e^{(2\varphi-\frac{\pi}{2})i}$ and $U_{P_{2}}(x)=e^{(2(\varphi-\rho)-\frac{\pi}{2})i+\rho i}= U_{P_{1}}(x) \ldotp e^{-\rho i}$. Since $x\in \operatorname{\text{Coin}}(U_{P_1}, U_{P_2}) \cup \operatorname{\text{Coin}}(U_{P_1}, A\circ U_{P_2})$, we see that $\rho\in \brak{0,\pi}$, but this implies that $P_{1}\in \brak{P_{2},-P_{2}}$, which yields a contradiction. We conclude that $W_{P_1}$ and $W_{P_2}$ are coincidence free. \item Suppose on the contrary that the pair $(W_P, c_0)$ can be deformed to a pair of coincidence-free self-maps of $\ensuremath{{\mathbb R}P^2}$. By~\cite{Broo}, there exists a map $h\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \ensuremath{{\mathbb R}P^2}$ that is homotopic to $W_{P}$ such that the pair $(h,c_{0})$ is coincidence free, and hence $h$ is non surjective. The maps $h$ and $c_{0}$ lift to maps $\widetilde{h}, \widetilde{c}_{0}\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \St$, where $\widetilde{c}_{0}$ is also a constant map, and the non surjectivity of $h$ implies that of $\widetilde{h}$. Thus $\widetilde{h}$ is null homotopic, but then so is $h$, which yields a contradiction because $h$ is homotopic to $W_{P}$, and $W_{P}$ is non null-homotopic by part~(\ref{it:prinWeb}).\qedhere \end{enumerate} \end{proof} In the following proposition, we describe the set $[\ensuremath{{\mathbb R}P^2}, F_n(\ensuremath{{\mathbb R}P^2})]$ (resp.\ $[\ensuremath{{\mathbb R}P^2}, D_n(\ensuremath{{\mathbb R}P^2})]$) of homotopy classes of maps between $\ensuremath{{\mathbb R}P^2}$ and $F_{n}(\ensuremath{{\mathbb R}P^2})$ (resp.\ $D_{n}(\ensuremath{{\mathbb R}P^2})$), and the set of homotopy classes of $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$, from which we will see that they each contain two elements. Let $N=(0,0,1)\in \St$.
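The coincidence-freeness of \relem{prinWe}(\ref{it:prinWed}) may also be probed numerically. The following illustrative script (ours; the choice of $P_{1}$, $P_{2}$ and of the tolerance is arbitrary) computes $U_{P}$ in Cartesian form and uses the fact, established above, that $U_{P}\circ A=U_{P}$, so that $\overline{x}$ is a coincidence of $W_{P_{1}}$ and $W_{P_{2}}$ if and only if $U_{P_{1}}(x)=\pm U_{P_{2}}(x)$:
\begin{verbatim}
import numpy as np

def frame(P):
    # An orthonormal basis (as columns) whose third vector is P, so
    # that the rotation R maps the north pole (0, 0, 1) to P.
    P = P / np.linalg.norm(P)
    a = np.array([1.0, 0.0, 0.0])
    if abs(P @ a) > 0.9:
        a = np.array([0.0, 1.0, 0.0])
    e1 = np.cross(a, P); e1 /= np.linalg.norm(e1)
    e2 = np.cross(P, e1)
    return np.column_stack([e1, e2, P])

def U(P, v):
    # U_P in Cartesian form: rotate P to the north pole, apply
    # (theta, phi) -> (theta, 2*phi - pi/2), and rotate back.
    # U_P preserves theta, so the choice of theta-origin is immaterial.
    R = frame(P)
    w = R.T @ v
    theta = np.arctan2(w[1], w[0])
    phi2 = 2.0 * np.arcsin(np.clip(w[2], -1.0, 1.0)) - np.pi / 2.0
    return R @ np.array([np.cos(phi2) * np.cos(theta),
                         np.cos(phi2) * np.sin(theta),
                         np.sin(phi2)])

P1 = np.array([0.0, 0.0, 1.0])
P2 = np.array([np.sin(0.7), 0.0, np.cos(0.7)])   # P2 != +-P1
rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.normal(size=3); x /= np.linalg.norm(x)
    u1, u2 = U(P1, x), U(P2, x)
    # No coincidence on RP^2: u1 is neither u2 nor its antipode.
    assert np.linalg.norm(u1 - u2) > 1e-6
    assert np.linalg.norm(u1 + u2) > 1e-6
\end{verbatim}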
As we mentioned in the proof of \relem{rp2split}, any $n$-valued map $\phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ is split, and so the set of homotopy classes of $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$ is equal to $\splitmap{\ensuremath{{\mathbb R}P^2}}{\ensuremath{{\mathbb R}P^2}}{n}/\!\sim$, where $\sim$ denotes the homotopy equivalence relation in $\splitmap{\ensuremath{{\mathbb R}P^2}}{\ensuremath{{\mathbb R}P^2}}{n}$. \begin{prop}\label{prop:classmap} Let $n\geq 2$. \begin{enumerate}[(a)] \item\label{it:classmap0} The sets $[\ensuremath{{\mathbb R}P^2}, F_n(\ensuremath{{\mathbb R}P^2})]$, $[\ensuremath{{\mathbb R}P^2}, D_n(\ensuremath{{\mathbb R}P^2})]$ and $\splitmap{\ensuremath{{\mathbb R}P^2}}{\ensuremath{{\mathbb R}P^2}}{n}/\!\sim$ are in bijection, and each possesses two elements. \item\label{it:classmap1} The two homotopy classes of $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$ may be described as follows: \begin{enumerate}[(i)] \item\label{it:classmapa} the first homotopy class consists of those $n$-valued maps $\phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ such that any lift $\widehat{\Phi} \colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_n(\ensuremath{{\mathbb R}P^2})$ of $\phi$ induces the trivial homomorphism on the level of fundamental groups, and is homotopic to the constant map between $\ensuremath{{\mathbb R}P^2}$ and $F_{n}(\ensuremath{{\mathbb R}P^2})$. \item\label{it:classmapb} if $\phi_{n}\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ is an $n$-valued map of $\ensuremath{{\mathbb R}P^2}$ that represents the second homotopy class, and $\widehat{\Phi}_n\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_n(\ensuremath{{\mathbb R}P^2})$ is a lift of $\phi_n$, then for all $i=1,\ldots, n$, the composition of $\widehat{\Phi}_n$ with the projection $p_{i}\colon\thinspace F_n(\ensuremath{{\mathbb R}P^2}) \to \ensuremath{{\mathbb R}P^2}$ onto the $i\up{th}$ coordinate is homotopic to the map $W_{N}\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \ensuremath{{\mathbb R}P^2}$. Moreover, for all $i=1,\ldots, n+1$, the composition of $\widehat{\Phi}_{n+1}$ with the projection $q_{i}\colon\thinspace F_{n+1}(\ensuremath{{\mathbb R}P^2}) \to F_n(\ensuremath{{\mathbb R}P^2})$ given by forgetting the $i\up{th}$ coordinate is homotopic to $\widehat{\Phi}_{n}$. \end{enumerate} \end{enumerate} \end{prop} \begin{proof} Let $n\geq 2$. \begin{enumerate}[(a)] \item We start by showing that the set of homotopy classes $[\ensuremath{{\mathbb R}P^2},F_{n}(\ensuremath{{\mathbb R}P^2})]$ of $n$-ordered maps of $\ensuremath{{\mathbb R}P^2}$ has two elements. Consider the following Barratt-Puppe sequence: \begin{equation}\label{eq:bps} \ldots \to [\St, F_n(\ensuremath{{\mathbb R}P^2})] \to [\ensuremath{{\mathbb R}P^2}, F_n(\ensuremath{{\mathbb R}P^2})] \to [\St[1], F_n(\ensuremath{{\mathbb R}P^2})] \to [\St[1], F_n(\ensuremath{{\mathbb R}P^2})] \end{equation} associated with the cofibration sequence $\St[1] \stackrel{2}\to \St[1] \to \ensuremath{{\mathbb R}P^2} \to \St \to \St \to \ldots$ for the space $F_n(\ensuremath{{\mathbb R}P^2})$, where the map $[\St[1], F_n(\ensuremath{{\mathbb R}P^2})] \to [\St[1], F_n(\ensuremath{{\mathbb R}P^2})]$ sends $[\beta]$ to $[\beta^2]$ for all maps $\beta\colon\thinspace \St[1] \to F_n(\ensuremath{{\mathbb R}P^2})$. 
Now $[\St, F_n(\ensuremath{{\mathbb R}P^2})]$ consists of a single homotopy class because $\pi_2(F_n(\ensuremath{{\mathbb R}P^2}))=0$~\cite[Corollary, p.~244]{FvB}. Therefore~\reqref{bps} implies that the set $[\ensuremath{{\mathbb R}P^2}, F_n(\ensuremath{{\mathbb R}P^2})]$ is in one-to-one correspondence with the set of elements of $\pi_1(F_n(\ensuremath{{\mathbb R}P^2}))=P_n(\ensuremath{{\mathbb R}P^2})$ of order less than or equal to $2$, namely the trivial element and the full twist $\ft$~\cite[Proposition~23]{GG3}. Let $\alpha\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_{n}(\ensuremath{{\mathbb R}P^2})$ be an $n$-ordered map of $\ensuremath{{\mathbb R}P^2}$ whose homotopy class $[\alpha]$ corresponds to $\ft$. By~\reqref{bps}, the image of $[\alpha]$ in $[\St[1], F_n(\ensuremath{{\mathbb R}P^2})]$ is non trivial, and so the induced homomorphism $\alpha_{\#}\colon\thinspace \pi_{1}(\ensuremath{{\mathbb R}P^2})\to P_{n}(\ensuremath{{\mathbb R}P^2})$ is non trivial (and injective). In particular, $\alpha$ is not homotopic to the constant map $c\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_{n}(\ensuremath{{\mathbb R}P^2})$, from which we conclude that $[\ensuremath{{\mathbb R}P^2},F_{n}(\ensuremath{{\mathbb R}P^2})]=\brak{[c],[\alpha]}$ has two distinct elements. We now prove that there are bijections between $[\ensuremath{{\mathbb R}P^2},F_{n}(\ensuremath{{\mathbb R}P^2})]$, $[\ensuremath{{\mathbb R}P^2},D_{n}(\ensuremath{{\mathbb R}P^2})]$ and $\splitmap{\ensuremath{{\mathbb R}P^2}}{\ensuremath{{\mathbb R}P^2}}{n}/\!\sim$. As we pointed out in the proof of~\cite[Lemma~9(b)]{GG15}, the covering map $\pi\colon\thinspace F_n(\ensuremath{{\mathbb R}P^2}) \to D_n(\ensuremath{{\mathbb R}P^2})$ induces a map $\widehat{\pi}\colon\thinspace [\ensuremath{{\mathbb R}P^2}, F_n(\ensuremath{{\mathbb R}P^2})] \to [\ensuremath{{\mathbb R}P^2}, D_n(\ensuremath{{\mathbb R}P^2})]$ defined by $\widehat{\pi}([\Phi])=[\pi\circ \Phi]$ for all $\Phi\in F_{n}(\ensuremath{{\mathbb R}P^2})^{\ensuremath{{\mathbb R}P^2}}$. If $\Psi\colon\thinspace \ensuremath{{\mathbb R}P^2} \to D_n(\ensuremath{{\mathbb R}P^2})$ is a map and $\psi\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ is the associated $n$-valued map, then $\psi$ is split by~\cite[Lemma~14]{GG15}. It follows that there exists a lift $\widehat{\Psi}\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_{n}(\ensuremath{{\mathbb R}P^2})$ of $\psi$, and this map satisfies $\pi\circ \widehat{\Psi}=\Psi$. In particular, $\widehat{\pi}$ is surjective, and hence $[\ensuremath{{\mathbb R}P^2},D_{n}(\ensuremath{{\mathbb R}P^2})]=\brak{[\pi\circ c],[\pi\circ \alpha]}$ has at most two elements. Since $\pi_{\#}\colon\thinspace P_{n}(\ensuremath{{\mathbb R}P^2})\to B_{n}(\ensuremath{{\mathbb R}P^2})$ is inclusion, $(\pi\circ \alpha)_{\#} \colon\thinspace \pi_{1}(\ensuremath{{\mathbb R}P^2}) \to B_{n}(\ensuremath{{\mathbb R}P^2})$ is injective, and thus $[\pi\circ c]\neq [\pi\circ \alpha]$. Therefore $\widehat{\pi}$ is a bijection. Finally, by~\cite[Lemma~9(b)]{GG15}, the set $\splitmap{\ensuremath{{\mathbb R}P^2}}{\ensuremath{{\mathbb R}P^2}}{n}/\!\sim$ is in one-to-one correspondence with the orbits of the set $[\ensuremath{{\mathbb R}P^2}, F_n(\ensuremath{{\mathbb R}P^2})]$ under the action of $S_{n}$ induced by that of $S_{n}$ on $F_{n}(\ensuremath{{\mathbb R}P^2})^{\ensuremath{{\mathbb R}P^2}}$.
But $[\ensuremath{{\mathbb R}P^2},F_{n}(\ensuremath{{\mathbb R}P^2})]=\brak{[c],[\alpha]}$, and since the orbit of the homotopy class $[c]$ of the constant map under the action of $S_{n}$ must be $\brak{[c]}$, the orbit of $[\alpha]$ must be $\brak{[\alpha]}$. It follows that $\splitmap{\ensuremath{{\mathbb R}P^2}}{\ensuremath{{\mathbb R}P^2}}{n}/\!\sim$ has precisely two elements. \item\begin{enumerate}[(i)] \item If $\phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ is an $n$-valued map of $\ensuremath{{\mathbb R}P^2}$ such that the associated map $\Phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \to D_{n}(\ensuremath{{\mathbb R}P^2})$ belongs to the homotopy class $[\pi\circ c]$ in $[\ensuremath{{\mathbb R}P^2},D_{n}(\ensuremath{{\mathbb R}P^2})]$ of the constant map $c\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_{n}(\ensuremath{{\mathbb R}P^2})$, then it follows from the proof of part~(\ref{it:classmap0}) that any lift $\widehat{\Phi}\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_n(\ensuremath{{\mathbb R}P^2})$ of $\phi$ is homotopic to the constant map, and induces the trivial homomorphism on the level of fundamental groups. \item Let $\phi_{n}\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ be an $n$-valued map of $\ensuremath{{\mathbb R}P^2}$ that represents the second (non-trivial) homotopy class, and let $\widehat{\Phi}_n\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_n(\ensuremath{{\mathbb R}P^2})$ be a lift of $\phi_n$, so that $\widehat{\Phi}_n$ is homotopic to the map $\alpha$ given in the proof of part~(\ref{it:classmap0}). For all $1\leq i \leq n$, the image of $\ft$ under the homomorphism induced by the projection $p_{i}\colon\thinspace F_n(\ensuremath{{\mathbb R}P^2}) \to \ensuremath{{\mathbb R}P^2}$ is trivial (this follows from the proof of~\cite[Proposition~8]{GG14} using the fact that $\ft$ may be written as a product of the generators $(A_{i,j})_{1\leq i<j\leq n}$ given in that paper). Since the image of the induced homomorphism $\alpha_{\#}\colon\thinspace \pi_{1}(\ensuremath{{\mathbb R}P^2}) \to P_n(\ensuremath{{\mathbb R}P^2})$ is equal to $\ang{\ft}$, the composition $p_{i}\circ \widehat{\Phi}_{n}\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \ensuremath{{\mathbb R}P^2}$ induces the trivial homomorphism on the level of fundamental groups. Now let $P_1,\ldots,P_n$ be $n$ distinct points of $\St$ that lie on the geodesic arc between $(1,0,0)$ and $(0,0,1)$, and consider the map $(W_{P_1},\ldots, W_{P_n})\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_n(\ensuremath{{\mathbb R}P^2})$. The fact that this map is well defined is a consequence of \relem{prinWe}(\ref{it:prinWed}). Since $p_i\circ (W_{P_1},\ldots, W_{P_n})=W_{P_i}$ for all $i=1,\ldots,n$, and $W_{P_{i}}$ is non null-homotopic by \relem{prinWe}(\ref{it:prinWeb}), the map $(W_{P_1},\ldots, W_{P_n})$ is not homotopic to the constant map $c$, and so it is homotopic to $\alpha$ by the proof of part~(\ref{it:classmap0}). In particular, $\widehat{\Phi}_n$ is homotopic to $(W_{P_1},\ldots, W_{P_n})$, and the statements of part~(\ref{it:classmap1})(\ref{it:classmapb}) then follow. \qedhere \end{enumerate} \end{enumerate} \end{proof} We are now able to prove the Wecken property for $\ensuremath{{\mathbb R}P^2}$. \begin{proof}[Proof of \reth{wecproj}] As we mentioned previously, $\ensuremath{{\mathbb R}P^2}$ has the Wecken property for self-maps. So suppose that $n>1$.
From \repr{classmap}, there are two homotopy classes of $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$, and so by \relem{rp2split}, it suffices to show that each of these classes admits a representative for which the number of fixed points is equal to $n$. Let $P_1,\ldots,P_n$ be as in the proof of \repr{classmap}(\ref{it:classmap1})(\ref{it:classmapb}), and let $c_{\overline{P_{i}}}\colon\thinspace \ensuremath{{\mathbb R}P^2} \to \ensuremath{{\mathbb R}P^2}$ be the constant map at $\overline{P_{i}}$. From \repr{classmap}, each of the two homotopy classes of $n$-valued maps of $\ensuremath{{\mathbb R}P^2}$ contains an $n$-valued map $\phi\colon\thinspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ that is split and admits a lift $\widehat{\Phi}=(\phi_{1},\ldots,\phi_{n})\colon\thinspace \ensuremath{{\mathbb R}P^2} \to F_{n}(\ensuremath{{\mathbb R}P^2})$, where either $\phi_{i}=c_{\overline{P_{i}}}$ is the constant map at $\overline{P_{i}}$ for all $i=1,\ldots,n$, or $\phi_{i}=W_{P_{i}}$ for all $i=1,\ldots,n$. Using \relem{prinWe}(\ref{it:prinWea}), $\operatorname{\text{Fix}}(\phi_{i})=\brak{\overline{P_{i}}}$ for all $i=1,\ldots,n$, and hence $\phi$ has exactly $n$ fixed points. Moreover, by \relem{rp2split} we have $N(\phi)=n$. So each of the two homotopy classes of $n$-valued maps contains a representative $\phi$ that has exactly $N(\phi)=n$ fixed points, and hence $\ensuremath{{\mathbb R}P^2}$ has the Wecken property for $n$-valued maps. \end{proof} \section{Nielsen numbers of $n$-valued maps}\label{sec:calcu} The Nielsen number for $n$-valued maps of a compact polyhedron $X$ was defined in~\cite{Sch1}, and may be determined for split $n$-valued maps using \reth{helgath0}. The aim of this section is to prove \reth{form}, where we give a formula, in the same spirit as that of \reth{helgath0}, for the Nielsen number of non-split $n$-valued maps of a space $X$.
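To orient the reader, we record now the shape of the formula that will be obtained: under the hypotheses of \reth{form}, \begin{equation*} N(\phi)=\sum_{j\in I_{0}} N(q,f_{i_j}), \end{equation*} where $q\colon\thinspace \widehat{X}\to X$ is a finite covering determined by $\phi$, the maps $f_{i_j}$ are certain coordinate maps of a lift of the $n$-valued map $\phi\circ q$, and $I_{0}$ indexes a set of orbit representatives; all of this notation is introduced in the course of the section.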
For \reth{form}, we shall require $X$ to be a compact, orientable manifold without boundary, in order to have the notions of index, Lefschetz and Nielsen numbers for coincidences of pairs of maps from finite coverings of $X$ to $X$ at our disposal. However, many of the results that lead to \reth{form} are valid under weaker hypotheses on $X$, namely those of \repr{nielsen0} below. We start by recalling some notation and results from~\cite[Section~3.2]{GG15} that will be used throughout the rest of the paper. Given an $n$-valued map $\phi\colon\thinspace X \multimap X$ of a topological space $X$ that is locally path-connected and semi-locally simply connected, we consider the corresponding map $\Phi\colon\thinspace X \to D_n(X)$, and the induced homomorphism $\Phi_{\#}\colon\thinspace \pi_1(X) \to \pi_1(D_n(X))$ on the level of fundamental groups, where $\pi_1(D_n(X))=B_n(X)$. By the short exact sequence \begin{equation*} 1\to P_n(X) \to B_n(X) \stackrel{\tau}{\to} S_n \to 1, \end{equation*} $P_n(X)$ is a normal subgroup of $B_n(X)$ of finite index $n!$, so the subgroup $H=\Phi_{\#}^{-1}(P_n(X))$ is a normal subgroup of $\pi_1(X)$ of finite index. Let $L$ be the finite quotient group $\pi_{1}(X)/H$, and let $q\colon\thinspace \widehat{X} \to X$ be the covering of $X$ that corresponds to the subgroup $H$. Such a covering exists due to the hypotheses on $X$. As the following proposition shows, the fixed points of $\phi$ may be described in terms of the coincidences of $q$ with the coordinate maps $f_{1},\ldots, f_{n}\colon\thinspace \widehat{X}\to X$ of a lift of the $n$-valued map $\phi\circ q \colon\thinspace \widehat{X}\multimap X$. \begin{prop}\cite[Propositions~16 and~17]{GG15}\label{prop:nielsen0} Let $n\in \ensuremath{\mathbb N}$, and suppose that $X$ is a connected, locally arcwise-connected metric space. \begin{enumerate}[(a)] \item\label{it:nielsen} With the above notation, the $n$-valued map $\phi_1=\phi \circ q\colon\thinspace \widehat{X}\multimap X$ admits exactly $n!$ lifts, which are $n$-ordered maps from $\widehat{X}$ to $F_n(X)$. If one such lift $\widehat{\Phi}_{1}\colon\thinspace \widehat{X}\to F_n(X)$ is given by $\widehat{\Phi}_{1}=(f_1,\ldots, f_n)$, where for $i=1,\ldots,n$, $f_i$ is a map from $\widehat{X}$ to $X$, then the other lifts are of the form $(f_{\rho(1)},\ldots,f_{\rho(n)})$, where $\rho\in S_n$. \item\label{it:coinfix} If the lift $\widehat{\Phi}_{1}=(f_1,\ldots, f_n)$ is as in part~(\ref{it:nielsen}) then the restriction of $q\colon\thinspace \widehat{X} \to X$ to $\bigcup_{i=1}^{n} \operatorname{\text{Coin}}(q, f_i)$ maps this set surjectively onto $\operatorname{\text{Fix}}(\phi)$. Furthermore, the pre-image of a point $x\in \operatorname{\text{Fix}}(\phi)$ by this map is precisely $q^{-1}(x)$, namely the fibre over $x\in X$ of the covering map $q$. \end{enumerate} \end{prop} Although the lift $\widehat{\Phi}_1$ is not unique, the set $\brak{f_{1},\ldots,f_{n}}$ is, and so the set $\bigcup_{i=1}^{n} \operatorname{\text{Coin}}(q, f_i)$ is independent of the choice of lift of $\phi_{1}$. In what follows, we aim to describe the Nielsen classes of $\phi$ in terms of the Nielsen coincidence classes of the pairs $(q, f_i)$, where $i=1,\ldots,n$, which will lead to a formula for $N(\phi)$ similar in spirit to that of \reth{helgath0}. Observe that the composition $\pi_1(X) \stackrel{\Phi_{\#}}{\to} B_n(X) \stackrel{\tau}{\to} S_n$ is a homomorphism whose kernel is $H$, so it induces an injective homomorphism $\Gamma\colon\thinspace L \to S_n$.
Let $L'=\im{\Gamma}=\im{\tau \circ \Phi_{\#}}$, and for $i,j=1,\ldots,n$, let $L_{i,j}'=\setr{\rho\in L'}{\rho(i)=j}$. The subset $L_{i,i}'$ is a subgroup of $L'$, and if $i,j\in \brak{1,\ldots,n}$, the subset $L'_{i,j}$ is either empty or is a left coset of $L'_{i,i}$ in $L'$. In the rest of this section, we will suppose without further comment that $X$ satisfies the hypotheses of \repr{nielsen0}, so that it is a connected, locally arcwise-connected metric space. If $\phi\colon\thinspace X \multimap X$ is an $n$-valued map, we recall the Nielsen relation on $\operatorname{\text{Fix}}(\phi)$, the index of a Nielsen fixed point class of $\phi$ and the definition of the Nielsen number $N(\phi)$ of $\phi$ from~\cite[Section~5]{Sch1}. For the definition of index, using~\cite[Theorem~6]{Sch0} and the homotopy invariance of the Nielsen number~\cite[Theorem 6.5]{Sch0}, without loss of generality, we may restrict ourselves to the case where $\operatorname{\text{Fix}}(\phi)$ is finite. First note that by~\cite[Lemma~12]{GG15}, if $\lambda\colon\thinspace I \to X$ is a path then the $n$-valued map $\phi \circ \lambda\colon\thinspace I \multimap X$ is split. Let $x,x'\in \operatorname{\text{Fix}}(\phi)$. We say that $x$ and $x'$ are Nielsen equivalent if there exist maps $g_1,g_2,\ldots,g_n \colon\thinspace I \to X$, a path $\lambda\colon\thinspace I \to X$ from $\lambda(0)=x$ to $\lambda(1)=x'$ and $j\in \brak{1,\ldots, n}$ such that $\phi \circ \lambda =\brak{g_1,g_2,\ldots,g_n}$, and $g_{j}$ is a path from $g_j(0)=x$ to $g_j(1)=x'$ that is homotopic to $\lambda$ relative to their endpoints. This defines an equivalence relation on $\operatorname{\text{Fix}}(\phi)$, and the resulting equivalence classes are called \emph{Nielsen fixed point classes} of the $n$-valued map $\phi$. To define the index of an isolated point $x$ in $\operatorname{\text{Fix}}(\phi)$, we suppose that $X$ is a compact polyhedron. Following~\cite[Section~3]{Sch1}, let $x$ be in the interior of a maximal simplex $\overline{\sigma}$. By~\cite[Splitting~Lemma~2.1]{Sch1} $\phi\left\lvert_{\overline{\sigma}}\right.$ is split, so may be written in the form $\phi\left\lvert_{\overline{\sigma}}\right.=\brak{f_1,\ldots,f_n}$, where $x \in \operatorname{\text{Fix}}(f_j)$ for some (unique) $1\leq j\leq n$. We then define $\operatorname{\text{Ind}}(\phi, x)=\operatorname{\text{Ind}}(f_j, x)$, where the right-hand side is the usual fixed point index (see~\cite[Sections~3 and~5]{Sch1} for more details). As in the single-valued case, the \emph{index} of a Nielsen fixed point class of $\phi$ that contains a finite number of points is the sum of the indices of these fixed points, and such a fixed point class is said to be \emph{essential} if its index is non zero. The \emph{Nielsen number} $N(\phi)$ is defined to be the number of essential Nielsen fixed point classes of $\phi$. \begin{rem}\label{rem:splitpath} Within our framework, the maps $g_1,g_2,\ldots,g_n$ may be chosen as follows: given $x_0,x_0'\in \operatorname{\text{Fix}}(\phi)$, a point $\widetilde{x}_{0}\in \widehat{X}$ such that $q(\widetilde{x}_{0})=x_{0}$, and a path $\lambda\colon\thinspace I \to X$ from $x_0$ to $x_0'$, let $\widetilde{\lambda}\colon\thinspace I \to \widehat{X}$ be the unique lift of $\lambda$ to $\widehat{X}$ for which $\widetilde{\lambda}(0)=\widetilde{x}_{0}$. Consider the $n$-ordered map $\widehat{\Phi}_1=(f_1,\ldots,f_n)\colon\thinspace \widehat{X} \to F_{n}(X)$ given by \repr{nielsen0}(\ref{it:nielsen}). 
Then $\phi \circ \lambda=\brak{g_1,\ldots,g_n}$, where for $i=1,\ldots,n$, $g_{i}\colon\thinspace I \to X$ is the map defined by $g_i=f_i\circ \widetilde \lambda$. So $x_0$ and $x_0'$ are Nielsen equivalent if there is a path $\lambda$ as above and $j\in \brak{1,\ldots,n}$ such that $\lambda(0)=g_j(0)=f_j\circ \widetilde \lambda(0)$, $\lambda(1)=g_j(1)=f_j\circ \widetilde \lambda(1)$ and $\lambda$ is homotopic to $g_j$ relative to their endpoints. \end{rem} In the following lemmas, we will compare the coincidences of $q$ and the $f_i$ with the fixed points of $\phi$. \begin{lem}\label{lem:auxil0} With the above notation, let $x_0\in \operatorname{\text{Fix}}(\phi)$, let $\widetilde{x}_{0}\in \widehat{X}$ and $i\in \brak{1,\ldots,n}$ be such that $q(\widetilde{x}_{0})=x_0$ and $\widetilde{x}_{0}\in \operatorname{\text{Coin}}(q,f_i)$. If $y\in q^{-1}(x_0)$ and $j\in \brak{1,\ldots,n}$ then $y \in \operatorname{\text{Coin}}(q, f_j)$ if and only if $\Gamma(\alpha)\in L'_{i,j}$, where $\alpha$ is the $H$-coset of $[q(\gamma)]$, $\gamma$ being any path from $\widetilde{x}_{0}$ to $y$. In particular, the points of $q^{-1}(x_0)$ that belong to $ \operatorname{\text{Coin}}(q,f_i)$ are in one-to-one correspondence with the subgroup $L'_{i,i}$. \end{lem} Since $L_{i,i}'$ does not depend on $x_0$, \relem{auxil0} implies that for all $x \in \operatorname{\text{Fix}}(\phi)$, the set $q^{-1}(x)\cap \operatorname{\text{Coin}}(q,f_i)$ has $\lvert L'_{i,i}\rvert$ elements. \begin{proof}[Proof of \relem{auxil0}] The proof makes use of basic covering space theory. Let $\alpha\in L$ be the unique deck transformation for which $y$ is equal to the element $\alpha \ldotp \widetilde{x}_{0}$ of $\widehat{X}$ that arises from the action of deck transformation group $L$ on $\widehat{X}$. Then $\Gamma(\alpha)\in L'$ defines a deck transformation of the covering $\pi\colon\thinspace F_n(X)\to D_n(X)$. Using the fact that $q$ and $\pi$ are covering maps and $\widehat{\Phi}_1$ is a lift of $\Phi$, we have: \begin{equation}\label{eq:phigamma} \widehat{\Phi}_1(\alpha \ldotp \widetilde{x})= \Gamma(\alpha) \ldotp \widehat{\Phi}_1(\widetilde{x}) \end{equation} for all $\widetilde{x}\in q^{-1}(x_0)$, where $\Gamma(\alpha) \ldotp \widehat{\Phi}_1(\widetilde{x})$ is the element of $F_{n}(X)$ arising from the action of $S_{n}$ on $F_{n}(X)$. Since $\widetilde{x}_{0}\in \operatorname{\text{Coin}}(q,f_i)$, $\widehat{\Phi}_1(\widetilde{x}_0)$ is an element of $F_n(X)$ whose $i\up{th}$ coordinate is $x_0$, and $\Gamma(\alpha) \ldotp \widehat{\Phi}_1(\widetilde{x}_0)$ is an element of $F_n(X)$ whose $\Gamma(\alpha)(i)\up{th}$ coordinate is $x_0$. So if $y\in \operatorname{\text{Coin}}(q,f_j)$, the $j\up{th}$ coordinate of $\widehat{\Phi}_1(y)$ is $x_0$, and thus $\Gamma(\alpha) \in L_{i,j}'$. Conversely, if $\Gamma(\alpha) \in L_{i,j}'$ then $\Gamma(\alpha)(i)=j$, so the $j\up{th}$ coordinate of $\widehat{\Phi}_1(y)$ is $x_0$, and hence $y\in \operatorname{\text{Coin}}(q,f_j)$. For the last part of the statement, since $L_{i,i}'$ is a subgroup of $L'$, it suffices to take $j=i$. \end{proof} With \relem{auxil0} in mind, we define the following notation. For each $i\in \brak{1,\ldots,n}$, let $\mathbb{O}_i$ be the orbit of $i$ by the action of the subgroup $L'$ of $S_{n}$ on the set $\brak{1,\ldots,n}$, and let $I_0=\brak{i_1,\ldots,i_s}$ be such that the sets $\brak{\mathbb{O}_i}_{i\in I_{0}}$ form a partition of $\brak{1,\ldots,n}$. 
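The bookkeeping involved in the orbits $\mathbb{O}_i$, the set $I_0$ and the subgroups $L'_{i,i}$ is elementary, and may be illustrated concretely by the following short script (purely illustrative; the subgroup $L'$ below is an arbitrary choice, written with $0$-based indices):
\begin{verbatim}
from itertools import product

# Illustrative only: take n = 4 and let L' be the subgroup of S_4
# generated by the permutation (1 2)(3 4), written 0-based as i -> gen[i].
n = 4
gen = (1, 0, 3, 2)
identity = tuple(range(n))

# Generate the subgroup L' by closing under composition.
Lp = {identity}
frontier = {gen}
while frontier:
    Lp |= frontier
    frontier = {tuple(g[h[i]] for i in range(n))
                for g, h in product(Lp, Lp)} - Lp

# Orbits O_i and stabilizers L'_{i,i} of the action on {0, ..., n-1}.
orbits = {i: sorted({g[i] for g in Lp}) for i in range(n)}
stabs = {i: [g for g in Lp if g[i] == i] for i in range(n)}
print(orbits)                          # {0: [0, 1], 1: [0, 1], 2: [2, 3], 3: [2, 3]}
print([len(stabs[i]) for i in range(n)])  # [1, 1, 1, 1]
\end{verbatim}
In the $1$-based notation of the text, the orbits here are $\brak{1,2}$ and $\brak{3,4}$, one may take $I_{0}=\brak{1,3}$, and every subgroup $L'_{i,i}$ is trivial.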
As examples, if $H=\pi_1(X)$ (resp.\ $L'=S_n$) then $\mathbb{O}_i=\brak{i}$ (resp.\ $\mathbb{O}_i=\brak{1,\ldots,n}$) for all $i\in \brak{1,\ldots,n}$. The following result underlines the relevance of these orbits. \begin{lem}\label{lem:auxil} With the above notation, let $x_0\in \operatorname{\text{Fix}}(\phi)$. Then there exists $i\in \brak{1,\ldots,n}$ such that: \begin{equation*} \setr{j\in \brak{1,\ldots,n}}{q^{-1}(x_0)\cap \operatorname{\text{Coin}}(q, f_j)\neq \ensuremath{\varnothing}}=\mathbb{O}_i. \end{equation*} \end{lem} \begin{proof} The proof uses arguments similar to those of \relem{auxil0}. Let $x_0\in \operatorname{\text{Fix}}(\phi)$, and let $\widetilde{x}_{0}\in \widehat{X}$ be a lift of $x_{0}$. By \repr{nielsen0}(\ref{it:coinfix}), $\widetilde{x}_{0}$ belongs to $\operatorname{\text{Coin}}(q,f_i)$ for some $i\in \brak{1,\ldots,n}$. First, suppose that $y\in q^{-1}(x_0)\cap \operatorname{\text{Coin}}(q, f_j)$ for some $j\in \brak{1,\ldots,n}$, and let $\alpha\in L$ be such that $\alpha\ldotp \widetilde{x}_{0}=y$. From~\reqref{phigamma}, $\Gamma(\alpha)(i)=j$, so $j\in \mathbb{O}_i$. Conversely, suppose that $j\in \mathbb{O}_i$. Then there exists $\alpha\in L$ such that $\Gamma(\alpha)(i)=j$, and taking $y=\alpha\ldotp \widetilde{x}_{0}$, we have $y\in q^{-1}(x_0)\cap \operatorname{\text{Coin}}(q, f_j)$. \end{proof} Note that by \repr{nielsen0}(\ref{it:coinfix}) and \relem{auxil}, $\operatorname{\text{Fix}}(\phi)= q\bigl( \bigcup_{j\in I_{0}} \operatorname{\text{Coin}}(q,f_j)\bigr)$. Since we wish to express the Nielsen number of $\phi$ in terms of the Nielsen coincidence numbers of the pairs $(q, f_i)$, for the values of $i$ belonging to $I_0$, we shall compare the Nielsen coincidence relation and the Nielsen coincidence number of the pairs $(q, f_i)$ with the Nielsen relation and the Nielsen number for $\phi$ respectively.
\begin{lem}\label{lem:nielsenclasses} With the above notation, let $i\in \brak{1,\ldots,n}$, and let $y_1$ and $y_2$ be elements of $\operatorname{\text{Coin}}(q, f_i)$ that belong to the same Nielsen coincidence class of the pair $(q,f_{i})$. Then $q(y_1)$ and $q(y_2)$ are elements of $\operatorname{\text{Fix}}(\phi)$ that belong to the same Nielsen fixed point class of the $n$-valued map $\phi$. Further, $q$ sends each Nielsen coincidence class of the pair $(q,f_i)$ surjectively onto a Nielsen fixed point class of $\phi$. \end{lem} \begin{proof} Let $y_1$ and $y_2$ be elements of $\operatorname{\text{Coin}}(q, f_i)$ for some $i\in \brak{1,\ldots,n}$ that belong to the same Nielsen coincidence class of the pair $(q,f_{i})$. By \repr{nielsen0}(\ref{it:coinfix}), $q(y_1)$ and $q(y_2)$ are fixed points of $\phi$. Since $y_1$ and $y_2$ belong to the same Nielsen coincidence class of $(q,f_{i})$, there exists a path $\widetilde{\lambda}\colon\thinspace I \to \widehat{X}$ from $y_{1}$ to $y_{2}$ such that the path $f_{i}\circ \widetilde{\lambda}$ is homotopic in $X$ to $\lambda$ relative to the endpoints $q(y_1)$ and $q(y_2)$, where $\lambda \colon\thinspace I \to X$ is the path defined by $\lambda=q\circ \widetilde{\lambda}$. Now the $n$-valued map $\phi \circ \lambda\colon\thinspace I \multimap X$ is split by~\cite[Lemma~12]{GG15}, and by \rerem{splitpath}, $\phi \circ \lambda=\brak{g_{1},\ldots,g_{n}}$, where for $j=1,\ldots,n$, $g_{j}= f_{j} \circ \widetilde{\lambda}$. So $g_{i}(0)= f_{i} \circ \widetilde{\lambda}(0)=f_{i}(y_{1})=q(y_{1})$, $g_{i}(1)= f_{i} \circ \widetilde{\lambda}(1)=f_{i}(y_{2})=q(y_{2})$, and $g_{i}$ is homotopic to $\lambda$ relative to the endpoints $q(y_1)$ and $q(y_2)$, from which we deduce that $q(y_1)$ and $q(y_2)$ belong to the same Nielsen fixed point class of $\phi$. To prove the second part of the statement, by the first part, it suffices to show that if $x$ is a fixed point of $\phi$ that belongs to the same Nielsen fixed point class of $\phi$ as $q(y_{1})$ then there exists $y\in \operatorname{\text{Coin}}(q,f_i)$ such that $q(y)=x$, and $y$ and $y_{1}$ belong to the same Nielsen coincidence class of the pair $(q,f_{i})$. To see this, note that by \rerem{splitpath}, there exist a path $\lambda\colon\thinspace I \to X$ from $q(y_{1})$ to $x$, a lift $\widetilde{\lambda}\colon\thinspace I \to \widehat{X}$ of $\lambda$ such that $\widetilde{\lambda}(0)=y_{1}$, and $j\in \brak{1,\ldots,n}$ such that $\lambda(0)=g_j(0)=f_j\circ \widetilde \lambda(0)$, $\lambda(1)=g_j(1)=f_j\circ \widetilde{\lambda}(1)$, and $\lambda$ is homotopic to $g_j$ relative to their endpoints. In particular, $f_{j}(y_{1})= q(y_{1})$, and so $j=i$ because $y_{1}\in \operatorname{\text{Coin}}(q, f_i)$. Further, if $y=\widetilde{\lambda}(1)$ then $q(y)=x$ because $\widetilde{\lambda}$ is a lift of $\lambda$, and $x=\lambda(1)=g_{i}(1)=f_{i}(y)$, so $y\in \operatorname{\text{Coin}}(q, f_i)$. Finally, the paths $\lambda$ and $g_{i}=f_i\circ \widetilde{\lambda}$ are homotopic in $X$ relative to their endpoints, and hence $y_{1}$ and $y$ belong to the same Nielsen coincidence class of the pair $(q,f_{i})$ as required. \end{proof} In order to obtain a formula for $N(\phi)$, another ingredient that we require is the number of points of $q^{-1}(x_{0})\cap \operatorname{\text{Coin}}(q,f_i)$ that belong to the same Nielsen coincidence class of the pair $(q,f_i)$, where $x_0\in \operatorname{\text{Fix}}(\phi)$ and $i\in I_0$.
Suppose that $q^{-1}(x_{0})\cap \operatorname{\text{Coin}}(q,f_i)\neq \ensuremath{\varnothing}$, and let $\widetilde{x}_0,y\in\widehat{X}$ be elements of this intersection. There exists a unique $\mu \in L=\pi_1(X)/H$ such that $y=\mu \ldotp \widetilde{x}_0$. Let $L_i$ be the subset of $L$ consisting of such $\mu$ as $y$ runs over the elements of $q^{-1}(x_0)\cap \operatorname{\text{Coin}}(q,f_i)$. Note that $L_i=\setr{\mu \in L}{\Gamma(\mu)(i)=i}$; in particular, $L_{i}$ is independent of $\widetilde{x}_0$, $L_i$ is a subgroup of $L$, the order of $L_i$ is equal to the cardinality of $q^{-1}(x_0)\cap\operatorname{\text{Coin}}(q,f_i)$, and $L_{i,i}'=\Gamma(L_i)$, so $L_{i,i}' \cong L_i$. If $\mu\in L_i$, consider the corresponding element $y\in q^{-1}(x_0)\cap\operatorname{\text{Coin}}(q,f_i)$ defined by $y=\mu \ldotp \widetilde{x}_0$, and let $\gamma \colon\thinspace I \to \widehat{X}$ be a path from $\widetilde{x}_0$ to $y$. Then $f_i\circ \gamma$ and $q\circ \gamma$ are loops in $X$ based at $x_{0}$. Let $W_{\widetilde x_0}(\mu)$ be the subset of $\pi_1(X)$ of loop classes of the form $[(q\circ \gamma) \ast (f_i\circ \gamma)^{-1}]$, where $\gamma$ runs over the set of paths from $\widetilde{x}_0$ to $y$. Observe that $W_{\widetilde x_0}(\mu)$ contains the trivial element of $\pi_1(X)$ if and only if $\widetilde{x}_0$ and $y$ belong to the same Nielsen coincidence class for the pair $(q,f_i)$. With this in mind, let $K_i(\widetilde{x}_0)$ be the set of elements $y\in q^{-1}(x_0)\cap\operatorname{\text{Coin}}(q,f_i)$ for which $W_{\widetilde x_0}(\mu)$ contains the trivial element, where $\mu\in L_i$ is such that $y=\mu \ldotp \widetilde{x}_0$. Then $K_i(\widetilde{x}_0)$ is the subset of elements of $q^{-1}(x_0)\cap\operatorname{\text{Coin}}(q,f_i)$ that belong to the same Nielsen coincidence class of the pair $(q,f_i)$ as $\widetilde{x}_0$. Let $\lvert K_i(\widetilde{x}_0)\rvert$ denote the cardinality of $K_i(\widetilde{x}_0)$. \begin{lem}\label{lem:fund1} With the above notation, let $x_{0}\in \operatorname{\text{Fix}}(\phi)$, let $i\in \brak{1,\ldots,n}$, and let $\widetilde{x}_0\in q^{-1}(x_{0})\cap \operatorname{\text{Coin}}(q,f_i)$.\vspace*{-1mm} \begin{enumerate} \item\label{it:fund1a} The number of coincidence points of the pair $(q, f_i)$ that belong to $q^{-1}(x_{0})$ is equal to the order of the subgroup $L_i$. \item\label{it:fund1b} If $\widetilde{z}\in q^{-1}(x_0)\cap \operatorname{\text{Coin}}(q,f_i)$, the set $K_i(\widetilde{z})$ of points of $q^{-1}(x_0)$ that belong to the same Nielsen coincidence class for the pair $(q,f_{i})$ as $\widetilde{z}$ is in one-to-one correspondence with the set $K_i(\widetilde{x}_0)$. \item\label{it:fund1c} Let $\widetilde{z}\in \operatorname{\text{Coin}}(q,f_i)$ be such that $\widetilde{x}_0$ and $\widetilde{z}$ belong to the same Nielsen coincidence class for the pair $(q,f_i)$. Then $K_i(\widetilde{z})$ is in one-to-one correspondence with the set $K_i(\widetilde{x}_0)$. \end{enumerate} \end{lem} \begin{proof}\mbox{} \begin{enumerate} \item This follows from \relem{auxil0} and the isomorphism $L_i \cong L_{i,i}'$. \item Let us construct an injective map from $K_i(\widetilde{x}_0)$ to $K_i(\widetilde{z})$. Given $y\in K_i(\widetilde{x}_0)$, by definition, there exists a path $\gamma\colon \thinspace I \to \widehat{X}$ such that $\gamma(0)=\widetilde{x}_0$, $\gamma(1)=y$ and $q\circ \gamma$ is homotopic to $f_i\circ \gamma$ relative to their endpoints. Let $\gamma_1$ be the unique lift of $q\circ \gamma$ for which $\gamma_1(0)=\widetilde{z}$.
Since $q\circ \gamma$ is a loop in $X$ based at $x_{0}$, $\gamma_1(1)\in q^{-1}(x_{0})$. We claim that $\gamma_1(1)\in K_i(\widetilde{z})$. To see this, let us show that $q\circ \gamma_1=q\circ \gamma$ is homotopic to $f_i \circ \gamma_1$ relative to their endpoints. It suffices to observe that $f_i\circ \gamma=f_i\circ \gamma_1$. Since the two compositions $\widehat{X} \stackrel{q}{\to} X \stackrel{\Phi}{\to} D_{n}(X)$ and $\widehat{X} \stackrel{\widehat{\Phi}_{1}}{\to} F_{n}(X) \stackrel{\pi}{\to} D_{n}(X)$ are equal, we have $\pi(\widehat{\Phi}_{1}\circ \gamma_1)=\pi(\widehat{\Phi}_{1}\circ \gamma)$, in other words, the $n$-tuple $(f_{1}\circ \gamma, \ldots, f_{n}\circ \gamma)$ of paths is a permutation of the $n$-tuple $(f_{1}\circ \gamma_{1}, \ldots, f_{n}\circ \gamma_{1})$ of paths. Since these two $n$-tuples are paths in $F_{n}(X)$ and $f_{i}\circ \gamma(0)=f_{i}\circ \gamma_{1}(0)=x_{0}$, it follows that $f_{i}\circ \gamma=f_{i}\circ \gamma_{1}$, and hence $f_{i}\circ \gamma_{1}$ is homotopic to $q\circ \gamma_1$ relative to their endpoints. In particular, $f_{i}\circ \gamma_{1}(1)=f_{i}\circ \gamma(1)=q\circ \gamma(1)=q\circ \gamma_{1}(1)$, so $\gamma_{1}(1)\in q^{-1}(x_{0}) \cap \operatorname{\text{Coin}}(q,f_i)$, and thus $\gamma_{1}(1)\in K_{i}(\widetilde{z})$, which proves the claim. By construction, the map from $K_i(\widetilde{x}_0)$ to $K_{i}(\widetilde{z})$ that to $y$ associates $\gamma_{1}(1)$ is injective. Exchanging the r\^{o}les of $\widetilde{x}_0$ and $\widetilde{z}$, we see that $K_i(\widetilde{x}_0)$ and $K_{i}(\widetilde{z})$ have the same number of elements, and that this map is a bijection. \item Let $\lambda\colon\thinspace I \to \widehat{X}$ be a path such that $\lambda(0)=\widetilde{x}_0$, $\lambda(1)=\widetilde{z}$ and $q\circ \lambda$ is homotopic to $f_i\circ \lambda$ relative to their endpoints. Let $y\in q^{-1}(x_{0})\cap \operatorname{\text{Coin}}(q,f_i)$ be a point that belongs to the same Nielsen coincidence class of $(q,f_i)$ as $\widetilde{x}_0$, and let $\gamma\colon\thinspace I \to \widehat{X}$ be a path such that $\gamma(0)=\widetilde{x}_0$, $\gamma(1)=y$ and $q\circ \gamma$ is homotopic to $f_i\circ \gamma$ relative to their endpoints. Let $\lambda'\colon\thinspace I \to \widehat{X}$ be the unique lift of $q\circ \lambda$ for which $\lambda'(0)=y$, and let $\gamma'\colon\thinspace I \to \widehat{X}$ be the path defined by $\gamma'=\lambda^{-1}\ast\gamma\ast\lambda'$. Then $\gamma'(0)=\widetilde{z}$, $q\circ \gamma'(1)=q\circ \lambda'(1)=q\circ \lambda(1)=q(\widetilde{z})$, so $\gamma'(1)\in q^{-1}(q(\widetilde{z}))$. As in the proof of part~(\ref{it:fund1b}), $q\circ \lambda'$ is homotopic to $f_i\circ \lambda'$ relative to their endpoints, and it follows that $q\circ \gamma'$ is homotopic to $f_i\circ \gamma'$ relative to their endpoints, so $\gamma'(1)\in K_i(\widetilde{z})$. We thus obtain an injective map from $K_i(\widetilde{x}_0)$ to $K_i(\widetilde{z})$ that to $y$ associates $\gamma'(1)$. By exchanging the r\^{o}les of $\widetilde{x}_0$ and $\widetilde{z}$, we see that this map is a bijection.\qedhere \end{enumerate} \end{proof}
If $i\in \brak{1,\ldots,n}$, by \relem{fund1}, the set $q^{-1}(x_0)\cap \operatorname{\text{Coin}}(q,f_i)$ contains $\lvert L_i\rvert$ points, and is partitioned by its intersections with the Nielsen coincidence classes of the pair $(q,f_i)$, each non-empty intersection containing $\lvert K_i(\widetilde{x}_0)\rvert$ points, where $\widetilde{x}_0$ is any element of this set. So this set is partitioned into $\lvert L_i\rvert/\lvert K_i(\widetilde{x}_0)\rvert$ disjoint subsets each of which is contained in a Nielsen coincidence class of $(q,f_i)$. If $W_1, W_2$ are Nielsen coincidence classes of $(q,f_i)$, we say that they are \emph{related} if $q(W_1)\cap q(W_2)\neq \ensuremath{\varnothing}$. This defines an equivalence relation. Let $C_1, \ldots,C_r$ denote the corresponding set of equivalence classes of the Nielsen coincidence classes of $(q,f_i)$. \begin{lem}\label{lem:auxiV} With the above notation, for $j=0,1$, let $x_j \in \operatorname{\text{Fix}}(\phi)$, and suppose that $\widetilde{x}_j \in q^{-1}(x_j)\cap \operatorname{\text{Coin}}(q,f_i)$ for some $i\in \brak{1,\ldots,n}$. If $\widetilde x_0$ and $\widetilde{x}_1$ belong to related Nielsen coincidence classes of $(q,f_i)$ then $\lvert K_i(\widetilde{x}_0)\rvert=\lvert K_i(\widetilde{x}_1)\rvert$. \end{lem} \begin{proof} We shall construct a bijection between $K_i(\widetilde{x}_0)$ and $K_i(\widetilde{x}_1)$. First, we claim that there is an element $\widetilde{x}_1' \in q^{-1}(q(\widetilde{x}_1))$ that belongs to the same Nielsen coincidence class of $(q,f_i)$ as $\widetilde{x}_0$. To see this, let $\widetilde{x}_2,\widetilde{x}_2'\in \operatorname{\text{Coin}}(q,f_i)$ be such that $q(\widetilde{x}_2)=q(\widetilde{x}_2')$, and for which $\widetilde{x}_2$ (resp.\ $\widetilde{x}_2'$) belongs to the same Nielsen coincidence class of $(q,f_i)$ as $\widetilde{x}_0$ (resp.\ as $\widetilde{x}_1$); such elements exist because the Nielsen coincidence classes of $\widetilde{x}_0$ and $\widetilde{x}_1$ are related.
Let $\lambda\colon\thinspace I \to \widehat{X}$ be a path such that $\lambda(0)=\widetilde{x}_2'$, $\lambda(1)=\widetilde{x}_1$ and $q\circ \lambda$ is homotopic to $f_i\circ \lambda$ relative to their endpoints, let $\lambda'\colon\thinspace I \to \widehat{X}$ be the unique lift of $q\circ \lambda$ for which $\lambda'(0)=\widetilde{x}_2$, and let $\widetilde{x}_1'=\lambda'(1)$. Then $q(\widetilde{x}_1')=x_1$, and arguing as in the proof of \relem{fund1}, it follows that $q\circ \lambda'$ is homotopic to $f_i\circ \lambda'$ relative to their endpoints. Thus $\widetilde{x}_2$ and $\widetilde{x}_1'$ belong to the same Nielsen coincidence class of $(q,f_{i})$, which proves the claim since $\widetilde{x}_2$ and $\widetilde{x}_0$ belong to the same Nielsen coincidence class of $(q,f_i)$. Then by \relem{fund1}(\ref{it:fund1b}) and~(\ref{it:fund1c}), we have $\lvert K_i(\widetilde{x}_0)\rvert=\lvert K_i(\widetilde{x}_2)\rvert=\lvert K_i(\widetilde{x}_2')\rvert=\lvert K_i(\widetilde{x}_1)\rvert$ as required. \end{proof} The following lemma is a key ingredient in the process of giving a computable formula for $N(\phi)$, which is the number of essential Nielsen classes. We fix once and for all an orientation of the manifold $X$, and we choose the unique orientation of $\widehat{X}$ for which $q$ preserves orientation. Further, the orientation on $\widehat{X}$ (resp.\ $X$) induces an orientation on any open subset of $\widehat{X}$ (resp.\ of $X$), and hence a local orientation system on $\widehat{X}$ (resp.\ on $X$). The map $q$ carries the local orientation of $\widehat X$ to that of $X$. The fixed point index of maps defined on open sets of $X$, as well as the coincidence index of maps from open sets of $\widehat{X}$ to $X$, is computed with respect to these orientations. \begin{lem}\label{lem:index} Let $X$ be an orientable manifold, $x_0\in \operatorname{\text{Fix}}(\phi)$ be an isolated fixed point of $\phi$, and let $\widetilde{x}_{0}\in \widehat{X}$ and $i\in \brak{1,\ldots,n}$ be such that $q(\widetilde{x}_{0})=x_0$ and $\widetilde{x}_{0}\in \operatorname{\text{Coin}}(q,f_i)$. Then the fixed point index of $\phi$ at $x_0$ is equal to the coincidence index of the pair $(q, f_i)$ at $\widetilde{x}_{0}$. \end{lem} \begin{proof} Since $X$ is a manifold and $\widehat{X}$ is a finite covering of $X$, there exists an open, contractible neighbourhood $U$ of $x_0$ such that the restriction $q \left\lvert_{\widetilde{U}}\right.$ of $q$ to the component $\widetilde{U}$ of $q^{-1}(U)$ that contains $\widetilde{x}_0$ is a homeomorphism. The restriction of the $n$-valued map $\phi$ to $U$ is split, and a splitting is given by $\brak{\overline{f}_1,\ldots ,\overline{f}_n}$, where $\overline{f}_j=f_j\circ (q \left\lvert_{\widetilde{U}}\right.)^{-1}\colon\thinspace U \to X$ for $j=1,\ldots,n$. So $x_0$ is a fixed point of the map $\overline{f}_j$ for some $j\in \brak{1,\ldots,n}$, necessarily $j=i$, since $\overline{f}_{i}(x_{0})=f_{i}(\widetilde{x}_{0})=q(\widetilde{x}_{0})=x_{0}$ and the values $\overline{f}_{1}(x_{0}),\ldots,\overline{f}_{n}(x_{0})$ are pairwise distinct, and by definition, $\operatorname{\text{Ind}}(\phi, x_0)$ is equal to $\operatorname{\text{Ind}}(\overline{f}_j, x_0)$. Since $q\left\lvert_{\widetilde{U}}\right.$ is an orientation-preserving homeomorphism, $\operatorname{\text{Ind}}(\overline{f}_j, x_0)$ is equal to the coincidence index of the pair $(q, f_j)$ at the coincidence $\widetilde{x}_0$, where we use the local orientation in a neighbourhood of $\widetilde{x}_0$ determined by the local homeomorphism $q$, and this proves the lemma.
\end{proof} One consequence of \relem{index} is that if a Nielsen coincidence class of $(q, f_i)$ is sent to a Nielsen fixed point class of $\phi$ then one of these Nielsen classes is essential if and only if the other is. \begin{cor}\label{cor:indexessent} Under the hypothesis of \relem{index}, if $C$ is a Nielsen coincidence class of the pair $(q, f_i)$ and $\widetilde{x}_0 \in C$, then the coincidence index of $C$ is equal to $\lvert K_i(\widetilde{x}_0)\rvert$ times the index of the fixed point class $q(C)$ of $\phi$. \end{cor} \begin{proof} Let $C$ be a Nielsen coincidence class of the pair $(q, f_i)$ for some $i\in \brak{1,\dots,n}$. By \relem{index}, the points of $C$ that lie in $q^{-1}(q(\widetilde {x}_0))$ all have the same coincidence index for the pair $(q, f_i)$, this index being equal to $\operatorname{\text{Ind}}(\phi, q(\widetilde{x}_0))$. By \relem{auxiV}, the cardinality of the set $C\cap q^{-1}(q(\widetilde {x}_0))$ is equal to $\lvert K_i(\widetilde{x}_0)\rvert$. If $\widetilde{x}_1$ is another point of $C$ (and so is Nielsen equivalent to $\widetilde{x}_0$), the same conclusions hold, and similarly the cardinality of the set $C\cap q^{-1}(q(\widetilde {x}_1))$ is equal to $\lvert K_i(\widetilde{x}_0)\rvert$, and all of the points of this set have the same coincidence index for the pair $(q, f_i)$. Since the index of $C$ is the sum of the indices over the elements of the class $C$, which we can assume to be finite, the result follows. \end{proof} For results related to \relem{index} and \reco{indexessent}, and for further results of a similar nature, see~\cite{Je,Moh}. If $s\in \brak{1,\ldots,r}$ and $i\in I_{0}$, let $m_{i,s}= \lvert L_i\rvert/\lvert K_i(\widetilde{x}_s)\rvert$, where $\widetilde{x}_s$ is an element of one of the Nielsen coincidence classes of $(q,f_i)$ that belongs to the equivalence class $C_s$. This quantity is the number of Nielsen coincidence classes of $(q,f_i)$ in $C_s$ that are sent to the same Nielsen fixed point class of $\phi$ under the map $q$. Observe that if $\lvert L_i\rvert=1$ then $m_{i,s}=1$ for all $s\in \brak{1,\ldots,r}$. We now come to the proof of the main result of this section. \begin{proof}[Proof of \reth{form}] Let $i_j \in \mathbb{O}_{j}$, where $j$ runs over the set $I_0$ defined just before \relem{auxil}. From \relem{auxil0}, the image by $q$ of $\operatorname{\text{Coin}}(q,f_{i_j})$ is independent of the choice of representative $i_j$ in $\mathbb{O}_{j}$. Since the action of $L_{i,i}'$ on $\brak{1,\ldots,n}$ is free, $\lvert L_{i,i}'\rvert= \lvert L_i\rvert=1$, which implies that the map induced by $q$ between the Nielsen coincidence classes of $(q,f_i)$ and the Nielsen fixed point classes of $\phi$ is injective for all $i\in \brak{1,\ldots,n}$. Further, if $j,j'\in I_0$ are distinct then $q(\operatorname{\text{Coin}}(q,f_{i_j}))\cap q(\operatorname{\text{Coin}}(q,f_{i_{j'}}))=\ensuremath{\varnothing}$ by \relem{auxil0}, and we conclude that the map between the union of the Nielsen coincidence classes of the pairs $(q,f_{i_j})$, where $j\in I_0$, and the Nielsen fixed point classes of $\phi$ is injective too. This map is also surjective by \repr{nielsen0}(\ref{it:coinfix}) and \relem{nielsenclasses}. By \reco{indexessent}, a coincidence class of $(q,f_{i_j})$ is essential if and only if its image under $q$ is essential. From this, it follows that $N(\phi)$ is equal to $\sum_{j\in I_0} N(q,f_{i_j})$.
\end{proof} If the space $X$ is a non-orientable manifold, the situation is more complex, and it is not clear for the moment how to obtain a formula similar to that of \reth{form} in this case. In the case $n=2$, the hypothesis of that theorem on the action of $L_i$ is always satisfied. \begin{cor}\label{cor:nielsenphi} Let $X$ be an orientable compact manifold without boundary and $\phi\colon\thinspace X \multimap X$ a non-split $2$-valued map. Then $N(\phi)=N(q, f_1)=N(q, f_2)$. \end{cor} \begin{proof} Since $\phi$ is non-split, $L\cong L'\cong \ensuremath{\mathbb Z}_{2}$, there is a single orbit $\mathbb{O}_{1}=\brak{1,2}$, and $\lvert L_{1}\rvert=\lvert L_{2}\rvert=1$. Then by \reth{form}, $N(\phi)=N(q, f_j)$ for all $j\in \brak{1,2}$. \end{proof}
\section{Data analysis} \label{sec:data} \subsection{Calibration} Accurate calibration is the key to the useful application of C-BASS data. It is essential to be able to calibrate the absolute intensity (temperature) scale, the relationship between polarized and unpolarized intensity, the absolute polarization angle, and the cross-polarization response of the instrument. Tau A is by far the brightest polarized source that is unresolved at C-BASS resolution, and is visible from both observing sites. It therefore provides our primary astronomical calibration source. Observations of other bright calibrators such as Cas A (for intensity only) are also used when Tau A is not visible. Observing Tau A for long continuous periods, during which the polarization angle rotates due to parallactic rotation, allows us to measure and hence correct for the non-orthogonality of the nominal $Q$ and $U$ channels. Observations of Tau A also provide the primary flux-density calibration of the data. Converting this to a temperature scale requires knowledge of the effective area of the antenna, or equivalently of its beam pattern. We use a detailed physical model of the antenna to construct a full-sky beam pattern using the GRASP physical optics package, which is verified with comparison measurements of the main beam and sidelobes over a wide range of angles \citep{Holler2011}. Between primary calibration observations, the gain and polarization angle response of the instrument is tracked using a noise diode. A noise diode signal is split and injected into both circular polarization channels immediately after the linear-to-circular converter, using $-30$~dB coaxial couplers. The diode is temperature stabilized in order to provide a fixed-amplitude reference signal in both intensity and polarization. The noise diode is switched on for a few seconds at the beginning of each scan, which provides a gain measurement on a timescale of minutes. It provides a constant signal in both the $I$ and $Q$ channels (in instrument co-ordinates). Phase variations between LCP and RCP in the subsequent signal chains result in some of the noise diode signal appearing in the instrumental $U$ channel. The polarization data are rotated in post-processing to put the noise diode signal wholly back into instrumental $Q$. The absolute polarization angle will ultimately be fixed by measurements, using the C-BASS South telescope, of a ground-based polarized calibration source, whose polarization angle can be set to $\sim 0.1\degr$ accuracy. Gains of both the intensity and polarization data derived from the noise diode are interpolated to provide a continuous relative gain correction across the entire data set. The absolute flux-density scale is set from observations of Tau A, corrected for opacity variations between the elevation of observation of Tau A and the elevation of the survey scans. Since the noise diode is effectively a source of 100\% polarization (perfectly correlated between RCP and LCP), it can be used to transfer the astronomical intensity calibration to the polarized intensity calibration, so that measurements of $I$, $Q$ and $U$ are on the same scale. The opacity is monitored by sky dip observations that are done periodically throughout the survey observations. The telescope is scanned between elevations 60\degr\ and 40\degr\ at a fixed azimuth, providing a change in airmass of about 0.4 (the airmass varies as the cosecant of the elevation, and $1/\sin 40\degr - 1/\sin 60\degr \approx 0.40$), which gives a change in background temperature of about 1.5\,K.
The opacity is monitored by sky dip observations that are done periodically throughout the survey observations. The telescope is scanned between elevations 60\degr\ and 40\degr\ at a fixed azimuth, providing a change in airmass of about 0.4, which gives a change in background temperature of about 1.5\,K. This signal is fitted to a cosec(elevation) law to derive a zenith sky temperature and hence a zenith opacity. Opacity observations are not made below elevation 40\degr\ to avoid contamination from ground pick-up. Opacity corrections are typically of order 1 per cent or less. Pointing calibration is determined from cross-scans of bright radio sources, to which a beam model is fitted to obtain azimuth and elevation offsets. These are then used to fit a pointing model incorporating collimation, axis-misalignment and flexure terms. Pointing residuals are in the range of a few arcmin and are not expected to be a significant issue in data analysis. \subsection{Flagging and data correction} Given the relatively high temperature sensitivity of \mbox{C-BASS} (NET $\sim 2$\,mK\,s$^{1/2}$) compared to the brightness of the sky (several K in the Galactic plane), the C-BASS time-ordered data are frequently signal-dominated rather than noise-dominated. This complicates the removal of non-astronomical signals from the data. For example, it is not possible to flag for sporadic radio-frequency interference (RFI) simply using an amplitude clip, as a threshold low enough to eliminate significant RFI would also flag much true emission in the sky. Instead we use a sky model that is interpolated onto the time-ordered data stream and subtracted. Discrepant events can then be detected and flagged. Very small pointing errors during the crossing of bright and/or compact sources can still generate significant residuals, so RFI flagging is disabled for bright parts of the sky model. RFI that is coincident with bright emission has a proportionally smaller effect on the final map, and the very high level of redundancy in the C-BASS observations, with each sky pixel being observed dozens of times, means that any residual contamination is effectively washed out in the final map. The sky model used for RFI removal is initially made using a crude RFI cut, and progressively updated with more refined edits of the time-ordered data. The other main non-astronomical component of the data is ground pick-up, which appears as a clear pattern repeating with azimuth, and varies on timescales of many days with changes in temperature and emissivity of the ground. As with RFI removal, a sky model is used to subtract the bulk of the sky signal from the time-ordered data, and regions of high sky brightness are excluded completely. The remaining data are averaged into azimuth bins, constructing a ground profile for every day. These profiles are then subtracted from the data before map-making. This procedure also removes fixed RFI, such as from fixed radio links and geostationary satellites.
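A minimal sketch of this azimuth-template subtraction (illustrative only; the bin width, the use of a median, and the masking convention are assumptions, not pipeline specifics):
\begin{verbatim}
import numpy as np

def subtract_ground(az, tod, sky_model, nbins=360, bright=None):
    """One day's ground-profile removal: subtract the interpolated sky
    model, average the residual into azimuth bins (excluding samples
    flagged as bright), and remove the binned template.  Assumes every
    azimuth bin is populated."""
    resid = tod - sky_model                 # ground + noise (+ fixed RFI)
    use = ~bright if bright is not None else np.ones(az.size, bool)
    b = np.minimum((az % 360.0 * nbins / 360.0).astype(int), nbins - 1)
    profile = np.array([np.median(resid[use & (b == i)])
                        for i in range(nbins)])
    return tod - profile[b]                 # data minus ground template
\end{verbatim}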
\subsection{Mapping} Although the receiver has been designed to suppress $1/f$ noise in both intensity and polarization as much as possible, there are long-term variations in background level, and residual atmospheric and ground-spill emission, that are still present in the time-ordered data. Typical $1/f$ knee frequencies in real data are around 0.1 -- 0.2 Hz. While drifts longer than a complete azimuth scan can be filtered from the time-ordered data, shorter drifts will appear in maps as stripes along the scan directions. However, it is possible to solve for a good approximation to the true sky map in the presence of drifts, using the redundancy introduced by the repeated coverage of every pixel in the sky many times in the total time stream. Many mapping codes have been developed to solve this problem in the context of CMB observations \citep[e.g.,][]{Ashdown2007}, either by explicitly modelling the drift signal or by solving the map-making equation using the full noise statistics of the data. We use a destriping mapper, {\sc Descart} \citep{Sutton2010}, which models the time-ordered data as consisting of a true sky signal $s_p$ that depends on the pointing in celestial coordinates, plus an offset series consisting of a set of constant values $a_i$, plus stationary white noise $w_t$, i.e., $$ d_t = P_{tp}s_p + F_{ti}a_i + w_t, $$ where $P_{tp}$ is the pointing matrix that gives the telescope pointing direction $p$ at each time sample, and $F_{ti}$ defines the timebase on which the offsets vary. For a well-sampled data set it is possible to solve for the offset vector $a$, which {\sc Descart} does using a conjugate gradient method. The offsets are then subtracted from the data, leaving a clean time-ordered data set with only white noise, which can be mapped by binning into sky pixels.
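In this notation the maximum-likelihood offsets satisfy $(F^{T}ZF)\,a = F^{T}Z\,d$, where $Z = I - P(P^{T}P)^{-1}P^{T}$ projects out the current best-fit binned sky. The sketch below shows this solve in simplified form (illustrative only; it assumes uniform white noise, no offset priors, and that every sky pixel is observed, none of which is required by the real solver):
\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def destripe(d, pix, off, npix, noff):
    """Solve (F^T Z F) a = F^T Z d for the offsets a, then bin the
    cleaned data.  pix/off give the sky-pixel and offset index of each
    time sample, implicitly defining P and F."""
    hits = np.bincount(pix, minlength=npix)
    def Z(v):                      # v -> v - P (P^T P)^{-1} P^T v
        m = np.bincount(pix, weights=v, minlength=npix) / hits
        return v - m[pix]
    A = LinearOperator((noff, noff),
                       matvec=lambda a: np.bincount(off, weights=Z(a[off]),
                                                    minlength=noff))
    # The system is singular (a constant offset is degenerate with the
    # map zero level); CG converges to one valid solution regardless.
    a, _ = cg(A, np.bincount(off, weights=Z(d), minlength=noff))
    clean = d - a[off]             # destriped time-ordered data
    return np.bincount(pix, weights=clean, minlength=npix) / hits
\end{verbatim}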
\section{Instrument design} \label{sec:design} \subsection{Overview} The two C-BASS systems, north and south, have been designed to produce a single unified survey, and have many features in common. However, there are some significant differences in implementation between the two systems, some forced by practical constraints, and others due to improvements in technology and lessons learned between the northern system, which was designed first, and the southern system. The two telescopes (see Figure \ref{fig:telescopes_picture}) are similar in size but differ in numerous details. The northern telescope was donated to the project by the Jet Propulsion Laboratory, having been designed as a prototype for an array element for the Deep Space Network \citep{Imbriale2004}. It has a 6.1-m single-piece reflector with focal ratio $f/D = 0.36$. The southern telescope was donated by Telkom SA to SKA South Africa and was originally designed for the ground segment of a low-earth-orbit telecommunications satellite constellation. It has a segmented 7.6-m primary with twelve radial panels, and also has a focal ratio of $f/D=0.36$. However, since the same area of the primary is illuminated as on the northern antenna, i.e. a 6.1-m diameter, the effective focal ratio of the southern antenna is 0.46. This difference results in our having to use different optical configurations for the two telescopes -- the northern antenna uses Gregorian optics, while the southern antenna uses Cassegrain optics. Nevertheless, the two antennas have very well-matched beams \citep{Holler2011}. The northern receiver is an all-analogue system \citep{king2014}, while the southern receiver (Copley et al., in prep.) implements the same architecture with a digital back-end that also provides spectral resolution within the band. \subsection{Optics} A total-power scanning telescope is vulnerable to scan-synchronous systematics, i.e., spurious signals appearing in the time-ordered data at the same frequency as astronomical signals. The most obvious cause of such contamination is pick-up of the ground and other non-astronomical sources of radiation in the sidelobes of the antenna. To mitigate this, we have designed the optics to minimize the far-out sidelobes as much as possible. This is achieved by designing an optical system with minimal blockage and scattering, and very low edge illumination. Full details of the optical design are given by \citet{Holler2011}. Given that we only had on-axis telescopes available, we were constrained to use a blocked aperture, rather than an off-axis unblocked design. The secondary mirror blockage results in unavoidable near-in sidelobes, which can however be quite accurately modelled and measured, and hence corrected for in the map analysis. Far-out sidelobes were minimized by having the secondary mirror supported on a transparent dielectric material rather than using metal struts. This also has the effect of maintaining the circular symmetry of the optics and thus minimizing cross-polarization. We also used a feed horn with very low sidelobes, which minimizes direct coupling between the feed and the ground when the telescope is pointed to low elevations. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{figures/CBASS-N.jpg} \includegraphics[width=0.45\textwidth]{figures/CBASS-S.jpg} \caption{{\it Top}: The C-BASS North telescope, located at the Owens Valley Radio Observatory in California, U.S.A. {\it Bottom}: The C-BASS South telescope, located in the Karoo desert, South Africa. The weather shield around the receiver is removed in this image, showing the lower part of the feed horn and cryostat.} \label{fig:telescopes_picture} \end{figure} The feed is a profiled corrugated horn that generates HE12 modes in a cosine-squared section, which are phased up with the dominant HE11 mode in a cylindrical final section, resulting in a beam pattern with very low sidelobes and cross-polarization. In both telescopes the feed is well forward of the dish surface, and the entire receiver assembly is mounted above the dish surface. The feed-to-subreflector distance is less than 1\,m in each case, which allows the subreflector to be mounted off the receiver assembly using a structure made of Zotefoam Plastazote, a nitrogen-blown polyethylene foam. This foam has very low dielectric constant and RF losses, and allows the subreflector to be supported without the use of struts that would cause scattering and break the circular symmetry of the antenna. To minimize far-out sidelobes and hence reduce ground pick-up, the northern telescope has absorptive baffles around the primary and secondary mirrors. The primary baffle intercepts radiation that would otherwise spill over the side of the dish to the ground, while the secondary baffle reduces direct radiation from the feed to the sky/ground. Although these baffles increase the temperature loading on the receiver and contribute to the system temperature, they significantly reduce the scan-synchronous ground pick-up. The southern telescope has a larger (7.6-m) primary and so, when illuminated to produce the same beam size as the 6.1-m northern telescope, has extremely low edge illumination and negligible spillover lobes. The Cassegrain design of the southern telescope means that a baffle around the secondary mirror is not possible. Even better rejection of ground pick-up could be achieved by surrounding the telescopes with a reflecting ground screen that shields the horizon. This would mean that the environment seen by the telescope is all at the temperature of the sky, which is around two orders of magnitude colder than the ground. Unfortunately the large size of ground screen required to shield the telescopes and still allow access to a reasonable range of elevations on the sky was too expensive to build. \subsection{Radiometer and polarimeter} The C-BASS receivers \citep[][Copley et al., in prep.]{king2014} measure both intensity and linear polarization.
The intensity measurement uses a continuous-comparison radiometer, which compares the power received by the antenna to a stabilized load signal, using the same gain chain for both signals so that gain instabilities in the electronics can be effectively removed. The same basic design has been used in previous instruments such as the {\it Planck} Low Frequency Instrument \citep{Bersanelli2010}. In this design, a four-port hybrid is used to form two linear combinations of the feed and reference signals, which are then both amplified, before being separated with a second hybrid and the powers of each signal detected and differenced. Gain fluctuations in the amplifiers affect both feed and reference signals equally, and are therefore cancelled out. This cancellation is continuous and does not rely on a switching frequency, and is more efficient than a Dicke switch \citep{Dicke1946}, in which half the integration time is spent looking at the reference load. To protect against gain fluctuations in the detectors, which come after the sky and load signals have been separated, phase switches are introduced into the two gain arms. A single ideal 180-degree phase switch in one arm will cause the feed and reference signals to swap between the two detectors, allowing cancellation of detector gain differences. Non-ideal performance of the phase switches (e.g., different gains in the two phase states) is cancelled out by placing phase switches in both arms, and cycling between all four states of the two switches. Polarization is measured by taking the complex correlation of the right and left circular polarizations, which yields $Q$ and $U$ directly as the real and imaginary parts of the correlation: \begin{eqnarray} \langle |E_R|^2 + |E_L|^2 \rangle &=& I \\ \langle E_R E_L^* \rangle &=& (Q + iU)/2 \\ \langle E_L E_R^* \rangle &=& (Q - iU)/2 \\ \langle |E_R|^2 - |E_L|^2 \rangle &=& V \end{eqnarray} where complex amplitudes $E_{R,L} = (E_x \pm iE_y)/\sqrt{2}$ multiply the propagator $\exp[i(kz - \omega t)]$ \citep{Hamaker1996b}. This means that $Q$ and $U$ are measured simultaneously and continuously, without needing any polarization modulation or physical rotation. This is more accurate than taking either the difference in power of the individual linear polarizations, or correlating linear polarizations, both of which require subtracting quantities involving the total intensity in order to obtain the much smaller linear polarization signal. Intensity fluctuations in the right and left channels from the unpolarized atmospheric background, and from the low-noise amplifiers, are uncorrelated and appear in the $Q$ and $U$ measurements only as noise terms. Stokes $V$ can in principle be obtained from the difference of the intensities in right and left circular polarization (Eqn. 4). However, astronomical circular polarization is expected to be extremely small, and accurate measurement of $V$ would require very precise calibration of the individual intensity measurements. In practice the $V$ signal is used as a check of the relative calibration of the intensity channels.
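These relations are easy to verify numerically. In the following sketch (an arbitrary, fully polarized test signal of our own construction, not C-BASS data) the complex correlation recovers $Q/I = \cos 2\psi$ and $U/I = \sin 2\psi$ for a field linearly polarized at angle $\psi$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
s = rng.normal(size=1_000_000)              # common real amplitude
psi = np.deg2rad(20.0)                      # polarization angle (arbitrary)
Ex, Ey = np.cos(psi) * s, np.sin(psi) * s   # fully polarized linear field
ER = (Ex + 1j * Ey) / np.sqrt(2)            # circular-basis amplitudes
EL = (Ex - 1j * Ey) / np.sqrt(2)

I = np.mean(np.abs(ER)**2 + np.abs(EL)**2)
QU = 2 * np.mean(ER * np.conj(EL))          # = Q + iU
print(QU.real / I, QU.imag / I)             # -> cos(2 psi), sin(2 psi)
\end{verbatim}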
\subsection{Cryogenic receivers and analogue electronics} The receivers for the two C-BASS telescopes are similar but differ in some significant details (\citealp{king2014}, Copley et al., in prep.). The cryostat bodies are very similar, and both use two-stage Gifford-McMahon coolers. The northern receiver uses a Sumitomo Heavy Industries (SHI) SRDK-408D2 cold head, which cools the second stage to 4~K. The southern receiver uses an Oxford Cryosystems Coolstar 6/30 cold head, which cools to 10~K. The southern cold head does not reach such a cold base temperature but uses significantly less compressor power (3\,kW vs 9\,kW for the SHI system). Both receivers use the same design of corrugated feedhorn. The main body of the feedhorn is at ambient temperature and is bolted directly to the cryostat body. The upper section of the feedhorn also provides the support for the secondary mirror assembly. The smooth-walled throat section of the horn is machined directly into the first-stage heat shield of the cryostat, and the orthomode transducer (OMT) is mounted onto the second-stage cold plate. The 4-probe OMT \citep{grimesOMT} is connected via coaxial cables to a planar circuit that combines the linearly polarized signals and produces circularly polarized outputs. Coaxial $-30$~dB directional couplers are used to couple in the noise source signal used for calibration. The circularly polarized signals are combined with reference signals in two 180\degr\ hybrids. The reference signals are generated from temperature-stabilized matched loads controlled by an external PID controller, which provide a load temperature stable to better than 1\,mK (see Figure \ref{fig:block-rx}). Both receivers use LNF-LNC4$\_$8A low-noise amplifiers from Low Noise Factory, which provide 40\,dB of gain between 4 -- 8\,GHz with a typical amplifier noise temperature of 2 -- 3\,K. In the southern system the signals then simply leave the cryostat via stainless steel cables. In the northern system there are notch filters that remove ground-based RFI near the centre of the band, reducing the effective bandwidth in polarization from 1 GHz to 499 MHz, and shifting the effective centre frequency to 4.783~GHz. \begin{figure*} \includegraphics{figures/cbass-sysA.pdf} \caption{Simplified block diagram of the C-BASS front end, which is common to C-BASS north and south. Key: OMT = orthomode transducer, L2C = linear to circular converter, $\Sigma, \Delta$ = sum, differencing, BPF = bandpass filter, RCP = right circular polarization, LCP = left circular polarization. L1 and L2 are matched loads. \label{fig:block-rx}} \end{figure*} \subsection{Backends and readout} The two C-BASS receiver systems implement the same signal processing operations to generate the intensity and polarization measurements, but in very different ways (see Figure \ref{fig:block-pol}). The northern system is described in detail in \cite{king2014}. The radiometer and polarimeter functions are implemented by analogue electronics operating on the whole RF band as a single channel. The radiometer uses 180\degr\ hybrids identical to those used in the cryostat to separate out the sky signal from the reference signals, which are then detected with Schottky diodes. Phase switches in the RF signal path cause the sky and reference signals to be alternated between the physical channels, averaging out any gain differences or drifts in the amplifier and detector chain. The data are sampled at 2 MHz following post-detection filtering to 800 kHz bandwidth, and the sky and reference signals are differenced before phase-switch demodulation and integration to 10 ms samples. For the polarimeter operation, the separated sky signals are correlated using a complex analogue correlator consisting of 90\degr\ hybrids and detector diodes. Again phase switching is used to ensure gain differences do not bias the correlated outputs.
The detector diode outputs are filtered, sampled, synchronously detected at the phase switch frequencies, and filtered and averaged down to 10~ms samples in an FPGA. The southern system, by contrast, is fully digital. After further gain and bandpass filtering, the four RF signals from the cryostat are downconverted using a 5.5\,GHz local oscillator to an IF band of 0 -- 1\,GHz. The lower sideband is used to ensure that images of strong out-of-band signals from geostationary satellites in the range 3.5 -- 4.5 GHz are not aliased into the IF bands. The IF signals are then split and filtered to give 0 -- 0.5 and 0.5 -- 1\,GHz IFs. Two identical digital backends are then used to process each of these two frequency bands. Each one consists of a Roach FPGA board and two iADC cards \citep{2016JAI.....541001H}. The iADC cards provide dual-channel sampling at 1 GHz and 8-bit resolution. The lower IF band is sampled in its first Nyquist zone, while the upper IF band is directly sampled in the second Nyquist zone with no further analogue downconversion. The Roach board uses a Xilinx Virtex~5 FPGA to carry out the signal processing tasks. The incoming signals are first channelised using a polyphase filter bank (PFB) into 64 frequency channels of bandwidth $500/64 = 7.8125$~MHz. The PFB provides better than 40 dB of isolation between different channels. The signals are then combined on a channel-by-channel basis to produce the radiometer and polarimeter outputs. A bank of complex gain corrections allows phase and amplitude variations across the band due to the analogue part of the signal path to be calibrated out. The sum and difference of the pairs of input channels yields the RCP and LCP signals and their respective reference load signals. These are squared and averaged to provide measures of the power in the respective sky and reference channels. Unlike the northern system, the sky and reference signals are not differenced in the real-time system but stored separately, and only differenced in the off-line software. This allows us to assess the degree of low-frequency drifts in the raw data, which are then cancelled out when the sky and reference are differenced. The LCP and RCP voltage signals are complex-correlated to produce the polarization outputs $Q$ and $U$. The data are again averaged to 10~ms samples before being read out and stored on disk by the control system.
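Schematically, the per-channel arithmetic amounts to the following (a simplified sketch; accumulation, quantization and readout details are omitted, and the variable names are ours rather than the firmware's):
\begin{verbatim}
import numpy as np

def channel_products(a, b, c, d, g):
    """One accumulation for one PFB channel.  a, b: complex outputs of
    the hybrid carrying RCP and load L1; c, d: likewise for LCP and L2;
    g: four per-channel complex gain corrections (assumed calibrated)."""
    a, b, c, d = g[0] * a, g[1] * b, g[2] * c, g[3] * d
    rcp, ref1 = (a + b) / 2, (a - b) / 2    # undo the 180-degree hybrid
    lcp, ref2 = (c + d) / 2, (c - d) / 2
    # sky and reference powers, stored separately, differenced off-line
    powers = [np.mean(np.abs(x)**2) for x in (rcp, ref1, lcp, ref2)]
    QU = 2 * np.mean(rcp * np.conj(lcp))    # complex correlation -> Q + iU
    return powers, QU.real, QU.imag
\end{verbatim}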
\begin{figure*} \includegraphics[width=0.8\textwidth]{figures/cbass-sysN.pdf} \vspace{5mm} \includegraphics[width=\textwidth]{figures/cbass-sysS.pdf} \caption{Block diagrams of the C-BASS radiometer/polarimeter systems. {\it Top}: C-BASS north analogue backend, {\it Bottom}: C-BASS south digital backend. Key: $\phi$ = phase switch modulation/demodulation, $\Sigma, \Delta$ = sum, differencing, ADC = analogue to digital converter, Acc = accumulator, CMUL = complex multiply, BPF = band-pass filter, LPF = low-pass filter, Ch = channeliser, $G\rm e^{i\phi}$ = complex gain correction, Sq = square detector (evaluates $VV^*$ on complex voltages $V$).\label{fig:block-pol}} \end{figure*} \section{CMB Foregrounds} \label{sec:mechanisms} In this section we summarise the properties of the main foreground components that are known, and review how the new C-BASS data will help with the problem of cleaning foregrounds from CMB observations. We focus on `low' frequencies ($\lesssim 100$\,GHz) where synchrotron, free-free, AME and CMB emissions dominate. At high frequencies ($\gtrsim 100$\,GHz), thermal dust dominates the sky and has been mapped in detail by new observations from \emph{Planck} \citep{Planck_Int_XIX,Planck_Int_XXII}, which complement the data from low-frequency surveys such as C-BASS. Fig.~\ref{fig:frequency_spectra} shows the frequency spectra of diffuse foregrounds in intensity and polarization, based on the modelling by \citet{Planck2015_X}. At very low frequencies ($<1$\,GHz), synchrotron radiation invariably dominates due to its steep spectrum, while at higher frequencies ($\approx 10$--100\,GHz), free-free and AME are stronger. In polarization, synchrotron dominates up to frequencies of $\approx 80$\,GHz or higher \citep{Dunkley2009a,Planck2015_X,Krachmalnicoff2015}. These typical spectra show that the diffuse components of radiation emit over a similar range of frequencies, with spectra that are hard to distinguish from one another. In particular, at frequencies around the peak of the CMB spectrum (150 -- 250 GHz) the spectrum of the CMB is very similar to that of synchrotron emission. Strong spectral lines (e.g., CO and HCN rotational transitions) can also have a significant impact on the broad-band intensities measured by the CMB spacecraft \citep[e.g.,][]{Planck2013_XIII}. The broad-band detectors used in most CMB experiments cannot distinguish between line emission and the surrounding continuum, so both components have to be modelled to give the expected signal in a given frequency channel. While the total foreground signal is tightly constrained by the observations, the decomposition into components is currently quite uncertain, with different model assumptions capable of changing the ratio of synchrotron to AME power at 30\,GHz by a factor of two \citep{Planck2015_XXV}. Of course this is one of the main motives for surveys such as C-BASS, which as we demonstrate in Section~\ref{sec:impact} will substantially improve the situation. \subsection{Synchrotron Emission} \label{sec:synchrotron} Synchrotron radiation is the dominant low-frequency foreground and will be the one most constrained by C-BASS. It is produced by cosmic-ray leptons (electrons and positrons) spiralling in the Galactic magnetic field \citep{Rybicki_book}. The radio spectrum of a single component of synchrotron radiation is well approximated by a power law over a wide range of frequencies, with brightness temperature $T_B(\nu) \propto \nu^{\beta_{S}}$, which derives from a power-law distribution of cosmic-ray energies, $N(E) \propto E^{2\beta_{S}+3}$. Since the local cosmic-ray lepton energy spectrum is extremely smooth in log-energy space \citep{AMS2_2014}, and the frequency range of interest 1.5--150\,GHz maps to only one decade of particle energy, the basic synchrotron spectrum is also extremely smooth. However, both intrinsic and line-of-sight effects can cause the spectrum to deviate from a simple power law, complicating the process of fitting and removing synchrotron emission from CMB maps \citep[e.g.][]{2017MNRAS.472.1195C}.
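A common way to quantify such deviations is a power law with a running index; the brief sketch below (the parameterization is a standard one and the numbers are purely illustrative, not fitted C-BASS results) shows how even mild curvature steepens the effective index measured between 0.408 and 23\,GHz:
\begin{verbatim}
import numpy as np

def sync_TB(nu, A, beta, C, nu0=1.0):
    """Curved synchrotron spectrum, frequencies in GHz:
    T_B = A (nu/nu0)^(beta + C log10(nu/nu0))."""
    return A * (nu / nu0) ** (beta + C * np.log10(nu / nu0))

nu1, nu2 = 0.408, 23.0
T1, T2 = sync_TB(np.array([nu1, nu2]), 1.0, -2.7, -0.05)
beta_eff = np.log10(T2 / T1) / np.log10(nu2 / nu1)
print(beta_eff)   # ~ -2.75: steeper than the fiducial -2.7
\end{verbatim}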
Both the observed radio spectrum \citep[e.g.,][]{deOliveiraCosta2008,Kogut2012}, and direct measurement of the local cosmic-ray lepton spectrum \citep[e.g.,][]{PAMELA2011,AMS2_2014} show significant spectral curvature at a few GHz, corresponding to particle energies of $\sim 5$\,GeV,\footnote{These energies are near those strongly affected by solar modulation of the cosmic ray spectrum, but detailed modelling by e.g., \citet{Strong2011} and \citet{DiBernardo2013}, shows that the observed curvature is not solely due to solar modulation.} giving a net change in the spectral index $\beta_S$ from about $-$2.6 at a few hundred MHz to about $-$3.1 above 10\,GHz \citep[e.g.,][]{Strong2011}. Although spectral curvature in synchrotron radiation is often attributed to radiative energy losses, such losses in the interstellar medium cannot explain a spectral break at this energy, and hence it must be attributed to a feature in the ill-understood injection mechanism that supplies the Galactic cosmic-ray population. In addition to these causes of intrinsic spectral curvature, it is expected that on long lines of sight through the Galaxy, i.e., at low Galactic latitudes, the superposition of regions with different spectral indices will tend to flatten the observed synchrotron spectrum at higher frequencies. Observations at very low frequencies will thus tend to underestimate the synchrotron contribution at frequencies near to the foreground minimum unless this curvature is taken into account. We can thus expect that multiple measurements of the synchrotron component across the microwave band will be required in order to determine the spectral shape to the accuracy required for future $B$-mode observations. Our knowledge of the spectrum of intensity of Galactic synchrotron radiation comes primarily from sky surveys at 0.4\,GHz \citep{Haslam1982}, 1.4\,GHz \citep{Reich1986}, 2.3\,GHz \citep{Jonas1998}, and 23 GHz \citep[WMAP;][]{Bennett2003b,Gold2011}; see Table~\ref{tab:surveys}. Maps of the spectral index across the sky based on radio total intensity \citep{Lawson1987,Reich1988,Davies1996,Platania1998,Platania2003,Bennett2003b,Dickinson2009,Gold2011} and microwave polarization from {\it WMAP} \mbox{\citep{Fuskeland2014,Vidal2015}} and S-PASS \citep{SPASS2018} show variations in the range $-4.4 < \beta < -2$. The flattest spectra are found along the Galactic plane, and are probably due to free-free emission (and absorption, at the lowest frequencies). Apparent large-amplitude variations in spectral index are also found in the regions of weakest synchrotron emission at high latitudes, which are most susceptible to the artefacts discussed in Section~\ref{sec:surveys}. The most reliable maps tend to show the smallest-amplitude variations. Nevertheless, after correction for the free-free contribution there is good evidence for genuine spatial variations of intensity spectral index, with slightly flatter spectra along the Galactic plane \citep{Planck_Int_XXIII} and in the `haze' near the Galactic centre \citep{Dobler2008,Planck2013_IX}. Individual supernova remnants (SNR) and pulsar wind nebulae (PWN), usually taken as the major sources of Galactic cosmic rays, typically have flatter spectra than the diffuse synchrotron, from $\beta_S= -2$ to $-2.3$ for PWN and $-$2.4 to $-$2.8 for shell SNR \citep{Green2014,Planck_Int_XXXI}. Polarized spectral indices will not necessarily be the same as in intensity, due to the summing over different polarization angles within the volume probed by the beam.
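This is easy to demonstrate with a two-component toy model (all amplitudes, indices and angles below are arbitrary choices for illustration):
\begin{verbatim}
import numpy as np

nu = np.array([2.3, 28.4])                 # GHz
A    = np.array([1.0, 0.5])                # amplitudes at 2.3 GHz
beta = np.array([-2.8, -3.2])              # spectral indices
chi  = np.deg2rad([0.0, 70.0])             # polarization angles
p = 0.7                                    # intrinsic polarized fraction

T = A * (nu[:, None] / 2.3) ** beta        # per-component spectra
P = (p * T * np.exp(2j * chi)).sum(axis=1) # complex polarization, summed
bI = np.log(T.sum(1)[1] / T.sum(1)[0]) / np.log(nu[1] / nu[0])
bP = np.log(abs(P[1]) / abs(P[0])) / np.log(nu[1] / nu[0])
print(bI, bP)   # about -2.9 in intensity vs -2.7 in polarization
\end{verbatim}
The partial cancellation of the misaligned polarization vectors weights the two components differently in $P$ and in $I$, so the two effective indices differ.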
\citet{SPASS2018} observe that the average spectral index between 2.3 GHz and the 23 -- 33~GHz {\it WMAP} and {\it Planck} bands is $-3.22$, independent of angular scale, but with significant spatial variations that are not simply due to Galactic latitude. These variations will complicate efforts to extrapolate synchrotron contamination to the CMB foreground minimum frequencies. Because synchrotron emission does not dominate the total intensity foreground in the space microwave band ($\sim 20$--$300$\,GHz), attempts at component separation have effectively extrapolated it from the most reliable of the low-frequency templates, i.e. the 408\,MHz survey. This long frequency baseline, and the poorly quantified variable slope and curvature of the spectrum, make this one of the main sources of uncertainty in component separation. The synchrotron-dominated data from C-BASS, at much higher frequency, will substantially reduce this uncertainty (e.g., \citealp{Errard2016}). Further reliable surveys between 5 and 30 GHz would improve the situation even more, as this would tightly constrain measurements of both the spectral index and spectral curvature as a function of sky position. \subsubsection{Loops, spurs and the haze} C-BASS will provide a new look at diffuse Galactic synchrotron and free-free emission. Given its modest resolution and high brightness sensitivity, this will be especially valuable for faint, large-scale structures at intermediate and high Galactic latitude. Of course, the synchrotron total intensity on these scales is mapped with high signal-to-noise ratio at 408\,MHz by \citet{Haslam1982}; however, it is clear from {\it WMAP} and {\it Planck} that more structure is apparent in polarization; in particular, the synchrotron loops and spurs are seen with much higher contrast in the polarization images \citep{Planck2015_XXV}. These features are relatively local, but there may also be a contribution from the Galactic halo. Even the weighted-average {\it WMAP} and {\it Planck} data are not sensitive enough to detect the polarized emission in the faintest regions, but C-BASS will detect it everywhere, and hence address the issue of whether the inter-loop high-latitude emission is a distinct (e.g., halo) component, in which case it may have a discernibly different spectrum, or whether it is produced by numerous overlapping structures similar to the visible loops, but fainter. Of particular interest is the {\it WMAP}/{\it Planck} haze \citep{Planck2013_IX}, identified as excess emission at $\approx 1$\,cm partly coincident with the Fermi $\gamma$-ray bubbles \citep{Dobler2010,Ackermann2014} which appear to delineate a 10-kpc scale bipolar outflow from the Galactic centre. The haze is (presumably) synchrotron emission with a flatter spectral index ($\beta \approx -2.5$) than the rest of the sky ($\beta \approx -3.0$). However, because of its low signal-to-noise ratio in the satellite data, and the uncertainty in foreground separation, it is not clear if the haze is really a distinct component rather than simply a trend to flatter spectral index in the inner Galactic halo, let alone whether it is related to the bubbles \citep[see e.g.,][]{Planck2015_XXV}. Including C-BASS in the component separation analysis should pin down the spectrum of the haze and reveal whether it has a well-defined boundary and to what extent it matches the $\gamma$-ray structures.
\subsubsection{Polarized synchrotron and Faraday rotation} \label{sec:impact-FR} Optically thin synchrotron radiation has an intrinsic polarization of 70--75 per cent, oriented perpendicular to the projected magnetic field in the source region \citep{Rybicki_book}. Although reduced in practice by superposition of different field directions along the line of sight, observed polarization fractions can exceed 30 per cent \citep[e.g.,][]{Vidal2015}. Because these regions may have different spectral indices, polarized and unpolarized spectra may differ, and need to be fitted separately. In principle it should be easier to fit the polarized spectrum, since synchrotron radiation is the dominant polarization foreground below the foreground minimum; but at present this is limited by the low signal-to-noise ratio of the {\it WMAP} and {\it Planck} polarization maps, and also by large-scale systematic differences between the two surveys \citep{Planck2015_X} which indicate residual systematic errors in at least one of them. The C-BASS data will provide the first measurements of the polarized synchrotron emission that are both high signal-to-noise ratio, and not affected by depolarization, across most of the sky. The Galactic magnetic field reveals itself through both Faraday rotation and through the intrinsic polarization of the Galactic synchrotron emission, which is orthogonal to the projected field direction in the plane of the sky. Only a band of a few degrees along the plane in the inner quadrants will suffer large depolarization; C-BASS will give a reliable map of projected magnetic field direction at moderate and high latitudes. These lines of sight probe the local interstellar medium in the plane and the Galactic halo above the spiral arms, and so can provide constraints on the measured tangling of the field on relatively small scales: 1\degr\ corresponds to about 3\,pc for typical structures in the Population I disc, and $\sim 20$\,pc for a 1-kpc scale-height halo. If the halo field is relaxed, the degree of polarization should reach a substantial fraction of the $m_{\rm max} \approx 75$ per cent expected from a uniform $B$-field; if the structure is tangled, the structure function of the polarized pattern will give the angular scale(s) of tangling, while random-walk depolarization will allow us to estimate the number of reversals $N$ on the line of sight via $m \sim m_{\rm max} / \sqrt{N}$; these two approaches give independent estimates of the tangling scale as a fraction of the scale height. It will be illuminating to compare the field revealed by synchrotron polarization with the projected field traced by dust polarization in emission \citep{Planck_Int_XIX} and absorption \citep[e.g.,][]{Heiles2000,Panopoulou2015}, which give us different weighting functions on the line of sight, and, for starlight polarization, an upper limit to the distance. At low latitudes the projected magnetic field is an average along the line of sight, but it still gives information about the field direction and coherence; in fact, modelling of the magnetic field pattern in the disk hinges on accurate assessment of the synchrotron fractional polarization at low latitudes, and is currently limited by our inability to distinguish synchrotron from AME in the Galactic disk \citep{Planck_Int_XLII}. C-BASS data will be combined with polarimetry from the GMIMS HB and S-PASS surveys to yield improved maps of the Faraday rotation of the diffuse Galactic synchrotron polarization, hence probing the Galactic magnetic field.
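The leverage involved is set by the spread in $\lambda^2$, since the polarization angle obeys $\chi = \chi_0 + {\rm RM}\,\lambda^2$. A quick numerical aside (nominal band centres only; the effective frequencies behind the figures quoted below differ slightly):
\begin{verbatim}
import numpy as np

c = 2.998e8
for name, nu in [("GMIMS HB", 1.4e9), ("S-PASS", 2.3e9),
                 ("C-BASS", 5.0e9)]:
    lam2 = (c / nu) ** 2                  # chi = chi_0 + RM * lambda^2
    print(f"{name}: lambda^2 = {lam2:.4f} m^2, "
          f"depolarization for dRM > {np.pi/(2*lam2):.0f} rad/m^2")
\end{verbatim}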
Adding C-BASS doubles the range in $\lambda^2$ compared to GMIMS alone, yielding a corresponding increase in RM precision, while the precision of the intrinsic position angle will be improved by a factor of eight. Discrepancies between RM values derived in-band from GMIMS and in combination with C-BASS will reveal breakdown of the simple $\lambda^2$ law of Faraday rotation, as expected when there is measurable variation of Faraday depth across the beam and/or along the line of sight. Such Faraday dispersion will also be associated with depolarization, and so is expected to be seen only around the borders of regions which are strongly depolarized at the lower frequency, specifically at $|b| \la 30\degr$ in the inner quadrants for GMIMS and over a substantially smaller region for S-PASS \citep{Wolleben2006,Carretti2013}. This requires differential rotation of $\Delta RM \ga \pi/(2\lambda^2)$, i.e. $\ga 36$ and $92 \mbox{$\rm \,rad \,m^{-2}$}$ at 1.4 and 2.3\,GHz respectively. Where GMIMS is depolarized (almost exclusively in the southern hemisphere), we can derive RM from the combination of S-PASS and C-BASS, which increases $\Delta\lambda^2$ by a factor of 5.5 compared to using the intra-band $\Delta\lambda^2$ from S-PASS alone. Similar depolarization at 5\,GHz requires $\Delta RM \ga 440 \mbox{$\rm \,rad \,m^{-2}$}$, and hence such depolarization should be restricted to very low latitudes in the inner Galactic plane ($|\ell| < 50\degr$). This entire region will be observed by C-BASS South, and its 128-channel backend (8\,MHz channels) will allow us to measure RMs up to $10^5 \mbox{$\rm \,rad \,m^{-2}$}$, an order of magnitude larger than even that at the Galactic centre \citep[$6500 \mbox{$\rm \,rad \,m^{-2}$}$, see][]{Vidal2015}. (In this region the synchrotron intensity is high enough that it will be detectable in each channel, except where strongly depolarized.) The RM map gives a clear look at the line-of-sight structure of the field in the Faraday layer. For example, we would like to know whether it varies smoothly or is characterized by abrupt current-sheet transitions \citep{Uyaniker2002}. When tangential to the line of sight, current sheets show up as discontinuities in RM, accompanied by ``depolarization canals''. It will be particularly interesting to compare the Faraday rotation of the diffuse synchrotron emission with that of extragalactic sources and discrete Galactic supernova remnants and pulsars \citep[e.g., ][]{vanEck2011}, which will allow us to constrain models for both the magnetic field geometry and the distribution of emitting regions along the line of sight \citep{Jaffe2011}. \subsection{Free-Free Emission} \label{sec:free-free} Free-free emission due to Coulomb interactions of electrons with ions is produced in individual H{\sc ii} regions and the diffuse warm ionized medium ($T \approx 10,000$~K). The free-free spectrum from a plasma in local thermodynamic equilibrium (LTE) is accurately known \citep{Rybicki_book,Draine_book}; in the optically thin regime it has a near-universal form with spectral index $\beta = -2.1$ at GHz frequencies, slightly steepening ($\Delta \beta < 0.05$) at frequencies of tens of GHz and higher. The steepening slightly increases as plasma temperature falls, but for the relevant temperature range the impact is barely detectable.
In contrast, the transition to the optically thick regime cannot be accurately modelled at the degree-scale resolution of interest here because it depends on the brightness distribution within the beam; fortunately this only becomes a significant issue below $\sim 1$\,GHz, with the brightest H{\sc ii} regions on the Galactic plane showing absorption effects at 408\,MHz and lower. The well-defined spectrum makes free-free emission one of the most stable solutions in component separation analyses, at least for the distinct nebulae dominated by free-free emission up to 100\,GHz and even higher \citep{Planck2014_int_XIV}. In these large H{\sc ii} complexes, C-BASS data will be dominated by free-free emission, which will allow verification of the spectral index and provide constraints on free-free polarization. On the other hand the diffuse high-latitude free-free emission is weaker than other foreground components at all frequencies, making it difficult to separate based on spectral information alone. Although attempts have been made to use H$\alpha$ templates to constrain models of the high-latitude component \citep{Dickinson2003,Finkbeiner2003,Draine_book}, for various reasons this has not proved very accurate \citep{Planck2015_XXV}. Radio Recombination Line (RRL) surveys \citep[e.g.,][]{Alves2015} may also provide an independent and direct tracer of free-free emission. Free-free emission is inherently unpolarized, but low levels of polarization (a few per cent) can be induced by Thomson scattering around the peripheries of H{\sc ii} regions \citep{Rybicki_book}, and locally could be stronger than the synchrotron emission near the foreground minimum ($\nu \approx 70$\,GHz) because of the flatter free-free spectrum; as yet, this has not been detected. As we will see in Section~\ref{sec:impact}, C-BASS will dramatically improve our ability to recover the free-free emission from the Galactic warm ionized medium (WIM), including the faint WIM emission at high Galactic latitudes that is also traced by H$\alpha$. Standard models of the WIM seem to over-predict the radio free-free emission given the observed H$\alpha$ \citep[e.g.,][]{Dickinson2003,Planck2015_XXV}, and a more accurate free-free map allowing detailed point-for-point comparison with reasonable signal-to-noise ratio should help identify the source of the discrepancy, be it unexpectedly low $T_e$, scattering of H$\alpha$ by high-latitude dust, or departures from LTE. Because free-free emission comes primarily from H{\sc ii} regions, which are strongly clustered with the increased star formation in the Galactic plane, free-free emission dominates the narrow Galactic plane in the space microwave band, and is about equal to synchrotron at 5\,GHz, as early C-BASS results have shown \citep{Irfan2015}. Here C-BASS will help recover the spectrum of the subdominant synchrotron emission, which comes from distant regions of the Galactic disk. \subsection{Anomalous Microwave Emission (AME)} \label{sec:ame} Anomalous microwave emission is a component of Galactic emission that is strongly correlated with thermal dust emission but has a frequency spectrum that peaks in the tens of GHz \citep{1996ApJ...460....1K,Leitch1997}; see e.g. \citet{deOliveira-Costa2004,Davies2006,Gold2011,Ghosh2012, Planck_Int_XV} and \citet{2018NewAR..80....1D} for a review.
AME is clearly seen at 10 -- 60\,GHz with a rising spectrum at low frequencies and a steeply falling spectrum at higher frequencies, radically different from the tail of the thermal dust emission, and is very closely correlated with dust emission at IR/sub-mm wavelengths \citep{Planck2015_XXV}. The best example comes from the Perseus molecular cloud, where the spectrum has been accurately determined \citep{Watson2005,Planck2011_XX,Genova-Santos2015b}. A major problem for component separation is that the spectrum is spatially variable, with individual clouds peaking at frequencies spanning at least the range 20 -- 50 GHz \citep{Planck_Int_XV,Planck2015_XXV}. At low latitude we expect superposition of clouds with a range of peak frequencies, so that AME can resemble free-free or synchrotron spectra rather closely: along with the variable synchrotron spectrum this is the second major cause of the large uncertainty in current component separation. Measurements of the polarization of AME are challenging due to the weak signal and difficulties in component separation. Nevertheless, a number of measurements indicate that AME is at most weakly polarized, with upper limits of a few per cent in the space microwave band \citep{Mason2009,Macellari2011,Dickinson2011,Lopez-Caraballo2011,Rubino-Martin2012a,Hoang2013,Planck2015_XXV} and less than 0.5 per cent at lower frequencies \citep{2017MNRAS.464.4107G}. The source of AME remains uncertain. The leading candidate is electric dipole radiation from small spinning dust grains \citep{Draine1998a,Draine1998b}, but another mechanism still in play is `magnetic dust', i.e., magnetic dipole emission due to thermal vibrations of ferromagnetic grains, or inclusions in grains \citep{Draine1999}. Earlier suggestions of hot ($\sim 10^{6}$\,K) free-free emission \citep{Leitch1997} and flat-spectrum synchrotron \citep{Bennett2003b} now seem unlikely due to the peaked spectrum and close correlation with FIR templates. Spinning dust possibly explains the low level of polarization and the narrow range of frequencies at which it is detected. However, \citet{Tibbs2013} and \citet{Hensley2015} cite some properties of AME that do not match expectations for spinning dust, casting serious doubt on this interpretation. By design, the C-BASS frequency is too low for significant AME to be detected over most of the sky, which is a major reason why C-BASS substantially improves the separation of the non-AME components, as the lower space-microwave frequencies can contain both AME and synchrotron emission. If the peaked spectrum seen in examples such as the Perseus molecular cloud is typical, AME should be negligible at 5\,GHz and C-BASS will provide an AME-free template for synchrotron and free-free emission, which in turn will allow clear identification of actual AME emission at space microwave frequencies. With an additional low-frequency measurement that is not contaminated by AME, it is possible to break the degeneracy between synchrotron spectral index and AME amplitude (see Section \ref{sec:impact}). Nevertheless, there may be a few lines of sight where AME is detectable, allowing \mbox{C-BASS} to constrain models of the low-frequency tail of its spectrum; a good example is G353.05+16.90 ($\rho$~Oph West) on $1^{\circ}$ scales, where there may still be appreciable AME at 5\,GHz \citep{Planck2011_XX}. If any of the dust-correlated features so evident in the {\it WMAP} and {\it Planck}-LFI maps are visible in C-BASS, this could imply a radically different emission mechanism from spinning dust.
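As a simple quantitative illustration (a lognormal toy spectrum of assumed width, not the {\sc spdust} model itself), a component peaking at 25\,GHz is strongly suppressed at 5\,GHz even after conversion to Rayleigh-Jeans temperature units:
\begin{verbatim}
import numpy as np

def ame_toy(nu, nu_peak=25.0, width=0.35):
    """Lognormal toy spectrum (flux-density units, nu in GHz); a crude
    stand-in for a spdust-like peaked emissivity, not the model itself."""
    return np.exp(-0.5 * (np.log(nu / nu_peak) / width) ** 2)

for nu in (5.0, 22.8, 30.0):
    S = ame_toy(nu) / ame_toy(25.0)        # relative flux density
    T = S * (25.0 / nu) ** 2               # relative RJ temperature
    print(f"{nu:5.1f} GHz: S = {S:.1e}, T = {T:.1e} (relative to peak)")
\end{verbatim}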
\subsection{Thermal Dust} \label{sec:dust} Interstellar dust grains, with sizes ranging from a few to several hundred nanometres, absorb optical and UV starlight and re-emit via thermal vibrations in the crystal lattice, which excite electric dipole radiation \citep{Draine_book}. This is the dominant foreground above 70\,GHz. Dust emission can be fitted with a modified blackbody, i.e., a Planck spectrum $B(\nu,T_{\rm d})$ multiplied by an emissivity $\propto \nu^{\beta_{\rm d}}$. The latest {\it Planck} fits to the spectrum below 1\,THz \citep{Planck2015_X} give a narrow range around $\beta_{\rm d} \approx 1.53$, with an rms of 0.03 that may be dominated by fitting errors; $T_{\rm d}$ ranges from 15--27\,K, with a mean $\approx 21$\,K and a standard deviation of 2.2\,K. However this model over-predicts the data above 1\,THz, where the best-fit values are $\beta_{\rm d} \approx 1.50$ and $\langle T_{\rm d} \rangle \approx 19.6$\,K \citep{Planck_Int_XXII}. The apparent uniformity of the dust spectrum disguises considerable spatial variation in dust properties. \citet{Planck_Int_XVII} showed that $T_{\rm d}$ is anticorrelated with emissivity at high Galactic latitude, the opposite of what would be expected from variations in starlight intensity, implying significant variations in the UV/optical absorption to FIR emission ratio. There are at least two, and likely more, chemically distinct grain populations \citep{Draine_book}. There are certainly real spatial variations in $\beta_{\rm d}$; for instance, the Small Magellanic Cloud has $\beta_{\rm d} \approx 1.2$ \citep{Planck2011_XVII}. Laboratory-synthesised grain analogues show a range of $\beta_{\rm d}$ and also spectral curvature \citep{Coupeaud2011}, and the observed mm-wave spectrum presumably represents whatever reasonably abundant grain population has the slowest fall-off towards long wavelengths. Polarization of dust emission is due to anisotropic optical properties of the grains and a preferred orientation with respect to the magnetic field. Polarized optical extinction is associated with silicates \citep{Draine_book}, which are also believed to dominate the mm-wave dust emission \citep[e.g.,][]{Planck_Int_XXIX,Fanciullo2015}, and, as expected, the polarization angles seen in emission and absorption are strongly correlated \citep{Planck_Int_XXI}. The intrinsic polarization fraction of thermal dust emission may be around 26 per cent \citep{Planck_Int_XLIV}; as for synchrotron radiation this is reduced by geometric depolarization, but observed polarization can reach 20 per cent, with typical values of $\approx 5$ per cent \citep{Planck_Int_XIX}. Also as for synchrotron radiation, these effects can lead to different spectra in polarization and total intensity, and in fact the polarized spectrum is slightly steeper \citep{Planck_Int_XXII}. The intrinsic complexity of the dust spectrum poses a challenge for observing strategies that concentrate on frequencies above 100 GHz. Although synchrotron emission is below the dust emission in this frequency range, without effective constraints on the synchrotron spectrum, degeneracies between different dust models and residual synchrotron will compromise the accuracy of foreground separation at the levels of precision needed for accurate $B$-mode measurements.
Although C-BASS measures frequencies far from the peak of the dust spectrum, removing these degeneracies in component fitting can lead to improvements in the measurements of the dust parameters through the improved fitting of the other components (see Section \ref{sec:impact}). \section{Potential impact of C-BASS} \label{sec:impact} The C-BASS data are primarily intended to improve foreground separation for CMB analysis by breaking degeneracies that currently exist in the component separation problem. Here we make some estimates of the degree of improvement in the accuracy of CMB and foreground component parameters that can be expected from C-BASS data. We have simulated the component separation process for a variety of mock data sets representing typical levels of foreground contamination in pixels across different regions of the sky, using the properties of existing or planned sky surveys, with and without C-BASS. We assess the ability to recover a set of input parameters describing the CMB and foregrounds, using measurements at different frequencies $\nu$ with error bars $\sigma_{\nu}$ corresponding to particular surveys (see Table \ref{table:sensitivities} for the actual frequencies and sensitivities used). The simulations consider only the thermal noise on a single pixel, and thus do not include effects due to sample or cosmic variance, nor the improvement in thermal signal-to-noise from observing a larger sky area. The full set of results showing the impact of C-BASS data on component separation in a variety of sky regions with different levels of foreground contamination will be presented in a forthcoming paper (Jew et al., in prep.). Here we will show representative results for one scenario in intensity and one in polarization. In each case, we generate mock data at each frequency for which we expect to have an observation, using a model of the foregrounds and the CMB component. We then attempt to recover the parameters from which the mock data were generated, using an MCMC fitting process. Many examples of similar techniques can be found in the literature, including {\sc FGFIT} \citep{Eriksen2006}, {\sc Commander} \citep{Eriksen2008b}, and {\sc Miramare} \citep{Stompor2009}, and a similar methodology has been used by \citet{2018ApJ...853..127H} to explore the impact of different dust models on CMB component separation. We assign priors appropriate to the particular foreground component model. For power-law components of the form $A(\nu/\nu_0)^{\beta}$, we use the form of the Jeffreys prior $\mathcal{P}$ suggested by \cite{Eriksen2008}, namely $\mathcal{P}(A) = 1$ and $\mathcal{P}(\beta) = [\sum_{\nu}(\sigma_{\nu}^{-1}(\nu/\nu_0)^{\beta}\ln(\nu/\nu_0))^2]^{1/2}$. For the CMB amplitude we use a flat prior. We also use flat priors for the amplitude and peak frequency of the AME spectrum. We do not add noise to the mock data, so that the results are not biased by individual realisations of the noise, but simply use the noise levels $\sigma_{\nu}$ in the calculation of the likelihood in the fitting process. Thus the posterior probability density functions that we show should be interpreted as the distribution from which any particular pixel realization would be drawn, for the given set of parameters. For example, for the intensity simulations in which we assume a CMB pixel value of $75 \, \mu \rm{K}$, the posterior density is the probability of obtaining a particular value for that pixel alone.
A real observation would contain many pixels with different individual CMB values, and the CMB power would be inferred from the ensemble of pixels. \begin{table} \caption{The surveys and sensitivities used for the simulations. Sensitivities for the intensity simulations are for a 1\degr\ pixel while those for polarization are for a 3\degr\ pixel. The {\it FutureSat} sensitivities are taken from an early version of the LiteBIRD mission description \citep{Matsumura2014} and are intended to be indicative of a near-future satellite mission. The effective sensitivity on the Haslam map is taken to be 10 per cent of the median map temperature, i.e. it is dominated by the overall 10 per cent calibration uncertainty rather than the thermal noise.\label{table:sensitivities}} \begin{tabular}{cccc} \hline Survey & Frequency / GHz & $\sigma^I \, /\mu{\rm K_{RJ}}$ & $\sigma^P \, /\mu{\rm K_{RJ}}$\\ \hline Haslam et al & 0.408 & $2.5 \times 10^6$ &\\ \hline C-BASS & 5.0 & 73.0 & 24.0\\ \hline WMAP K & 22.8 & 5.8 & \\ WMAP Ka & 33.0 & 4.2 & \\ WMAP Q & 40.7 & 3.5 & \\ WMAP V & 60.7 & 3.8 & \\ WMAP W & 93.5 & 3.9 & \\ \hline {\it Planck} 30& 28.4& 2.5& 1.1\\ {\it Planck} 44& 44.1 & 2.6& 1.3\\ {\it Planck} 70&70.4 & 3.1& 1.5\\ {\it Planck} 100&100 & 1.0& 0.51\\ {\it Planck} 143&143 & 0.33& 0.24\\ {\it Planck} 217&217 & 0.26& 0.20\\ {\it Planck} 353&353 & 0.2& 0.19\\ {\it Planck} 545&545 & 0.086 & \\ {\it Planck} 857&857 & 0.032 & \\ \hline {\it FutureSat} 60& 60 && 0.052\\ {\it FutureSat} 78&78 & & 0.031\\ {\it FutureSat} 100&100 & & 0.020\\ {\it FutureSat} 140&140 & & 0.013\\ {\it FutureSat} 195 &195 & & 0.0070\\ {\it FutureSat} 280 &280 & & 0.0038\\ \hline \end{tabular} \end{table} \subsection{Intensity} To simulate the data we use a simplified version of the foreground model found in Table 4 of \citet{Planck2015_X}. Our model for total intensity measurements is summarized in Table \ref{table:intensity-model}, and consists of the following components: a single power-law synchrotron component with amplitude $A_s$ and spectral index $\beta_s$; a free-free component with a fixed electron temperature of 7000\,K and effective emission measure EM; a thermal dust component with a modified blackbody spectrum with amplitude $A_{\rm d}$, an emissivity index $\beta_{\rm d}$ and a temperature $T_{\rm d}$; and a single AME component with the {\sc spdust2} spectrum \citep{Ali-Hamoud2009,Silsbee2011} allowed to shift in logarithmic frequency-brightness space with an amplitude $A_\textrm{AME}$ and peak frequency $\nu_{\rm peak}$ (following the same prescription as in \citealt{Planck2015_X}). \begin{table*} \caption{The models used to generate foregrounds and CMB spectra. The free parameters are those fitted for in the MCMC fitting, while the fixed parameters are fixed for each model component and are not fitted for. Each model is used to generate a temperature component in Rayleigh-Jeans brightness temperature. 
\label{table:intensity-model}} \begin{tabular}{cccc} \hline Component & Free parameters & Fixed parameters & Model for $T_{\rm RJ}$\\ \hline Synchrotron & $A_{\rm s}, \beta_{\rm s}$ & $\nu_0 = 408\, {\rm MHz\,(intensity)}$& $A_{\rm s} (\nu/\nu_0)^{\beta_{\rm s}}$\\ & & $\nu_0 = 30 \,{\rm GHz\,(polarization)}$ &\\ \hline Free-free & EM & $T_{\rm e} = 7000 \, \rm K$, $\nu_0 = 1\, {\rm GHz}$& $T_{\rm e}(1- \exp(-\tau))$\\ & & & $\tau = 0.05468T_{\rm e}^{-3/2}\,{\rm EM}\,g_{\rm ff}\,(\nu/\nu_0)^{-2}$, \\ & & & $g_{\rm ff} = \ln(\exp[5.96-\sqrt{3}/\pi \ln((\nu/\nu_0)(T_{\rm e}/10^4)^{-3/2})]+e)$\\ \hline AME & $A_{\rm AME}, \nu_{\rm peak}$ & & {\sc spdust2} \\ \hline Dust & $A_{\rm d}, \beta_{\rm d}, T_{\rm d}$& $\nu_0 = 545\, {\rm GHz\,(intensity)}$ &$A_{\rm d}\Big(\frac{\nu}{\nu_0}\Big)^{\beta_{\rm d}+1}\, \frac{\exp(h\nu_0/k_{\rm B} T_{\rm d})-1}{\exp(h\nu/k_{\rm B} T_{\rm d})-1}$ \\ & & $\nu_0 = 353 \, {\rm GHz\,(polarization)}$& \\ \hline CMB & $A_{\rm CMB}$& $T_0 = 2.7255 \, {\rm K}$ & $A_{\rm CMB}\, x^2 e^x / (e^x - 1)^2$,\\ & & & $x = h\nu / k_{\rm B}T_0$\\ \hline \end{tabular} \end{table*} \begin{table*} \caption{Recovered parameter values for the intensity simulations, with and without the inclusion of the C-BASS data point (corresponding to the posterior density estimates in Fig.~\ref{fig:sim_GP_I_pdf}).} \begin{tabular}{lrrrr} \hline Parameter & Recovered value & Recovered value & True value & Units\\ & (No C-BASS) & (with C-BASS) &\\ \hline $A_\textrm{s}$ @ 100\,GHz & $1.33_{-1.33}^{+1.81}$ & $1.84_{-0.165}^{+0.191}$ & $1.86$ & $\mu$K$_\textrm{RJ}$ \\ $\beta_\textrm{s}$ & $-3.02_{-0.16}^{+0.11}$ & $-3.10_{-0.026}^{+0.025}$ & $-3.10$ & \\ EM & $365_{-21}^{+11}$ & $362_{-4}^{+4}$ & $361$ & cm$^{-6}$pc \\ $A_\textrm{AME}$ & $701_{-39}^{+37}$ & $707_{-11}^{+13}$ & $708$ & $\mu$K$_\textrm{RJ}$ \\ $\nu_\textrm{peak}$ & $25.0_{-3.2}^{+3.1}$ & $25.0_{-1.6}^{+1.4}$ & $25.0$ & GHz \\ $A_\textrm{d}$ & $2080.9_{-0.11}^{+0.10}$ & $2080.9_{-0.09}^{+0.09}$ & $2080.86$ & $\mu$K$_\textrm{RJ}$ \\ $\beta_\textrm{d}$ & $1.545_{-0.00087}^{+0.00095}$ & $1.545_{-0.00074}^{+0.00097}$ & $1.545$ & \\ $T_\textrm{d}$ & $17.480_{-0.012}^{+0.011}$ & $17.481_{-0.012}^{+0.009}$ & $17.480$ & K \\ $A_\textrm{CMB}$ & $75.4_{-2.3}^{+2.0}$ & $75.0_{-1.2}^{+1.3}$ & $75.0$ & $\mu$K$_\textrm{CMB}$ \\ \hline \end{tabular} \label{table:results_I} \end{table*} \begin{table*} \caption{Recovered parameter values for the polarization simulations, with and without the inclusion of the C-BASS data point, corresponding to the posterior density estimates in Fig. \ref{fig:sim_B_pdf}.} \begin{tabular}{lrrrr} \hline Parameter & Recovered value & Recovered value & True value & Units\\ & (No C-BASS) & (with C-BASS) &\\ \hline $A_\textrm{s}$ @ 100\,GHz & $0.086_{-0.048}^{+0.149}$ & $0.072_{-0.018}^{+0.021}$ & $0.074$ & $\mu$K$_\textrm{RJ}$ \\ $\beta_\textrm{s}$ & $-2.37_{-0.27}^{+1.37}$ & $-3.09_{-0.10}^{+0.08}$ & $-3.10$ & \\ $A_\textrm{d}$ & $0.313_{-0.023}^{+0.034}$ & $0.329_{-0.019}^{+0.022}$ & $0.335$ & $\mu$K$_\textrm{RJ}$ \\ $\beta_\textrm{d}$ & $0.97_{-0.96}^{+0.37}$ & $1.56_{-0.50}^{+0.51}$ & $1.63$ & \\ $T_\textrm{d}$ & $65.8_{-36.6}^{+4.2}$ & $65.3_{-34.9}^{+4.7}$ & $24.9$ & K \\ $A_\textrm{CMB}$ & $-0.02_{-0.38}^{+0.09}$ & $0.02_{-0.09}^{+0.06}$ & $0.00$ & $\mu$K$_\textrm{CMB}$ \\ \hline \end{tabular} \label{table:results_P} \end{table*} We use the component separation results from \citet{Planck2015_X} to suggest values of the foreground parameters.
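For reference, the model column of Table~\ref{table:intensity-model} can be evaluated directly. A short sketch of the free-free and CMB entries (the parameter values passed in are round numbers close to the true values of Table~\ref{table:results_I}, chosen only for illustration):
\begin{verbatim}
import numpy as np

h, kB, T0 = 6.62607e-34, 1.380649e-23, 2.7255

def T_ff(nu, EM, Te=7000.0):
    """Free-free model of the table: nu in GHz (nu0 = 1 GHz),
    EM in cm^-6 pc; returns brightness temperature in K."""
    gff = np.log(np.exp(5.96 - np.sqrt(3.0) / np.pi
                        * np.log(nu * (Te / 1e4) ** -1.5)) + np.e)
    tau = 0.05468 * Te ** -1.5 * EM * gff * nu ** -2.0
    return Te * (1.0 - np.exp(-tau))

def T_cmb(nu, A_cmb):
    """CMB row: RJ brightness for a thermodynamic amplitude A_cmb (K)."""
    x = h * nu * 1e9 / (kB * T0)
    return A_cmb * x**2 * np.exp(x) / np.expm1(x) ** 2

print(T_ff(5.0, 361.0))      # ~0.05 K at the C-BASS frequency
print(T_cmb(100.0, 75e-6))   # ~58 microK_RJ for a 75 microK CMB pixel
\end{verbatim}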
For this example, we used a region close to the Galactic plane to illustrate a fairly severe instance of foreground contamination. We then produce mock brightness values using the foreground models plus a CMB signal. We simulate the intensity measurements in 1\degr\ pixels, since all components (including the CMB) are detected at high signal-to-noise ratio in a typical pixel. The CMB value was set to 75\,$\mu$K, corresponding to the rms fluctuations on a 1\degr\ scale. Simulated observations at the central frequencies of the Haslam, {\it Planck}, WMAP and C-BASS surveys were included. For each frequency measurement we assigned thermal noise based on the achieved or expected sensitivity of the appropriate survey. These are summarized in Table \ref{table:sensitivities}. Figure~\ref{fig:sim_GP_I_pdf} shows the posterior density estimates (PDEs) of the total intensity foreground parameters for a single 1\degr\ pixel in a region with significant AME and free-free emission. Figure~\ref{fig:sim_GP_I_spectrum} shows the corresponding estimates of the actual component spectra, along with the true input spectra, and Table \ref{table:results_I} shows the numerical values for the recovered parameters. These are given as the peak posterior value and the parameter range that contains 68 per cent of the posterior volume, as the PDEs are often quite skewed and cannot be represented with a symmetrical error bar. Without the C-BASS data, the synchrotron parameters, $A_{\rm s}$ and $\beta_{\rm s}$, are very poorly constrained. Including the C-BASS data improves the measurement of the synchrotron radiation amplitude by an order of magnitude, and reduces the error range on the spectral index from 0.27 dex to 0.05 dex. It also markedly improves the estimates of the free-free emission measure and the AME parameters, reducing the error bars on these parameters by factors of 2--4. There is even a small improvement in the constraints on the dust amplitude. These improvements in the foreground parameter estimates reduce the error on the CMB amplitude measurement in this pixel by 40 per cent.
\subsection{Polarization} For the polarization simulations we did not include a free-free or AME component. Free-free emission is essentially unpolarized, while AME polarization is expected to be small, and has not yet been detected. We also set the CMB signal to zero. This represents a situation in which the $E$-mode signal has been perfectly separated out, and we are searching for a $B$-mode signal of very small amplitude. Data points at the centre frequencies of C-BASS and {\it Planck} are included, along with a set of sensitivities indicative of a near-future CMB satellite mission (`{\it FutureSat}'), based on the early mission description of {\it LiteBIRD} \citep{Matsumura2014}. The PDEs of the polarization foreground parameters ($B$-mode) for a 3\degr\ pixel in a low-foreground region of sky are shown in Figure~\ref{fig:sim_B_pdf}. Figure~\ref{fig:sim_B_spectrum} shows the corresponding estimates of the component spectra, along with the true input spectra, and Table \ref{table:results_P} summarizes the results. Including C-BASS data results in much tighter constraints on the synchrotron amplitude and spectral index, with a previously almost unconstrained spectral index now measured with an accuracy of 0.1 dex. There is also significant improvement in the dust spectral index, resulting in a reduction in the 1-$\sigma$ range on the CMB amplitude by a factor of three.
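The origin of these improvements can be seen in a simple linearised toy model (illustrative only, with invented per-channel errors; the actual analysis is the MCMC fit described above): a sensitive 5\,GHz anchor greatly extends the lever arm of the power-law fit $\log T = \log A_{\rm s} + \beta_{\rm s}\log(\nu/\nu_0)$, shrinking the errors on both parameters.
\begin{verbatim}
import numpy as np

def powerlaw_errors(nu, sigma_logT, nu0=100.0):
    # 1-sigma errors on (log10 A_s, beta_s) from weighted linear least squares
    X = np.column_stack([np.ones_like(nu), np.log10(nu / nu0)])
    W = np.diag(1.0 / sigma_logT ** 2)
    cov = np.linalg.inv(X.T @ W @ X)
    return np.sqrt(np.diag(cov))

space_only = np.array([22.8, 28.4, 33.0, 40.7])        # WMAP K / Planck LFI, GHz
with_cbass = np.array([5.0, 22.8, 28.4, 33.0, 40.7])   # + C-BASS anchor

# invented errors on log10(T): 0.3 at the space channels, 0.05 at 5 GHz
print(powerlaw_errors(space_only, np.full(4, 0.3)))
print(powerlaw_errors(with_cbass, np.array([0.05, 0.3, 0.3, 0.3, 0.3])))
\end{verbatim}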
Additional low-frequency points between the C-BASS and {\it Planck} frequencies would provide further constraints on the synchrotron spectrum and reduce the bias on the $B$-mode amplitude measurement. While the addition of the C-BASS data point dramatically improves the recovery of the synchrotron components and the CMB amplitude in the case of a straight synchrotron spectrum, any additional complexity in the synchrotron spectrum will require further observational constraints. A C-BASS-like instrument covering frequencies between 5~GHz and the lower end of the space microwave band would provide constraints on realistic synchrotron spectra, including the effects of intrinsic curvature and line-of-sight integration of different spectra. A detailed study of such an instrument, NextBASS, and its potential impact on component separation using the techniques presented here, is in preparation.
\begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/projPaper_8_galacticPlane_I_pdf2.png} \caption{PDEs of the total intensity component parameters for a typical 1\degr\ pixel in a sky region with significant foreground contamination. The dashed lines are the PDEs when only including Haslam, WMAP and {\it Planck} data points in the fit. The solid lines are the PDEs when the C-BASS data point is included. The vertical lines are at the true parameter values used to simulate the data.} \label{fig:sim_GP_I_pdf} \end{figure*}
\begin{figure*} \centering \includegraphics[width=1.0\columnwidth]{figures/projPaper_hwp_8_galacticPlane_I_I_spectrum.png} \includegraphics[width=1.0\columnwidth]{figures/projPaper_hwpc_8_galacticPlane_I_I_spectrum.png} \caption{Total intensity frequency spectra for a 1\degr\ pixel in a sky region with significant foreground contamination. The solid black lines are spectra of the true simulated foreground components. The coloured lines are the frequency spectra of the sky components of 5000 randomly drawn samples from the converged MCMC chains. {\it Left} is the result from only including Haslam, WMAP and {\it Planck} data points. {\it Right} is with the addition of a C-BASS data point. Synchrotron is red; thermal dust is blue; AME is yellow; free-free is green; and CMB is purple.} \label{fig:sim_GP_I_spectrum} \end{figure*}
\begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/s_straight_6_neg_offPlane_0B_pdf2.png} \caption{PDEs of the $B$-mode polarization component parameters for a typical 3\degr\ pixel in a sky region with low foreground emission. The dashed lines are the PDEs when only including {\it Planck} and {\it FutureSat} data points. The solid lines are the posterior density estimates when the C-BASS data point is included. The vertical lines are at the true parameter values used to simulate the data.} \label{fig:sim_B_pdf} \end{figure*}
\begin{figure*} \centering \includegraphics[width=1.0\columnwidth]{figures/s_pl_straight_6_neg_offPlane_0B_0B_spectrum.png} \includegraphics[width=1.0\columnwidth]{figures/s_pcl_5_straight_6_neg_offPlane_0B_0B_spectrum.png} \caption{$B$-mode polarization frequency spectra for a 3\degr\ pixel in a sky region with low foreground emission. The solid black lines are spectra of the true simulated foreground components. The coloured lines are the frequency spectra of the sky components of 5000 randomly drawn samples from the converged MCMC chains. {\it Left} is the result from only including {\it Planck} and {\it FutureSat} data points. {\it Right} is with the addition of the C-BASS data point.
Synchrotron is red; thermal dust is blue; and CMB is purple.} \label{fig:sim_B_spectrum} \end{figure*}
\section{Introduction} In recent years great effort has been made to systematically survey the whole sky from microwave to sub-millimetre wavelengths using the \emph{WMAP} \citep{Bennett2013} and \emph{Planck} \citep{Planck2015_I} spacecraft. These surveys have primarily been aimed at studying the cosmic microwave background (CMB) radiation, and have yielded cosmological information of unprecedented precision \citep{Hinshaw2013,Planck2015_I}. Since the first searches for anisotropies in the CMB, the danger that foreground emission could masquerade as the sought-for cosmological signal has been of great concern. Consequently, most CMB experiments have involved observing at multiple frequencies. This was first done to confirm the expected thermal spectrum of the anisotropies \citep[e.g.,][]{Smoot1992}. In later experiments, cuts were defined on the sky, in frequency, and in angular scale (multipole range), where CMB fluctuations were known to dominate over foregrounds \citep[e.g.,][]{Planck2015_XI}, so that only minor foreground corrections were needed. The practical limit to this strategy has now been reached with the attempt to detect large-scale $B$-mode fluctuations in the CMB polarization \citep{Zaldarriaga1997,KKS1997}, which would be convincing evidence of the reality of inflation and would determine the characteristic energy of the inflaton field. A recent claimed detection of inflationary $B$-modes from the BICEP2 experiment \citep{BICEP2014}, in a region selected specifically for minimal foreground emission, has now been explained in terms of polarized thermal dust emission \citep{Bicep-Planck}. Evidently, in future we will need to model and subtract foregrounds with high accuracy, to reveal CMB signals that are subdominant at all frequencies. Early hopes that multifrequency analyses using the wealth of frequency channels obtained by \emph{WMAP} and \emph{Planck} would allow accurate foreground correction have been only partially fulfilled \citep[e.g.,][]{Planck2015_X}. Foreground emission has a minimum brightness relative to the CMB at around 70\,GHz. While \emph{Planck} has mapped the dominant high-frequency component (thermal dust emission) to high enough frequencies that the CMB fluctuations themselves are negligible and the foreground is well detected all over the sky, on the low frequency side the foregrounds remain subdominant to the CMB fluctuations at high Galactic latitudes at the lowest frequency observed from space, the \emph{WMAP} 23\,GHz channel. Furthermore, the low-frequency foreground spectrum has proved substantially more complicated than was expected when the frequency coverage of these instruments was designed. Originally, it was believed to consist of free-free and synchrotron emission, but we now know there is a third continuum component, termed anomalous microwave emission (AME; \citealt{Leitch1997}). Moreover, the synchrotron component is spectrally more complicated than anticipated (see Section\,\ref{sec:synchrotron}). Consequently, in the narrow band (23--70\,GHz) where these three mechanisms are detected by the CMB spacecraft, they cannot be reliably disentangled \citep[e.g.,][]{Planck2015_XXV}. For more reliable modelling, we need to extend the frequency coverage to much lower frequencies, where the spectra of the three low-frequency components should be easily distinguishable \citep{Krachmalnicoff2015,Remazeilles2016}.
This will also give sky maps where the low-frequency foregrounds are clearly detected in each pixel. These observations must be carried out from the ground, because wavelengths much longer than 1\,cm are not practical for CMB space missions, due to the large size of the feeds required and the limited resolution available from the relatively small size of the primary mirror. In this paper we describe the design, specifications, and capabilities for one such project: the C-Band All-Sky Survey (C-BASS)\footnote{\url{http://cbass.web.ox.ac.uk}}, which aims to map the entire sky in total intensity and polarization at 5\,GHz, at a resolution of $45$~arcmin. 5 GHz is simultaneously the highest frequency at which the foreground polarization will be clearly detected all across the sky, and the lowest frequency at which the confusing effects of Faraday rotation and depolarization can be robustly corrected. The survey is being conducted in two parts, a northern survey using a 6.1-metre telescope at the Owens Valley Radio Observatory (OVRO) in California, and a southern survey with a 7.6-metre telescope at Klerefontein in South Africa. Although the telescopes are somewhat different in size, the optics are designed to give the same beamsize with both instruments \citep{Holler2011}. The instruments are designed to provide a high-efficiency beam with low intrinsic cross-polarization, and to have sufficient stability to produce maps not limited by systematic effects. The C-BASS maps will enable new studies of the interstellar medium and magnetic field in the Galaxy, and help to determine the origin of the poorly-understood anomalous microwave emission (AME). They will be used to model the polarized synchrotron emission from the Galaxy; this model will be essential for removing foreground emission from the cosmic microwave background polarization maps from \emph{WMAP}, {\it Planck}, and future CMB missions. The remainder of this paper is organised as follows. Section \ref{sec:surveys} summarises the existing large-area radio and microwave surveys, and Section~\ref{sec:mechanisms} reviews the foreground emission mechanisms that need to be measured and modelled, which motivated the design of C-BASS. Section~\ref{sec:requirements} outlines the requirements for the survey and instrument design necessary to achieve the scientific goals of the project, and Section \ref{sec:design} describes the instrument design adopted. In Section \ref{sec:data} we describe how the raw data are calibrated and used to make the primary science data products, which are maps of Stokes parameters. Section~\ref{sec:impact} outlines the impact that C-BASS will have on both CMB and Galactic science, and we summarise our conclusions in Section~\ref{sec:conclusions}. \section{Conclusions} \label{sec:conclusions} Low-frequency radio surveys are an essential component of a CMB foreground removal strategy, providing constraints on the synchrotron, free-free and AME components of Galactic emission. However, all-sky surveys to date below 20\,GHz have been of limited use due to map artefacts and calibration problems. The C-Band All-Sky Survey will provide accurate and well-calibrated maps of the whole sky in Stokes $I$, $Q$ and $U$ at 5\,GHz, with additional frequency resolution in the southern part of the survey. This will allow a major improvement in the accuracy of foreground separation for CMB intensity and polarization measurements. 
The data will also be used to study diffuse Galactic emission, such as measuring the synchrotron spectral index, constraining foreground models for studying AME at higher frequencies, and constraining models of the Galactic magnetic field. The northern survey is now complete, with the telescope having been decommissioned in April 2015. Data reduction and analysis for the northern data are ongoing, and full results will be presented in forthcoming papers. Preliminary maps of the northern sky have been presented by \citet{moriond2018}. At the time of writing, observations were still being made for the southern survey. The C-BASS frequency at 5~GHz is the ideal balance between being sufficiently low to give good sensitivity to synchrotron radiation, with its steeply falling spectrum, and sufficiently high to avoid the worst effects of depolarization and Faraday rotation. Higher sensitivity observations at frequencies above C-BASS but below the space microwave band would of course give even better constraints on the synchrotron spectrum. C-BASS has been designed to give a clean beam with relatively high main-beam efficiency, well understood sidelobe structure, and minimal far-out and cross-polarization sidelobes. This allows accurate calibration and gives a well-understood effective temperature scale. The inclusion of C-BASS data in component separation analyses will break degeneracies in both intensity and polarization measurements, allowing more accurate estimation of foregrounds and hence of the CMB component. This additional accuracy will be crucial for future $B$-mode detections. \section*{Acknowledgments} The C-BASS project is a collaboration between Oxford and Manchester Universities in the U.K., the California Institute of Technology in the U.S., Rhodes University, UKZN and the South African Radio Astronomy Observatory in South Africa, and the King Abdulaziz City for Science and Technology (KACST) in Saudi Arabia. The work at Oxford was supported by funding from STFC, the Royal Society and the University of Oxford. The work at the California Institute of Technology and Owens Valley Radio Observatory was supported by National Science Foundation (NSF) awards~AST-0607857, AST-1010024, AST-1212217, and AST-1616227, and by NASA award NNX15AF06G. The work at Manchester was supported by STFC and CD also acknowledges support from an ERC Starting (Consolidator) Grant (no.~307209). OGK acknowledges the support of a Dorothy Hodgkin Award in funding his studies while a student at Oxford, and the support of a W.M. Keck Institute for Space Studies Postdoctoral Fellowship at Caltech. CJC acknowledges the support of a Commonwealth Scholarship in funding his studies while a student at Oxford. MP acknowledges funding from a FAPESP Young Investigator fellowship, grants 2015/19936-1 and 2016/19425-0, S\~{a}o Paulo Research Foundation (FAPESP). HMH acknowledges the financial assistance of the South African SKA Project (SKA SA) (www.ska.ac.za) towards this research. We also thank Hans Kristian Eriksen and Ingunn Wehus for their assistance with producing Fig.~\ref{fig:frequency_spectra}. Finally, we thank the late Profs. Richard J. Davis and Rodney D. Davies, who were strong supporters of the C-BASS project from the beginning. 
\bibliographystyle{mnras}
\section{Survey requirements and constraints} \label{sec:requirements} The resolution requirement of the C-BASS survey is partly set by that of the complementary surveys at other frequencies and partly by the science goals, but it is also limited by practical constraints. {\it WMAP} and {\it Planck} have resolutions at their lowest frequencies of $\approx 48$ arcmin and $\approx 33$ arcmin respectively, while the 408~MHz Haslam et al. map has a nominal resolution of $51$ arcmin. In order to remove foregrounds at the angular scale of the peak of the $B$-mode power spectrum at $\ell \approx 90$, a resolution of around 1\degr\ is required. The resolution is also ultimately set by the size of the antenna available, and the need to under-illuminate it to minimise sidelobes. With a 6.1-m antenna available, it was possible to design for a beam FWHM of $45$ arcmin. This is slightly better than the resolution of the Haslam map and sufficient to clean CMB maps well into the region of the $B$-mode power spectrum peak. Ideally C-BASS would detect polarized emission across the entire sky. To estimate the level of polarized emission at high Galactic latitudes, and hence the sensitivity required, we extrapolated from the {\it WMAP} K-band polarization map. Assuming a mean temperature spectral index of $\beta = -3$, we estimate that the polarized intensity at 5\,GHz will be greater than 0.5\,mK over 90 per cent of the sky. We therefore set a sensitivity goal of 0.1\,mK per beam in polarization. This corresponds to about 14\,mJy in flux density sensitivity. At this sensitivity level the C-BASS intensity map will be confusion limited. We estimate the confusion limit from the source counts in the GB6 survey \citep{GB6}, which can be modelled as $N(S){\rm d}S = 76 \, (S/{\rm Jy})^{-2.44}\,{\rm Jy^{-1}\, sr^{-1}}$. With a beamsize of $45$ arcmin the expected confusion limit from extragalactic sources is about 85\,mJy, corresponding to 0.6\,mK, for an upper flux density limit of 100\,mJy (roughly the individual source detection level in C-BASS maps). In practice the confusion limit will be somewhat lower than this, since the source counts are known to flatten at lower flux density levels than the lower limit of GB6. The polarization maps will not be confused, as the typical polarization fraction of extragalactic sources is only a few per cent. It will also be possible to correct the C-BASS intensity maps for source confusion using data from higher resolution surveys such as GB6 and PMN \citep{PMN}. The overall specifications of the C-BASS survey are summarized in Table \ref{tab:overview}. \subsection{Survey Design} In order to map the entire sky with sensitivity to all angular scales up to the dipole, the only feasible instrument architecture is a total power scanning telescope. An interferometer is not feasible because of the difficulty in obtaining information on scales larger than the inverse of the shortest baseline. To cover the entire sky from the ground requires two instruments, one in each hemisphere, situated at latitudes that give significant overlap in the sky coverage to ensure continuity on large scales between the two halves of the survey and good cross-calibration. We also require sensitivity to both intensity and polarization.
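As a cross-check of the sensitivity and confusion numbers quoted above, the following sketch (assuming a Gaussian beam with $\Omega_{\rm b} \simeq 1.133\,\theta_{\rm FWHM}^2$ and a sharp 100\,mJy subtraction threshold) reproduces, to within rounding, the $\sim$85\,mJy ($\sim$0.6\,mK) confusion limit and the 0.1\,mK $\approx$ 14\,mJy conversion at 5\,GHz:
\begin{verbatim}
import numpy as np

k_B, c = 1.380649e-23, 2.99792458e8
nu = 5.0e9                                # Hz
theta = np.radians(45.0 / 60.0)           # 45 arcmin FWHM
omega_b = 1.133 * theta ** 2              # Gaussian-beam solid angle, sr

# GB6 counts N(S) dS = 76 (S/Jy)^-2.44 Jy^-1 sr^-1, integrated to S_max:
# sigma_conf^2 = Omega_b * int_0^Smax S^2 N(S) dS  (converges for slopes < 3)
k, gamma, S_max = 76.0, 2.44, 0.1
sigma_conf = np.sqrt(omega_b * k * S_max ** (3.0 - gamma) / (3.0 - gamma))

def jy_to_mK(S_jy):
    # Rayleigh-Jeans: S = 2 k_B T_RJ Omega_b nu^2 / c^2
    return 1e3 * S_jy * 1e-26 * c ** 2 / (2.0 * k_B * nu ** 2 * omega_b)

print(f"confusion: {1e3 * sigma_conf:.0f} mJy = {jy_to_mK(sigma_conf):.2f} mK")
print(f"0.1 mK corresponds to {0.1 / jy_to_mK(1.0) * 1e3:.1f} mJy")
\end{verbatim}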
In order to construct a sky map with good accuracy on large angular scales we require a scan strategy with long continuous sweeps of the sky and good cross-linking of scans (i.e., each pixel is crossed by several scans in different directions). For intensity measurements we also choose to use a fixed reference temperature rather than a differential measurement that switches out signal at the separation angle between the beams. We scan at constant elevation to minimise the variation in atmospheric emission and ground spillover during a scan. The survey strategy is therefore to make constant-elevation scans over the entire azimuth range, at the maximum slew rate that the telescope can manage. Maximising the slew rate pushes the signal frequency band in the time-ordered data as far as possible away from any residual $1/f$ noise in the receiver noise power spectrum. The fastest convenient azimuth slew rate for both C-BASS telescopes is 4 deg/sec. We actually use several different slew rates close to 4 deg/sec so that any systematics in the data that are at fixed frequency (for example, related to the receiver cold head cycle frequency or the mains frequency) do not always map to the same angular scale on the sky. The telescope is slewed at full speed from 0\degr\ to 360\degr\ azimuth, and then decelerates, halts, and turns around. This gives a small region of overlap in azimuth coverage and ensures the whole sky is covered at full slew speed. We also have full sky coverage in both clockwise- and anti-clockwise-going scans.
\begin{figure} \includegraphics[width=\columnwidth]{figures/projPaperHitCountMap_El37_v2.png} \includegraphics[width=\columnwidth]{figures/projPaperHitCountMap_El47_v2.png} \includegraphics[width=\columnwidth]{figures/projPaperHitCountMap_NandS_v2.png} \caption{{\it Top}: Sky coverage from roughly one day of observations with C-BASS north, using scans at a single elevation going through the north celestial pole (elevation $37\degr$). The map is in Galactic coordinates, with an equatorial co-ordinate grid overlaid. {\it Middle}: Sky coverage from scans at an elevation ten degrees above the celestial pole (elevation $47\degr$), showing how these scans fill in the sky coverage at mid declinations. {\it Bottom}: Complete sky coverage expected from northern and southern surveys combined, using data from all elevations. \label{fig:scans}} \end{figure}
Scanning at constant elevation equal to the latitude of the observing site $\phi$ results in the scans always passing through the celestial pole, and the entire sky is eventually covered down to declination $\delta = -90\degr+2\phi$ (in the northern hemisphere). Scanning through the pole has the additional benefit that the same point on the sky is observed every scan, giving an immediate check on the drifts in offsets due to the atmosphere or the receiver. However, the resulting sky coverage is very non-uniform, with deep coverage at the pole and at the lower declination limit, but much sparser coverage at intermediate declinations. In order to get sufficient integration time over the whole sky we also observe at higher elevations, with about 60 per cent of the survey time spent at the elevation of the pole and decreasing amounts of time spent at 10, 30 and 40 degrees above the elevation of the pole. This results in a much more uniform sky coverage (see Figure \ref{fig:scans}). For scans at a given elevation, any residual ground spillover signal will be a fixed function of telescope azimuth.
The azimuth at which any given declination on the sky is observed is also fixed (in fact each declination is observed at two azimuths, symmetrically placed about the meridian), which means there is a degeneracy between the ground spillover and the sky for sky modes that are circularly symmetric about the pole (these are the $m = 0$ modes in the spherical harmonic decomposition of the sky in equatorial coordinates). This degeneracy can be partly broken by observing at different elevations, which have somewhat different ground-spillover profiles, and by using the overlap region between the northern and southern surveys, which will have quite different ground-spill profiles. With the northern telescope at latitude $\phi = +37\degr$ and the southern telescope at latitude $\phi = -31\degr$ the overlap region between the two surveys is from declination $\delta = +28\degr$ to $\delta = -16\degr$. This overlap region also allows for extensive calibration cross-checks between the two surveys. The telescopes observe continuously day and night, with calibration observations (including sky dips) inserted roughly every two hours. No attempt is made to synchronize scans, as the sky is covered many times in the course of the survey observations. Contamination from the Sun or Moon is assessed after the observations, and the final survey data will be tested empirically for residual contamination. This gives us the maximum freedom to include good data, but the survey timing is planned such that even using strictly night-time only data will give sufficient integration time. \section{Large-area radio surveys} \label{sec:surveys} Table~\ref{tab:surveys} summarises the current state of large-area surveys in the frequency range useful for modelling CMB foregrounds, roughly 400 MHz to 1 THz \citep[for a discussion of radio surveys at lower frequencies see][]{deOliveiraCosta2008}. The table only includes surveys that cover at least $2\pi$~sr and that have angular resolutions of $\approx 1\degr$ or better. \setlength{\tabcolsep}{3pt} \begin{table*} \caption{Existing and on-going large-area radio surveys of intensity and polarization between 400\,MHz and 1\,THz, and with angular resolutions $\lesssim 1\degr$.} \label{tab:surveys} \begin{tabular}{lllccrrll} \hline Survey / & Frequency & FWHM & Declination &Stokes$^a$ &\multicolumn{2}{c}{Sensitivity$^b$} & Status$^c$ & Reference(s)\\ Telescope & [GHz] & [arcmin] & Coverage & & noise & offsets & & \\ \hline Haslam (various) & 0.408 & 51 & All-sky &$I$ & 1\,K & 3\,K & 3 & \protect{\cite{Haslam1982}} \\ Dwingeloo &0.82 &72 &$-7^{\circ}$ to $+85^{\circ}$ &$I$ &0.2\,K &0.6\,K &3 &\protect\cite{Berkhuijsen1972} \\ CHIPASS (Parkes) & 1.394 & 14.4 & $< +25\degr$ &$ I$ & 0.6\,mK & 30\,mK & 3 & \protect\cite{Calabretta2014} \\ DRAO (26-m)$^d$ & 1.4 & 36 & $> -29\degr$ &$QU$ & 12\,mK & 30\,mK & 3 & \protect\cite{Wolleben2006} \\ Villa Elisa$^d$ & 1.4 &35.4 & $< +10\degr$ &$IQU$ & 9\,mK & 50\,mK & 3 & \protect\cite{Testori2008} \\ Stockert$^d$ & 1.42 &35 & $> -30\degr$ &$I$ & 9\,mK & 50\,mK & 3 & \protect\cite{Reich1986}\\ GMIMS-HB N &1.28--1.75 & 30 & $> -30\degr$ & $IQU$ & 12\,mK & unknown & 1 & \protect\cite{Wolleben2010} \\ STAPS (Parkes) &1.3--1.8 & 15 & $< 0\degr$ & $IQU$ & unknown & unknown & 1 & Haverkorn (priv. comm.) 
\\ HartRAO & 2.326 & 20 & $-$83\degr\ to $+$13\degr &$I-Q$ & 25\,mK &80\,mK & 3 & \protect\cite{Jonas1998} \\ S-PASS (Parkes) & 2.3 & \phantom{0}9 & $< 0\degr$ &$IQU$ & 0.1\,mK & unknown & 1 & \protect \cite{Carretti2013} \\ GEM & 4.8--5.2 & 45 & $-$52\degr\ to $+$7\degr& $QU$ & 0.5\,mK & unknown & 0 & \protect\cite{Barbosa2006,Tello2013}\\ C-BASS & 4.5--5.5 & 45 & All-sky & $IQU$ & 0.1\,mK & 1 mK & 0 & This paper\\ QUIJOTE & 11--19,30,40 &$\approx 60$& $\gtrsim 0\degr$ &$[I]QU$ & $25\,\mu$K & unknown & 1 & \protect\cite{Genova-Santos2015a} \\ {\it WMAP} & 22.8--94 & 49--15 & All-sky &$IQU$ &$4\,\mu$K & 1\,$\mu$K & 3 & \protect\cite{Bennett2013} \\ {\it Planck} LFI &28.4--70 & 32--13 & All-sky &$IQU$ &$3\,\mu$K & 1\,$\mu$K & 2 & \protect\cite{Planck2015_I} \\ {\it Planck} HFI &100--353 & 10--5 & All-sky &$IQU$ & 0.2--0.5\,$\mu$K & 1--5\,$\mu$K & 2 & \protect\cite{Planck2015_I} \\ {\it Planck} HFI &545, 857 & \phantom{0}5 & All-sky &$I$ & 0.4, 0.8\,$\mu$K & 1\,$\mu$K & 2 & \protect\cite{Planck2015_I} \\ CLASS & 38--217 & 90--18 & $-$68\degr\ to $+$22\degr& $QU$ & 0.4\,$\mu$K & unknown & 0 & \protect\cite{Harrington2016}\\ \hline \end{tabular} \\ $^a$ [I]QU denotes surveys where total intensity (Stokes I) is measured but with much larger systematic errors than for the linear polarization (Stokes Q and U). I$-$Q denotes a single linear polarization.\\ $^b$ Approximate average total intensity sensitivity in Rayleigh-Jeans temperature after convolution to 1\degr\ FWHM resolution: ``noise'' is local rms; ``offsets'' is global systematic uncertainty. \\ $^c$ Status 0: observations ongoing; 1: observations complete, reduction in progress; 2: preliminary results released; 3: Final data released. \\ $^d$ An all-sky 1.4\,GHz map in IQU has been assembled from the Stockert, DRAO and Villa Elisa surveys \protect\citep{Reich2004,Testori2008}, but full details of its construction have not been published, and it is not clear if the currently-available version is the final one. \end{table*} The separation of foregrounds from CMB emission places strong demands on the accuracy of the sky maps, which must be absolutely calibrated to of order 1 per cent precision, and must accurately reproduce sky features on scales of tens of degrees. Far sidelobe responses to the bright Galactic plane, the Sun and Moon, and the ground around the telescope must be reduced to well below the high-latitude foreground intensity. Even for the {\it Planck} spacecraft, with its unblocked optical system designed to minimize far sidelobes, this could only be achieved by correcting the maps for sidelobe responses; even then some detectors had to be omitted due to excessive residual sidelobes, to achieve the best multi-frequency fit \mbox{\citep{Planck2015_X}}. The ground-based radio surveys published to date were never intended to reach this level of accuracy, and typically suffer from unquantified sidelobe responses \citep[see e.g.,][]{Du2016} and scan-synchronous artefacts in the maps, which limit the accuracy and fidelity of the images. For example, \citet{Calabretta2014} show a difference map between the 1.4\,GHz Stockert/Villa Elisa and CHIPASS surveys, which reveals obvious scan-synchronous residuals. These features significantly degrade the recovered component maps if these surveys are included in component separation analysis, and in practice they do not add usefully to the analysis. The most useful all-sky low-frequency survey for intensity measurements is the 408\,MHz survey of \citet{Haslam1982}. 
Although it also contains artefacts, there have been a number of attempts to remove the residual striping in this map, most recently and successfully by \mbox{\citet{Remazeilles2015a}}. In practice this is the only ground-based survey that has proved useful in CMB component separation, thanks to a relatively clean beam, the high sky brightness which reduces the relative impact of ground pick-up, and to the long frequency lever arm to the space microwave band,\footnote{ By `microwave' we mean frequencies of 3--300\,GHz, while `space microwave' is the part of this band used by space survey missions, roughly 20--300\,GHz.} which reduces the impact of map errors on derived spectral indices. In polarization, the Villa Elisa and DRAO surveys at 1.4\,GHz are the only large-area surveys to have been fully published; but in any case at frequencies of a few GHz there is significant depolarization and polarization angle rotation due to Faraday rotation, which substantially complicates multi-frequency modelling of the sky polarization. We can estimate the size of the effect from the catalogue of Faraday rotation measures, RM, of extragalactic sources by \citet{Taylor2009}: at $|b| > 30\degr$ the rms rotation measure is $\sigma_{\rm RM} \approx 28$\mbox{$\rm \,rad \,m^{-2}$}, while at lower latitudes $\sigma_{\rm RM} \approx 85\mbox{$\rm \,rad \,m^{-2}$}$. We are primarily interested in the diffuse interstellar polarization, for which emission and Faraday rotation are mixed along the line of sight, giving RMs roughly half the extragalactic values, so the typical rotations at high (low) latitudes are 37\degr (112\degr) at 1.4\,GHz, 14\degr (42\degr) at 2.3\,GHz, and 3\degr (9\degr) at 5\,GHz. Strong depolarization is likely to set in when rotations exceed about a radian, and indeed the sky polarization at $|b| < 30\degr$ towards the inner Galaxy is largely suppressed in the 1.4\,GHz surveys. These numbers illustrate one of our main motives for choosing to observe at 5\,GHz, but they also show that to accurately model the polarization in the space microwave band we will have to correct for the residual (few degrees at most) Faraday rotation at 5\,GHz. Fortunately, two new surveys should yield the required RM data. The Global Magneto-Ionic Medium Survey (GMIMS) is an ambitious project to map the entire sky with continuous frequency coverage in the range 0.3--1.8\,GHz, to allow high-resolution Faraday synthesis \citep{Wolleben2009,Wolleben2010}. The project is subdivided into Low- (300--700\,MHz), Mid- (800--1300\,MHz) and High-band (1.3--1.8\,GHz) surveys. Observations for the High-band (HB) survey are complete: in the north this used the DRAO 26-m, while the southern component (also known as STAPS) used the Parkes 64-m telescope. Early results from the northern survey have been published \citep{Wolleben2010b,Sun2015}. Unlike the earlier DRAO survey \citep{Wolleben2006}, GMIMS HB fully samples the sky, and its multichannel backend gives a good estimate of RM wherever the signal is not wiped out by strong depolarization in this band. Combined with C-BASS measurements at 5\,GHz this will allow accurate extrapolation of the polarization angles to short wavelengths where depolarization is negligible. The second new initiative is the S-band Parkes All Sky Survey (S-PASS) at 2.3\,GHz \citep{Carretti2010}. Like GMIMS this is a multichannel survey allowing in-band RM measurements, albeit of limited accuracy since the available bandwidth is only 184\,MHz. 
Observations are complete (STAPS and S-PASS were observed commensally) and initial results were published by \citet{Carretti2013}. Although only covering the southern hemisphere, S-PASS includes most of the sky regions that are strongly depolarized in GMIMS. As expected, at 2.3\,GHz there is much less depolarization, so RMs derived from S-PASS and C-BASS should fill most of these gaps. In the small fraction of the sky still depolarized at 2.3\,GHz, in-band measurements using the multi-channel southern C-BASS receiver will be used to make the correction.
\begin{figure*} \centering \includegraphics[width=0.49\textwidth]{figures/fig1_spectrum.pdf} \includegraphics[width=0.49\textwidth]{figures/fig1_spectrum_pol.pdf} \caption{Frequency spectra of diffuse foregrounds in temperature ({\it left}) and polarization ({\it right}). Black solid line: CMB temperature and $E$-mode polarization; Magenta line: synchrotron; Blue line: free-free; Red line: thermal dust; Yellow line: anomalous microwave emission; black dashed line: sum of foreground components. The lines indicate the rms fluctuation level in each continuum component from the \citet{Planck2015_X} model (except for the $E$-mode polarization), evaluated at 1\degr\ FWHM resolution, for the region outside the {\it Planck} 2015 HFI Galactic plane masks that include 80 and 90 per cent of the sky (shown by the bottom and top edges of the lines respectively). Underlaid are the bands of \textit{Planck}, \textit{WMAP}, C-BASS, and the lower frequency radio surveys. The $E$-mode polarization amplitude has been taken from \citet{Planck2015_X}, and is calculated from the best-fit power spectrum.} \label{fig:frequency_spectra} \end{figure*}
It remains to be seen whether GMIMS and S-PASS will be sufficiently free of scanning artefacts and far sidelobes to be useful in constraining the total intensity foreground spectrum. However, such artefacts are less important for determining rotation measures for two reasons. Firstly, in the Faraday-thin regime the position angle--wavelength relation closely follows the simple law: $\chi(\lambda) = \chi_0 + {\rm RM}\lambda^2$. This allows an internal consistency check and rejection of outlier data. Secondly, Faraday rotation causes order unity changes (including sign changes) to the measured Stokes $Q$ and $U$ parameters. Consequently, low-level artefacts have much less impact than they do on modelling the Stokes $I$ spectrum, where we are interested in spectral index variations that may change the intensity ratio between 1.4 and 5\,GHz by 10 per cent or less. Between the C-BASS and \emph{WMAP} frequencies, the only large-scale survey is the QUIJOTE experiment at 11--19\,GHz \citep{Genova-Santos2015a,Genova-Santos2015b}, which only covers the northern sky.\footnote{ There are plans, not yet funded, to extend the QUIJOTE survey to the southern hemisphere (J. A. Rubi\~{n}o-Martin, priv. comm.).} Unlike C-BASS, QUIJOTE does not aim to accurately recover very large-scale sky structures, and it is much less sensitive to the foreground emission, which fades rapidly with frequency. However, QUIJOTE does cover the frequencies over which the anomalous microwave emission rises rapidly to prominence, and will provide very useful constraints on this component, especially along the Galactic plane, where component separation is most complicated. The GEM 5~GHz survey \citep{Barbosa2006} is at the same frequency and resolution as C-BASS.
It will cover a limited range of declinations in the southern hemisphere in polarization only (not intensity), and may provide a useful cross-check on the C-BASS South observations.
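As a numerical footnote to the Faraday-rotation estimates used in this section, a minimal sketch of the $\chi = {\rm RM}\,\lambda^2$ scalings, taking the diffuse-emission RMs to be half the \citet{Taylor2009} extragalactic values as above:
\begin{verbatim}
import numpy as np

c = 2.99792458e8
for nu_GHz in (1.4, 2.3, 5.0):
    lam2 = (c / (nu_GHz * 1e9)) ** 2           # wavelength^2, m^2
    chi_hi = np.degrees(0.5 * 28.0 * lam2)     # |b| > 30 deg, RM ~ 14 rad/m^2
    chi_lo = np.degrees(0.5 * 85.0 * lam2)     # low latitudes, RM ~ 42.5 rad/m^2
    print(f"{nu_GHz:3.1f} GHz: {chi_hi:5.1f} deg (high |b|), {chi_lo:6.1f} deg (low |b|)")
# -> roughly 37 (112) deg at 1.4 GHz, 14 (41) deg at 2.3 GHz, 3 (9) deg at 5 GHz
\end{verbatim}
Only at 5\,GHz do both estimates fall well below a radian, which is why depolarization is mild there and only a small residual correction is needed.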
\section{Introduction} By using the spectroscopic method involving the electronic hydrogen (the ordinary hydrogen atom consisting of proton and electron) the charge radius of the proton is measured to be $0.8768 \times 10^{-15}$ meter. Similarly, by using the electron-proton scattering method the charge radius of the proton is measured to be $0.8775 \times 10^{-15}$ meter, which is consistent with the spectroscopic method. The CODATA-2014 world average value of the charge radius of the proton by using the electrons, {\it i.e.}, by using the above two methods, is $0.8751 \times 10^{-15}$ meter \cite{pr}. However, the muonic hydrogen (hydrogen atom consisting of proton and muon) experiment in the year 2010 found that the charge radius of the proton is $0.84087 \times 10^{-15}$ meter \cite{mh,mh1}. The CODATA-2018 value of the charge radius of the proton is $0.8414 \times 10^{-15}$ meter \cite{cod}. This disagreement between various experiments about the value of the charge radius of the proton is known as the proton radius puzzle, which remains an unsolved problem in science.

It is well known that the proton is not a point particle but a composite particle consisting of quarks and gluons, which are fundamental particles of nature. The up quark has the fractional electric charge $\frac{2e}{3}$ and the down quark has the fractional electric charge $-\frac{e}{3}$, where $e$ is the magnitude of the charge of the electron. Hence the charge radius of the proton depends on the charge distribution (the form factor) of the partons inside the proton. The electric charge radius $R_P$ of the proton $P$ is given by \cite{rp} \begin{eqnarray} R^2_P=-\frac{6}{G_E(0)}\frac{dG_E(Q^2)}{dQ^2}|_{Q^2=0} \label{rpj} \end{eqnarray} where $G_E(Q^2)$ is the electric form factor and $Q^2$ is the squared momentum transfer of the virtual photon in the lepton-proton scattering.

The interaction between the quarks and gluons inside the proton is described by quantum chromodynamics (QCD) \cite{ymj}, which is a fundamental theory of nature. The short distance partonic cross section can be calculated by using the perturbative QCD (pQCD) due to the asymptotic freedom in QCD \cite{gwj}. Using the factorization theorem in QCD \cite{fcj,fcj1,fcj2} the hadronic cross section can be calculated from the partonic cross section at the high energy colliders by using the experimentally extracted parton distribution function (PDF) and fragmentation function (FF). The formation of the proton from the quarks and gluons is a long distance phenomenon in QCD. Conversely, the QCD coupling becomes large at large distances, where pQCD is not applicable. Hence the formation of the proton from the quarks and gluons cannot be studied by using pQCD. Non-perturbative QCD is necessary to study the formation of the proton from the quarks and gluons. However, the analytical solution of non-perturbative QCD is not known yet because of the presence of the cubic and quartic powers of the gluon fields in the QCD Lagrangian inside the path integration in the generating functional in QCD [see section II for details]. The path integration in QCD can be performed numerically in the Euclidean time by using the lattice QCD method. Hence lattice QCD provides a first-principles method to study the formation of the proton from the quarks and gluons. Since the electric form factor $G_E(Q^2)$ of the partons inside the proton in eq.
(\ref{rpj}) is a non-perturbative quantity in QCD, it cannot be calculated by using the perturbative QCD (pQCD) method but it can be calculated by using the lattice QCD method. Recently we have formulated the lattice QCD method to study the proton formation from the quarks and gluons \cite{pj}, the proton spin crisis \cite{psj} and the proton decay \cite{pd} by implementing the non-zero boundary surface term in QCD, which arises due to the confinement of quarks and gluons inside the finite size proton \cite{nkbsj}. In this paper we extend this to formulate the lattice QCD method to study the proton radius puzzle. We derive the non-perturbative formula of the charge radius of the proton from first principles in QCD, which can be calculated by using the lattice QCD method.

The paper is organized as follows. In section II we describe the lattice QCD method to study the proton formation from quarks and gluons by implementing the non-zero boundary surface term in QCD due to confinement. In section III we formulate the lattice QCD method to study the proton radius puzzle and derive the non-perturbative formula of the charge radius of the proton from first principles in QCD, which can be calculated by using the lattice QCD method. Section IV contains conclusions.

\section{Formation of proton from quarks and gluons using the lattice QCD method} The partonic operator for the proton ($P$) formation is given by \begin{eqnarray} {\cal O}_P(x)=\epsilon_{kln} U_k^T(x) C\gamma_5 D_l(x) U_n(x) \label{po} \end{eqnarray} where $U_k(x)$ is the up quark field, $D_k(x)$ is the down quark field, $C$ is the charge conjugation operator and $k,l,n=1,2,3$ are the color indices. In the path integral formulation of QCD the vacuum expectation value of the non-perturbative partonic correlation function of the type $<0|{\cal O}^\dagger_P(x'){\cal O}_P(x'')|0>$ is given by \begin{eqnarray} &&<0|{\cal O}^\dagger_P(x'){\cal O}_P(0)|0>=\frac{1}{Z[0]}\int [dA] [d{\bar U}][dU][d{\bar D}][dD] \times {\cal O}^\dagger_P(x'){\cal O}_P(0) \times {\rm det}[\frac{\delta B_f^s}{\delta \omega^b}]\nonumber \\ && \times {\rm exp}[i\int d^4x [-\frac{1}{4} F_{\sigma \lambda}^s(x)F^{\sigma \lambda s}(x) -\frac{1}{2\alpha} [B_f^s(x)]^2 +{\bar U}_k(x)[\delta^{kn}(i{\not \partial}-m_U)+gT^s_{kn}A\hspace{-0.067in}\slash^s(x)]U_n(x)\nonumber \\ &&+{\bar D}_k(x)[\delta^{kn}(i{\not \partial}-m_D)+gT^s_{kn}A\hspace{-0.067in}\slash^s(x)]D_n(x)]] \label{pcf} \end{eqnarray} where $B_f^s(x)$ is the gauge fixing term with color index $s=1,...,8$, $A_\sigma^s(x)$ is the gluon field with Lorentz index $\sigma=0,1,2,3$, $\alpha$ is the gauge fixing parameter, $m_U$ is the mass of the up quark, $m_D$ is the mass of the down quark and $Z[0]$ is the generating functional in QCD given by \begin{eqnarray} && Z[0]=\int [dA] [d{\bar U}][dU][d{\bar D}][dD] \times {\rm det}[\frac{\delta B_f^s}{\delta \omega^b}]\times {\rm exp}[i\int d^4x [-\frac{1}{4} F_{\sigma \lambda}^s(x)F^{\sigma \lambda s}(x) -\frac{1}{2\alpha} [B_f^s(x)]^2 \nonumber \\ &&+{\bar U}_k(x)[\delta^{kn}(i{\not \partial}-m_U)+gT^s_{kn}A\hspace{-0.067in}\slash^s(x)]U_n(x)+{\bar D}_k(x)[\delta^{kn}(i{\not \partial}-m_D)+gT^s_{kn}A\hspace{-0.067in}\slash^s(x)]D_n(x)]]\nonumber \\ \label{pz0} \end{eqnarray} with \begin{eqnarray} F_{\sigma \lambda}^s(x)=\partial_\sigma A_\lambda^s(x) - \partial_\lambda A_\sigma^s(x) +gf^{scd} A_\sigma^c(x) A_\lambda^d(x). \label{fsl} \end{eqnarray} In eq.
(\ref{pcf}) we do not have ghost fields as we directly work with the ghost determinant ${\rm det}[\frac{\delta B_f^s}{\delta \omega^b}]$. The time evolution of the partonic operator in the Heisenberg representation is given by \begin{eqnarray} {\cal O}_P(t,{\vec r})=e^{-iHt} {\cal O}_P(0,{\vec r}) e^{iHt} \label{tv} \end{eqnarray} where $H$ is the QCD Hamiltonian of the partons. The complete set of proton energy-momentum eigenstates is given by \begin{eqnarray} \sum_{l''} |H_{l''}><H_{l''}|=1. \label{cse} \end{eqnarray} Using eqs. (\ref{tv}) and (\ref{cse}) in (\ref{pcf}) we find in the Euclidean time \begin{eqnarray} &&\sum_{{\vec r}} <0|{\cal O}^\dagger_P(t,{\vec r}){\cal O}_P(0)|0>= \sum_{l''} |<H_{l''}|{\cal O}_P(0)|0>|^2~ e^{-\int dt~ E_{l''}(t)} \label{pcfa} \end{eqnarray} where $\int dt$ is an indefinite integration. In the large time limit we neglect the higher energy level contribution to find \begin{eqnarray} &&[\sum_{{\vec r}} <0|{\cal O}^\dagger_P(t,{\vec r}){\cal O}_P(0)|0>]_{t\rightarrow \infty} = |<P|{\cal O}_P(0)|0>|^2~ e^{-\int dt ~E(t)} \label{pcfb} \end{eqnarray} where $E(t)$ is the energy of all the partons inside the proton in its ground state and $|P>$ is the energy-momentum eigenstate of the proton in its ground state. Due to the non-vanishing boundary surface term $E_{BS}(t)$ in QCD, arising from the confinement of partons inside the finite size proton, we find \cite{nkbsj} \begin{eqnarray} E_P=E(t)+E_{BS}(t) \label{bs} \end{eqnarray} where $E_P$ is the energy of the proton, $E(t)$ is the energy of all the partons inside the proton and $E_{BS}(t)$ is the non-vanishing boundary surface term in QCD given by \begin{eqnarray} \frac{dE_{BS}(t')}{dt'}=[\frac{\sum_{{\vec r}''} <0|{\cal O}^\dagger_P(t'',{\vec r}'')[\sum_{q,{\bar q},g}\int d^3r' \partial_i T^{i0}(t',{\vec r}')] {\cal O}_P(0)|0>}{\sum_{{\vec r}''} <0|{\cal O}^\dagger_P(t'',{\vec r}''){\cal O}_P(0)|0>}]_{t'' \rightarrow \infty}. \label{bst} \end{eqnarray} In eq. (\ref{bst}) the energy-momentum tensor density $T^{\sigma \lambda}(x)$ of the partons in QCD is given by \begin{eqnarray} && T^{\sigma \lambda}(x) =F^{\sigma \mu s}(x)F_{\mu}^{~\lambda s}(x) +\frac{g^{\sigma \lambda}}{4} F_{\sigma' \lambda'}^s(x)F^{\sigma' \lambda' s}(x) + {\bar U}_l(x) \gamma^\sigma [\delta^{ln}i\partial^\lambda -igT^s_{ln}A^{\lambda s}(x)]U_n(x)\nonumber \\ && + {\bar D}_l(x) \gamma^\sigma [\delta^{ln}i\partial^\lambda -igT^s_{ln}A^{\lambda s}(x)]D_n(x). \label{enmf} \end{eqnarray} Using eqs. (\ref{bs}) and (\ref{bst}) in (\ref{pcfb}) we find \begin{eqnarray} && |<P|{\cal O}_P(0)|0>|^2e^{- M_Pt}=[\frac{\sum_{{\vec r}'} <0|{\cal O}^\dagger_P(t',{\vec r}') {\cal O}_P(0)|0>}{e^{\int dt' [\frac{\sum_{{\vec r}''} <0|{\cal O}^\dagger_P({\vec r}'',t'')[\sum_{q,{\bar q},g}\int dt' \int d^3r' \partial_i T^{i0}(t',{\vec r}') ]{\cal O}_P(0)|0>}{\sum_{{\vec r}''} <0|{\cal O}^\dagger_P({\vec r}'',t''){\cal O}_P(0)|0>}]_{t'' \rightarrow \infty}}}]_{t' \rightarrow \infty}\nonumber \\ \label{frs} \end{eqnarray} which can be calculated by using the lattice QCD method, where $M_P$ is the mass of the proton and $\int dt'$ is an indefinite integration. Eq. (\ref{frs}) is the non-perturbative formula to study the proton formation from quarks and gluons using the lattice QCD method by implementing the non-zero boundary surface term in QCD due to confinement.
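As a toy illustration of the large-Euclidean-time limit used above (synthetic numbers, not an actual lattice computation, and assuming time-independent energies so that $\int dt\, E \rightarrow Et$), the effective mass $\ln[C(t)/C(t+1)]$ of a multi-exponential correlator plateaus at the ground-state energy:
\begin{verbatim}
import numpy as np

# made-up spectrum and overlaps |<H_n|O_P|0>|^2, in lattice units
E   = np.array([0.42, 0.80, 1.10])
amp = np.array([1.00, 0.60, 0.30])

t = np.arange(24)
C = (amp[:, None] * np.exp(-E[:, None] * t)).sum(axis=0)  # C(t) = sum_n A_n e^{-E_n t}

m_eff = np.log(C[:-1] / C[1:])
for ti in (1, 4, 8, 16):
    print(f"t = {ti:2d}:  m_eff = {m_eff[ti]:.4f}")        # -> 0.4200 at large t
\end{verbatim}
In the construction above the boundary surface term makes the parton energy $E(t)$ time dependent, which is why eq. (\ref{frs}) divides out the exponentiated $E_{BS}$ contribution before reading off $M_P$; the plateau logic is otherwise the same.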
\section{Lattice QCD method to study the charge radius of the proton} In the previous section we have formulated the lattice QCD method to study the formation of the proton from the quarks and gluons by implementing the non-zero boundary surface term in QCD due to confinement. In this section we will extend this to formulate the lattice QCD method to study the proton radius puzzle. We will derive the non-perturbative formula of the charge radius of the proton from first principles in QCD, which can be calculated by using the lattice QCD method. This method is also used to study various quantities in QCD in vacuum \cite{psj,pd,avj} and in QCD in medium \cite{amj} to study the quark-gluon plasma at RHIC and LHC \cite{qk,qk1,qk2}.

In the single (virtual) photon exchange approximation the amplitude ${\cal M}$ for the electron-proton ($eP$) elastic scattering process is given by \begin{eqnarray} {\cal M} =\frac{g^{\lambda \delta}}{q^2} [{\bar u}_e(k_f) \gamma_\lambda u_e(k_i)][ie{\bar u}_P(p_f) \Gamma_\delta(p_f,p_i)u_P(p_i)] =\frac{4\pi \alpha}{Q^2} l_{\lambda} J^\lambda \label{scm} \end{eqnarray} where $k_i (k_f)$, $p_i(p_f)$ are the initial (final) momenta of the electron, proton, $q^\mu=p^\mu_f-p^\mu_i$ is the momentum of the virtual photon, $u_e,u_P$ are the electron, proton spinors respectively, $l^\mu,J^\mu$ are the leptonic, hadronic (electromagnetic) currents respectively and $q^2=-Q^2$. The general form of the vertex function $\Gamma^\lambda(p_f,p_i)$ satisfying relativistic invariance and current conservation is given by \begin{eqnarray} J^\lambda = {\bar u}_P(p_f) [\gamma^\lambda F_1(Q^2)+\frac{\sigma^{\lambda \delta}q_\delta}{2M_P}F_2(Q^2) ]u_P(p_i)=<P|\frac{2}{3} {\bar U} \gamma^\lambda U - \frac{1}{3} {\bar D} \gamma^\lambda D|P>. \label{hdc} \end{eqnarray} The electric form factor $G_E(Q^2)$ is given by \cite{rp} \begin{eqnarray} G_E(Q^2)=F_1(Q^2)-\frac{Q^2}{4M_P^2}F_2(Q^2). \label{geq} \end{eqnarray} Using eqs. (\ref{tv}) and (\ref{cse}) in (\ref{pcf}) we find in the Euclidean time \begin{eqnarray} &&\sum_{{\vec r}} e^{i{\vec p}\cdot {\vec r}} <0|{\cal O}^\dagger_P(t,{\vec r}){\cal O}_P(0)|0>= \sum_{l''} |<H_{l''}({\vec p})|{\cal O}_P(0)|0>|^2~ e^{-\int dt~ E_{l''}({\vec p},t)} \label{pcfa3} \end{eqnarray} where ${\vec p}$ is the momentum of the proton and $\int dt$ is an indefinite integration. In the large time limit we neglect the higher energy level contribution in eq. (\ref{pcfa3}) to find \begin{eqnarray} &&[\sum_{{\vec r}} e^{i{\vec p}\cdot {\vec r}} <0|{\cal O}^\dagger_P(t,{\vec r}){\cal O}_P(0)|0>]_{t\rightarrow \infty} = |<P({\vec p})|{\cal O}_P(0)|0>|^2~ e^{-\int dt ~E({\vec p},t)} \label{pcfb3} \end{eqnarray} where $E({\vec p},t)$ is the energy of all the partons inside the proton of momentum ${\vec p}$ in its ground state and $|P({\vec p})>$ is the energy-momentum eigenstate of the proton $P$ of momentum ${\vec p}$ in its ground state. From eq. (\ref{hdc}) the electromagnetic current operator $j_\lambda^q(x)$ of the quarks inside the proton is given by \begin{eqnarray} j_\lambda^q(x) = \frac{2}{3} {\bar U}(x) \gamma_\lambda U(x) - \frac{1}{3} {\bar D}(x) \gamma_\lambda D(x)=\sum_f e_f {\bar q}_f(x)\gamma_\lambda q_f(x) \label{qem} \end{eqnarray} where $q_f(x)$ and $e_f$ are the quark field and the fractional electric charge of the quark of the flavor $f$ respectively.
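Before assembling the three-point correlators it may help to see how eqs. (\ref{rpj}) and (\ref{geq}) act on a simple model. For the standard phenomenological dipole ansatz (quoted here purely as an illustration, not as part of the present derivation) we have \begin{eqnarray} G_E(Q^2)=\Big(1+\frac{Q^2}{\Lambda^2}\Big)^{-2},~~~~~~\frac{dG_E(Q^2)}{dQ^2}|_{Q^2=0}=-\frac{2}{\Lambda^2},~~~~~~R^2_P=\frac{12}{\Lambda^2} \label{dipole} \end{eqnarray} so that the conventional value $\Lambda^2=0.71~{\rm GeV}^2$ gives $R_P=\sqrt{12/0.71}~\hbar c \approx 0.81 \times 10^{-15}$ meter. Since the discrepancy between the measured values discussed in the introduction is only about 4 per cent, any lattice QCD determination of $G_E(Q^2)$ near $Q^2=0$ must be controlled at the few per cent level.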
In the path integral formulation of QCD the vacuum expectation value of the non-perturbative 3-point partonic correlation function of the type $<0|{\cal O}^\dagger_P(x'')j^q_0(x'){\cal O}_P(0)|0>$ is given by \begin{eqnarray} &&<0|{\cal O}^\dagger_P(x'')j^q_0(x'){\cal O}_P(0)|0>=\frac{1}{Z[0]}\int [dA] [d{\bar U}][dU][d{\bar D}][dD] \times {\cal O}^\dagger_P(x'')j^q_0(x'){\cal O}_P(0) \times {\rm det}[\frac{\delta B_f^s}{\delta \omega^b}]\nonumber \\ && \times {\rm exp}[i\int d^4x [-\frac{1}{4} F_{\sigma \lambda}^s(x)F^{\sigma \lambda s}(x) -\frac{1}{2\alpha} [B_f^s(x)]^2 +{\bar U}_k(x)[\delta^{kn}(i{\not \partial}-m_U)+gT^s_{kn}A\hspace{-0.067in}\slash^s(x)]U_n(x)\nonumber \\ &&+{\bar D}_k(x)[\delta^{kn}(i{\not \partial}-m_D)+gT^s_{kn}A\hspace{-0.067in}\slash^s(x)]D_n(x)]]. \label{pcf3} \end{eqnarray} In this paper we assume that the initial proton is at rest, {\it i.e.}, ${\vec p}_i=0$. This means ${\vec p}_f={\vec q}={\vec p}$. Using eqs. (\ref{tv}) and (\ref{cse}) in (\ref{pcf3}) we find in the Euclidean time \begin{eqnarray} && \sum_{{\vec r}'',{\vec r}'} e^{i{\vec q}\cdot ({\vec r}''-{\vec r}')} <0|{\cal O}_P(t'',{\vec r}'')j^q_0(t',{\vec r}') {\cal O}_P(0)|0> =\sum_{n'',n'}<0|{\cal O}_P|H_{n''}({\vec p})>\nonumber \\ &&<H_{n''}({\vec p})|j^q_0|H_{n'}><H_{n'}|{\cal O}_P|0>e^{-[\int dt'' E_{n''}({\vec p},t'')-\int dt' E_{n''}({\vec p},t')]}e^{-\int dt' E_{n'}(t')} \label{mef} \end{eqnarray} where ${\vec p}$ is the momentum of the final state proton and ${\vec q}={\vec p}$. In the limit $t'' >>>t',~~t'\rightarrow \infty$ we find by neglecting the higher energy level contributions \begin{eqnarray} && [\sum_{{\vec r}'',{\vec r}'} e^{i{\vec q}\cdot ({\vec r}''-{\vec r}')} <0|{\cal O}_P(t'',{\vec r}'')j^q_0(t',{\vec r}') {\cal O}_P(0)|0>]_{t''>>>t',~~t'\rightarrow \infty} =<0|{\cal O}_P|P({\vec p})>\nonumber \\ &&<P({\vec p})|j^q_0|P><P|{\cal O}_P|0>e^{-[\int dt'' E({\vec p},t'')-\int dt' E({\vec p},t')]}~e^{-\int dt' E(t')} \label{meg3} \end{eqnarray} where $E({\vec p},t)$ is the energy of all the partons inside the proton of momentum ${\vec p}$, $E(t)$ is the energy of all the partons inside the proton at rest, $|P({\vec p})>$ is the energy-momentum eigenstate of the proton of momentum ${\vec p}$ and $|P>$ is the energy-momentum eigenstate of the proton at rest. From eq. (\ref{pcfb}) we find for the proton at rest \begin{eqnarray} && [\sum_{{\vec r}'} <0|{\cal O}_P(t',{\vec r}') {\cal O}_P(0)|0>]_{t' \rightarrow \infty} =|<P|{\cal O}_P|0>|^2e^{-\int dt' E(t')}. \label{meh3} \end{eqnarray} Similarly from eq. (\ref{pcfb3}) we find for the proton with momentum ${\vec p}$ \begin{eqnarray} && [\sum_{{\vec r}''} e^{i{\vec p}\cdot {\vec r}''} <0|{\cal O}_P({\vec r}'',t''-t') {\cal O}_P(0)|0>]_{t''>>>t',~~t' \rightarrow \infty} \nonumber \\ &&=|<P({\vec p})|{\cal O}_P(0)|0>|^2e^{-[\int dt'' E({\vec p},t'')-\int dt' E({\vec p},t')]}. \label{mei3} \end{eqnarray} From eqs.
(\ref{meg3}), (\ref{meh3}) and (\ref{mei3}) we find \begin{eqnarray} &&<P({\vec p})|j^q_0|P>= \sqrt{|<P({\vec p})|{\cal O}_P|0>|^2}\times \sqrt{|<P|{\cal O}_P|0>|^2}\nonumber \\ &&\times [\frac{\sum_{{\vec r}''',{\vec r}'} e^{i{\vec q}\cdot ({\vec r}'-{\vec r}''')} <0|{\cal O}_P(t',{\vec r}')j^q_0(t''',{\vec r}''') {\cal O}_P(0)|0>}{[\sum_{{\vec r}'''} <0|{\cal O}_P(t''',{\vec r}''') {\cal O}_P(0)|0>][\sum_{{\vec r}'} e^{i{\vec p}\cdot {\vec r}'} <0|{\cal O}_P(t'-t''',{\vec r}') {\cal O}_P(0)|0>]}]_{t'>>>t''',~~~t'''\rightarrow \infty}.\nonumber \\ \label{mej3} \end{eqnarray} For the initial proton at rest with energy-momentum eigenstate $|P>$ and the final proton of momentum ${\vec p}$ with energy-momentum eigenstate $|P({\vec p})>$ we find that the electric form factor $G_E(Q^2)$ of the partons inside the proton is related to the proton matrix element $<P({\vec p})|j^q_0|P>$ of the quark electromagnetic current $j^q_0$ via the relation \cite{eff} \begin{eqnarray} G_E(Q^2)=\sqrt{\frac{2E_P({\vec p})}{E_P({\vec p})+M_P}}<P({\vec p})|j^q_0|P> \label{mpge} \end{eqnarray} where $E_P({\vec p})$ is the energy of the proton of momentum ${\vec p}={\vec q}$ and $Q^2=-q^2$. From eq. (\ref{frs}) we find for the proton at rest \begin{eqnarray} && |<P|{\cal O}_P(0)|0>|^2=[\frac{\sum_{{\vec r}''} <0|{\cal O}^\dagger_P(t'',{\vec r}'') {\cal O}_P(0)|0>\times e^{ M_Pt''}}{e^{\int dt'' [\frac{\sum_{{\vec r}'''} <0|{\cal O}^\dagger_P(t''',{\vec r}''')[\sum_{q,{\bar q},g}\int dt'' \int d^3r'' \partial_i T^{i0}(t'',{\vec r}'')] {\cal O}_P(0)|0>}{\sum_{{\vec r}'''} <0|{\cal O}^\dagger_P(t''',{\vec r}'''){\cal O}_P(0)|0>}]_{t''' \rightarrow \infty}}}]_{t'' \rightarrow \infty}\nonumber \\ \label{mek3} \end{eqnarray} where $M_P$ is the mass of the proton. Similarly for the proton with momentum ${\vec p}$ we find \begin{eqnarray} && |<P({\vec p})|{\cal O}_P(0)|0>|^2=[\frac{\sum_{{\vec r}''} e^{i{\vec p}\cdot {\vec r}''}<0|{\cal O}^\dagger_P(t'',{\vec r}'') {\cal O}_P(0)|0>\times e^{t'' E_P({\vec p})}}{e^{\int dt'' [\frac{\sum_{{\vec r}'''} e^{i{\vec p}\cdot {\vec r}'''}<0|{\cal O}^\dagger_P(t''',{\vec r}''')[\sum_{q,{\bar q},g}\int dt'' \int d^3r'' \partial_i T^{i0}(t'',{\vec r}'')] {\cal O}_P(0)|0>}{\sum_{{\vec r}'''} e^{i{\vec p}\cdot {\vec r}'''}<0|{\cal O}^\dagger_P(t''',{\vec r}'''){\cal O}_P(0)|0>}]_{t''' \rightarrow \infty}}}]_{t'' \rightarrow \infty}\nonumber \\ \label{mel3} \end{eqnarray} where $E_P({\vec p})$ is the energy of the proton with momentum ${\vec p}$. Using eqs. 
(\ref{mek3}), (\ref{mel3}) and (\ref{mej3}) in (\ref{mpge}) we find \begin{eqnarray} &&G_E(Q^2)=\sqrt{\frac{2E_P({\vec p})}{E_P({\vec p})+M_P}}\nonumber \\ && \times \left[[\frac{\sum_{{\vec r}''} e^{i{\vec p}\cdot {\vec r}''}<0|{\cal O}^\dagger_P(t'',{\vec r}'') {\cal O}_P(0)|0>\times e^{t'' E_P({\vec p})}}{e^{\int dt'' [\frac{\sum_{{\vec r}'''} e^{i{\vec p}\cdot {\vec r}'''}<0|{\cal O}^\dagger_P(t''',{\vec r}''')[\sum_{q,{\bar q},g}\int dt'' \int d^3r'' \partial_i T^{i0}(t'',{\vec r}'')] {\cal O}_P(0)|0>}{\sum_{{\vec r}'''} e^{i{\vec p}\cdot {\vec r}'''}<0|{\cal O}^\dagger_P(t''',{\vec r}'''){\cal O}_P(0)|0>}]_{t''' \rightarrow \infty}}}]_{t'' \rightarrow \infty}\right]^{\frac{1}{2}}\nonumber \\ && \times \left[[\frac{\sum_{{\vec r}''} <0|{\cal O}^\dagger_P(t'',{\vec r}'') {\cal O}_P(0)|0>\times e^{ M_Pt''}}{e^{\int dt'' [\frac{\sum_{{\vec r}'''} <0|{\cal O}^\dagger_P(t''',{\vec r}''')[\sum_{q,{\bar q},g}\int dt'' \int d^3r'' \partial_i T^{i0}(t'',{\vec r}'')] {\cal O}_P(0)|0>}{\sum_{{\vec r}'''} <0|{\cal O}^\dagger_P(t''',{\vec r}'''){\cal O}_P(0)|0>}]_{t''' \rightarrow \infty}}}]_{t'' \rightarrow \infty} \right]^{\frac{1}{2}} \nonumber \\ &&\times [\frac{\sum_{{\vec r}''',{\vec r}'} e^{i{\vec q}\cdot ({\vec r}'-{\vec r}''')} <0|{\cal O}_P(t',{\vec r}')j^q_0(t''',{\vec r}''') {\cal O}_P(0)|0>}{[\sum_{{\vec r}'''} <0|{\cal O}_P(t''',{\vec r}''') {\cal O}_P(0)|0>][\sum_{{\vec r}'} e^{i{\vec p}\cdot {\vec r}'} <0|{\cal O}_P(t'-t''',{\vec r}') {\cal O}_P(0)|0>]}]_{t'>>>t''',~~~t'''\rightarrow \infty}\nonumber \\ \label{mem3} \end{eqnarray} where the initial proton is at rest and the final proton momentum is ${\vec p}={\vec q}$ with $Q^2=-q^2$. Eq. (\ref{mem3}) is the non-perturbative formula of the electric form factor $G_E(Q^2)$ of the proton, derived from first principles in QCD, which can be calculated by using the lattice QCD method. The charge radius $R_P$ of the proton is obtained from this electric form factor $G_E(Q^2)$ in eq. (\ref{mem3}) by using eq. (\ref{rpj}). \section{Conclusions} Recently there has been disagreement between various experiments about the value of the proton radius, which is known as the proton radius puzzle. Since the proton is not a point particle, the charge radius of the proton depends on the charge distribution (the form factor) of the partons inside the proton. Since this form factor is a non-perturbative quantity in QCD, it cannot be calculated by using the perturbative QCD (pQCD) method, but it can be calculated by using the lattice QCD method. In this paper we have formulated the lattice QCD method to study the charge radius of the proton. We have derived the non-perturbative formula of the charge radius of the proton from first principles in QCD, which can be calculated by using the lattice QCD method.
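\bigskip \noindent {\bf A schematic numerical illustration.} The ratio appearing in eq. (\ref{mej3}) is a three-point correlator divided by two two-point correlators, evaluated at large Euclidean time separations. As a rough illustration of how such a ratio could be processed on a lattice ensemble (this sketch is ours, not part of the derivation; the correlator arrays, the time extent and the plateau window are placeholder assumptions), one may write:
\begin{verbatim}
import numpy as np

def ratio(c3pt, c2pt_rest, c2pt_p, t_ins, t_snk):
    # R(t_ins) = C3(t_ins) / [ C2_rest(t_ins) * C2_p(t_snk - t_ins) ],
    # mirroring the structure of the correlator ratio above
    return c3pt[t_ins] / (c2pt_rest[t_ins] * c2pt_p[t_snk - t_ins])

T, t_snk = 64, 32                 # placeholder time extent and sink time
c3pt = np.random.rand(T)          # placeholder 3-point correlator data
c2pt_rest = np.random.rand(T)     # placeholder 2-point correlator, p = 0
c2pt_p = np.random.rand(T)        # placeholder 2-point correlator, momentum p

plateau = [ratio(c3pt, c2pt_rest, c2pt_p, t, t_snk) for t in range(12, 21)]
matrix_element = np.mean(plateau)  # plateau average at large separations
\end{verbatim}
\noindent The plateau average estimates the matrix element $<P({\vec p})|j^q_0|P>$, which is then multiplied by the kinematic factor $\sqrt{2E_P({\vec p})/(E_P({\vec p})+M_P)}$ of eq. (\ref{mpge}) to obtain $G_E(Q^2)$.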
\section{Introduction} We will assume all rings to be commutative, containing unity. Throughout this note we will use the notation $R[X_1,\dots,X_n]$ to mean a polynomial algebra in $n$ variables over $R$. Sometimes we will denote this ring by $R^{[n]}$. \medskip \noindent Let $A:=R[X_1,\dots,X_n]$. We recall that $m$ polynomials $f_1,\dots,f_m$ ($m\leq{n}$) in $A$ are said to form a partial coordinate system (of colength $n-m$) in $A$ if $A= R[f_1,\dots,f_m]^{[n-m]}$. If $m=n$ then we will say $f_1,\dots,f_n$ form a coordinate system in $A$. For an arbitrary $a\in{R}$, $f_1,\dots,f_m$ in $A$ are said to form an $a$-strongly partial residual coordinate system (of colength $n-m$) in $A$ if the images of $f_1,\dots,f_m$ form a partial coordinate system (of colength $n-m$) in $\dfrac{A}{aA}$ and also in $A_a$. We will say $f_1,\dots,f_m$ form a partial residual coordinate system (of colength $n-m$) in $A$ if, for each prime ideal $\mathpzc{p}$ of $R$ we have $k(\mathpzc{p})\otimes_R{A}=(k(\mathpzc{p})\otimes_{R}{R[f_1,\dots,f_m]})^{[n-m]}$, where $k(\mathpzc{p}):=\dfrac{R_{\mathpzc{p}}}{\mathpzc{p} R_{\mathpzc{p}}}$ is the residue field of $R$ at $\mathpzc{p}$. \medskip \noindent The following result has been proved by J. Berson, J. W. Bikker and A. van den Essen in \cite{BBE}. \begin{thm}\label{essen} Let $R$ be a ring containing $\mathbb Q$, $a\in{R}$ a non-zerodivisor and $A=R[X_1,\dots,X_n]$. If $n-1$ polynomials $f_1,\dots,f_{n-1}$ in $A$ form an $a$-strongly partial residual coordinate system in $A$, then $f_1,\dots,f_{n-1}$ form a partial coordinate system in $A$. \end{thm} They conjectured the following (\cite[Conjecture 4.4]{BBE}): \medskip \noindent {\bf Conjecture.} Theorem \ref{essen} also holds even when $a$ is a zerodivisor in $R$. \medskip In this note we observe that an affirmative solution to the above conjecture can be deduced from the following formulation of a result of Das-Dutta (\cite[Corollary 3.19]{DD}). \begin{thm}\label{colength 1} Let $R$ be a Noetherian ring containing $\mathbb Q$, $A=R^{[n]}$ and $f_1,\dots,f_{n-1}\in{A}$. Then the following are equivalent: \begin{enumerate} \item[\rm (i)] $f_1,\dots,f_{n-1}$ form a partial coordinate system in $A$. \item[\rm (ii)] $f_1,\dots,f_{n-1}$ form a partial residual coordinate system in $A$. \end{enumerate} \end{thm} \begin{rem} {\em We note the following observations on the above result: (i) Although the result is stated in the paper \cite{DD} for Noetherian domains containing $\mathbb Q$, the proof is an application of Theorem 3.13 and Theorem 2.4 in \cite{DD}, which hold over any Noetherian ring containing $\mathbb Q$ (not necessarily a domain). (ii) The case $n=2$ was previously proved by Bhatwadekar-Dutta (\cite[Theorem 3.1]{BDr}). } \end{rem} \section{Proof of the conjecture} \begin{thm} Let $R$ be a ring containing $\mathbb Q$, $a\in{R}$ be arbitrary and $A=R[X_1,\dots,X_n]$. If $n-1$ polynomials $f_1,\dots,f_{n-1}$ in $A$ form an $a$-strongly partial residual coordinate system of colength $1$ in $A$, then $f_1,\dots,f_{n-1}$ form a partial coordinate system in $A$. \end{thm} \begin{proof} Since $f_1,\dots,f_{n-1}$ form an $a$-strongly partial residual coordinate system in $A$, there exist $g, h\in{A}$ such that $$ \dfrac{R}{aR}[X_1,\dots,X_{n}]=\dfrac{R}{aR}[\overline{f_1},\dots,\overline{f_{n-1}},\overline{g}] $$ and $$ R_a[X_1,\dots,X_{n}]=R_a[f_1,\dots,f_{n-1},h].
$$ Hence we have, \begin{equation}\label{mod} X_i = G_i + a H_i \end{equation} for some $G_i\in{R[f_1,\dots,f_{n-1},g]}$ and $H_i\in{A}$ $(1\leq{i}\leq{n})$;\\ and \begin{equation}\label{local} X_i = \sum{\dfrac{e_{i,m_1,\dots,m_{n-1},m}}{a^{k_i}}{f_1^{m_1}f_2^{m_2}\dots f_{n-1}^{m_{n-1}}h^m}} \end{equation} where $e_{i,m_1,\dots,m_{n-1},m}\in{R}$, for all ${i,m_j,m}$ $(1\leq{i}\leq{n}, 1\leq{j}\leq{n-1},{m_j,m,k_i}\geq{0})$. Now, let $S$ be the $\mathbb Q$-subalgebra of $R$ generated by the subset of $R$ consisting of $a$; all coefficients of $G_i$ (for all $i$) as polynomials in $f_1,\dots,f_{n-1},g$; coefficients of $H_i$ (for all $i$); coefficients of $g, h, f_i$ (for all $i$) and $e_{i,m_1,\dots,m_{n-1},m}$ (for all $i$ and for all ${m,m_j})$. Then $S$ is a Noetherian $\mathbb Q$-algebra. Let $\mathpzc{p}$ be an arbitrary prime ideal of $S$ and $B:=S[X_1,\dots,X_n]$. If $a\in{\mathpzc{p}}$ then from (\ref{mod}), we have $k(\mathpzc{p})\otimes_S{B}=(k(\mathpzc{p})\otimes_{S}{S[f_1,\dots,f_{n-1}]})^{[1]}$. If $a\notin{\mathpzc{p}}$ then from (\ref{local}), we have $k(\mathpzc{p})\otimes_S{B}=(k(\mathpzc{p})\otimes_{S}{S[f_1,\dots,f_{n-1}]})^{[1]}$. So, $f_1,\dots,f_{n-1}$ form a partial residual coordinate system (of colength $1$) in $B$. Since $S$ is Noetherian, by Theorem \ref{colength 1}, $f_1,\dots,f_{n-1}$ form a partial coordinate system in $B$ and hence in $A$. \end{proof} \bigskip \noindent {\bf Acknowledgements.} The author acknowledges the Council of Scientific and Industrial Research (CSIR) for their research grant.
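\medskip \noindent {\bf A toy illustration of the zerodivisor case.} The following example is ours, not taken from \cite{BBE}; it merely illustrates the statement. Let $R=\mathbb Q[t]/(t^2)$, $a=\bar t$ (a zerodivisor, since $a^2=0$), $A=R[X_1,X_2]$ and $f_1=X_1+aX_2^2$. Modulo $aA$ the image of $f_1$ is $X_1$, which is a coordinate in $\dfrac{A}{aA}=\dfrac{R}{aR}[X_1,X_2]$; and since $a$ is nilpotent, $A_a$ is the zero ring, so the condition in $A_a$ holds trivially. Thus $f_1$ forms an $a$-strongly partial residual coordinate system of colength $1$ in $A$, and indeed $f_1$ forms a partial coordinate system: $A=R[f_1,X_2]$, because $X_1=f_1-aX_2^2$.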
\section{Introduction} \vspace{-0.1cm} To support the increasingly heterogeneous quality-of-service requirements of future wireless networks, an emerging communication paradigm, namely simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) \cite{IEEEhowto:YLiu2}, becomes appealing. In contrast to conventional RISs \cite{IEEEhowto:YLiu}, STAR-RISs are able to transmit and reflect the incident signal simultaneously, achieving full-space coverage. Therefore, how STAR-RISs perform in terms of coverage and capacity is a question of significant interest. Note that coverage and capacity optimization (CCO) is one of the typical operational tasks identified by the 3rd Generation Partnership Project (3GPP) \cite{3GPP}. Since coverage and capacity exhibit several conflicting relationships, optimizing them simultaneously is important. For example, high transmit power contributes to large coverage, but the resulting high inter-cell interference reduces the capacity performance. To this end, multi-objective machine learning (MOML) \cite{IEEEhowto:EBalevi} can be a potential solution. \par Conventional performance optimization for STAR-RISs assisted networks focuses on a single objective: capacity or coverage. For capacity performance, there are some early works. In \cite{Aldababsa2021}, a partitioning algorithm was proposed to determine the proper number of transmitting/reflecting elements that need to be assigned to each user, so as to maximize the system sum-rate. In STAR-RISs assisted non-orthogonal multiple access systems, the authors of \cite{Zuo2021} proposed a suboptimal two-layer iterative algorithm to maximize the achievable sum rate. For coverage performance, only one recent work has discussed its optimization problem. The STAR-RISs assisted two-user communication networks were studied in \cite{Wu12021}, where one-dimensional search-based algorithms were proposed to obtain the optimal coverage range. There are mainly three CCO methods based on MOML: 1) Keep one objective in the objective function and move the remaining objectives to the constraints; the obtained results are usually sub-optimal. 2) Assign a fixed weight to each objective. This method achieves the optimal result in a single scenario, but it cannot be reused for other weight combinations, i.e., other network operation designs. 3) Obtain a set of optimal solutions according to Pareto-based multi-objective optimization algorithms, where one of these solutions can be selected to meet any specific optimization requirement. For the first method, a reinforcement learning (RL)-based solution for coverage and capacity optimization using the base station antenna electrical tilt in mobile networks was proposed in \cite{Dandanov2017}. For the second method, in \cite{Skocaj2022}, a minimization of drive tests-driven deep RL algorithm was investigated to optimize coverage and capacity with fixed weights. For the third method, the authors in \cite{Dreifuerst2021} developed and compared two RL-based approaches for maximizing coverage and capacity. \par As can be seen from the related works, the CCO of STAR-RISs assisted wireless networks is still in its early stage. Modeling STAR-RISs assisted networks for coverage and capacity, and exploring RL-based solutions that simultaneously optimize the two metrics, remain challenging.
To solve these challenges and fully reap the advantages of STAR-RISs, in this paper, we propose a new ML algorithm based on proximal policy optimization (PPO), named multi-objective PPO (MO-PPO), to provide the maximal coverage and capacity for STAR-RISs assisted networks. The main contributions of this paper can be summarized as follows: 1) We propose a new model for a narrow-band downlink STAR-RISs assisted network and formulate the CCO problem by jointly optimizing the transmit power and the phase shift matrices. 2) We adopt a loss function-based update strategy for the MO-PPO algorithm, which is capable of simultaneously obtaining the maximum coverage and capacity. 3) We demonstrate that the loss function-based update strategy of the MO-PPO algorithm achieves higher benefits than the benchmarks. \vspace{-0.1cm} \section{System Model and Problem Formulation} \vspace{-0.1cm} \begin{figure*}[htbp] \setlength{\belowcaptionskip}{-0.7cm} \centering \includegraphics[scale = 0.25]{system_model_all.eps} \caption{Illustration of the considered STAR-RISs assisted networks.} \label{system_model} \end{figure*} As shown in Fig.~\ref{system_model}, we consider a narrow-band downlink STAR-RISs assisted network consisting of two single-antenna base stations (BSs) and $N_s$ STAR-RISs of the same size, each equipped with $K = K_HK_V$ reconfigurable elements, where $K_H$ and $K_V$ denote the number of elements per row and column, respectively. The serving range is defined as a square region with side length $R_s$. The BSs are located at the bottom left and bottom right corners with the same height $h_b$, while the STAR-RISs with height $h_{n_s}$ are deployed at designated locations in the square region. We assume a three-dimensional (3D) Cartesian coordinate system, where the origin is set at the top-left corner. The locations of the two BSs and the $n_s$-th STAR-RIS are denoted by $\mathrm{B}_1 = (R_s, 0, h_b)$, $\mathrm{B}_2 = (R_s, R_s, h_b)$, and $\mathrm{A}_{n_s} = (x_{n_s},y_{n_s},h_{n_s})$, respectively. Note that $h_{n_s}$ is the height of each STAR-RIS module (including its stand and the STAR-RIS itself), which is far lower than the height of the BSs. Therefore, there is a direct link between each BS and any given sampling point. To characterize the coverage and capacity, the region is discretized into numerous square grids with side length $R_g$, while the center point of each grid acts as the sample point. Accordingly, the total number of grids is $N = \lceil R_s/R_g \rceil^2$, where the set of sample points can be denoted as $\mathbf{s} = \{s_1,s_2,...,s_{N}\}$. In practical networks, in order to characterize the importance of each grid at each timestep $t$, two time-related weights, $w_{\mathrm{cov}, s_i}(t)$ and $w_{\mathrm{cap}, s_i}(t)$, are assigned to the coverage and capacity of each sample point $s_i$ ($i\in [1,N]$), respectively. Moreover, the weights are normalized, i.e., $\sum_{i=1}^{N}w_{\mathrm{cov}, s_i}(t) = 1$ and $\sum_{i=1}^{N}w_{\mathrm{cap}, s_i}(t) = 1$. In this system model, we study long-term communication over a time period $\mathcal{T}$. For each sample point at any timestep, the weight assignments $w_{\mathrm{cov}, s_i}(t)$ and $w_{\mathrm{cap}, s_i}(t)$ are influenced by the previous network performance and resource allocation strategy. Therefore, the considered problem can be regarded as a Markov Decision Process (MDP).
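As a small numerical illustration of this discretization, the sample points and importance weights can be generated as follows (a sketch with hypothetical parameter values; the uniform random weights merely stand in for the traffic-driven weights used later):
\begin{verbatim}
import numpy as np

R_s, R_g = 100.0, 10.0            # side lengths of the region and of a grid
n = int(np.ceil(R_s / R_g))       # grids per side, so N = n^2 sample points
centers = (np.arange(n) + 0.5) * R_g
sample_points = np.array([(x, y) for x in centers for y in centers])  # N x 2

# placeholder importance weights, normalized to sum to one as in the model
w_cov = np.random.rand(n * n); w_cov /= w_cov.sum()
w_cap = np.random.rand(n * n); w_cap /= w_cap.sum()
\end{verbatim}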
\vspace{-0.2cm} \subsection{Spatially Correlated Channel Model} In this section, the fading channels from the BSs to the STAR-RISs, from the STAR-RISs to the sample points, and from the BSs to the sample points are introduced, as well as their spatial channel correlations. Denote $\mathbf{\Phi}_{\delta, n_s}$ as the coefficients of the $n_s$-th STAR-RIS with mode $\delta$, where $\delta \in \{\mathrm{Re},\mathrm{Tr}\}$ represents the reflection and transmission modes. Due to the high path loss, this work assumes that signals are only reflected and transmitted by the STAR-RISs once. We consider non-ideal STAR-RISs with the same constant amplitude and continuous phase shifters in each mode, where the phase shifters can be expressed as: $\phi_{\delta,n_s,k} \in [0, 2\pi), \forall k \in \{1,2,\cdots,K\}$. The coefficients of the $n_s$-th STAR-RIS are denoted as $\mathbf{\Phi}_{\delta, n_s} = \mathrm{diag}\left(\sqrt{\beta_{\delta,n_s}}e^{j\phi_{\delta,n_s,1}}, ..., \sqrt{\beta_{\delta,n_s}}e^{j\phi_{\delta,n_s,K}}\right)$, where $\sqrt{\beta_{\delta,n_s}} \in (0, 1],\hspace{0.5em} \beta_{\mathrm{Re},n_s} + \beta_{\mathrm{Tr},n_s} = 1$. As shown in Fig.~\ref{system_model}, a spherical coordinate system is defined with azimuth angle $\psi$ and elevation angle $\theta$ based on the 3D space. Denote the area of each element as $M = M_HM_V$, where $M_H$ and $M_V$ are the horizontal width and vertical height, respectively. Thus, the total area of $K$ elements can be expressed as $M_a = KM$. For the $k$-th element, its location can be expressed as \cite{r}: \vspace{-0.1cm} \begin{align}\label{4} l_{k} = [0, x(k)M_H, y(k)M_V]^T, \end{align} \par \vspace{-0.1cm} \noindent where $x(k)$ = mod($k-1, K_H$) and $y(k)$ = $\lfloor (k-1)/K_H \rfloor$ are the indices of the $k$-th element. Here, $\mathrm{mod}(\cdot,\cdot)$ and $\lfloor \cdot \rfloor$ denote the modulus operation and the floor function, respectively. Assuming a plane wave with wavelength $\lambda$ is impinging on the STAR-RISs, the array response vector is then given by: \vspace{-0.1cm} \begin{align}\label{5} \mathbf{a}(\psi, \theta) = [e^{j\mathbf{b}(\psi, \theta)^{T}l_{1}},e^{j\mathbf{b}(\psi, \theta)^{T}l_{2}},\cdots,e^{j\mathbf{b}(\psi, \theta)^{T}l_{K}}]^T, \end{align} \par \vspace{-0.2cm} \noindent where $\mathbf{b}(\psi, \theta) \in \mathbb{R}^{3 \times 1}$ is the wave vector, which can be expressed as follows: \vspace{-0.1cm} \begin{align}\label{6} \mathbf{b}(\psi, \theta) = \frac{2\pi}{\lambda}[\cos(\theta)\cos(\psi), \cos(\theta)\sin(\psi), \sin(\theta)]^T. \end{align} \par \vspace{-0.2cm} Assume that these channels are independently distributed and the corresponding channel state information is perfect. Denote $\mathbf{h}_{a,n_s}$, $\mathbf{h}_{\delta,n_s,s_i}$, and $\mathbf{h}_{a,s_i}$ as the channel from the $a$-th BS to the $n_s$-th STAR-RIS, from the $n_s$-th STAR-RIS to the $s_i$-th sample point with mode $\delta$, and from the $a$-th BS to the $s_i$-th sample point, respectively.
Here, the channels $\mathbf{h}_{a,n_s}$, $\mathbf{h}_{\delta,n_s,s_i}$, and $\mathbf{h}_{a,s_i}$ can be modeled by the Rician fading model and are expressed as: \vspace{-0.1cm} \begin{align}\label{70} \mathbf{h}_{a,n_s} = \sqrt{L_{a\mathrm{R}}} \Big( \sqrt{\frac{\alpha_{a\mathrm{R}}}{1+\alpha_{a\mathrm{R}}}}\mathbf{h}_{a,n_s}^{\mathrm{LOS}} + \sqrt{\frac{1}{1+\alpha_{a\mathrm{R}}}}\mathbf{h}_{a,n_s}^{\mathrm{NLOS}} \Big), \end{align} \vspace{-0.6cm} \begin{align}\label{71} \mathbf{h}_{\delta,n_s,s_i} = \sqrt{L_{\mathrm{RP}}} \Big( \sqrt{\frac{\alpha_{\mathrm{RP}}}{1+\alpha_{\mathrm{RP}}}}\mathbf{h}_{n_s,s_i}^{\mathrm{LOS}} + \sqrt{\frac{1}{1+\alpha_{\mathrm{RP}}}}\mathbf{h}_{n_s,s_i}^{\mathrm{NLOS}} \Big), \end{align} \vspace{-0.6cm} \begin{align}\label{72} \mathbf{h}_{a,s_i} = \sqrt{L_{a\mathrm{P}}} \Big( \sqrt{\frac{\alpha_{a\mathrm{P}}}{1+\alpha_{a\mathrm{P}}}}\mathbf{h}_{a,s_i}^{\mathrm{LOS}} + \sqrt{\frac{1}{1+\alpha_{a\mathrm{P}}}}\mathbf{h}_{a,s_i}^{\mathrm{NLOS}} \Big), \end{align} \par \vspace{-0.2cm} \noindent where $L_{(\mathrm{u})}$ and $\alpha_{(\mathrm{u})}, \mathrm{u} \in \{a\mathrm{R},\mathrm{RP},a\mathrm{P}\}$ denote the corresponding path loss and Rician factor, respectively. $h_{a,s_i}^{\mathrm{LOS}}$ $\sim$ $\mathcal{CN}(0, 1)$ denotes the line-of-sight (LoS) component of the channel from the $a$-th BS to the $s_i$-th sample point, while $\mathbf{h}_{a,n_s}^{\mathrm{LOS}} = \mathbf{a}(\psi^{a\mathrm{R}}, \theta^{a\mathrm{R}}) = \mathbf{a}\{\mathrm{arcsin}[ (h_{b}-h_{n_s}) /d_{a, n_s} ], \mathrm{arccos}[(R_s-x_{n_s})/\overline{d}_{a,n_s}]\}$ and $\mathbf{h}_{n_s,s_i}^{\mathrm{LOS}} = \mathbf{a}(\psi^{\mathrm{RP}}, \theta^{\mathrm{RP}}) = \mathbf{a}\{\mathrm{arcsin}(h_{n_s} /d_{n_s,s_i}), \mathrm{arccos}[(x_{n_s}-x_{s_i})/\overline{d}_{n_s,s_i}]\}$ are the deterministic LoS components for the channels from the $a$-th BS to the $n_s$-th STAR-RIS, and from the $n_s$-th STAR-RIS to the $s_i$-th sample point, respectively. Among them, $d_{a,n_s}$ and $d_{n_s,s_i}$ denote the 3D distances between the $a$-th BS and the $n_s$-th STAR-RIS, and between the $n_s$-th STAR-RIS and the $s_i$-th sample point, while $\overline{d}_{a,n_s}$ and $\overline{d}_{n_s,s_i}$ denote the corresponding 2D distances. $x_{n_s}$ and $x_{s_i}$ indicate the $x$-coordinates of the $n_s$-th STAR-RIS and the $s_i$-th sample point, respectively. $\mathbf{h}_{a,n_s}^{\mathrm{NLOS}} \sim \mathcal{CN}\big(0, \mathbb{E}\big[\mathbf{h}_{a,n_s}^{\mathrm{NLOS}}(\mathbf{h}_{a,n_s}^{\mathrm{NLOS}})^{H}\big]\big)$, $\mathbf{h}_{n_s,s_i}^{\mathrm{NLOS}} \sim \mathcal{CN}\big(0, \mathbb{E}\big[\mathbf{h}_{n_s,s_i}^{\mathrm{NLOS}}(\mathbf{h}_{n_s,s_i}^{\mathrm{NLOS}})^{H}\big]\big)$, and $\mathbf{h}_{a,s_i}^{\mathrm{NLOS}} \sim \mathcal{CN}(0, 1)$ are the non-line-of-sight (NLoS) components modeled as Rayleigh fading. Furthermore, the path loss $L_{(\mathrm{u})}$ can be modeled as $L_{\mathrm{u}} = Cd_\mathrm{v}^{-\gamma_\mathrm{v}}, \mathrm{v} \in \{\{a,n_s\},\{n_s,s_i\},\{a,s_i\}\}$, where $C$ denotes the path loss at the reference distance of 1 meter and $\gamma_\mathrm{v}$ represents the path loss factor. \vspace{-0.2cm} \subsection{Signal Model} \vspace{-0.1cm} Note that the size of the STAR-RIS module affects the direct link.
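To make the channel model concrete, a minimal numerical sketch is given below. The parameter values are hypothetical, and for brevity the NLoS parts are drawn i.i.d., i.e., the spatial correlation $\mathbb{E}\big[\mathbf{h}^{\mathrm{NLOS}}(\mathbf{h}^{\mathrm{NLOS}})^{H}\big]$ is not modeled:
\begin{verbatim}
import numpy as np

def wave_vector(psi, theta, lam):
    # wave vector b(psi, theta) defined above
    return (2 * np.pi / lam) * np.array(
        [np.cos(theta) * np.cos(psi), np.cos(theta) * np.sin(psi),
         np.sin(theta)])

def array_response(psi, theta, lam, K_H, K_V, M_H, M_V):
    # element positions l_k and array response a(psi, theta) defined above
    k = np.arange(K_H * K_V)
    l = np.stack([np.zeros_like(k, float),
                  (k % K_H) * M_H, (k // K_H) * M_V], axis=1)   # K x 3
    return np.exp(1j * l @ wave_vector(psi, theta, lam))        # K-vector

def rician_channel(L, kappa, h_los):
    # h = sqrt(L) ( sqrt(kappa/(1+kappa)) h_los + sqrt(1/(1+kappa)) h_nlos )
    K = h_los.size
    h_nlos = (np.random.randn(K) + 1j * np.random.randn(K)) / np.sqrt(2)
    return np.sqrt(L) * (np.sqrt(kappa / (1 + kappa)) * h_los
                         + np.sqrt(1 / (1 + kappa)) * h_nlos)

# example: BS -> STAR-RIS link with hypothetical geometry and path loss
lam, K_H, K_V, M_H, M_V = 0.1, 4, 2, 0.05, 0.05
h_los = array_response(0.3, 0.2, lam, K_H, K_V, M_H, M_V)
h = rician_channel(L=1e-6, kappa=3.0, h_los=h_los)
\end{verbatim}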
The received signal $y_{a,n_s,s_i} \in \mathbb{C}$ from the $a$-th BS to the $s_i$-th sample point via the $n_s$-th STAR-RIS can be written as: \vspace{-0.1cm} \begin{align}\label{signal model} y_{a,n_s,s_i} = \left(\mathbf{h}_{\delta,n_s,s_i}^\mathrm{H} \mathbf{\Phi}_{\delta, n_s} \mathbf{h}_{a,n_s} + \mathbf{h}_{a,s_i}\right)x + n_{a,n_s,s_i}, \end{align} \par \vspace{-0.2cm} \noindent where the total transmit power is $P_t = |x|^2$ and $n_{a,n_s,s_i} \sim \mathcal{CN}(0, \sigma^2)$ is the additive white Gaussian noise with variance $\sigma^2$. Based on the received signal power, the reference signal received power (RSRP) can be defined as the maximal useful signal power from all possible sources. The RSRP at the sample point $s_i$ is given by: \vspace{-0.1cm} \begin{align}\label{8} \mathrm{RSRP}_{s_i} = \max\limits_{a \in \{1, 2\}, n_s \in \{1, 2, \cdots, N_s\}} |y_{a,n_s,s_i}|^2. \end{align} \par \vspace{-0.2cm} The achievable signal-to-interference-plus-noise ratio (SINR) of the $s_i$-th sample point is calculated as follows: \vspace{-0.1cm} \begin{align}\label{9} \mathrm{SINR}_{a,n_s,s_i} = \frac{|y_{a,n_s,s_i} - n_{a,n_s,s_i}|^2}{ \sum_{n_s^{'}=1,n_s^{'}\neq n_s}^{N_s} |y_{a^{'},n_s^{'},s_i}-n_{a^{'},n_s^{'},s_i}|^2+\sigma^2}, \end{align} \par \vspace{-0.2cm} \noindent where $a^{'}=2$ if $a=1$, and $a^{'}=1$ if $a=2$. Assuming the minimal required RSRP for all sample points is $\mathrm{R}_{th}$, the weighted coverage ratio at time $t$ can be written as \vspace{-0.1cm} \begin{align}\label{10} \mathrm{Coverage}(t) = \frac{|\mathbf{w}_{\mathrm{cov}, \check{\mathbf{s}}(t)} \cdot \check{\mathbf{s}}(t)|}{N}, \end{align} \par \vspace{-0.2cm} \noindent where $\check{\mathbf{s}}(t) = \{\check{s}_1(t),\check{s}_2(t),\cdots,\check{s}_{\tilde{N}}(t)\}$ is the set of the sample points at time $t$ that satisfy the condition $\mathrm{RSRP}_{\check{s}_{\tilde{n}}(t)} \geq \mathrm{R}_{th}, \check{s}_{\tilde{n}}(t) \in \check{\mathbf{s}}(t)$, and $\mathbf{w}_{\mathrm{cov}, \check{\mathbf{s}}}(t) = \{w_{\mathrm{cov}, \check{s}_1}(t), w_{\mathrm{cov}, \check{s}_2}(t), \cdots, w_{\mathrm{cov}, \check{s}_{\tilde{N}}}(t)\}$ collects the corresponding normalized coverage weights for the sample points $\check{\mathbf{s}}(t)$. The network capacity is mainly determined by the SINR, so at time $t$ the weighted capacity can be represented by \vspace{-0.1cm} \begin{align}\label{11} \mathrm{Capacity}(t) = \sum_{i=1}^{N} w_{\mathrm{cap}, s_i}(t) \cdot \mathrm{B} \log_2\left(1+\mathrm{SINR}_{a^*,n_s^*,s_i}(t) \right), \end{align} \par \vspace{-0.2cm} \noindent where $\mathrm{B}$ is the system bandwidth and $a^*, n_s^*= \arg\max_{a \in \{1, 2\}, n_s \in \{1, 2, \cdots, N_s\}} |y_{a,n_s,s_i}|^2$. \subsection{Problem Formulation} We focus on maximizing the long-term coverage and capacity by optimizing the transmit power, the reflection phase shift matrix, the transmission phase shift matrix, and the time period $\mathcal{T}$.
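Before formulating the optimization problem, we note that both metrics are straightforward to evaluate once the received signal powers are known. The following minimal sketch is ours (the array \texttt{p[a, n, i]} holding the useful power of BS $a$ via STAR-RIS $n$ at point $s_i$ is a hypothetical input, and the coverage and interference bookkeeping reflect our reading of \eqref{9} and \eqref{10}):
\begin{verbatim}
import numpy as np

def weighted_metrics(p, w_cov, w_cap, R_th, B, noise):
    # p[a, n, i]: useful signal power of BS a via STAR-RIS n at point s_i
    A, Ns, N = p.shape
    rsrp = p.reshape(A * Ns, N).max(axis=0)    # best link per sample point
    coverage = w_cov[rsrp >= R_th].sum()       # weighted covered fraction
    capacity = 0.0
    for i in range(N):
        a, n = np.unravel_index(p[:, :, i].argmax(), (A, Ns))
        interf = p[1 - a, :, i].sum() - p[1 - a, n, i]  # other BS, n' != n
        capacity += w_cap[i] * B * np.log2(1 + p[a, n, i] / (interf + noise))
    return coverage, capacity
\end{verbatim}
\noindent In practice, \texttt{p} would be filled from $|y_{a,n_s,s_i}-n_{a,n_s,s_i}|^2$ using the channels of the previous subsection, and the routine would be called at every timestep $t$ to obtain $\mathrm{Coverage}(t)$ and $\mathrm{Capacity}(t)$.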
The formulated problem can be expressed as follows: \vspace{-0.3cm} \begin{align} &\underset{P_t, \mathbf{\Phi}_{\mathrm{Re}, n_s}, \mathbf{\Phi}_{\mathrm{Tr}, n_s}, \mathcal{T}}{\max} \hspace*{1em} \sum_{t=1} ^ \mathcal{T} \big[\mathrm{Coverage}(t), \mathrm{Capacity}(t)\big] \label{12}\\ &\mathrm{s.\ t.} \hspace*{1em} 0<P_t\le P_{t,\mathrm{max}},\tag{\ref{12}{a}} \label{12a}\\ & \hspace*{2.75em} 0< \mathrm{tr}(\mathbf{\Phi}^H_{\delta,n_s}\mathbf{\Phi}_{\delta,n_s})< 1, \tag{\ref{12}{b}} \label{12b}\\ & \hspace*{2.75em} 0<\mathrm{tr}(\mathbf{\Phi}^H_{\mathrm{Re},n_s}\mathbf{\Phi}_{\mathrm{Re},n_s}) + \mathrm{tr}(\mathbf{\Phi}^H_{\mathrm{Tr},n_s}\mathbf{\Phi}_{\mathrm{Tr},n_s})\le 1, \tag{\ref{12}{c}} \label{12c} \end{align} \par \vspace{-0.3cm} \noindent where $P_{t,\mathrm{max}}$ and $\mathbf{C} \subset \mathbb{R}^2$ denote the permitted maximum transmit power and the considered serving area, respectively. Constraint \eqref{12a} limits the range of the transmit power. According to the energy conservation principle, constraints \eqref{12b} and \eqref{12c} ensure that the energy of each mode and the sum energy of the reflected and transmitted signals do not exceed one. However, problem \eqref{12} is difficult to solve for the following reasons. Firstly, the NLoS components of the STAR-RISs assisted links are hard to determine before the STAR-RISs deployment, since there are infinitely many possible STAR-RIS locations and the distribution of coverage and capacity over the sample points is non-concave. Secondly, the distribution weights $w_{\mathrm{cov}, s_i}(t)$ and $w_{\mathrm{cap}, s_i}(t)$ at time $t$ used for calculating the coverage and capacity are not continuous functions. Thirdly, with respect to the continuous time $t$, the optimization involves infinitely many variables, which is difficult to handle since adjacent timesteps are coupled through the Markov chain. Thus, conventional non-convex optimization methods are not suitable for overcoming these difficulties. In the next section, the Pareto optimal-based MO-PPO algorithm is invoked to solve this problem. \section{Pareto optimal-based MO-PPO Algorithm} In this section, we first elaborate on the MDP in the MO-PPO algorithm. Then, the update strategy of the Pareto optimal (PO)-based MO-PPO algorithm is proposed to obtain the optimal policy for the system model. \subsection{MO-PPO Framework} In the MO-PPO algorithm, the MDP is represented by a tuple $\langle \mathbf{S}, \mathbf{A}, \mathbf{p}, \mathbf{R}\rangle$ with state space $\mathbf{S}$, action space $\mathbf{A}$, transition probability matrix $\mathbf{p}$, and reward $\mathbf{R}$. Define a controller as an agent, which controls both BSs and the phase shifters, to develop the policy from the BSs to the sample points via the STAR-RISs. At each timestep $t$, the controller observes the state $\mathbf{S}_t$ from the state space $\mathbf{S}$ and carries out an action $\mathbf{A}_t$ from the action space $\mathbf{A}$; a reward is received and the transition to the next state $\mathbf{S}_{t+1}$ is made. In this system model, the locations of the STAR-RISs are randomly chosen. Note that the locations of the STAR-RISs do not overlap. Therefore, the distance between any BS and the $s_i$-th sample point is fixed, while the coverage and capacity are mainly determined by the distance between the STAR-RISs and the $s_i$-th point and the corresponding phase shifts of the STAR-RISs, according to \eqref{10} and \eqref{11}.
Thus, the state $\mathbf{S}_{t}$ can be defined as follows: \vspace{-0.1cm} \begin{align}\label{state space} \mathbf{S}_t = \begin{bmatrix} \beta_{\mathrm{Re},n_s}(t), \beta_{\mathrm{Tr},n_s}(t), \mathbf{\Phi}_{\mathrm{Re},n_s}(t), \mathbf{\Phi}_{\mathrm{Tr},n_s}(t) \end{bmatrix}. \end{align} \par \vspace{-0.2cm} For the action $\mathbf{A}_t$, the amplitude $\beta_{\mathrm{Tr},n_s}$ of the STAR-RISs is discretized with a small step $z$ into numerous values in $(0, 1)$, while $\beta_{\mathrm{Re},n_s}$ is determined by $(1-\beta_{\mathrm{Tr},n_s})$. The phase shifters follow the continuous phase definition $[0, 2\pi)$ of the STAR-RISs. Accordingly, the action is given by: \vspace{-0.1cm} \begin{align}\label{action space} \mathbf{A}_t = \begin{bmatrix} \Delta \beta_{\mathrm{Re},n_s}, \Delta \beta_{\mathrm{Tr},n_s}, \Delta \phi_{\mathrm{Re},n_s}, \Delta \phi_{\mathrm{Tr},n_s} \end{bmatrix}, \end{align} \par \vspace{-0.2cm} \noindent where $\Delta \beta_{\mathrm{Re},n_s} \in \{z,2z,\cdots,1-z\}$, $\Delta \beta_{\mathrm{Tr},n_s} \in \{1-z,1-2z,\cdots,z\}$ and $\Delta \phi_{\delta,n_s} = \{ \phi_{\delta,n_s,1},\phi_{\delta,n_s,2},\cdots,\phi_{\delta,n_s,K}\}$ denote the possible values of the reflection amplitude, the transmission amplitude, and the phases of the $n_s$-th STAR-RIS with mode $\delta$, respectively. For the $k$-th element, the phase is randomly selected from $[0,2\pi)$. To obtain the maximum coverage and capacity that the BSs can achieve in a time period $\mathcal{T}$, the reward is defined as the difference of coverage and capacity between adjacent timesteps, which can be expressed as: \vspace{-0.1cm} \begin{align}\label{Multi-objective reward} \mathbf{R}_t(\mathbf{S}_t, \mathbf{A}_t) = \begin{bmatrix} \Delta \mathrm{Cov}_{t \rightarrow t+1}, \Delta \mathrm{Cap}_{t \rightarrow t+1} \end{bmatrix}. \end{align} \par \vspace{-0.2cm} For the loss function in the PPO algorithm, there are two approaches, the clipped surrogate objective and the adaptive KL penalty coefficient, either of which can be used to evaluate the loss function. \vspace{-0.3cm} \subsection{Loss Function-based Update Strategy} In this work, we consider an update strategy for the Pareto optimal-based MO-PPO algorithm, i.e., the loss function-based update strategy, where the multi-task learning (MTL) method is employed. Different from the conventional update strategy, there are multiple gradient policies that need to be updated simultaneously. In the MTL-based MO-PPO problem, the empirical risk minimization formulation is generally followed: \vspace{-0.1cm} \begin{align}\label{empirical risk minimization} \min_{\pmb{\overline{\theta}}} \sum_{m=1}^{M} \varphi^m \hat{\mathcal{L}}^m(\pmb{\overline{\theta}}), \end{align} \par \vspace{-0.2cm} \noindent where $\varphi^m$ and $\hat{\mathcal{L}}^m(\pmb{\overline{\theta}})$ denote the weight and the empirical loss of the $m$-th task, respectively. Consider two sets of solutions $\pmb{\overline{\theta}}_1$ and $\pmb{\overline{\theta}}_2$: if $\hat{\mathcal{L}}^{1}(\pmb{\overline{\theta}}_1) > \hat{\mathcal{L}}^{1}(\pmb{\overline{\theta}}_2)$ and $\hat{\mathcal{L}}^{2}(\pmb{\overline{\theta}}_1) < \hat{\mathcal{L}}^{2}(\pmb{\overline{\theta}}_2)$, the two solutions are mutually non-dominated and therefore belong to the Pareto front.
In this case, the MTL problem can be formulated as an MO optimization in order to explore the optimal results for conflicting objectives, where the vector-valued loss $\pmb{\mathcal{L}}$ is employed as follows: \vspace{-0.1cm} \begin{align}\label{vector loss} \min_{\overline{\pmb{\overline{\theta}}}} \pmb{\mathcal{L}}(\overline{\pmb{\overline{\theta}}}) = \min_{\pmb{\overline{\theta}}} [\hat{\mathcal{L}}^{1}(\pmb{\overline{\theta}}), \hat{\mathcal{L}}^{2}(\pmb{\overline{\theta}}), \cdots, \hat{\mathcal{L}}^{M}(\pmb{\overline{\theta}})]^T. \end{align} \par \vspace{-0.2cm} Hence, the optimization in equation \eqref{vector loss} amounts to finding PO solutions. Define $\mathcal{F}=\{\pmb{\mathcal{L}}(\pmb{\overline{\theta}})\}, \pmb{\overline{\theta}} \in \pmb{\overline{\Theta}}$ as the Pareto front, where $\pmb{\overline{\theta}}$ and $\pmb{\overline{\Theta}}$ denote any one set of optimal parameters and all possible sets of optimal parameters, respectively. \subsubsection{Multiple Gradient Descent Algorithm (MGDA)} The multiple gradient descent algorithm (MGDA) \cite{MGDA} is a suitable method for converging to a Pareto stationary solution. According to the Karush-Kuhn-Tucker (KKT) conditions, there exist $\nu_1,\nu_2,\cdots,\nu_M$ such that: \begin{itemize} \item $\nu_1,\nu_2,\cdots,\nu_M \geq 0$. \item $\sum_{m=1}^{M}\nu_m = 1$ and $\sum_{m=1}^{M}\nu_m \nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^m(\pmb{\overline{\theta}}) = 0$. \end{itemize} \par Before applying MGDA, note that the objectives may have values of different scales, and MGDA is sensitive to the value ranges. Thus, the following gradient normalization is invoked to alleviate the scale differences: \vspace{-0.1cm} \begin{align}\label{normalization} \nabla_{\pmb{\overline{\theta}}}\pmb{\mathcal{L}}(\pmb{\overline{\theta}}) = \frac{\nabla_{\pmb{\overline{\theta}}}\pmb{\mathcal{L}}(\pmb{\overline{\theta}})}{\pmb{\mathcal{L}}(\hat{\pmb{\overline{\theta}}})}, \end{align} \par \vspace{-0.2cm} \noindent where $\hat{\pmb{\overline{\theta}}}$ denotes the initial parameters of the model. Consequently, the range of each loss function is limited to $[0, 1]$. \begin{definition}\label{definition 1} A solution $\pmb{\overline{\theta}}_1$ dominates a solution $\pmb{\overline{\theta}}_2$ if $\hat{\mathcal{L}}^{m}(\pmb{\overline{\theta}}_1) \leq \hat{\mathcal{L}}^{m}(\pmb{\overline{\theta}}_2)$ for all objectives, while $\hat{\mathcal{L}}^{n}(\pmb{\overline{\theta}}_1) < \hat{\mathcal{L}}^{n}(\pmb{\overline{\theta}}_2)$ for at least one objective, $\forall m,n \in \{1,2,\cdots,M\}$. \end{definition} \begin{definition}\label{definition 2} A solution $\pmb{\overline{\theta}}_1$ is a PO solution if no other solution $\pmb{\overline{\theta}}_2$ dominates $\pmb{\overline{\theta}}_1$. \end{definition} \begin{definition}\label{definition 3} The set of all non-dominated solutions $\hat{\pmb{\overline{\theta}}}$ forms the Pareto set. \end{definition} A solution that satisfies the KKT conditions above is called a Pareto stationary solution, and every Pareto optimal solution is a Pareto stationary solution.
Since problem \eqref{12} has two objectives, the optimization problem can be defined as follows: \vspace{-0.1cm} \begin{align}\label{QCOP2} \min_{\nu \in [0,1]} ||\nu \nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^1(\pmb{\overline{\theta}}) + (1-\nu) \nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^2(\pmb{\overline{\theta}})||^2_2, \end{align} \par \vspace{-0.2cm} \noindent where $||\cdot||^2_2$ and $\nabla_{[\cdot]}$ denote the squared L2 norm and the gradient operator, respectively. Defining $\nabla_{\overline{\pmb{\overline{\theta}}}} \mathcal{L}(\overline{\pmb{\overline{\theta}}}) = \sum_{m=1}^{M}\nu_m \nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^m(\pmb{\overline{\theta}})$, we have that: if $\nabla_{\overline{\pmb{\overline{\theta}}}} \mathcal{L}(\overline{\pmb{\overline{\theta}}}) = 0$, the solution is Pareto stationary; otherwise, it is not Pareto stationary and $\nabla_{\overline{\pmb{\overline{\theta}}}} \mathcal{L}(\overline{\pmb{\overline{\theta}}})$ is the general gradient descent (GD) vector. The optimization problem defined in \eqref{QCOP2} is equivalent to finding a minimum-norm point in the convex hull, which is a convex quadratic problem with linear constraints. Thus, an analytical solution to equation \eqref{QCOP2} can be expressed as: \vspace{-0.3cm} \begin{align}\label{QCOP2-solution} \nu = \bigg\{ \frac{[\nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^2(\pmb{\overline{\theta}}) - \nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^1(\pmb{\overline{\theta}})]^T\nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^2(\pmb{\overline{\theta}})}{||\nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^1(\pmb{\overline{\theta}}) - \nabla_{\pmb{\overline{\theta}}} \hat{\mathcal{L}}^2(\pmb{\overline{\theta}})||^2_2} \bigg\}_{[0,1]}, \end{align} \par \vspace{-0.2cm} \noindent where $\{\cdot\}_{[0,1]}$ represents clipping $\nu$ to $[0,1]$. Alternating the optimization of the GD vector and $\nu$ produces different values of $\nu$, which cover the Pareto optimal solutions under the constraints and form the Pareto front. According to the system model, it is suitable to select one PO solution as the optimal result. Therefore, we compare the worse of the two objective values attained by each Pareto optimal solution, and the solution with the smaller such value is the final desired optimal solution. \subsubsection{Loss Function} Our goal is to train one policy containing two sub-policies, where each objective has a specific loss function and all parameters are shared. Thus, combining with the PPO algorithm, the loss functions for the MO-PPO algorithm based on the no clipping or penalty method, the clipped method, and the KL penalty method can be expressed as \eqref{loss function1}, \eqref{loss function2}, and \eqref{loss function3}, respectively, where $\hat{\mathbf{A}}_t$ is an advantage estimator, expressed as \eqref{advantage}. The pseudo code of the algorithm is shown in \textbf{Algorithm~\ref{loss-MOPPO}}.
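To illustrate the min-norm step concretely, \eqref{QCOP2-solution} can be implemented in a few lines. The following sketch uses toy gradient vectors; \texttt{g1} and \texttt{g2} stand for the normalized gradients of the two losses:
\begin{verbatim}
import numpy as np

def min_norm_two_tasks(g1, g2):
    # Closed-form minimizer of ||nu*g1 + (1-nu)*g2||_2^2 over nu in [0, 1]
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:              # identical gradients: any nu is optimal
        return 0.5, g1
    nu = float(np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0))
    return nu, nu * g1 + (1.0 - nu) * g2

# toy check with conflicting gradients
g_cov, g_cap = np.array([1.0, 0.0]), np.array([0.0, 2.0])
nu, g = min_norm_two_tasks(g_cov, g_cap)   # nu = 0.8, g = [0.8, 0.4]
\end{verbatim}
\noindent In the toy check, the combined direction $(0.8, 0.4)$ has a smaller norm than either input gradient, as expected for the minimum-norm point of the convex hull.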
\begin{figure*}[hbp] \normalsize \begin{align} &\pmb{\mathcal{L}}^{\mathrm{NCP}}(\pmb{\overline{\theta}}) = \min_{\nu \in [0, 1]} \Bigg|\Bigg| \nu \mathbb{E}_{t}^{1}\Big\{\mathrm{min}\Big[\mathrm{\frac{\pi_{\overline{\theta}^{*}}(\mathbf{S}_t, \mathbf{A}_t)}{\pi_{\overline{\theta}}(\mathbf{S}_t, \mathbf{A}_t)}\hat{\mathbf{A}}_{t}^{\pi^{*}}}\Big]\Big\} + (1-\nu) \mathbb{E}_{t}^{2}\Big\{\mathrm{min}\Big[\mathrm{\frac{\pi_{\overline{\theta}^{*}}(\mathbf{S}_t, \mathbf{A}_t)}{\pi_{\overline{\theta}}(\mathbf{S}_t, \mathbf{A}_t)}\hat{\mathbf{A}}_{t}^{\pi^{*}}}\Big]\Big\}\Bigg|\Bigg|^{2}_{2}, \label{loss function1}\\ &\pmb{\mathcal{L}}^{\mathrm{CLIP}}(\pmb{\overline{\theta}}) = \min_{\nu \in [0, 1]} \Bigg|\Bigg| \nu \mathbb{E}_{t}^{1}\Big\{\mathrm{min}\Big[\mathrm{\frac{\pi_{\overline{\theta}^{*}}(\mathbf{S}_t, \mathbf{A}_t)}{\pi_{\overline{\theta}}(\mathbf{S}_t, \mathbf{A}_t)}\hat{\mathbf{A}}_{t}^{\pi^{*}},clip\Big(\frac{\pi_{\overline{\theta}^{*}}(S_t, A_t)}{\pi_{\overline{\theta}}(S_t, A_t)}, 1-\epsilon, \epsilon\Big)\hat{\mathbf{A}}_{t}^{\pi^{*}}}\Big]\Big\} \nonumber \\ &\hspace{16em} + (1-\nu) \mathbb{E}_{t}^{2}\Big\{\mathrm{min}\Big[\mathrm{\frac{\pi_{\overline{\theta}^{*}}(\mathbf{S}_t, \mathbf{A}_t)}{\pi_{\overline{\theta}}(\mathbf{S}_t, \mathbf{A}_t)}\hat{\mathbf{A}}_{t}^{\pi^{*}},clip\Big(\frac{\pi_{\overline{\theta}^{*}}(S_t, A_t)}{\pi_{\overline{\theta}}(S_t, A_t)}, 1-\epsilon, \epsilon\Big)\hat{\mathbf{A}}_{t}^{\pi^{*}}}\Big]\Big\}\Bigg|\Bigg|^{2}_{2}, \label{loss function2}\\ &\pmb{\mathcal{L}}^{\mathrm{KL}}(\pmb{\overline{\theta}}) = \min_{\nu \in [0, 1]} \Bigg|\Bigg| \nu \mathbb{E}_{t}^{1}\Big\{\mathrm{min}\Big[\mathrm{\frac{\pi_{\overline{\theta}^{*}}(\mathbf{S}_t, \mathbf{A}_t)}{\pi_{\overline{\theta}}(\mathbf{S}_t, \mathbf{A}_t)}\hat{\mathbf{A}}_{t}^{\pi^{*}}, \tilde{\beta}KL(\pi_{\overline{\theta}^{*}}(\mathbf{S}_t), \pi_{\overline{\theta}}(\mathbf{S}_t))}\Big]\Big\} \nonumber \\ &\hspace{16em} + (1-\nu) \mathbb{E}_{t}^{2}\Big\{\mathrm{min}\Big[\mathrm{\frac{\pi_{\overline{\theta}^{*}}(\mathbf{S}_t, \mathbf{A}_t)}{\pi_{\overline{\theta}}(\mathbf{S}_t, \mathbf{A}_t)}\hat{\mathbf{A}}_{t}^{\pi^{*}},\tilde{\beta}KL(\pi_{\overline{\theta}^{*}}(\mathbf{S}_t), \pi_{\overline{\theta}}(\mathbf{S}_t))}\Big]\Big\}\Bigg|\Bigg|^{2}_{2}, \label{loss function3} \end{align} \hrulefill \vspace*{0pt} \end{figure*} \begin{figure*}[hbp] \normalsize \begin{align}\label{advantage} \hat{\mathbf{A}}_{t}^{\pi^{*}} &= \sum_{t}^{\overline{T}}\mathbf{Q}_{\pi}(\mathbf{S}_t, \mathbf{A}_t) - V_{\pi}(\mathbf{S}_t) = \mathbf{R}_{t} + \gamma \mathbf{R}_{t+1} + \gamma^2 \mathbf{R}_{t+2} + \cdots + \gamma^{\overline{T}-t+1}\mathbf{R}_{\overline{T}-1} + \gamma^{\overline{T}-t}V_{\pi}(\mathbf{S}_{\overline{T}}) - V_{\pi}(\mathbf{S}_t). \end{align} \hrulefill \vspace*{0pt} \end{figure*} \vspace{-0.2cm} \begin{algorithm}[htbp] \caption{Pareto optimal-based MO-PPO algorithm, loss function-based update strategy} \label{loss-MOPPO} \begin{algorithmic}[1] \REQUIRE ~~\\% Input PPO network structure.\\ \ENSURE The policy network.\\ \STATE \textbf{Initialize:} Hyperparameters of PPO network. \FOR {iteration = 1, 2, $\cdots$} \FOR {objective = 1, 2, $\cdots$} \FOR {actor = 1, 2, $\cdots$, N} \STATE Run policy $\pi_{\overline{\theta}}$ in environment for $T$ timesteps for each objective. \STATE Compute advantage estimates $\hat{A}_{1}, \cdots, \hat{A}_{T}$ for each objective. 
\ENDFOR \ENDFOR \STATE Calculate the loss function $\pmb{\mathcal{L}}$ with respect to $\overline{\pmb{\theta}}$, with $\overline{U}$ epochs and minibatch size $M \leq \mathbf{\mathcal{U}}$, according to equation \eqref{loss function1}, \eqref{loss function2}, or \eqref{loss function3}. \STATE Update $\overline{\pmb{\theta}}$ by the min-norm solver. \ENDFOR \end{algorithmic} \end{algorithm} \section{Numerical Results} In this section, we provide numerical results to evaluate the performance of the proposed MO-PPO algorithm and the explored Pareto-optimal solution. Without loss of generality, a Poisson traffic model is employed to estimate the traffic flows or data sources in the proposed system model. The hyper-parameters for training the algorithms are set to the defaults of the original PPO algorithm \cite{PPO}. Additionally, two cases are conceived to help evaluate the proposed update strategies: \textbf{Weights 0.3 and 0.7} and \textbf{Weights 0.6 and 0.4}, which indicate that the weights of coverage and capacity are fixed as 0.3 and 0.7, and as 0.6 and 0.4, respectively. Then, we discuss the impact of the number of STAR-RISs and the number of elements in each STAR-RIS on the optimal coverage and capacity. \begin{figure*}[htbp] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.6cm} \centering \subfigure[The optimized coverage versus different number of STAR-RISs.] { \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[height=2in, width=3.2in]{Optimized_cov_num.eps} \label{optimized_cov_num} \end{minipage} }\hspace{0.75cm} \subfigure[The optimized capacity versus different number of STAR-RISs.] { \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[height=2in, width=3.2in]{Optimized_cap_num.eps} \label{optimized_cap_num} \end{minipage} } \caption{The optimized coverage and capacity for the MO-PPO algorithm with fixed weights, action value-based update strategy, and loss function-based update strategy with different number $N_s$ of STAR-RISs, $K = 8$.} \label{optimized_num} \end{figure*} \subsubsection{Impact of the Number of STAR-RISs} Fig.~\ref{optimized_num} depicts the optimized coverage and capacity versus different numbers of STAR-RISs. In this scenario, the number of elements is fixed as $K = 8$. As shown in Fig.~\ref{optimized_cov_num}, the coverage of all cases keeps growing steadily as the number of STAR-RISs increases. When the number of STAR-RISs $N_s$ reaches 4, the coverage of the \textbf{Weights 0.3 and 0.7} and \textbf{Weights 0.6 and 0.4} cases rises to over 0.4, while the proposed update strategy reaches over 0.6. For the capacity depicted in Fig.~\ref{optimized_cap_num}, the gap between the loss function-based update strategy and the \textbf{Weights 0.6 and 0.4} case is enlarged from 0.39 bits/s/Hz to 11.88 bits/s/Hz as the number of STAR-RISs $N_s$ increases. This is because, with the increase in the number of STAR-RISs, the STAR-RISs help the received RSRP of sample points whose channels to the BSs are severely attenuated by distance to reach $\mathrm{R}_{th}$. Thus, the proposed update strategy outperforms the benchmarks. \begin{figure*}[htbp] \setlength{\abovecaptionskip}{-0.1cm} \setlength{\belowcaptionskip}{-0.7cm} \centering \subfigure[The optimized coverage versus different elements of STAR-RISs.]
{ \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[height=2in, width=3.2in]{Optimized_cov_ele.eps} \label{optimized_cov_ele} \end{minipage} }\hspace{0.75cm} \subfigure[The optimized capacity versus different elements of STAR-RISs.] { \begin{minipage}[t]{0.45\textwidth} \centering \includegraphics[height=2in, width=3.2in]{Optimized_cap_ele.eps} \label{optimized_cap_ele} \end{minipage} } \caption{The optimized coverage and capacity for the MO-PPO algorithm with fixed weights, action value-based update strategy, and loss function-based update strategy with different numbers of elements $K$ of each STAR-RIS, $N_s = 2$.} \label{optimized_ele} \end{figure*} \subsubsection{Impact of the Number of Elements in Each STAR-RIS} Fig.~\ref{optimized_ele} describes the optimized coverage and capacity versus the number of elements in each STAR-RIS. In this scenario, the number of STAR-RISs is fixed as $N_s = 2$. It can be observed that the coverage shows only a slight change in Fig.~\ref{optimized_cov_ele}, while the maximum gaps among the optimized capacity of the three cases in Fig.~\ref{optimized_cap_ele} keep increasing from 3.31 bits/s/Hz to 15.22 bits/s/Hz as the number of elements in each STAR-RIS grows. This is because the role of each element is to relay the BS signal to the sample points, and increasing the number of elements of each STAR-RIS adds multiple links that reduce the loss. Compared with increasing the number of STAR-RISs, increasing the number of elements does not change the fast-fading characteristics of the channels of distant sample points. Therefore, the proposed update strategy outperforms the benchmarks. \vspace{-0.2cm} \section{Conclusion} \vspace{-0.1cm} In this paper, we proposed a new model for dynamic CCO in STAR-RISs assisted wireless networks, optimizing the transmit power and the phase shift matrices. In order to simultaneously optimize the coverage and capacity, a loss function-based update strategy was investigated. The core idea of the proposed strategy was to consider two loss functions, one for coverage and one for capacity, whose weights are dynamically assigned by a min-norm solver at each update. The numerical results showed that, for different numbers of STAR-RISs and different numbers of elements in each STAR-RIS, the investigated update strategy outperforms the fixed weight-based MO algorithms. \vspace{-0.2cm}
\section{Introduction and Preliminaries} In~\cite{Feller1,Feller2}, Feller investigated a general form of a generator of a strongly continuous contractive nonnegative semigroup of operators acting between the spaces of continuous functions on an interval, a half-line, or the whole line. Such a semigroup corresponds to the one-dimensional diffusion process and is now called the Feller semigroup. In the multidimensional case, the general form of a generator of a Feller semigroup has been obtained by Ventsel~\cite{Ventsel}. Under some regularity assumptions concerning the Markov process, he proved that the generator of the corresponding Feller semigroup is an elliptic differential operator of second order (possibly with degeneration) whose domain of definition consists of continuous (once or twice continuously differentiable, depending on the process) functions satisfying nonlocal conditions which involve an integral of a function over the closure of the region with respect to a nonnegative Borel measure $\mu(y,d\eta)$. The inverse question remains open: given an elliptic integro-differential operator whose domain of definition is described by nonlocal boundary conditions, whether or not this operator (or its closure) is a generator of a Feller semigroup. One distinguishes two classes of nonlocal boundary conditions: the so-called {\it transversal} and {\it nontransversal} ones. The order of nonlocal terms is less than the order of local terms in the transversal case, and these orders coincide in the nontransversal case (see, e.g.,~\cite{Taira3} for details and probabilistic interpretation). The transversal case was studied in~\cite{SU, BCP, Taira1, Taira3, Ishikawa, GalSkJDE}. The more difficult nontransversal nonlocal conditions are dealt with in~\cite{SkDAN89,SkRJMP95,GalSkMs, GalSkJDE}. It was assumed in~\cite{SkDAN89,SkRJMP95} that the coefficients at nonlocal terms decrease as the argument tends to the boundary. In~\cite{GalSkMs,GalSkJDE}, the authors considered nonlocal conditions with the coefficients that are less than one. This allowed them to regard (after reduction to the boundary) the nonlocal problem as a perturbation of the ``local'' Dirichlet problem. In this paper, we consider nontransversal nonlocal conditions on the boundary of a plane domain $G$, admitting ``limit case'' where the measure $\mu(y,\oG)$, after some normalization, may equal one (it cannot be greater than one~\cite{Ventsel}). We assume that if the support of the measure $\mu(y,d\eta)$ is ``close'' to the point $y$ for some $y\in\pG$ and $\mu(y,\oG)=1$, then the measure $\mu(y,d\eta)$ is atomic. Based on the Hille--Iosida theorem and on the solvability of elliptic equations with nonlocal terms supported near the boundary~\cite{GurMIAN2007}, we provide a class of Borel measures $\mu(y,d\eta)$ for which the corresponding nonlocal operator is a generator of a Feller semigroup. In the conclusion of this section, we remind the notion of a Feller semigroup and its generator and formulate a version of the Hille--Iosida theorem adapted for our purposes. \bigskip Let $G\subset\bbR^2$ be a bounded domain with piecewise smooth boundary $\pG$, and let $X$ be a closed subspace in $C(\oG)$ containing at least one nontrivial nonnegative function. A strongly continuous semigroup of operators $\bT_t:X\to X$ is called a {\it Feller semigroup} {\it on} $X$ if it satisfies the following conditions: 1. $\|\bT_t\|\le 1$, $t\ge0$; 2. $\bT_t u\ge0$ for all $t\ge0$ and $u\in X$, $u\ge0$. 
A linear operator $\bP:\Dom(\bP)\subset X\to X$ is called the ({\it infinitesimal}) {\it generator} of a strongly continuous semigroup $\{\bT_t\}$ if $ \bP u=\lim\limits_{t\to +0}{(\bT_t u-u)}/{t},\ \Dom(\bP)=\{u\in X: \text{the limit exists in } X\}. $ \begin{theorem}[the Hille--Iosida theorem, see Theorem~9.3.1 in~\cite{Taira1}]\label{thHI} \begin{enumerate} \item Let $\bP:\Dom(\bP)\subset X\to X$ be a generator of a Feller semigroup on $X$. Then the following assertions are true. \begin{enumerate} \item[$(a)$] The domain $\Dom(\bP)$ is dense in $X$. \item[$(b)$] For each $q>0$ the operator $q\bI-\bP$ has the bounded inverse $(q\bI-\bP)^{-1}:X\to X$ and $\|(q\bI-\bP)^{-1}\|\le 1/q$. \item[$(c)$] The operator $(q\bI-\bP)^{-1}:X\to X$, $q>0$, is nonnegative. \end{enumerate} \item Conversely, if $\bP$ is a linear operator from $X$ to $X$ satisfying condition $(a)$ and there is a constant $q_0\ge 0$ such that conditions $(b)$ and $(c)$ hold for $q>q_0$, then $\bP$ is the generator of a certain Feller semigroup on $X$, which is uniquely determined by $\bP$. \end{enumerate} \end{theorem} \section{Nonlocal Conditions near the Conjugation Points}\label{subsectStatement} Consider a set ${\cK}\subset\partial G$ consisting of finitely many points. Let $\partial G\setminus{\mathcal K}=\bigcup\limits_{i=1}^{N}\Gamma_i$, where $\Gamma_i$ are open (in the topology of $\partial G$) $C^\infty$ curves. Assume that the domain $G$ is a plane angle in some neighborhood of each point $g\in{\mathcal K}$. For an integer $k\ge0$, denote by $W_2^k(G)$ the usual Sobolev space. Denote by $W^k_{2,\loc}(G)$ ($k\ge0$ is an integer) the set of functions $u$ such that $u\in W_2^k(G')$ for any domain $G'$, $\overline{G'}\subset G$. Consider the differential operator $$ P_0u=\sum\limits_{j,k=1}^{2}p_{jk}(y)u_{y_jy_k}(y)+ \sum\limits_{j=1}^2p_j(y)u_{y_j}(y)+p_0(y)u(y), $$ where $p_{jk},p_j,p_0\in C^\infty(\bbR^2)$ are real-valued functions and $p_{jk}=p_{kj}$, $j,k=1,2$. \begin{condition}\label{cond1.1} 1. There is a constant $c>0$ such that $\sum\limits_{j,k=1}^{2}p_{jk}(y)\xi_j\xi_k\ge c|\xi|^2$ for $y\in\overline{G}$ and $\xi=(\xi_1,\xi_2)\in\bbR^2.$ 2. $p_0(y)\le0$ for $y\in\overline{G}$. \end{condition} In the sequel, we will use the following version of the well-known maximum principle. \begin{maximum}[see Theorem 9.6 in~\cite{GilbTrud}]\label{mp2} Let $D\subset\bbR^2$ be a bounded or unbounded domain, and let Condition~$\ref{cond1.1}$ hold with $G$ replaced by $D$. If a function $u\in C(D)$ achieves its positive maximum at a point $y^0\in D$ and\footnote{Here and below the operator $P_0$ acts in the sense of distributions.} $P_0u\in C(D)$, then $P_0 u(y^0)\le0$. \end{maximum} Introduce the operators corresponding to nonlocal terms supported near the set $\mathcal K$. For any set $\mathcal M$, we denote its $\varepsilon$-neighborhood by $\mathcal O_{\varepsilon}(\mathcal M)$. Let $\Omega_{is}$ ($i=1, \dots, N;$ $s=1, \dots, S_i$) be $C^\infty$ diffeomorphisms taking some neighborhood ${\mathcal O}_i$ of the curve $\overline{\Gamma_i\cap\mathcal O_{{\varepsilon}}(\mathcal K)}$ to the set $\Omega_{is}({\mathcal O}_i)$ in such a way that $\Omega_{is}(\Gamma_i\cap\mathcal O_{{\varepsilon}}(\mathcal K))\subset G$ and $ \Omega_{is}(g)\in\mathcal K$ for $ g\in\overline{\Gamma_i}\cap\mathcal K. $ Thus, the transformations $\Omega_{is}$ take the curves $\Gamma_i\cap\mathcal O_{{\varepsilon}}(\mathcal K)$ strictly inside the domain $G$ and the set of their end points $\overline{\Gamma_i}\cap\mathcal K$ to itself.
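\smallskip A model example of such transformations (ours, for illustration only): identifying a point $g\in\mathcal K$ with the origin, one can take $\Omega_{is}$ to act in polar coordinates as $(r,\omega)\mapsto(\chi_s r,\omega+\omega_s)$ with $\chi_s>0$, i.e., as the composition of a homothety and a rotation centered at $g$, where $\chi_s$ and $\omega_s$ are chosen so that $\Omega_{is}(\Gamma_i\cap\mathcal O_{\varepsilon}(\mathcal K))$ lies strictly inside $G$. This is precisely the local structure postulated in Condition~\ref{condK1} below.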
Let us specify the structure of the transformations $\Omega_{is}$ near the set $\mathcal K$. Denote by $\Omega_{is}^{+1}$ the transformation $\Omega_{is}:{\mathcal O}_i\to\Omega_{is}({\mathcal O}_i)$ and by $\Omega_{is}^{-1}:\Omega_{is}({\mathcal O}_i)\to{\mathcal O}_i$ the inverse transformation. The set of points $\Omega_{i_qs_q}^{\pm1}(\dots\Omega_{i_1s_1}^{\pm1}(g))\in{\mathcal K}$ ($1\le s_j\le S_{i_j},\ j=1, \dots, q$) is said to be an {\em orbit} of the point $g\in{\mathcal K}$. In other words, the orbit of a point $g$ is formed by the points (of the set $\mathcal K$) that can be obtained by consecutively applying the transformations $\Omega_{i_js_j}^{\pm1}$ to the point $g$. The set $\mathcal K$ consists of finitely many disjoint orbits, which we denote by $\mathcal K_\nu$. Take a sufficiently small number $\varepsilon>0$ such that there exist neighborhoods $\mathcal O_{\varepsilon_1}(g_j)$, $ \mathcal O_{\varepsilon_1}(g_j)\supset\mathcal O_{\varepsilon}(g_j) $, satisfying the following conditions: 1. the domain $G$ is a plane angle in the neighborhood $\mathcal O_{\varepsilon_1}(g_j)$; 2. $\overline{\mathcal O_{\varepsilon_1}(g)}\cap\overline{\mathcal O_{\varepsilon_1}(h)}=\varnothing$ for any $g,h\in\mathcal K$, $g\ne h$; 3. if $g_j\in\overline{\Gamma_i}$ and $\Omega_{is}(g_j)=g_k,$ then ${\mathcal O}_{\varepsilon}(g_j)\subset\mathcal O_i$ and $\Omega_{is}\big({\mathcal O}_{\varepsilon}(g_j)\big)\subset{\mathcal O}_{\varepsilon_1}(g_k).$ For each point $g_j\in\overline{\Gamma_i}\cap\mathcal K_\nu$, we fix a linear transformation $Y_j: y\mapsto y'(g_j)$ (the composition of the shift by the vector $-\overrightarrow{Og_j}$ and rotation) mapping the point $g_j$ to the origin in such a way that $ Y_j({\mathcal O}_{\varepsilon_1}(g_j))={\mathcal O}_{\varepsilon_1}(0),\ Y_j(G\cap{\mathcal O}_{\varepsilon_1}(g_j))=K_j\cap{\mathcal O}_{\varepsilon_1}(0), $ $ Y_j(\Gamma_i\cap{\mathcal O}_{\varepsilon_1}(g_j))=\gamma_{j\sigma}\cap{\mathcal O}_{\varepsilon_1}(0)\ (\sigma=1\ \text{or}\ 2), $ where $ K_j$ is a plane angle of nonzero opening and $\gamma_{j\sigma}$ its sides. \begin{condition}\label{condK1} Let $g_j\in\overline{\Gamma_i}\cap\mathcal K_\nu$ and $\Omega_{is}(g_j)=g_k\in\mathcal K_\nu;$ then the transformation $ Y_k\circ\Omega_{is}\circ Y_j^{-1}:{\mathcal O}_{\varepsilon}(0)\to{\mathcal O}_{\varepsilon_1}(0) $ is the composition of rotation and homothety centered at the origin. \end{condition} Introduce the nonlocal operators $\mathbf B_{i}$ by the formulas \begin{equation}\label{eq3'} \mathbf B_{i}u=\sum\limits_{s=1}^{S_i} b_{is}(y) u(\Omega_{is}(y)),\quad y\in\Gamma_i\cap\mathcal O_{\varepsilon}(\mathcal K),\qquad \mathbf B_{i}u=0,\quad y\in\Gamma_i\setminus\mathcal O_{\varepsilon}(\mathcal K), \end{equation} where $b_{is}\in C^\infty(\mathbb R^2)$ are real-valued functions, $\supp b_{is}\subset\mathcal O_{{\varepsilon}}(\mathcal K)$. \begin{condition}\label{cond1.2} \begin{enumerate} \item $ b_{is}(y)\ge0,\qquad \sum\limits_{s=1}^{S_i} b_{is}(y)\le 1,\qquad y\in\overline{\Gamma_i}; $ \item $ \sum\limits_{s=1}^{S_i} b_{is}(g)+\sum\limits_{s=1}^{S_j} b_{js}(g)<2,\quad g\in\overline{\Gamma_i}\cap\overline{\Gamma_j}\subset\cK,\qquad\text{if}\ i\ne j\ \text{and}\ \overline{\Gamma_i}\cap\overline{\Gamma_j}\ne\varnothing. $ \end{enumerate} \end{condition} Now we formulate some auxiliary results to be used in the next sections. 
For any closed sets $Q\subset\oG$ and $K\subset\oG$ such that $Q\cap K\ne\varnothing$, we introduce the space \begin{equation}\label{eqC_K} C_K(Q)=\{u\in C(Q): u(y)=0,\ y\in Q\cap K\} \end{equation} with the maximum-norm. Consider the space of vector-valued functions $ \cC_\cK(\pG)=\prod\limits_{i=1}^N C_\cK(\overline{\Gamma_i}) $ with the norm $ \|\psi\|_{\cC_\cK(\pG)}= \max\limits_{i=1,\dots,N}\|\psi_i\|_{C(\overline{\Gamma_i})}, $ where $\psi=\{\psi_i\}$, $\psi_i\in C_\cK(\overline{\Gamma_i})$. Consider the problem \begin{equation}\label{eq47-48} P_0u-q u =f_0(y), \ y\in G;\qquad u|_{\Gamma_i}-\bB_i u=\psi_i(y), \ y\in\Gamma_i,\ i=1,\dots,N. \end{equation} \begin{theorem}[see Theorem 4.1 in~\cite{GurMIAN2007}]\label{th1-2} Let Conditions~$\ref{cond1.1}$--$\ref{cond1.2}$ be fulfilled. Then there is a number $q_1>0$ such that, for any $f_0\in C(\oG)$, $\psi=\{\psi_i\}\in \cC_\cK(\pG)$, and $q\ge q_1$, there exists a unique solution $u\in C_\cK(\overline G)\cap W_{2,\loc}^2(G)$ of problem~\eqref{eq47-48}. Furthermore, if $f_0=0$, then $u\in C_\cK(\overline G)\cap C^\infty(G)$ and the following estimate holds{\rm:} \begin{equation}\label{eq49} \|u\|_{C_\cK(\overline G)}\le c_1\|\psi\|_{\cC_\cK(\pG)}, \end{equation} where $c_1>0$ does not depend on $\psi$ and $q$. \end{theorem} Let $u\in C^\infty(G)\cap C_\cK(\oG)$ be a solution of problem~\eqref{eq47-48} with $f_0=0$ and $\psi=\{\psi_i\}\in\cC_\cK(\pG)$. Denote $u=\bS_q\psi$. By Theorem~\ref{th1-2}, the operator $$ \bS_q: \cC_\cK(\pG)\to C_\cK(\overline G),\qquad q\ge q_1, $$ is bounded and $\|\bS_q\|\le c_1$, where $c_1>0$ does not depend on $q$. \begin{lemma}\label{l4} Let Conditions~$\ref{cond1.1}$--$\ref{cond1.2}$ hold, let $Q_1$ and $Q_2$ be closed sets such that $Q_1\subset\pG$, $Q_2\subset\overline G$, and $Q_1\cap Q_2=\varnothing$, and let $q\ge q_1$. Then the inequality $$ \|\bS_q\psi\|_{C(Q_2)}\le\dfrac{c_2}{q}\|\psi\|_{\cC_\cK(\pG)},\qquad q\ge q_1, $$ holds for any $\psi\in\cC_\cK(\pG)$ such that $\supp(\bS_q\psi)|_{\pG}\subset Q_1;$ here $c_2>0$ does not depend on $\psi$ and $q$. \end{lemma} \begin{proof} Using\footnote{It is supposed in Lemma 1.3 in~\cite{GalSkMs} that the boundary of the domain is infinitely smooth. This assumption is needed to prove the existence of a classical solution for elliptic equations with nonhomogeneous boundary conditions. However, this assumption is needless for the validity of the first inequality in \eqref{eq55_0}, provided that the solution exists.} Lemma 1.3 in \cite{GalSkMs} and Theorem~\ref{th1-2}, we obtain \begin{equation}\label{eq55_0} \|\bS_q\psi\|_{C(Q_2)}\le \dfrac{k}{q} \|(\bS_q\psi)|_{\pG}\|_{C(\pG)}\le \dfrac{k}{q}\|\bS_q\psi\|_{C(\overline G)}\le\dfrac{kc_1}{q}\|\psi\|_{\cC_\cK(\pG)},\qquad q\ge q_1, \end{equation} where the number $q_1$ defined in Theorem~\ref{th1-2} is assumed to be large enough so that Lemma 1.3 in~\cite{GalSkMs} is valid for $q\ge q_1$; the number $k=k(q_1)$ does not depend on $\psi$ and $q$. \end{proof} \begin{lemma}\label{l5} Let Conditions~$\ref{cond1.1}$--$\ref{cond1.2}$ hold, let $Q_1$ and $Q_2$ be the same sets as in Lemma $\ref{l4}$, and let $q\ge q_1$. We additionally suppose that $Q_2\cap\cK=\varnothing$. Then the inequality $$ \|\bS_q\psi\|_{C(Q_2)}\le\dfrac{c_3}{q}\|\psi\|_{\cC_\cK(Q_1)},\qquad q\ge q_1, $$ holds for any $\psi\in\cC_\cK(\pG)$ such that $\supp \psi\subset Q_1;$ here $c_3>0$ does not depend on $\psi$ and $q$. \end{lemma} \begin{proof} 1.
Consider a number $\sigma>0$ such that
\begin{equation}\label{eq55_2}
\dist(Q_1,Q_2)>3\sigma,\qquad \dist(\cK, Q_2)>3\sigma.
\end{equation}
Introduce a function $\xi\in C^\infty(\bbR^2)$ such that $0\le \xi(y)\le1$, $\xi(y)=1$ for $\dist (y,Q_2)\le\sigma$, and $\xi(y)=0$ for $\dist (y,Q_2)\ge 2\sigma$. Consider the auxiliary problem
\begin{equation}\label{eq55_3-4}
P_0v-q v=0,\ y\in G;\qquad v(y)=\xi(y)u(y),\ y\in\pG,
\end{equation}
where $u=\bS_q\psi\in C_\cK(\overline G)$. Applying Theorem~\ref{th1-2} with $\bB_i =0$, we see that there is a unique solution $v\in C^\infty(G)\cap C(\overline G)$ of problem \eqref{eq55_3-4}. It follows from Maximum Principle~\ref{mp2} and from the definition of the function $\xi$ that
\begin{equation}\label{eq55_5}
\|v\|_{C(\overline G)}\le \|\xi u\|_{C(\pG)}\le\max\limits_{i=1,\dots,N} \|u|_{Q_{2,2\sigma}\cap\overline{\Gamma_i}}\|_{C(Q_{2,2\sigma}\cap\overline{\Gamma_i})},
\end{equation}
where $Q_{2,2\sigma}=\{y\in\pG: \dist(y,Q_2)\le 2\sigma\}$. Since $\supp \psi\cap Q_{2,2\sigma}=\varnothing$, it follows that
\begin{equation}\label{eq55_6}
u-\bB_i u=0,\qquad y\in Q_{2,2\sigma}\cap\overline{\Gamma_i}.
\end{equation}
Taking into account that $\bB_i u=0$ for $y\notin\cO_\varepsilon(\cK)$, we deduce from~\eqref{eq55_6} that
\begin{equation}\label{eq55_7}
u(y)=0,\qquad y\in [Q_{2,2\sigma}\cap\overline{\Gamma_i}]\setminus \cO_\varepsilon(\cK).
\end{equation}
Using~\eqref{eq55_5}--\eqref{eq55_7}, the definition of the operators $\bB_i $, and Condition~\ref{cond1.2}, we obtain
\begin{equation}\label{eq55_8}
\begin{aligned}
\|v\|_{C(\overline G)}&\le \max\limits_{i=1,\dots,N} \|u|_{Q_{2,2\sigma}\cap\overline{\Gamma_i}\cap\overline{\cO_\varepsilon(\cK)}}\|_{ C(Q_{2,2\sigma}\cap\overline{\Gamma_i}\cap\overline{\cO_\varepsilon(\cK)})}\\
&\le \max\limits_{i=1,\dots,N}\max\limits_{s=1,\dots,S_i} \|u|_{\Omega_{is}(Q_{2,2\sigma}\cap\overline{\Gamma_i}\cap\overline{\cO_\varepsilon(\cK)})}\|_{ C(\Omega_{is}(Q_{2,2\sigma}\cap\overline{\Gamma_i}\cap\overline{\cO_\varepsilon(\cK)}))}.
\end{aligned}
\end{equation}
Since $Q_{2,2\sigma}\cap\cK=\varnothing$ (see~\eqref{eq55_2}), it follows from the definition of the transformations $\Omega_{is}$ that
$$
\Omega_{is}(Q_{2,2\sigma}\cap\overline{\Gamma_i}\cap\overline{\cO_\varepsilon(\cK)})\subset G.
$$
Therefore, using inequality~\eqref{eq55_8} and Lemma~\ref{l4} with $Q_1$ and $Q_2$ replaced by $\pG$ and $\Omega_{is}(Q_{2,2\sigma}\cap\overline{\Gamma_i}\cap\overline{\cO_\varepsilon(\cK)})$, we have
\begin{equation}\label{eq55_9}
\|v\|_{C(\overline G)}\le \dfrac{c_2}{q}\|\psi\|_{\cC_\cK(\pG)}.
\end{equation}
2. Set $w=u-v$. Clearly, the function $w$ satisfies the relations
$$
P_0w-q w =0,\ y\in G;\qquad w(y)=u(y)-v(y) =0,\ y\in Q_{2,\sigma}.
$$
Applying Lemma~\ref{l4} with $\overline{\pG\setminus Q_{2,\sigma}}$ substituted for $Q_1$ and with $\bB_i=0$, and taking into account that $w|_{\pG}=(1-\xi) u|_{\pG}$, we obtain
$$
\|w\|_{C(Q_2)}\le \dfrac{c_2}{q}\|w|_{\pG}\|_{C(\pG)}\le\dfrac{c_2}{q}\|u\|_{C(\overline G)}.
$$
The latter inequality and Theorem~\ref{th1-2} imply
$$
\|w\|_{C(Q_2)}\le \dfrac{c_2c_1}{q}\|\psi\|_{\cC_\cK(\pG)}.
$$
Combining this estimate with~\eqref{eq55_9}, we complete the proof.
\end{proof}
\section{Bounded Perturbations of Elliptic Operators and Their Properties}\label{subsectBoundedHypoth}
Introduce a linear operator $P_1$ satisfying the following condition.
\begin{condition}\label{cond2.1'}
The operator $P_1: C(\overline G)\to C(\overline G)$ is bounded, and $P_1 u(y^0)\le 0$ whenever $u\in C(\overline G)$ achieves its positive maximum at the point $y^0\in G$.
\end{condition}
The operator $P_1$ will play the role of a bounded perturbation for unbounded elliptic operators in the spaces of continuous functions (cf.~\cite{GalSkMs, GalSkJDE}). The following result is a consequence of Conditions~\ref{cond1.1} and~\ref{cond2.1'} and Maximum Principle~\ref{mp2}.
\begin{lemma}\label{l2.1}
Let Conditions $\ref{cond1.1}$ and $\ref{cond2.1'}$ hold. If a function $u\in C(\oG)$ achieves its positive maximum at a point $y^0\in G$ and $P_0u\in C(G)$, then $P_0u(y^0)+P_1 u(y^0)\le0$.
\end{lemma}
In this paper, we consider the following nonlocal conditions in the {\it nontransversal} case:
\begin{equation}\label{eq56}
b(y)u(y)+\int\limits_{\oG}[u(y)-u(\eta)]\mu(y,d\eta)=0,\qquad y\in\pG,
\end{equation}
where $b(y)\ge0$ and $\mu(y,\cdot)$ is a nonnegative Borel measure on $\oG$. Set $ \cN=\{y\in\pG: \mu(y,\oG)=0\}$ and $\cM=\pG\setminus \cN.$ Assume that $\cN$ and $\cM$ are Borel sets.
\begin{condition}\label{cond2.1''}
$\cK\subset \cN$.
\end{condition}
Introduce the function $ b_0(y)=b(y)+\mu(y,\oG). $
\begin{condition}\label{cond2.2}
$b_0(y)>0$ for $y\in\pG$.
\end{condition}
Conditions~\ref{cond2.1''} and~\ref{cond2.2} imply that relation \eqref{eq56} can be written as follows:
\begin{equation}\label{eq57}
u(y)-\int\limits_\oG u(\eta)\mu_i(y,d\eta)=0,\ y\in\Gamma_i;\qquad u(y) =0,\ y\in\cK,
\end{equation}
where $ \mu_i(y,\cdot)=\dfrac{\mu(y,\cdot)}{b_0(y)},\ y\in\Gamma_i. $ By the definition of the function $b_0(y)$, we have
\begin{equation}\label{eq59}
\mu_i(y,\oG)\le 1,\qquad y\in\Gamma_i.
\end{equation}
For any set $Q$, we denote by $\chi_Q(y)$ the function equal to one on $Q$ and vanishing on $\bbR^2\setminus Q$. Let $b_{is}(y)$ and $\Omega_{is}$ be the same as above. We introduce the measures $\delta_{is}$ as follows:
$$
\delta_{is}(y,Q)=\left\{
\begin{aligned}
&b_{is}(y)\chi_Q(\Omega_{is}(y)),& &y\in\Gamma_i\cap\cO_\varepsilon(\cK),\\
&0,& &y\in\Gamma_i\setminus\cO_\varepsilon(\cK),
\end{aligned}\right.
$$
for any Borel set $Q$. We study those measures $\mu_i(y,\cdot)$ which can be represented in the form
\begin{equation}\label{eq61}
\mu_i(y,\cdot)=\sum\limits_{s=1}^{S_i}\delta_{is}(y,\cdot)+\alpha_i(y,\cdot)+\beta_i(y,\cdot),\qquad y\in\Gamma_i,
\end{equation}
where $\alpha_i(y,\cdot)$ and $\beta_i(y, \cdot)$ are nonnegative Borel measures to be specified below (cf.~\cite{GalSkMs,GalSkJDE}). For any Borel measure $\mu(y,\cdot)$, the closed set $ \spt\mu(y,\cdot)=\oG\setminus\bigcup\{V\in T: \mu(y,V\cap\oG)=0\} $ (where~$T$ denotes the set of all open sets in $\bbR^2$) is called the {\it support} of the measure $\mu(y,\cdot)$.
\begin{condition}\label{cond2.3}
There exist numbers $\varkappa_1>\varkappa_2>0$ and $\sigma>0$ such that
\begin{enumerate}
\item $\spt\alpha_i(y,\cdot)\subset\oG\setminus\cO_{\varkappa_1}(\cK)$ for $y\in\Gamma_i$,
\item $\spt\alpha_i(y,\cdot)\subset\overline{G_\sigma}$ for $y\in\Gamma_i\setminus\cO_{\varkappa_2}(\cK),$
\end{enumerate}
where $\cO_{\varkappa_1}(\cK)=\{y\in\bbR^2:\dist(y,\cK)<\varkappa_1\}$ and $G_\sigma=\{y\in G:\dist(y,\pG)>\sigma\}.$
\end{condition}
\begin{condition}\label{cond2.4}
$\beta_i(y,\cM)<1$ for $y\in\Gamma_i\cap\cM$, $i=1,\dots,N$.
\end{condition}
\begin{remark}
Condition~\ref{cond2.4} is weaker than (analogous) Condition 2.2 in~\cite{GalSkMs} or Condition 3.2 in~\cite{GalSkJDE} because the latter two require that $\mu_i(y,\cM)<1$ for $y\in\Gamma_i\cap\cM$.
\end{remark}
\begin{remark}
One can show that Conditions~\ref{cond2.2}--\ref{cond2.4} imply that $ b(y)+\mu(y,\oG\setminus\{y\})>0,\ y\in\pG, $ i.e., the boundary-value condition~\eqref{eq56} disappears nowhere on the boundary.
\end{remark}
Using relations~\eqref{eq61}, we write nonlocal conditions \eqref{eq57} in the form
\begin{equation}\label{eq63}
u(y)-\bB_i u(y)-\bB_{\alpha i}u(y)-\bB_{\beta i}u(y) =0,\ y\in\Gamma_i;\qquad u(y) =0,\ y\in\cK,
\end{equation}
where the operators $\bB_i $ are given by~\eqref{eq3'} and
$$
\bB_{\alpha i}u(y)=\int\limits_\oG u(\eta)\alpha_i(y, d\eta),\qquad \bB_{\beta i}u(y)=\int\limits_\oG u(\eta)\beta_i(y, d\eta),\qquad y\in\Gamma_i.
$$
Introduce the space\footnote{Clearly, nonlocal conditions \eqref{eq56} in the definition of the space $C_B(\oG)$ can be replaced by conditions~\eqref{eq57} or~\eqref{eq63}.} $ C_B(\oG)=\{u\in C(\oG): u\ \text{satisfies nonlocal conditions \eqref{eq56}}\}. $ It follows from the definition of the space $C_B(\oG)$ and from Condition~\ref{cond2.1''} that\footnote{The spaces $C_\cN(\cdot)$ and $C_\cK(\cdot)$ are given in \eqref{eqC_K}.}
\begin{equation}\label{eqBNK}
C_B(\oG)\subset C_\cN(\oG)\subset C_\cK(\oG).
\end{equation}
\begin{lemma}\label{l2.3}
Let Conditions $\ref{cond1.1}$--$\ref{cond1.2}$ and $\ref{cond2.1'}$--$\ref{cond2.4}$ hold. Let a function $u\in C_B(\oG)$ achieve its positive maximum at a point $y^0\in\overline G$ and $P_0u\in C(G)$. Then there is a point $y^1\in G$ such that $u(y^1)=u(y^0)$ and $P_0u(y^1)+P_1u(y^1)\le 0$.
\end{lemma}
\begin{proof}
1. If $y^0\in G$, then the conclusion of the lemma follows from Lemma~\ref{l2.1}. Let $y^0\in\pG$. Suppose that the lemma is not true, i.e., $u(y^0)>u(y)$ for $y\in G$. Since $u(y^0)>0$ and $u\in C_B(\oG)\subset C_\cN(\oG)$, it follows that $y^0\in \cM$. Let $y^0\in\Gamma_i\cap \cM$ for some $i$. If $\mu_i(y^0,G)>0$, then, taking into account~\eqref{eq59}, we have
$$
u(y^0)-\int\limits_\oG u(\eta)\mu_i(y^0,d\eta)\ge \int\limits_G [u(y^0)-u(\eta)]\mu_i(y^0,d\eta)>0,
$$
which contradicts~\eqref{eq57}. Therefore, $\spt\mu_i(y^0,\cdot)\subset\pG$. It follows from this relation, from~\eqref{eq61}, and from Condition~\ref{cond2.3} (part 1) that
\begin{equation}\label{eq65}
b_{is}(y^0)=0,\qquad \spt\alpha_i(y^0,\cdot)\subset\pG\setminus\cO_{\varkappa_1}(\cK),\qquad \spt\beta_i(y^0,\cdot)\subset\pG.
\end{equation}
2. Suppose that $\alpha_i(y^0,\pG\setminus\cO_{\varkappa_1}(\cK))=0$. In this case, due to~\eqref{eq65},
\begin{equation}\label{eq67}
\alpha_i(y^0,\oG)=0.
\end{equation}
Now it follows from~\eqref{eq61},~\eqref{eq65},~\eqref{eq67} and from Condition~\ref{cond2.4} that
$$
\mu_i(y^0,\cdot)=\beta_i(y^0,\cdot),\qquad \spt\beta_i(y^0,\cdot)\subset\pG,\qquad \beta_i(y^0,\cM)<1.
$$
Hence, the following inequalities hold for $u\in C_B(\oG)\subset C_\cN(\oG)$:
$$
u(y^0)-\int\limits_\oG u(\eta)\mu_i(y^0,d\eta)= u(y^0)-\int\limits_\cM u(\eta)\beta_i(y^0,d\eta) \ge u(y^0)-u(y^0)\beta_i(y^0,\cM)>0,
$$
which contradicts~\eqref{eq57}. This contradiction shows that $\alpha_i(y^0,\pG\setminus\cO_{\varkappa_1}(\cK))>0$. Therefore, taking into account Condition~\ref{cond2.3} (part 2), we have $y^0\in\cO_{\varkappa_2}(\cK)$.
3.
We claim that there is a point
\begin{equation}\label{eq66}
y'\in\pG\setminus\cO_{\varkappa_1}(\cK)
\end{equation}
such that $u(y')=u(y^0)$. Indeed, assume the contrary: $u(y^0)>u(y)$ for $y\in\pG\setminus\cO_{\varkappa_1}(\cK)$. Then, using~\eqref{eq59},~\eqref{eq61}, and~\eqref{eq65}, we obtain
\begin{equation}\label{eq66'}
u(y^0)-\int\limits_\oG u(\eta)\mu_i(y^0,d\eta)\ge \int\limits_\oG [u(y^0)-u(\eta)]\mu_i(y^0,d\eta)\ge \int\limits_{\pG\setminus\cO_{\varkappa_1}(\cK)} [u(y^0)-u(\eta)]\alpha_i(y^0,d\eta)>0
\end{equation}
because $\alpha_i(y^0,\pG\setminus\cO_{\varkappa_1}(\cK))>0$. Inequality~\eqref{eq66'} contradicts~\eqref{eq57}. Therefore, the function $u$ achieves its positive maximum at some point $y'\in\pG\setminus\cO_{\varkappa_1}(\cK)$. Repeating the arguments of items 1 and 2 of this proof yields $y'\in\cO_{\varkappa_2}(\cK)$, which contradicts~\eqref{eq66}. Thus, we have proved that there is a point $y^1\in G$ such that $u(y^1)=u(y^0)$. Applying Lemma~\ref{l2.1}, we obtain $P_0u(y^1)+P_1u(y^1)\le 0$.
\end{proof}
\begin{corollary}\label{cor2.1}
Let Conditions $\ref{cond1.1}$--$\ref{cond1.2}$ and $\ref{cond2.1'}$--$\ref{cond2.4}$ hold. Let $u\in C_B(\oG)$ be a solution of the equation
$$qu(y)-P_0u(y)-P_1u(y)=f_0(y),\quad y\in G,$$
where $q>0$ and $f_0\in C(\oG)$. Then
\begin{equation}\label{eqcor2.1}
\|u\|_{C(\oG)}\le\dfrac{1}{q}\|f_0\|_{C(\oG)}.
\end{equation}
\end{corollary}
\begin{proof}
If $u\equiv0$, then estimate~\eqref{eqcor2.1} is trivial. Otherwise, replacing $u$ by $-u$ if necessary, we may assume that $\max\limits_{y\in\oG}|u(y)|=u(y^0)>0$ for some $y^0\in \oG$. In this case, by Lemma~\ref{l2.3}, there is a point $y^1\in G$ such that $u(y^1)=u(y^0)$ and $P_0u(y^1)+P_1u(y^1)\le 0$. Therefore,
$$
\|u\|_{C(\oG)}=u(y^0)=u(y^1)=\dfrac{1}{q}(P_0u(y^1)+P_1u(y^1)+f_0(y^1))\le \dfrac{1}{q}\|f_0\|_{C(\oG)}.
$$
\end{proof}
\section{Reduction to the Operator Equation on the Boundary}\label{subsectBoundedReduction}
In this section, we impose some additional restrictions on the nonlocal operators, which allow us to reduce nonlocal elliptic problems to operator equations on the boundary. Note that if $u\in C_\cN(\oG)$, then $\bB_i u$ is continuous on $\Gamma_i$ and can be extended to a continuous function on $\overline{\Gamma_i}$ (also denoted by $\bB_i u$), which belongs to $C_\cN(\overline{\Gamma_i})$. We assume that the operators $\bB_{\alpha i}$ and $\bB_{\beta i}$ possess a similar property.
\begin{condition}\label{cond2.5}
For any function $u\in C_\cN(\overline G)$, the functions $\bB_{\alpha i}u$ and $\bB_{\beta i}u$ can be extended to $\overline{\Gamma_i}$ in such a way that the extended functions {\rm(}which we also denote by $\bB_{\alpha i}u$ and $\bB_{\beta i}u$, respectively{\rm)} belong to $C_\cN(\overline{\Gamma_i})$.
\end{condition}
The next lemma directly follows from the definition of the nonlocal operators.
\begin{lemma}\label{l2.2}
Let Conditions $\ref{condK1}$, $\ref{cond1.2}$, $\ref{cond2.1''}$, $\ref{cond2.2}$, and $\ref{cond2.5}$ hold. Then the operators $\bB_i , \bB_{\alpha i},\bB_{\beta i}: C_\cN(\oG)\to C_\cN(\overline{\Gamma_i}) $ are bounded and
$$
\|\bB_i u\|_{C_\cN(\overline{\Gamma_i})}\le\|u\|_{C_\cN(\oG)},\qquad \|\bB_{\alpha i}u\|_{C_\cN(\overline{\Gamma_i})}\le\|u\|_{C_\cN(\oG\setminus\cO_{\varkappa_1}(\cK))},\qquad \|\bB_{\beta i}u\|_{C_\cN(\overline{\Gamma_i})}\le\|u\|_{C_\cN(\oG)},
$$
$$
\|\bB_{\alpha i}u+\bB_{\beta i}u\|\le \|u\|_{C_\cN(\oG)},\qquad \|\bB_i u+\bB_{\alpha i}u+\bB_{\beta i}u\|\le \|u\|_{C_\cN(\oG)}.
$$
\end{lemma}
Consider the space of vector-valued functions $ \cC_\cN(\pG)=\prod\limits_{i=1}^N C_\cN(\overline{\Gamma_i}) $ with the norm $ \|\psi\|_{\cC_\cN(\pG)}= \max\limits_{i=1,\dots,N}\|\psi_i\|_{C(\overline{\Gamma_i})}$, where $ \psi=\{\psi_i\},\ \psi_i\in C_\cN(\overline{\Gamma_i}) $. Introduce the operators
\begin{equation}\label{eqBAlphaBeta}
\bB=\{\bB_i \}:C_\cN(\oG)\to\cC_\cN(\pG),\qquad \bB_{\alpha\beta}=\{\bB_{\alpha i}+\bB_{\beta i}\}:C_\cN(\oG)\to\cC_\cN(\pG).
\end{equation}
Using the operator $\bS_q$ defined in Sec. \ref{subsectStatement}, we introduce the bounded operator
\begin{equation}\label{eq68}
\bI-\bB_{\alpha\beta}\bS_q:\cC_\cN(\pG)\to\cC_\cN(\pG),\qquad q\ge q_1.
\end{equation}
Since $\bS_q\psi\in C_\cN(\oG)$ for $\psi\in\cC_\cN(\pG)$, the operator in~\eqref{eq68} is well defined. Now we formulate sufficient conditions under which the bounded operator $(\bI-\bB_{\alpha\beta}\bS_q)^{-1}:\cC_\cN(\pG)\to \cC_\cN(\pG)$ exists. We represent the measures $\beta_i(y,\cdot)$ in the form
\begin{equation}\label{eq73}
\beta_i(y,\cdot)=\beta_i^1(y,\cdot)+\beta_i^2(y,\cdot),
\end{equation}
where $\beta_i^1(y,\cdot)$ and $\beta_i^2(y,\cdot)$ are nonnegative Borel measures. Let us specify them. For each $p>0$, we consider the covering of the set $\overline{\cM}$ by the $p$-neighborhoods of all its points. Denote the union of the disks of some finite subcovering by $\cM_p$. Since $\cM_p$ is a finite union of open disks, it is an open Borel set. Now for each $p>0$, we consider a cut-off function $\hat\zeta_p\in C^\infty(\bbR^2)$ such that $0\le\hat\zeta_p(y)\le 1$, $\hat\zeta_p(y)=1$ for $y\in\cM_{p/2}$, and $\hat\zeta_p(y)=0$ for $y\notin\cM_{p}$. Set $\tilde\zeta_p=1-\hat\zeta_p$. Introduce the operators
$$
\hat\bB_{\beta i}^1 u(y)=\int\limits_{\oG}\hat\zeta_p(\eta)u(\eta)\beta_i^1(y,d\eta),\quad \tilde\bB_{\beta i}^1 u(y)=\int\limits_{\oG}\tilde\zeta_p(\eta) u(\eta)\beta_i^1(y,d\eta),\quad \bB_{\beta i}^2 u(y)=\int\limits_{\oG}u(\eta)\beta_i^2(y,d\eta).
$$
\begin{condition}\label{cond2.7}
The following assertions are true for $i=1,\dots,N${\rm:}
\begin{enumerate}
\item the operators $\hat\bB_{\beta i}^1,\tilde\bB_{\beta i}^1:C_\cN(\oG)\to C_\cN(\overline{\Gamma_i})$ are bounded{\rm;}
\item there exists a number $p>0$ such that\footnote{Part 2 of Condition~\ref{cond2.7} may be replaced by the stronger assumption $\|\hat\bB_{\beta i}^1\|\to 0$ as $p\to0$, which is easier to verify in applications.}
$$
\|\hat\bB_{\beta i}^1\|< \left\{
\begin{aligned}
&\frac{1}{c_1} & &\text{if}\quad \alpha_j(y,\oG)=0\ \forall y\in\Gamma_j,\ j=1,\dots,N,\\
&\frac{1}{c_1(1+c_1)} & &\text{otherwise},
\end{aligned}
\right.
$$
where $c_1$ is the constant occurring in Theorem {\rm\ref{th1-2}.}
\end{enumerate}
\end{condition}
\begin{remark}
The operators $\hat\bB_{\beta i}^1,\tilde\bB_{\beta i}^1:C_\cN(\oG)\to C_\cN(\overline{\Gamma_i})$ are bounded if and only if the operator $\hat\bB_{\beta i}^1+\tilde\bB_{\beta i}^1:C_\cN(\oG)\to C_\cN(\overline{\Gamma_i})$ is bounded. This follows from the relations $\hat\bB_{\beta i}^1u=(\hat\bB_{\beta i}^1+\tilde\bB_{\beta i}^1)(\hat\zeta_p u)$ and $\tilde\bB_{\beta i}^1u=(\hat\bB_{\beta i}^1+\tilde\bB_{\beta i}^1)(\tilde\zeta_p u)$ and from the continuity of the functions $\hat\zeta_p$ and $\tilde\zeta_p$.
\end{remark}
\begin{condition}\label{cond2.8}
The operators $\bB_{\beta i}^2:C_\cN(\oG)\to C_\cN(\overline{\Gamma_i})$, $i=1,\dots,N$, are compact.
\end{condition} It follows from~\eqref{eq61} and~\eqref{eq73} that the measures $\mu_i(y,\cdot)$ have the following representation: \begin{equation*} \mu_i(y,\cdot)=\sum\limits_{s=1}^{S_i}\delta_{is}(y,\cdot)+\alpha_i(y,\cdot)+ \beta_i^1(y,\cdot)+\beta_i^2(y,\cdot),\qquad y\in\Gamma_i. \end{equation*} The measures $\delta_{is}(y,\cdot)$ correspond to nonlocal terms supported near the set $\cK$ of the conjugation points. The measures $\alpha_i(y,\cdot)$ correspond to nonlocal terms supported outside the set $\cK$. The measures $\beta_i^1(y,\cdot)$ and $\beta_i^2(y,\cdot)$ correspond to nonlocal terms with arbitrary geometrical structure of their support (in particular, their support may intersect with the set $\cK$); however, the measure $\beta_i^1(y,\cM_p)$ of the set $\cM_p$ must be small for small $p$ (Condition~\ref{cond2.7}) and the measure $\beta_i^2(y,\cdot)$ must generate a compact operator (Condition~\ref{cond2.8}). \begin{lemma}\label{l2.Boundary} Let Conditions $\ref{cond1.1}$--$\ref{cond1.2}$, $\ref{cond2.1'}$--$\ref{cond2.4}$, and $\ref{cond2.5}$--$\ref{cond2.8}$ hold. Then there exists a bounded operator $(\bI-\bB_{\alpha\beta}\bS_q)^{-1}:\cC_\cN(\pG)\to \cC_\cN(\pG)$, $q\ge q_1$, where $q_1>0$ is sufficiently large. \end{lemma} \begin{proof} 1. Consider the bounded operators $ \hat\bB_{\beta}^1=\{\hat\bB_{\beta i}^1\}$, $\tilde\bB_{\beta}^1=\{\tilde\bB_{\beta i}^1\}$, $\bB_{\beta}^2=\{\bB_{\beta i}^2\}$, and $\bB_{\alpha}=\{\bB_{\alpha i}\} $ acting from $C_\cN(\oG)$ to $\cC_\cN(\pG)$ (cf.~\eqref{eqBAlphaBeta}). Let us prove that the operator $\bI-\bB_{\alpha}\bS_q:\cC_\cN(\pG)\to \cC_\cN(\pG)$ has the bounded inverse. Introduce a function $\zeta\in C^\infty(\oG)$ such that $0\le\zeta(y)\le 1$, $\zeta(y)=1$ for $y\in\overline{G_\sigma}$, and $\zeta(y)=0$ for $y\notin G_{\sigma/2}$, where $\sigma>0$ is the number from Condition~\ref{cond2.3}. We have \begin{equation}\label{eq75} \bI-\bB_{\alpha}\bS_q=\bI-\bB_{\alpha}(1-\zeta)\bS_q-\bB_{\alpha}\zeta\bS_q. \end{equation} 1a. First, we show that the operator $\bI-\bB_{\alpha}(1-\zeta)\bS_q$ has the bounded inverse. By Lemma \ref{l2.2} and Theorem~\ref{th1-2}, \begin{equation}\label{eq76} \|\bB_{\alpha}(1-\zeta)\bS_q\|\le c_1. \end{equation} Furthermore, $(1-\zeta)\bS_q\psi=0$ in $\overline{G_\sigma}$ for any $\psi\in \cC_\cN(\pG)$. Therefore, by Condition \ref{cond2.3}, \begin{equation}\label{eq77} \supp\bB_\alpha(1-\zeta)\bS_q\psi\subset\pG\cap\overline{\cO_{\varkappa_2}(\cK)}. \end{equation} Let us show that \begin{equation}\label{eq78} \|[\bB_{\alpha}(1-\zeta)\bS_q]^2\|\le \frac{c}{q},\qquad q\ge q_1, \end{equation} where $q_1>0$ is sufficiently large and $c>0$ does not depend on $q$. Consecutively applying (I) Lemma~\ref{l2.2}, (II) Lemma \ref{l5} and relation~\eqref{eq77}, and (III) Lemma~\ref{l2.2} and Theorem~\ref{th1-2}, we obtain \begin{align*} \|\bB_{\alpha}(1-\zeta)\bS_q\,\bB_{\alpha}(1-\zeta)\bS_q\psi\|_{\cC_\cN(\pG)}\le & \|\bS_q\bB_{\alpha}(1-\zeta)\bS_q\psi\|_{C_\cN(\oG\setminus\cO_{\varkappa_1}(\cK))}\le\\ &\dfrac{c_3}{q}\|\bB_{\alpha}(1-\zeta)\bS_q\psi\|_{C_\cN(\pG\cap\overline{\cO_{\varkappa_2}(\cK)})}\le \dfrac{c_3c_1}{q}\|\psi\|_{\cC_\cN(\pG)}. \end{align*} This yields~\eqref{eq78} with $c=c_3c_1$. If $q\ge 2c $, then the operator $\bI-[\bB_{\alpha}(1-\zeta)\bS_q]^2$ has the bounded inverse. 
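The last implication is the standard Neumann series argument: abbreviating $T=\bB_{\alpha}(1-\zeta)\bS_q$ for the moment, estimate~\eqref{eq78} gives, for $q\ge 2c$,
$$
[\bI-T^2]^{-1}=\sum_{k=0}^{\infty}T^{2k},\qquad
\bigl\|[\bI-T^2]^{-1}\bigr\|\le\frac{1}{1-\|T^2\|}\le 2,
$$
even though, by~\eqref{eq76}, the norm of $T$ itself is only known to be bounded by $c_1$ and need not be less than one.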
Therefore, the operator $\bI-\bB_{\alpha}(1-\zeta)\bS_q$ also has the bounded inverse and
\begin{equation}\label{eq79}
[\bI-\bB_{\alpha}(1-\zeta)\bS_q]^{-1}=[\bI+\bB_{\alpha}(1-\zeta)\bS_q] [\bI-(\bB_{\alpha}(1-\zeta)\bS_q)^2]^{-1}.
\end{equation}
Representation~\eqref{eq79}, Lemma~\ref{l2.2}, Theorem~\ref{th1-2}, and relations~\eqref{eq76} and~\eqref{eq78} imply that
\begin{equation}\label{eq80}
\|[\bI-\bB_{\alpha}(1-\zeta)\bS_q]^{-1}\|\le 1+c_1+O(q^{-1}),\qquad q\to+\infty.
\end{equation}
1b. Now we estimate the norm of the operator $\bB_{\alpha}\zeta\bS_q$. Lemmas~\ref{l2.2} and~\ref{l5} imply that
\begin{equation}\label{eq81}
\|\bB_{\alpha}\zeta\bS_q\psi\|_{\cC_\cN(\pG)}\le \|\bS_q\psi\|_{C(\overline{G_{\sigma/2}})}\le \dfrac{c_2}{q}\|\psi\|_{\cC_\cN(\pG)}.
\end{equation}
Therefore, using representation~\eqref{eq75}, we see that the operator $\bI-\bB_{\alpha}\bS_q$ has the bounded inverse for sufficiently large $q$ and
\begin{equation}\label{eq82}
(\bI-\bB_{\alpha}\bS_q)^{-1}=[\bI-(\bI-\bB_{\alpha}(1-\zeta)\bS_q)^{-1}\bB_\alpha\zeta \bS_q]^{-1} [\bI-\bB_{\alpha}(1-\zeta)\bS_q]^{-1}.
\end{equation}
It follows from~\eqref{eq80}--\eqref{eq82} that
\begin{equation}\label{eq83}
\|(\bI-\bB_{\alpha}\bS_q)^{-1}\|\le 1+c_1+O(q^{-1}),\qquad q\to+\infty.
\end{equation}
2. Let us prove that the operator $\bI-(\bB_{\alpha}+\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q:\cC_\cN(\pG)\to \cC_\cN(\pG)$ has the bounded inverse.
2a. It follows from the definition of the operator $\tilde\bB_{\beta}^1$ and from Lemma~\ref{l4} (with $Q_1=\overline\cM$ and $Q_2=\oG\setminus\cM_{p/2}$) that
\begin{equation}\label{eq84}
\|\tilde\bB_{\beta i}^1\bS_q\psi\|_{C_\cN(\overline{\Gamma_i})}\le\|\bS_q\psi\|_{C(\oG\setminus\cM_{p/2})} \le\dfrac{c_2}{q}\|\psi\|_{\cC_\cN(\pG)}
\end{equation}
because $(\oG\setminus\cM_{p/2})\cap\overline\cM=\varnothing$ and $\supp(\bS_q\psi)|_{\pG}\subset\overline\cM$ for $\psi\in\cC_\cN(\pG)$.
2b. Let $\alpha_j(y,\oG)\ne 0$ for some $j$ and $y\in\Gamma_j$. Due to Condition~\ref{cond2.7} (part 2) and Theorem~\ref{th1-2}, there is a number $d$ such that $0<2d<1/(1+c_1)$ and
\begin{equation}\label{eq85}
\|\hat\bB_{\beta i}^1\bS_q\psi\|_{C_\cN(\overline{\Gamma_i})}\le \Bigg(\dfrac{1}{c_1(1+c_1)}-\dfrac{2d}{c_1}\Bigg)\|\bS_q\psi\|_{C_\cN(\oG)}\le \Bigg(\dfrac{1}{1+c_1}-2d\Bigg)\|\psi\|_{\cC_\cN(\pG)}.
\end{equation}
Inequalities~\eqref{eq84} and~\eqref{eq85} yield
\begin{equation}\label{eq86}
\|(\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q\|\le \dfrac{1}{1+c_1}-d
\end{equation}
for sufficiently large $q$. Now it follows from~\eqref{eq83} and \eqref{eq86} that $ \|(\bI-\bB_{\alpha}\bS_q)^{-1}(\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q\|<1 $ for sufficiently large $q$. Hence, there exists the bounded inverse operator
\begin{equation}\label{eq87}
[\bI-(\bB_\alpha+\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q]^{-1}= [\bI-(\bI-\bB_{\alpha}\bS_q)^{-1}(\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q]^{-1} [\bI-\bB_\alpha\bS_q]^{-1}.
\end{equation}
2c. If $\alpha_j(y,\oG)=0$ for $y\in\Gamma_j$, $j=1,\dots,N$, then, due to Condition~\ref{cond2.7} (part 2), inequality \eqref{eq85} takes the form
\begin{equation*}
\|\hat\bB_{\beta i}^1\bS_q\psi\|_{C_\cN(\overline{\Gamma_i})}\le \Bigg(\dfrac{1}{c_1}-\dfrac{2d}{c_1}\Bigg)\|\bS_q\psi\|_{C_\cN(\oG)}\le (1-2d)\|\psi\|_{\cC_\cN(\pG)}.
\end{equation*}
Therefore, inequality~\eqref{eq86} reduces to
\begin{equation}\label{eq88}
\|(\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q\|\le 1-d.
\end{equation}
Since $\bB_\alpha=0$ in the case under consideration, it follows from~\eqref{eq88} that the operator
$$
\bI-(\bB_\alpha+\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q= \bI-(\hat\bB_{\beta}^1+\tilde\bB_{\beta}^1)\bS_q
$$
has the bounded inverse.
3. It remains to show that the operator $\bI-\bB_{\alpha\beta}\bS_q$ also has the bounded inverse. By Condition~\ref{cond2.8}, the operator $\bB_\beta^2$ is compact. Therefore, the operator $\bB_\beta^2\bS_q$ is also compact. Since the index of a Fredholm operator is stable under compact perturbation, we see that the operator $\bI-\bB_{\alpha\beta}\bS_q$ has the Fredholm property and $\ind(\bI-\bB_{\alpha\beta}\bS_q)=0$. To prove that $\bI-\bB_{\alpha\beta}\bS_q$ has the bounded inverse, it now suffices to show that $\dim\ker(\bI-\bB_{\alpha\beta}\bS_q)=0$. Let $\psi\in \cC_\cN(\pG)$ and $(\bI-\bB_{\alpha\beta}\bS_q)\psi=0$. Then the function $u=\bS_q\psi\in C^\infty(G)\cap C_\cN(\oG)$ is a solution of the problem
\begin{gather*}
P_0u-qu=0,\quad y\in G,\\
u(y)-\bB_i u(y)-\bB_{\alpha i}u(y)-\bB_{\beta i}u(y) =0,\ y\in\Gamma_i;\qquad u(y)=0,\ y\in\cK.
\end{gather*}
By Corollary~\ref{cor2.1}, we have $u=0$. Therefore, $\psi=\bB_{\alpha\beta}\bS_q\psi= \bB_{\alpha\beta}u=0$.
\end{proof}
\section{Existence of Feller Semigroups}\label{subsectBoundedExistence}
In this section, we prove that the above bounded perturbations of elliptic operators, with nonlocal conditions satisfying the hypotheses of Secs.~\ref{subsectStatement}--\ref{subsectBoundedReduction}, are generators of some Feller semigroups. Reducing nonlocal problems to the boundary and using Lemma~\ref{l2.Boundary}, we prove that the nonlocal problems are solvable in the space of continuous functions.
\begin{lemma}\label{l2.4}
Let Conditions $\ref{cond1.1}$--$\ref{cond1.2}$, $\ref{cond2.1''}$--$\ref{cond2.4}$, and $\ref{cond2.5}$--$\ref{cond2.8}$ hold, and let $q_1$ be sufficiently large. Then, for any $q\ge q_1$ and $f_0\in C(\oG)$, the problem
\begin{equation}\label{eql2.4_1}
qu(y)-P_0u(y)=f_0(y),\quad y\in G,
\end{equation}
\begin{equation}\label{eql2.4_2}
u(y)-\bB_i u(y)-\bB_{\alpha i}u(y)-\bB_{\beta i}u(y)=0,\ y\in\Gamma_i;\qquad u(y)=0,\ y\in\cK,
\end{equation}
admits a unique solution $u\in C_B(\oG)\cap W_{2,\loc}^{2}(G)$.
\end{lemma}
\begin{proof}
1. Let us consider the auxiliary problem
\begin{equation}\label{eq70}
qv(y)-P_0v(y)=f_0(y),\ y\in G;\qquad v(y)-\bB_i v(y) =0,\ y\in\Gamma_i,\ i=1,\dots,N.
\end{equation}
Since $f_0\in C(\oG)$, it follows from Theorem~\ref{th1-2} that there exists a unique solution $v\in C_\cK(\oG)$ of problem \eqref{eq70}. Therefore, $v\in C_\cN(\oG)$.
2. Set $w=u-v$. The unknown function $w$ belongs to $C_\cN(\oG)$, and, by virtue of \eqref{eql2.4_1}--\eqref{eq70}, it satisfies the relations
\begin{equation}\label{eq71}
\begin{aligned}
qw(y)-P_0w(y)&=0,& & y\in G,\\
w(y)-\bB_i w(y)-\bB_{\alpha i}w(y)-\bB_{\beta i}w(y)&=\bB_{\alpha i}v(y)+\bB_{\beta i}v(y), & & y\in\Gamma_i,\ i=1,\dots,N,\\
w(y)&=0, & & y\in\cK.
\end{aligned}
\end{equation}
It follows from Condition~\ref{cond2.5} that problem~\eqref{eq71} is equivalent to the operator equation $\psi-\bB_{\alpha\beta}\bS_q\psi=\bB_{\alpha\beta}v$ for the unknown function $\psi\in\cC_\cN(\pG)$, where $w=\bS_q\psi$. Lemma~\ref{l2.Boundary} implies that this equation admits a unique solution $\psi\in\cC_\cN(\pG)$. In this case, problem~\eqref{eql2.4_1}, \eqref{eql2.4_2} admits a unique solution
$$
u=v+w=v+\bS_q\psi=v+\bS_q(\bI-\bB_{\alpha\beta}\bS_q)^{-1}\bB_{\alpha\beta}v\in C_B(\oG).
$$ Moreover, $u\in W^2_{2,\loc}(G)$ due to the interior regularity theorem for elliptic equations. \end{proof} Using Lemma~\ref{l2.4} and the assumptions concerning the bounded perturbations (see Condition~\ref{cond2.1'}), we prove that the perturbed problems are solvable in the space of continuous functions. \begin{lemma}\label{l2.5} Let Conditions $\ref{cond1.1}$--$\ref{cond1.2}$, $\ref{cond2.1'}$--$\ref{cond2.4}$, and $\ref{cond2.5}$--$\ref{cond2.8}$ hold, and let $q_1$ be sufficiently large. Then, for any $q\ge q_1$ and $f_0\in C(\oG)$, the problem \begin{equation}\label{eql2.5_1} qu-(P_0+P_1)u=f_0(y),\qquad y\in G, \end{equation} \begin{equation}\label{eql2.5_2} u(y)-\bB_i u(y)-\bB_{\alpha i}u(y)-\bB_{\beta i}u(y) =0,\ y\in\Gamma_i;\qquad u(y) =0,\ y\in\cK, \end{equation} admits a unique solution $u\in C_B(\oG)\cap W_{2,\loc}^{2}(G)$. \end{lemma} \begin{proof} Consider the operator $qI-P_0$ as the operator acting from $C(\oG)$ to $C(\oG)$ with the domain $$ \Dom(qI-P_0)=\{u\in C_B(\oG)\cap W^2_{2,\loc}(G): P_0u\in C(\oG)\}. $$ Lemma~\ref{l2.4} and Corollary~\ref{cor2.1} imply that there exists the bounded operator $(qI-P_0)^{-1}: C(\oG)\to C(\oG)$ and $$ \|(qI-P_0)^{-1}\|\le 1/q. $$ Introduce the operator $qI-P_0-P_1: C(\oG)\to C(\oG)$ with the domain $\Dom(qI-P_0-P_1)=\Dom(qI-P_0)$. Since $$ qI-P_0-P_1= (I-P_1(qI-P_0)^{-1})(qI-P_0), $$ it follows that the operator $qI-P_0-P_1: C(\oG)\to C(\oG)$ has the bounded inverse for $q\ge q_1$, provided that $q_1$ is so large that $ \|P_1\|\cdot \|(qI-P_0)^{-1}\|\le 1/2$, $ q\ge q_1.$ \end{proof} We consider the unbounded operator $\bP_B: \Dom(\bP_B)\subset C_B(\overline G)\to C_B(\overline G)$ given by \begin{equation}\label{eqbP_BBoundedPert} \bP_B u=P_0u+P_1u,\qquad u\in \Dom(\bP_B)=\{u\in C_B(\oG)\cap W^2_{2,\loc}(G): P_0u+P_1u\in C_B(\overline G)\}. \end{equation} \begin{lemma}\label{l2.6} Let Conditions $\ref{cond1.1}$--$\ref{cond1.2}$, $\ref{cond2.1'}$--$\ref{cond2.4}$, and $\ref{cond2.5}$--$\ref{cond2.8}$ hold. Then the set $\Dom(\bP_B)$ is dense in $C_B(\oG)$. \end{lemma} \begin{proof} We will follow the scheme proposed in~\cite{GalSkJDE}. 1. Let $u\in C_B(\oG)$. Since $C_B(\oG)\subset C_\cN(\oG)$ due to \eqref{eqBNK}, it follows that, for any $\varepsilon>0$ and $q\ge q_1$, there is a function $u_1\in C^\infty(\oG)\cap C_\cN(\oG)$ such that \begin{equation}\label{eq3.12} \|u-u_1\|_{C(\oG)}\le\min(\varepsilon,\varepsilon/(2c_1k_q)), \end{equation} where $k_q=\|(\bI-\bB_{\alpha\beta}\bS_q)^{-1}\|$. Set \begin{equation}\label{eq3.14} \begin{aligned} f_0(y)&\equiv qu_1-P_0 u_1, & & y\in G,\\ \psi_i(y)&\equiv u_1(y)-\bB_i u_1(y)-\bB_{\alpha i}u_1(y)-\bB_{\beta i}u_1(y),& & y\in\Gamma_i,\ i=1,\dots,N. \end{aligned} \end{equation} Since $u_1\in C_\cN(\oG)$, it follows from Condition~\ref{cond2.5} that $\{\psi_i\}\in \cC_\cN(\pG)$. Using the relation $$ u(y)-\bB_i u(y)-\bB_{\alpha i}u(y)-\bB_{\beta i}u(y)=0,\qquad y\in\Gamma_i, $$ inequality~\eqref{eq3.12}, and Lemma~\ref{l2.2}, we obtain \begin{equation}\label{eq3.13} \|\{\psi_i\}\|_{\cC_\cN(\pG)}\le\|u-u_1\|_{C(\oG)} +\|(\bB+\bB_{\alpha\beta})(u-u_1)\|_{\cC_\cN(\pG)}\le\varepsilon/(c_1k_q). \end{equation} Consider the auxiliary nonlocal problem \begin{equation}\label{eq3.15} \begin{gathered} qu_2-P_0 u_2 = f_0(y),\quad y\in G,\\ u_2(y)-\bB_i u_2(y)-\bB_{\alpha i}u_2(y)-\bB_{\beta i}u_2(y) =0,\ y\in\Gamma_i;\qquad u_2(y) =0,\ y\in\cK. 
\end{gathered} \end{equation} Since $f_0\in C^\infty(\oG)$, it follows from Lemma~\ref{l2.4} that problem~\eqref{eq3.15} has a unique solution $u_2\in C_B(\oG)\subset C_\cN(\oG)$. Using~\eqref{eq3.14},~\eqref{eq3.15}, and the relations $u_1(y)=u_2(y)=0$, $y\in\cK$, we see that the function $w_1=u_1-u_2$ satisfies the relations \begin{equation}\label{eq3.16} \begin{gathered} q w_1-P_0 w_1 =0,\quad y\in G,\\ w_1(y)-\bB_i w_1(y)-\bB_{\alpha i}w_1(y)-\bB_{\beta i}w_1(y) =\psi_i(y),\ y\in\Gamma_i;\qquad w_1(y) =0,\ y\in\cK. \end{gathered} \end{equation} It follows from Condition~\ref{cond2.5} that problem \eqref{eq3.16} is equivalent to the operator equation $\phi-\bB_{\alpha\beta}\bS_q\phi=\psi$ in $\cC_\cN(\pG)$, where $w_1=\bS_q\phi$. Lemma~\ref{l2.Boundary} implies that this equation admits a unique solution $\phi\in\cC_\cN(\pG)$. Therefore, using Theorem~\ref{th1-2} and inequality~\eqref{eq3.13}, we obtain \begin{equation}\label{eq3.17} \|w_1\|_{C(\oG)}\le c_1\|(\bI-\bB_{\alpha\beta}\bS_q)^{-1}\|\cdot \|\{\psi_i\}\|_{\cC_\cN(\pG)}\le c_1 k_q\varepsilon/(c_1k_q)=\varepsilon. \end{equation} 2. Finally, we consider the problem \begin{equation}\label{eq3.18} \begin{gathered} \lambda u_3-P_0u_3-P_1u_3 =\lambda u_2,\quad y\in G,\\ u_3(y)-\bB_i u_3(y)-\bB_{\alpha i}u_3(y)-\bB_{\beta i}u_3(y) =0,\ y\in\Gamma_i;\qquad u_3(y) =0,\ y\in\cK. \end{gathered} \end{equation} Since $u_2\in C_B(\oG)$, it follows from Lemma~\ref{l2.5} that problem~\eqref{eq3.18} admits a unique solution $u_3\in \Dom(\bP_B)$ for sufficiently large $\lambda$. Denote $w_2=u_2-u_3$. It follows from~\eqref{eq3.18} that $$ \lambda w_2-P_0w_2-P_1w_2=-P_0 u_2-P_1u_2=f_0-qu_2-P_1u_2. $$ Applying Corollary~\ref{cor2.1}, we have $$ \|w_2\|_{C(\oG)}\le\dfrac{1}{\lambda}\|f_0-qu_2-P_1u_2\|_{C(\oG)}. $$ Choosing sufficiently large $\lambda$ yields \begin{equation}\label{eq3.20} \|w_2\|_{C(\oG)}\le\varepsilon. \end{equation} Inequalities~\eqref{eq3.12},~\eqref{eq3.17}, and~\eqref{eq3.20} imply $$ \|u-u_3\|_{C(\oG)}\le\|u-u_1\|_{C(\oG)}+\|u_1-u_2\|_{C(\oG)}+ \|u_2-u_3\|_{C(\oG)}\le 3\varepsilon. $$ \end{proof} Now we can prove the main result of the paper. \begin{theorem}\label{th2.1} Let Conditions $\ref{cond1.1}$--$\ref{cond1.2}$, $\ref{cond2.1'}$--$\ref{cond2.4}$, and $\ref{cond2.5}$--$\ref{cond2.8}$ hold. Then the operator $\bP_B:\Dom(\bP_B)\subset C_B(\oG)\to C_B(\oG)$ is a generator of a Feller semigroup. \end{theorem} \begin{proof} 1. By Lemma~\ref{l2.5} and Corollary~\ref{cor2.1}, there exists the bounded operator $(qI-\bP_B)^{-1}: C_B(\oG)\to C_B(\oG)$ and $$ \|(qI-\bP_B)^{-1}\|\le 1/q $$ for all sufficiently large $q>0$. 2. Since the operator $(qI-\bP_B)^{-1}$ is bounded and defined on the whole space $C_B(\oG)$, it is closed. Therefore, the operator $qI-\bP_B:\Dom(\bP_B)\subset C_B(\oG)\to C_B(\oG)$ is closed. Hence, $\bP_B:\Dom(\bP_B)\subset C_B(\oG)\to C_B(\oG)$ is also closed. 3. Let us prove that the operator $(qI-\bP_B)^{-1}$ is nonnegative. Assume the contrary; then there exists a function $f_0\ge0$ such that a solution $u\in\Dom(\bP_B)$ of the equation $qu-\bP_Bu=f_0$ achieves its negative minimum at some point $y^0\in \oG$. In this case, the function $v=-u$ achieves its positive maximum at the point $y^0$. By Lemma~\ref{l2.3}, there is a point $y^1\in G$ such that $v(y^1)=v(y^0)$ and $\bP_B v(y^1)\le 0$. Therefore, $ 0<v(y^0)=v(y^1)=(\bP_Bv(y^1)-f_0(y^1))/q\le 0. $ This contradiction proves that $u\ge0$. Thus, all the hypotheses of the Hille--Iosida theorem (Theorem~\ref{thHI}) are fulfilled. 
Hence, $\bP_B:\Dom(\bP_B)\subset C_B(\oG)\to C_B(\oG)$ is a generator of a Feller semigroup.
\end{proof}
In conclusion, we give an example of nonlocal conditions satisfying the assumptions of the paper. Let $G\subset\bbR^2$ be a bounded domain with boundary $\pG=\Gamma_1\cup\Gamma_2\cup\cK$, where $\Gamma_1$ and $\Gamma_2$ are $C^\infty$ curves open and connected in the topology of $\pG$ such that $\Gamma_1\cap\Gamma_2=\varnothing$ and $\overline{\Gamma_1}\cap\overline{\Gamma_2}=\cK$; the set $\cK$ consists of two points $g_1$ and $g_2$. We assume that the domain $G$ coincides with some plane angle in an $\varepsilon$-neighborhood of the point $g_i$, $i=1,2$. Let $\Omega_j$, $j=1,\dots,4$, be continuous transformations defined on $\overline{\Gamma_1}$ and satisfying the following conditions (see Fig.~\ref{figEx3-4}):
\begin{figure}[ht]
{ \hfill\epsfxsize130mm\epsfbox{fig_ex3-4.eps}\hfill\ }
\caption{Nontransversal nonlocal conditions}
\label{figEx3-4}
\end{figure}
\begin{enumerate}
\item $\Omega_1(\cK)\subset\cK$, $\Omega_1(\Gamma_1\cap\cO_\varepsilon(\cK))\subset G$, $\Omega_1(\Gamma_1\setminus\cO_\varepsilon(\cK))\subset G\cup\Gamma_2$, and $\Omega_1(y)$ is a composition of shift of the argument, rotation, and homothety for $y\in\overline{\Gamma_1}\cap\cO_\varepsilon(\cK)$;
\item there exist numbers $\varkappa_1>\varkappa_2>0$ and $\sigma>0$ such that $\Omega_2(\overline{\Gamma_1})\subset\oG\setminus\cO_{\varkappa_1}(\cK)$ and $\Omega_2(\overline{\Gamma_1}\setminus\cO_{\varkappa_2}(\cK))\subset \overline{G_\sigma}$; moreover, $\Omega_2(g_1)\in\Gamma_1$ and $\Omega_2(g_2)\in G$;
\item $\Omega_3(\overline{\Gamma_1})\subset G\cup{\Gamma_2}$ and $\Omega_3(\cK)\subset\Gamma_2$;
\item $\Omega_4(\overline{\Gamma_1})\subset G\cup\overline{\Gamma_2}$ and $\Omega_4(\cK)\subset\cK$.
\end{enumerate}
Let $b_1\in C(\overline{\Gamma_1})\cap C^\infty(\overline{\Gamma_1}\cap\cO_\varepsilon(\cK))$, $b_2,b_3,b_4\in C(\overline{\Gamma_1})$, and $b_j\ge0$, $j=1,\dots,4$. Let $G_1$ be a bounded domain with $G_1\subset G$, and let $\Gamma\subset \oG$ be a curve of class $C^1$. Introduce continuous nonnegative functions $c(y,\eta)$, $y\in\overline{\Gamma_1}$, $\eta\in\overline{G_1}$, and $d(y,\eta)$, $y\in\overline{\Gamma_1}$, $\eta\in\overline{\Gamma}$. Consider the following nonlocal conditions:
\begin{equation}\label{eq89-90}
\begin{aligned}
u(y)-\sum\limits_{j=1}^4 b_j(y)u(\Omega_j(y))- \int\limits_{G_1}c(y,\eta)u(\eta)d\eta- \int\limits_{\Gamma}d(y,\eta)u(\eta)d\Gamma_\eta &=0, & &y\in\Gamma_1,\\
u(y) &=0, & &y\in\overline{\Gamma_2}.
\end{aligned}
\end{equation}
Let $Q\subset\oG$ be an arbitrary Borel set; introduce the measure $\mu(y,\cdot)$, $y\in\pG$:
\begin{equation*}
\begin{aligned}
\mu(y,Q)&=\sum\limits_{j=1}^4 b_j(y)\chi_Q(\Omega_j(y))+ \int\limits_{G_1\cap Q}c(y,\eta)d\eta+ \int\limits_{\Gamma\cap Q}d(y,\eta)\,d\Gamma_\eta, & &y\in\Gamma_1,\\
\mu(y,Q) &=0, & &y\in\overline{\Gamma_2}.
\end{aligned}
\end{equation*}
Let $\cN$ and $\cM$ be defined as before. Assume that
\begin{equation*}
\begin{gathered}
\mu(y,\oG)=\sum\limits_{j=1}^4 b_j(y)+\int\limits_{G_1} c(y,\eta)\,d\eta+\int\limits_\Gamma d(y,\eta)\,d\Gamma_\eta\le 1,\quad y\in\pG,\\
\int\limits_{\Gamma\cap\cM}d(y,\eta)d\Gamma_\eta<1,\quad y\in\cM;\\
b_2(g_1)=0\ \text{or}\ \mu(\Omega_2(g_1),\oG)=0,\quad b_2(g_2)=0;\quad b_4(g_j)=0;\quad c(g_j,\cdot)=0;\quad d(g_j,\cdot)=0.
\end{gathered}
\end{equation*}
Setting $b(y)=1-\mu(y,\oG)$, we can rewrite \eqref{eq89-90} in the form (cf.~\eqref{eq56})
$$
b(y)u(y)+\int\limits_\oG [u(y)-u(\eta)]\mu(y,d\eta)=0,\quad y\in\pG.
$$
Introduce a cut-off function $\zeta\in C^\infty(\bbR^2)$ supported in $\cO_\varepsilon(\cK)$, equal to $1$ on $\cO_{\varepsilon/2}(\cK)$, and such that $0\le\zeta(y)\le 1$ for $y\in\bbR^2$. Let $y\in\overline{\Gamma_1}$ and let $Q\subset\oG$ be a Borel set; denote
\begin{equation*}
\begin{gathered}
\delta(y,Q)=\zeta(y)b_1(y)\chi_Q(\Omega_1(y)),\qquad \alpha(y,Q)=b_2(y)\chi_Q(\Omega_2(y)),\\
\beta^1(y,Q)=\big(1-\zeta(y)\big)b_1(y)\chi_Q(\Omega_1(y))+\sum\limits_{j=3,4} b_j(y)\chi_Q(\Omega_j(y)),\\
\beta^2(y,Q)=\int\limits_{G_1\cap Q}c(y,\eta)d\eta+ \int\limits_{\Gamma\cap Q}d(y,\eta)\,d\Gamma_\eta
\end{gathered}
\end{equation*}
(for simplicity, we have omitted the subscript ``1'' in the notation of the measures). One can directly verify that these measures satisfy Conditions~\ref{condK1},~\ref{cond1.2}, \ref{cond2.1''}--\ref{cond2.4}, and~\ref{cond2.5}--\ref{cond2.8}.
\smallskip
The author is grateful to Prof. A.L. Skubachevskii for his attention to this work.
\section{INTRODUCTION}
The inclusive cross section for a double parton scattering, namely an event where, in the same inelastic interaction, two different pairs of partons scatter independently with large momentum transfer, is written as\cite{Double}:
\begin{eqnarray}
\sigma_D=\int_{p_t^c}&D_2(x_A,x_A';{\bf b}) \hat{\sigma}(x_A,x_B)\nonumber\\
&\hat{\sigma}(x_A',x_B') D_2(x_B,x_B';{\bf b})\\
&d{\bf b}dx_Adx_Bdx_A'dx_B'\nonumber
\end{eqnarray}
\par\noindent
$ \hat{\sigma}(x_A,x_B)$ is the parton-parton cross section integrated with the cutoff $p_t^c$, which is the lower threshold to observe final state partons as minijets; $x$ is the momentum fraction, and $A$ and $B$ are labels identifying the two interacting hadrons. $\sigma_D$ is a function of the product $\hat{\sigma}(x_A,x_B) \hat{\sigma}(x_A',x_B')$. Actually the two different partonic interactions are localized in two regions in transverse space with a size of order $(1/p_t^c)^2$ and at a relative distance of the order of the hadronic radius $r$, in such a way that the two partonic interactions add incoherently in the double scattering cross section. The non perturbative input in Eq.(1) is the two-body parton distribution $D_2(x,x';{\bf b})$, which depends on the fractional momenta of the two partons taking part in the interaction and on their relative transverse distance ${\bf b}$. The transverse distance ${\bf b}$ has to be the same for the two partons of hadron $A$ and the two partons of hadron $B$, in order to have the alignment which is needed for a double collision to occur. $D_2$ is a dimensional quantity and therefore the process introduces a non perturbative scale factor which is related to the hadronic transverse size.
\par\noindent
The simplest possibility to consider is the one where the dependence of $D_2$ on the different variables is factorized:
\begin{equation}
D_2(x,x';{\bf b})=f_{eff}(x)f_{eff}(x')F({\bf b})
\end{equation}
\noindent
$f_{eff}$ is the effective parton distribution, namely the gluon plus $4/9$ of the quark and anti-quark distributions, and $F({\bf b})$ is normalized to one. Multiparton distributions are then uncorrelated and $D_2$ does not contain further information with respect to the one-body parton distribution (actually $f_{eff}$) apart from the dependence on ${\bf b}$, whose origin is the dimensionality of $D_2$ and which gives rise to the scale factor $\sigma_{eff}$. In fact in this case one may write
\begin{equation}
\sigma_D={\sigma_S^2\over\sigma_{eff}}
\end{equation}
\noindent with
\begin{equation}
{1\over\sigma_{eff}}=\int F^2({\bf b})d^2b
\end{equation}
\noindent and
\begin{equation}
\sigma_S=\int_{p_t^c}f_{eff}(x_A)f_{eff}(x_B) \hat{\sigma}(x_A,x_B)dx_Adx_B,
\end{equation}
\noindent the single scattering expression of the perturbative QCD parton model. (For the Gaussian choice $F({\bf b})={\rm exp}(-b^2/R^2)/\pi R^2$, used again below, Eq.(4) gives $\sigma_{eff}=2\pi R^2$.)
\par
Eq.(2) is the basic hypothesis underlying the signature of a double parton collision which has been looked for experimentally\cite{CDF,Dexp}. The expected characteristic feature of a double collision is in fact that it should produce a final state analogous to the one obtained by superposing two single scattering processes. By looking at the dependence of $\sigma_{eff}$ on $x$, CDF has been able to verify the correctness of the factorization hypothesis in Eq.(2). The range of values of $x$ available is limited to $x\le0.2$, for the interaction producing a pair of minijets, and to $x\le0.4$ for the interaction giving rise to a minijet and a photon.
In the limited range of values of $x$ available, the factorization hypothesis has been shown to be consistent with the experimental evidence.
\par
Since the uncorrelation hypothesis does not contradict the experiment, one can work out the case where all multiparton distributions are uncorrelated and sum the contributions of all multiparton interactions to the hadronic inelastic cross section. The subset where all multiple parton collisions are disconnected can be easily summed up in the uncorrelated case\cite{Amet}. The result is the semi-hard hadronic cross section $\sigma_H$, which represents the contribution to the hadronic inelastic cross section from events with at least one semi-hard partonic interaction. The actual expression is
\begin{eqnarray}
\sigma_H&=&\int d^2\beta\Bigl[1-e^{-\sigma_SF(\beta)}\Bigr]\nonumber\\
&=&\sum_{n=1}^{\infty}\int d^2\beta{\bigl(\sigma_SF(\beta)\bigr)^n\over n!} e^{-\sigma_SF(\beta)}
\end{eqnarray}
\par\noindent
The integration on the impact parameter of the hadronic collision, $\beta$, gives the dimensionality to the cross section. The argument of the integral has the meaning of a Poissonian distribution of multiple semi-hard partonic interactions with average number depending on the impact parameter.
\par
The value of $\sigma_{eff}$ can be obtained from the second term of the expansion of $\sigma_H$ in powers of multiple collisions (whose opposite is precisely $\sigma_D/2$), so the actual value of $\sigma_H$ is related to the value of $\sigma_{eff}$ through Eq.(6). The single and the double parton scattering cross sections are however related to the average number of parton scatterings and to the second factorial moment. Indeed if one writes the average number of parton scatterings one obtains:
\begin{eqnarray}
\langle n\rangle\sigma_H&=&\int d^2\beta\sum_{n=1}^{\infty} n\,{\bigl(\sigma_SF(\beta)\bigr)^n\over n!} e^{-\sigma_SF(\beta)}\nonumber\\
&=&\int d^2\beta\, \sigma_SF(\beta)=\sigma_S
\end{eqnarray}
\noindent and for the second factorial moment:
\begin{eqnarray}
\langle n(n-1)\rangle&\sigma_H&=\nonumber\\
=\int d^2\beta& \sum_{n=1}^{\infty}& n(n-1)\,{\bigl(\sigma_SF(\beta)\bigr)^n\over n!} e^{-\sigma_SF(\beta)}\nonumber\\
=\int d^2\beta& \sigma_S^2&\bigl[F(\beta)\bigr]^2=\sigma_D
\end{eqnarray}
\noindent The relations between $\sigma_S$ and $\langle n\rangle$ and between $\sigma_D$ and $\langle n(n-1)\rangle$ just obtained do not hold only in the simplest case of a Poissonian distribution of multiple parton collisions: they have indeed a much more general validity. One can in fact obtain the same relations in the most general case of multiparton distributions, moreover taking into account all semi-hard parton rescatterings\cite{Funct}. One may therefore write
\begin{equation}
\langle n\rangle\sigma_H=\sigma_S\quad {\rm and}\quad \langle n(n-1)\rangle\sigma_H=\sigma_D
\end{equation}
\noindent The effective cross section is defined by the relation
$$
\sigma_D={\sigma_S^2\over\sigma_{eff}},
$$
\noindent so that one may write
\begin{equation}
\langle n(n-1)\rangle=\langle n\rangle^2{\sigma_H\over\sigma_{eff}}
\end{equation}
\noindent which implies that, in the case of an overall Poissonian distribution of multiple parton collisions, one would have $\sigma_{eff}=\sigma_H$. In the simplest uncorrelated case, when the number of parton collisions is very large, the distribution is Poissonian only at a fixed value of the impact parameter.
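A few lines of code (a sketch of ours, not part of the original analysis; all numbers are purely illustrative) make the effect of averaging the fixed-$\beta$ Poissonians over the impact parameter explicit, using the Gaussian overlap introduced below:
\begin{verbatim}
import numpy as np

# Illustrative inputs: R sets the hadronic scale, sigma_S the integrated
# parton-parton cross section (units with R = 1).
R, sigma_S = 1.0, 20.0

beta = np.linspace(0.0, 12.0, 4001)              # impact parameter grid
F = np.exp(-beta**2 / R**2) / (np.pi * R**2)     # Gaussian overlap, normalized to 1
n_beta = sigma_S * F                             # <n(beta)>: Poisson mean at fixed beta
w = 2.0 * np.pi * beta                           # radial measure for d^2 beta

sigma_H = np.trapz((1.0 - np.exp(-n_beta)) * w, beta)   # Eq. (6)
sigma_S_chk = np.trapz(n_beta * w, beta)                # Eq. (7): recovers sigma_S
sigma_D = np.trapz(n_beta**2 * w, beta)                 # Eq. (8): sigma_S^2/(2 pi R^2)

n_mean, n_fact2 = sigma_S_chk / sigma_H, sigma_D / sigma_H
print(n_fact2 > n_mean**2)                # True: over-dispersed w.r.t. a Poissonian
print(sigma_S_chk**2 / sigma_D, sigma_H)  # sigma_eff = 2 pi R^2 < sigma_H here
\end{verbatim}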
The expectation is therefore that the overall distribution in the number of parton collisions has a larger dispersion as compared with the Poissonian case. In that regime $\sigma_{eff}$ is then smaller than $\sigma_H$. The comparison between the actual value of $\sigma_{eff}$ and of $\sigma_H$ depends on the functional form of $F(\beta)$. In the simplest case where $F(\beta)={\rm exp}(-\beta^2/R^2)/\pi R^2$ one obtains a closed analytic expression for $\sigma_H$:
\begin{equation}
\sigma_H=2\pi R^2\bigl[\gamma+{\rm ln}\kappa+E_1(\kappa)\bigr]
\end{equation}
\par\noindent
where $\gamma=0.5772\dots$ is Euler's constant, $\kappa=\sigma_S/(2\pi R^2)$ and $E_1(x)$ is the exponential integral. In this example the relation with the hadronic radius $r$ is $R=r\sqrt2$. For small $\kappa$ one obtains $\sigma_H\to 2\pi R^2\kappa=\sigma_S$; for large $\kappa$, namely $\sigma_S\to \infty$, one obtains $\sigma_H\to2\pi R^2\bigl(\gamma+{\rm ln}\kappa\bigr)$. Here $\sigma_{eff}=2\pi R^2$. The value of $\sigma_H$ is therefore proportional to the measured value of $\sigma_{eff}$; the proportionality factor depends slightly on the energy and on the cutoff. Sensible values of the hadron-hadron c.m. energy and of the cutoff give values for $\sigma_H$ which are some $30$--$40\%$ larger than the value of $\sigma_{eff}$. Different analytic forms for $F(\beta)$ give qualitatively similar results.
\par\noindent
The effective cross section quoted by CDF is in fact different from the effective cross section discussed here and in most of the papers on double parton scatterings. $\sigma_{eff}$ has a simple link with the overlap of matter distribution in the hadronic collision when $\sigma_D$ is obtained from the second moment of the distribution in the number of partonic collisions, as discussed above. In its sample of events with double parton collisions, CDF has on the contrary removed all events where triple parton collisions are present. The correction is not a minor one, since the fraction of events with triple collisions is 17\%. In the simplest uncorrelated case just discussed, the double parton scattering cross section measured by CDF would correspond to the expression
\begin{equation}
\bigl[\sigma_D\bigr]_{CDF}= \int d^2\beta{\bigl(\sigma_SF(\beta)\bigr)^2\over 2} e^{-\sigma_SF(\beta)}
\end{equation}
\noindent The relation above shows the complication introduced by the requirement of an exclusive cross section. In order to make the exclusive selection of the events with double parton collisions only, one has to introduce the exponential factor which represents the probability of not having any further parton interaction. This factor, in principle, has a rather complicated dependence on the overlap of the matter distribution of the two hadrons since, by unitarity, it is related to the whole series of multiple parton collisions. The effective cross section quoted by CDF, $(\sigma_{eff})_{CDF}= 14.5\pm1.7^{+1.7}_{-2.3}\,{\rm mb}$, refers to the exclusive measurement and therefore it has to be regarded as an upper bound on the value of the effective cross section related to an inclusive measurement, as discussed here. The experimental indication is therefore that the effective cross section is rather small as compared with the naive expectation. The simplest assumptions underlying the derivation of Eq.(6) have therefore to be revised.
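Before revising those assumptions, we note that the closed analytic expression above makes the quoted numerics easy to reproduce (a minimal sketch of ours; the values of $\kappa$ are purely illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import exp1   # the exponential integral E_1

gamma_E = 0.5772156649           # Euler's constant
for kappa in (1.0, 2.0, 2.3):
    # sigma_H / sigma_eff = sigma_H / (2 pi R^2), from the closed form above
    ratio = gamma_E + np.log(kappa) + exp1(kappa)
    print(kappa, ratio)
# kappa ~ 2-2.3 gives ratios ~ 1.3-1.45, i.e. the 30-40% excess quoted above
\end{verbatim}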
\par
The main hypothesis made to obtain the expression for $\sigma_H$ in Eq.(6) is that of a Poissonian multiparton distribution. On the other hand one has to expect correlations between partons as a consequence of the binding force. While most probably correlations will affect the $x$ dependence of the multiparton distribution only for finite values of $x$, and therefore at large rapidities, correlations in the transverse parton coordinates are present in every kinematical regime. Indeed the main reason of interest in multiple parton collisions, besides the identification of the process itself, is precisely the measurement of the many-body parton correlations, which provide information on the hadron structure independent of the one-body parton distributions usually considered in hard processes.
\par
In the next section we discuss the most general expression for the semihard cross section $\sigma_H$, which one obtains by
\par\noindent
1) assuming that only two-body parton correlations are present in the many-body parton distributions and by
\par\noindent
2) summing all disconnected multiple parton interactions.
\section{SEMI-HARD CROSS SECTION AND CORRELATIONS}
\par
At a given resolution, provided by the cutoff $p_t^{min}$ that defines the lower threshold for the production of minijets, one can find the hadron in various partonic configurations. The probability of an exclusive $n$-parton configuration, namely the probability of finding the hadron in a configuration with $n$ partons, is denoted by $W_n(u_1\dots u_n)$. $u_i\equiv({\bf b}_i,x_i)$ represents the transverse partonic coordinate ${\bf b}_i$ and the longitudinal fractional momentum $x_i$, while color and flavor variables are not considered explicitly. The distributions are symmetric in the variables $u_i$. One defines the generating functional of the multiparton distributions as:
\begin{eqnarray}
{\cal Z}[J]=&\sum_n&{1\over n!}\int J(u_1)\dots J(u_n)\nonumber\\
&W_n&(u_1\dots u_n) du_1\dots du_n,
\end{eqnarray}
\noindent where the dependence on the infrared cutoff $p_t^{min}$ is implicitly understood, and one may introduce also the logarithm of the generating functional: ${\cal F}[J]={\rm ln}\bigl({\cal Z}[J]\bigr)$. The conservation of the probability yields the overall normalization condition
\begin{equation}
{\cal Z}[1]=1.
\end{equation}
\noindent One may use the generating functional to derive the many-body densities, i.e. the inclusive distributions $D_n(u_1\dots u_n)$:
\begin{eqnarray}
D_1(u)&=&{\delta{\cal Z}\over \delta J(u)}\biggm|_{J=1} ,\nonumber\\
D_2(u_1,u_2)&=& {\delta^2{\cal Z}\over \delta J(u_1)\delta J(u_2)} \biggm|_{J=1},\nonumber\\
&\dots\nonumber\\
\end{eqnarray}
\noindent The many-body parton correlations are defined by expanding ${\cal F}[J]$ in the vicinity of $J=1$:
\begin{eqnarray}
{\cal F}[J]&=&\int D(u)[J(u)-1]du\nonumber\\
&+&\sum_{n=2}^{\infty}{1\over n!} \int C_n(u_1\dots u_n)\bigl[J(u_1)-1\bigr]\dots\nonumber\\
&\dots&\bigl[J(u_n)-1\bigr] du_1\dots du_n
\end{eqnarray}
\noindent Here $D=D_1$ and the correlations $C_n$ describe how much the distribution deviates from a Poisson distribution, which corresponds in fact to $C_n\equiv 0$, $n\ge 2$.
\par
In the case of hadron-nucleus and nucleus-nucleus collisions a systematic use of the AGK cutting rules\cite{Agk} allows one to express the total inelastic cross section as a probabilistic superposition of nucleon-nucleon interaction probabilities\cite{Cs}. The same feature holds for the self-shadowing cross sections\cite{Co}.
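As a benchmark for the functional formalism just introduced, one may keep in mind the fully uncorrelated case: taking
$$
W_n(u_1\dots u_n)=D(u_1)\dots D(u_n)\, e^{-\int D(u)du}
$$
gives ${\cal Z}[J]={\rm exp}\bigl\{\int D(u)[J(u)-1]du\bigr\}$, so that ${\cal F}[J]$ is linear in $J$, $D_1=D$, $D_2(u_1,u_2)=D(u_1)D(u_2)$, and all the correlations $C_n$, $n\ge2$, vanish identically.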
When considering hadron-hadron collisions as interactions between objects composed of partons, one can make the assumption that similar relations hold with nucleons in the place of nuclei and partons replacing nucleons. Of course, contrary to the nucleon number in a nucleus, the parton number is not fixed. In this respect semihard parton-parton interactions have to be regarded as a particular case of self-shadowing interactions\cite{Cn}. The semi-hard nucleon-nucleon cross section is then expressed as the sum of all the probabilities of multiple parton collisions:
\begin{equation}
\sigma_H=\int d^2\beta\sigma_H(\beta)
\end{equation}
with
\begin{eqnarray}
&\sigma_H&\!(\beta)=\int\sum_n{1\over n!} {\delta\over \delta J(u_1)}\dots {\delta\over \delta J(u_n)}{\cal Z}_A[J]\nonumber\\
&\times&\sum_m{1\over m!} {\delta\over \delta J'(u_1'-\beta)}\dots {\delta\over \delta J'(u_m'-\beta)}{\cal Z}_B[J']\nonumber\\
&\times&\Bigl\{1-\prod_{i=1}^n\prod_{j=1}^m\bigl[1-\hat{\sigma}_{i,j}(u,u')\bigr ] \Bigr\}\prod dudu'\Bigm|_{J=J'=0}\nonumber\\
\end{eqnarray}
\noindent where $\beta$ is the impact parameter between the two interacting hadrons $A$ and $B$ and $\hat{\sigma}_{i,j}$ is the elementary probability for parton $i$ (of $A$) to have a hard interaction with parton $j$ (of $B$). The semi-hard cross section is constructed summing over all possible partonic configurations of the two interacting hadrons (the sums over $n$ and $m$) and, for each configuration with $n$ partons from $A$ and $m$ partons from $B$, summing over all possible multiple partonic interactions. This last sum is constructed asking for the probability of no interaction between the two configurations (actually $\prod_{i=1}^n\prod_{j=1}^m[1-\hat{\sigma}_{i,j}]$). One minus the probability of no interaction is equal to the sum over all semi-hard interaction probabilities.
\par
The presence of multiple parton interactions is induced by the large flux of partons which is effective at large energies. The most important contribution to the semi-hard cross section, as a consequence, is the contribution of the disconnected partonic collisions, namely the interactions where each parton undergoes at most one semi-hard collision. These are, in fact, those multiple partonic interactions that, at a given number of partonic collisions, maximize the parton flux. Indeed the search and the observation of the first evidence of multiple semi-hard parton interactions have been focused on the case of double disconnected parton interactions\cite{CDF,Dexp}. We therefore simplify the problem by expanding the interaction probability (the factor in curly brackets) as sums and by removing all the addenda containing repeated indices:
\begin{eqnarray}
\Bigl\{1&-&\prod_{i,j}^{n,m}\bigl[1-\hat{\sigma}_{ij}\bigr]\Bigr\} \Rightarrow\\
\sum_{ij}\hat\sigma_{ij}&-&{1\over 2!} \sum_{ij}\sum_{k\not=i,l\not=j}\hat\sigma_{ij}\hat\sigma_{kl}+ \dots
\end{eqnarray}
\noindent As a result the semi-hard cross section is constructed with multiple disconnected parton collisions only, where disconnected refers to the perturbative component of the interaction. Because of the symmetry of the derivative operators in Eq.(19) one can replace the expression in Eq.(21) with:
\begin{equation}
nm\hat\sigma_{11}-{1\over 2!}n(n-1)m(m-1)\hat\sigma_{11}\hat\sigma_{22} +\dots
\end{equation}
\noindent in such a way that the sums over $m$ and $n$ can be performed explicitly.
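Explicitly, the term with $k$ elementary interactions in the replacement above reads (a straightforward counting exercise)
$$
{(-1)^{k+1}\over k!}\,{n!\over (n-k)!}\,{m!\over (m-k)!}\,
\hat\sigma_{11}\dots\hat\sigma_{kk},
$$
since the $k$ distinct partons of $A$ (and, independently, those of $B$) entering the product can be chosen, in an ordered way, in $n!/(n-k)!$ (respectively $m!/(m-k)!$) different ways, and the symmetry of the derivative operators makes all these choices equivalent.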
As a consequence the cross section at fixed impact parameter, $\sigma_H(\beta)$, can be expressed by the operatorial form: \begin{eqnarray} \sigma_H(\beta)&=& \Bigl[ 1-\exp\bigl(-\delta\cdot\hat{\sigma}\cdot\delta'\bigr)\Bigr]\cr\cr &\cdot&{\cal Z}_A[J+1]{\cal Z}_B[J'+1]\Bigm|_{J=J'=0} \end{eqnarray} \noindent We have avoided writing explicitly the variables $u$ and $u'$ and the functional derivative ${\delta /\delta J(u_i)}$ has been simply indicated as $\delta_i$. \par The form of $\sigma_H(\beta)$ given by Eq.(23) is still too complicated to be worked out in its general form, since all possible multi-parton correlations are present in ${\cal Z}$. Therefore we further simplify the problem by taking into account two-body parton correlations only. Our explicit expression for ${\cal F}$ is therefore: \begin{eqnarray} {\cal F}[J+1]=&\int& D(u)J(u)du\cr\cr+ {1\over 2}&\int& C(u,v)J(u)J(v)dudv \end{eqnarray} \noindent where $D(u)$ is the average number of partons and $C(u,v)$ is the two-body parton correlation. \par Either by using techniques of functional integration or by means of a suitable diagrammatic expansion\cite{Cd} one is able to obtain in this case a closed expression for $\sigma_H(\beta)$: \begin{equation} \sigma_H(\beta)=1-\exp\Bigl[- \frac{1}{2} \sum_n a_n -\frac{1}{2} \sum_n b_n /n\Bigr] \end{equation} \noindent where $a_n$ and $b_n$ are functions of the impact parameter $\beta$ and are given by \begin{eqnarray} a_n&=&\!\int D_A(u_1)\hat \sigma (u_1,u'_1)\times\cr\cr &\times&C_B(u'_1-\beta,u'_2-\beta)\hat \sigma (u'_2,u_2) C_A(u_2,u_3)\dots\cr\cr &\dots& D_B(u'_n-\beta) \prod du_i du'_i \end{eqnarray} \begin{eqnarray} b_n&=&\!\int C_A(u_n ,u_1)\hat \sigma (u_1,u'_1)\times\cr\cr &\times&C_B(u'_1-\beta,u'_2-\beta)\hat \sigma (u'_2,u_2) \dots\cr\cr &\dots& C_B(u'_{n-1}-\beta,u'_n-\beta)\hat \sigma (u'_n,u_n) \cr\cr &\prod& du_i du'_i\,. \end{eqnarray} The actual expression for $a_n$ holds for $n$ odd. When $n$ is odd one may also have the symmetric case, where the expression begins with $D_B$ and ends with $D_A$. When $n$ is even the initial and final distribution are either both $D_A$ or both $D_B$. In the definition of $b_n$ $n$ is always even, so that one of the ends is $A$ and the other is $B$. One may notice that, at a given order in the number of partonic interactions, one can obtain a term of kind {\it a} from a term of kind {\it b} by replacing one $C$ with a pair of $D$'s. The operation can be done in $n$ ways. The combinatorial meaning of the $1/n$ factor multiplying each term of kind {\it b} in Eq.(25) is then understood. The factor $1/2$ in Eq.(25) is the consequence of the symmetry between $A$ and $B$. \par The cross section is given by an integral on the impact parameter of the interaction probability, $\sigma_H(\beta)$, that is expressed as one minus the probability of no interaction. The probability of no interaction is given by the negative exponential of the sum over all possible different connected structures, namely all structures of kind $a_n$ and of kind $b_n$. With our approximations, Eq.(21) and Eq.(24), these are in fact all possible connected structures which can be built with the average numbers $D_{A,B}$, the two-body correlations $C_{A,B}$ and the interaction $\hat{\sigma}$ . Expanding the exponential, the cross section can then be expressed as the sum over all possible structures, both connected and disconnected. 
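To make the structure of the general expressions for $a_n$ and $b_n$ more transparent, it may help to write down the lowest order terms explicitly (this is just an unpacking of the definitions above):
$$
a_1=\int D_A(u)\,\hat \sigma (u,u')\,D_B(u'-\beta)\,du\,du',
$$
which is the average number of single parton collisions at fixed impact parameter (the quantity $\langle n(\beta)\rangle$ appearing below), and
$$
b_2=\int C_A(u_1,u_2)\,\hat \sigma (u_1,u_1')\,C_B(u_1'-\beta,u_2'-\beta)\,
\hat \sigma (u_2',u_2)\, du_1du_2\,du_1'du_2',
$$
the simplest term where the two interacting parton pairs are correlated both in $A$ and in $B$; after integration over $\beta$ it gives rise to the correlation contribution to $1/\sigma_{eff}$ discussed in the next section.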
\par One will notice that, when no correlations are present, all terms of kind {\it b} disappear and only the first of the terms of kind {\it a}, namely $D_A\hat{\sigma}D_B$, is left. In that limit the cross section is given simply by: \begin{equation} \sigma_H=\int d^2\beta\bigl\{1-e^{-\langle n(\beta)\rangle}\bigr\} \end{equation} \noindent where \begin{equation} \langle n(\beta)\rangle= \int D_A(u-\beta)D_B(u')\hat{\sigma}(u,u')dudu' \end{equation} \par\noindent which corresponds to the Poissonian distribution discussed in the introduction. \section{TWO DIFFERENT QUALITATIVE FEATURES OF THE CORRELATION TERM} The small value of $\sigma_{eff}$, the dimensional parameter characterizing double parton scatterings, which has been measured recently by CDF, is an indication that two-body parton correlations, in the many-body parton distribution of the proton, are likely to be sizable. In the case of an uncorrelated many-body parton distribution, the value of $\sigma_{eff}$ puts a constraint on the range of possible values of $\sigma_H$, the semi-hard contribution to the hadronic inelastic cross section. The actual measured value of $\sigma_{eff}$ would give rise to values of $\sigma_H$ of the order of $\sigma_{inel}/2$ also at very large c.m. energies, where one would rather expect $\sigma_H\simeq\sigma_{inel}$. The experimental evidence is also that, in the $x$ region accessible experimentally, namely at small $x$ values, the correlation in fractional momenta is not a large effect. \par $\sigma_H$ can be worked out rather explicitly when only two-body parton correlations are included in the many-body parton distributions and when each parton can have at most one semi-hard interaction. Two qualitatively different features can be present in the two-body parton correlation, and both change the relation between $\sigma_H$ and $\sigma_{eff}$ with respect to the uncorrelated case: 1) The distribution in the number of partons is no longer a Poissonian, although the dependence on the kinematical variables of the different partons is factorized. 2) The overall distribution in the number of partons, obtained after integrating over the partonic kinematical variables, is a Poissonian, but the dependence on the partonic kinematical variables is not factorized; in this case the two-body parton correlation integrates to zero. \par\noindent The general case is obviously a combination of the two possibilities. We point out, however, that both cases can separately give rise to a small value of $\sigma_{eff}$ while keeping the value of $\sigma_H$ close to $\sigma_{inel}$. \par One can work out the expression for the semi-hard cross section in Eq.(25) explicitly by considering definite examples\cite{Cu}. The general result is, however, that the critical value of the impact parameter $\beta_c$, which sets the size of the cross section $\sigma_H$, is the value at which the argument of the exponential in the expression for $\sigma_H(\beta)$ becomes small. The detailed dependence of the argument of the exponential at $\beta<\beta_c$ is not of great importance for the determination of $\sigma_H$ when, for $\beta<\beta_c$, the argument of the exponential is already large: $\sigma_H$ is obtained by integrating the probability of having at least one semi-hard interaction. When the probability of having at least one semi-hard interaction is close to one, the contribution to the integral is very similar for events with the same impact parameter and with different but large average numbers of partonic collisions.
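This saturation is easy to visualize numerically. The following sketch (with an assumed Gaussian form for $\langle n(\beta)\rangle$, which is our own illustrative choice and not a parametrization used in the text) integrates Eq.(28) and shows that, once the interaction probability saturates at small $\beta$, $\sigma_H$ grows only logarithmically with the overall parton flux.

\begin{verbatim}
import numpy as np

def sigma_H(n0, R=1.0, bmax=10.0, nb=4000):
    # sigma_H = int d^2beta [1 - exp(-<n(beta)>)],
    # with <n(beta)> = n0 * exp(-beta^2/R^2) (illustrative)
    b = np.linspace(0.0, bmax * R, nb)
    p = 1.0 - np.exp(-n0 * np.exp(-(b / R) ** 2))
    return np.trapz(2.0 * np.pi * b * p, b)

for n0 in (1, 5, 25, 125):
    # in units of R^2; grows roughly like pi*R^2*ln(n0)
    print(n0, round(sigma_H(n0), 3))
\end{verbatim}

The integral is dominated by the region $\beta\lesssim\beta_c$ where the argument of the exponential is of order one, in agreement with the discussion above.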
\par The critical value of the impact parameter which sets the size of $\sigma_H$ is therefore determined by the argument of the exponential at the edge of the interaction region. The dominant contribution at the edge is due to the single scattering term, since higher order collision terms are important when the density of overlapping matter of the two hadrons is large. This is precisely the argument of the exponential in the uncorrelated case, and the consequence is that the resulting value of $\sigma_H$ is not very different from the uncorrelated case, actually $\sigma_H\simeq2\pi R^2$ as discussed in the introduction. \par The correlation term is on the contrary able to change sizably the effective cross section. One may modify the number distribution, without introducing non-factorized two-body correlations in ${\bf b}$, by using for example the factorized expression \begin{equation} C(u,u')=-\lambda D(u) D(u') \end{equation} \par\noindent One obtains in this case the relation\cite{Cu} \begin{equation} \sigma_{eff}={2\pi R^2\over (1+\lambda)^2} \end{equation} \noindent If one introduces a correlation term which does not modify the parton number distribution and which therefore integrates to zero, the double scattering cross section is increased, with respect to the uncorrelated case, by an additive term corresponding to the convolution of two correlations\cite{Cu}: \begin{eqnarray} {1\over\sigma_{eff}}&=&{1\over2\pi R^2}+\cr\cr &+&\int C({\bf b}, {\bf b}') C({\bf b}-\beta, {\bf b}'-\beta)\cr\cr &\times&d^2bd^2b'd^2\beta \end{eqnarray} \par A qualitative feature is that in both cases one obtains a value of $\sigma_{eff}$ which may be sizably smaller than $2\pi R^2\simeq\sigma_H$, while, on the other hand, nothing prevents the value of $\sigma_H$ from being close to the value of $\sigma_{inel}$. The smaller value of $\sigma_{eff}$, with respect to the expectation of the uncorrelated case, is rather generally associated with the increased dispersion of the distribution in the number of partonic collisions: in the case of no correlations the distribution is strictly Poissonian when the impact parameter is fixed. When correlations are introduced the distribution in the number of parton collisions, at fixed $\beta$, is no longer Poissonian and the natural consequence is that the dispersion in the number of collisions is increased. \par The indication from the measurement of the rate of double parton scatterings is therefore that two-body parton correlations are likely to be important while, unfortunately, one cannot say much about dynamical quantities, like the correlation length. Useful observables to be measured, in order to get some more insight into the problem, would be the semi-hard cross section $\sigma_H$ and the triple parton scattering cross section. The measurement of $\sigma_H$, in association with $\sigma_{eff}$, would help considerably in clarifying the size of the effect induced by the presence of the two-body parton correlations: all present considerations are based on the prejudice that $\sigma_H$ should have a value rather close to the value of $\sigma_{inel}$. \par\noindent The measurement of triple and higher order parton scatterings would give important constraints on models of the many-body parton distributions. For example, if only lower order correlations were important one would be able to fix all the correlation parameters.
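As a back-of-the-envelope orientation (the numbers below are illustrative assumptions of ours, not values quoted in this paper), one can invert Eq.(31) to see how large a number-distribution correlation $\lambda$ would be needed to reduce $\sigma_{eff}$ from its uncorrelated value $2\pi R^2$ to a measured value of order $15$ mb.

\begin{verbatim}
import math

R_fm = 1.0                  # assumed hadronic radius (illustrative)
mb_per_fm2 = 10.0           # 1 fm^2 = 10 mb
sigma_eff_0 = 2 * math.pi * R_fm ** 2 * mb_per_fm2  # uncorrelated
sigma_eff_meas = 15.0       # assumed measured value in mb

# Eq.(31): sigma_eff = 2 pi R^2 / (1 + lambda)^2
lam = math.sqrt(sigma_eff_0 / sigma_eff_meas) - 1.0
print(round(sigma_eff_0, 1), "mb ->", round(lam, 2))
\end{verbatim}

A correlation parameter of order one is therefore enough to reduce the effective cross section substantially, without touching $\sigma_H$.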
\par While a lot of effort has been put into the study of the proton structure as a function of the momentum fraction $x$, one should keep in mind that the distribution of partons depends on three degrees of freedom: the momentum fraction $x$ and the two-dimensional transverse parton coordinate ${\bf b}$. The measurement of multiple parton collisions is the essential tool which allows us to learn about the parton structure of the proton in the transverse plane.
\section{INTRODUCTION} \label{sec:intro} The general belief that QCD undergoes a phase transition to a quark-gluon plasma phase at high temperature has triggered a lot of activity on both the theoretical and the experimental side. The original argument put forward by Casher~\cite{Casher:1979vw}, suggesting that confinement implies dynamical chiral symmetry breaking and hence that the chiral and deconfinement phase transitions take place simultaneously, at least at zero chemical potential, has been pursued and so far confirmed in theoretical studies on the lattice~\cite{Karsch:1998hr}. This also agrees with the phenomenological determinations of the vacuum energy density in the bag model, with an energy density difference between the Wigner and Goldstone realizations of chiral symmetry. It has also been shown that, in the large $N_c$ limit with the temperature $T$ kept fixed, if a chiral phase transition takes place it should be first order~\cite{Gocksch:1982en}. The coupling of QCD distinctive order parameters at finite temperature to hadronic properties has been the subject of much attention over the recent past \cite{Sannino:2002wb,Mocsy:2003qw, Fukushima:2003fw,Fukushima:2003fm,Gocksch:1984yk,Meisinger:2003uf} mainly in connection with theoretical expectations on the formation of quark-gluon plasma and the onset of deconfinement. Indeed, even if such a state of matter is produced in existing (RHIC, SPS \cite{McLerran:2002jb,Heinz:2002gs}) and future (LHC) facilities, the states which are detected are hadrons created in a hot environment. Thus, it makes sense to study the properties of hadrons in a medium which can undergo a confinement-deconfinement phase transition. For heavy quark masses, quarks become static sources and there is a general consensus that the order parameter can be taken to be the Polyakov loop or thermal Wilson line \cite{KorthalsAltes:2003sv}, where the breaking of the center symmetry signals the onset of deconfinement. Dynamical light quarks, however, explicitly break the center symmetry and no criterion for deconfinement has been established yet \cite{Meyer:1983hm,Svetitsky:1986ye}. In QCD, there has been an increasing interest in developing effective actions for the Polyakov loop as a confinement-deconfinement order parameter because of their relevance in describing the phase transition from above the critical temperature~\cite{Dumitru:2001xa,Meisinger:2001cq, Dumitru:2003hp,Meisinger:2003id}. On the other hand, in a hot medium, one also expects that the spontaneously broken chiral symmetry is restored at some critical temperature. For this chiral phase transition the quark condensate is commonly adopted as the relevant order parameter. The melting of the chiral quark condensate has been observed on the lattice~\cite{Karsch:1998hr}, is suggested by chiral perturbation theory extrapolations~\cite{Gasser:1986vb,Gerber:1988tt} and is numerically reproduced in chiral quark models before~\cite{Bernard:1987ir,Christov:1991se} and after inclusion of pion corrections~\cite{Florkowski:1996wf} (for a review see e.g. Ref.~\cite{Oertel:2000jp}). Where theory has most problems is precisely in the interesting intermediate temperature regime around the phase transition, because both the lightest Goldstone particles and the Polyakov loop degrees of freedom should play a role, if they coexist. Up to now it is uncertain how the corresponding states couple to each other from a fundamental QCD viewpoint, hence some modeling is required.
Based on previous works~\cite{Meisinger:1995ih, Fukushima:2003fw,Gocksch:1984yk} and to comply with chiral symmetry it seems natural to couple chiral quark models and the Polyakov loop in a minimal way as an effective space-dependent color chemical potential. The work in Ref.~\cite{Gocksch:1984yk} accounts for a crossover between the restoration of chiral symmetry and the spontaneous breaking of the center symmetry, reproducing qualitatively the features observed in lattice simulations \cite{Kaczmarek:2002mc}, which find a natural explanation in terms of dimension two condensates~\cite{Megias:2005ve}. In this regard we want to argue below that the special role played by the gauge symmetry at finite temperature actually requires this coupling, and to elaborate on its consequences when quantum gluon effects are considered. The organization of the paper is as follows. We review some facts on large gauge symmetry at finite temperature in Sect.~\ref{sec:lgt} which are put into the context of chiral quark models. Next, we address the problem suffered by chiral quark models at finite temperature in Section~\ref{sec:problem}, where we argue that the origin of the difficulty is related to a defective treatment of the large gauge symmetry at finite tempe\-rature. Thus, to comply with gauge invariance at finite temperature one should at least couple the quarks to the $A_0 $ gluon field. We do this in Section \ref{sec:polyakovcoupling}. This is equi\-valent to making the replacement \begin{eqnarray} \partial_0 \to \partial_0 + i A_0 \end{eqnarray} which corresponds to an $\vec{x}$-dependent chemical potential coupling in the color fundamental representation. Obviously, this coupling introduces a color source into the problem for a fixed $A_0$ field. In order to project onto the color neutral states we integrate over the $A_0$ field, in a gauge invariant manner. In Section~\ref{sec:oneloop} we describe the consequences of such a coupling and projection in chiral quark models for a variety of observables at the one quark loop approximation. Actually, as we will show, there is an accidental $\mathbb{Z}(N_c)$ symmetry in the model which gene\-rates a triality (super)selection rule at this level of approximation, from which a strong thermal suppression, ${\cal O} (e^{-N_c M /T})$, follows in the quenched approximation. This casts some doubt on whether chiral quark models really predict a chiral phase transition at realistic temperatures, as we advanced in previous communications~\cite{Megias:2004kc,Megias:2004gy}. Corrections beyond one quark loop are discussed in Section~\ref{sec:corrections} where we see that the suppression at low temperatures actually becomes ${\cal O} (e^{-m_\pi /T})$, very much along the expectations of Chiral Perturbation Theory (ChPT)~\cite{Gasser:1986vb}. Gluonic corrections and local corrections in the Polyakov loop are also analyzed in this section. In view of our discussions we illustrate in Section~\ref{sec:unquenched} the situation with schematic dynamical calculations involving quantum and local Polyakov loops in the unquenched theory as compared to lattice studies. In Section~\ref{sec:phase-tran} we extend these calculations to the region around the phase transition. Finally, in Section~\ref{sec:concl}, we summarize our points and draw our main conclusions.
\section{Gauge invariance of chiral quark models at finite temperature and the Polyakov loop} \label{sec:lgt} In this section we review some relevant and naively very disparate concepts of gauge symmetry at finite temperature, Sect.~\ref{sec:large-gauge}, and the center symmetry in gluodynamics, Sect.~\ref{sec:center}, as well as the standard chiral quark models, Sect.~\ref{sec:cqmod}, in order to fix our notation for the rest of the paper. Both subjects are well known on their own, although rarely discussed simultaneously, and the reader familiar with any of them may skip the corresponding subsections. Advancing the result of subsequent discussions made in later sections, the basic Polyakov Chiral Quark Model is first introduced in Sect.~\ref{sec:pcqm}. The conflict between large gauge symmetry and chiral quark models is discussed in Sect.~\ref{sec:problem}. The solution to the problem is elaborated in Sect.~\ref{sec:polyakovcoupling}, where the coupling of the Polyakov loop to chiral quark models is motivated. \subsection{Large gauge symmetry } \label{sec:large-gauge} One of the most striking features of a gauge theory like QCD at finite temperature is the non-perturbative manifestation of the non Abelian gauge symmetry. Indeed, in the Matsubara formalism of quantum field theory at finite temperature the space-time becomes a topological cylinder: one introduces a compactified Euclidean imaginary time~\cite{Landsman:1986uw} and evaluates the path integral subjecting the fields to periodic or antiperiodic boundary conditions for bosons and fermions, respectively, in the imaginary time interval $ \beta =1/T $, where $T$ is the temperature. We use the Euclidean notation $x_4=i x_0$ and $A_4(\vec x , x_4)=i A_0(\vec x , x_0)$. Thus, only periodic gauge transformations, $g(\vec x , x_4 ) = g(\vec x , x_4 + \beta ) $, are acceptable, since they preserve the boundary conditions of the quark and gluon fields. In the Polyakov gauge $\partial_4 A_4 =0 $ with $A_4 $ a diagonal traceless $N_c \times N_c $ matrix, one has for the gauge $SU(N_c)$ group element \begin{eqnarray} g (x_4 ) = {\textrm {diag}}(e^{i 2 \pi x_4n_j T }) \label{eq:1.1} \end{eqnarray} $(\sum_{j=1}^{N_c}n_j=0)$ the following gauge transformation on the $A_4$ component of the gluon field \begin{eqnarray} A_4 \to A_4 + 2 \pi T {\textrm {diag}}(n_j)\,. \end{eqnarray} Thus, in this particular gauge, gauge invariance manifests itself as periodicity in the $A_4 $ gluon field. This property is crucial and prevents from the very beginning the use of a perturbative expansion in the gluon field $A_4 $ at finite temperature. This large gauge symmetry\footnote{Technically speaking the transformations (\ref{eq:1.1}) may not be large in the topological sense (i.e., homotopically non trivial). This depends on the topology of the spatial manifold as well as on the gauge group \cite{Garcia-Recio:2000gt}. They are topologically large within the Polyakov gauge.} can be properly accounted for by considering the Polyakov loop or untraced Wilson line as an independent degree of freedom, \begin{eqnarray} \Omega ( x ) = {\cal T} \exp{ i \int_{x_4}^{x_4+1/T} d x_4^\prime A_4 (\vec x , x_4^\prime) } \end{eqnarray} where ${\cal T}$ indicates the Euclidean temporal ordering ope\-rator and $A_4$ the gluon field. Under a general periodic gauge transformation one gets \begin{eqnarray} \Omega (x) \to g(x) \Omega (x) g^\dagger (x) \,.
\end{eqnarray} In the Polyakov gauge, which we assume from now on, $\Omega$ becomes \begin{eqnarray} \Omega(\vec{x}) = e^{ i A_4 (\vec x) /T } \end{eqnarray} and so it is invariant under the set of transformations (\ref{eq:1.1}). The failure of perturbation theory at finite temperature in a gauge theory has generated a lot of discussion in the past, mainly in connection with topological aspects, Chern-Simons terms, anomalies, etc. In the case of the topolo\-gical Chern-Simons term radiatively induced by fermions in $2+1$ dimensions \cite{Redlich:1984kn} it was puzzling to find, in the perturbative treatment, that the Chern-Simons quantization condition \cite{Deser:1981wh} was violated at finite temperature \cite{Pisarski:1986gq,Babu:1987rs}. It was subsequently shown that, within a non perturbative treatment, no contradiction arises \cite{Dunne:1996yb}. In \cite{Salcedo:1998sv,Salcedo:2002pr} it was shown that a derivative expansion approach, suitably defined at finite temperature, was appropriate to deal with this problem. We will use this approach in the present work. \subsection{Center symmetry in gluodynamics} \label{sec:center} In pure gluodynamics at finite temperature one can use the center of the gauge group to extend the periodic transformations to aperiodic ones \cite{'tHooft:1979uj}, \begin{eqnarray} g(\vec x, \frac1T ) = z g (\vec x, 0 ) , \qquad z^{N_c} = 1 \label{eq:ALGT1} \end{eqnarray} so that $z$ is an element of $\mathbb{Z}(N_c)$. An example of such a transformation (with $z=e^{i2\pi/N_c}$) in the Polyakov gauge is given by \begin{eqnarray} g (x_4 ) = {\textrm {diag}}(e^{i 2 \pi x_4 n_j T / N_c }), \nonumber\\ n_1=1-N_c,\quad n_{j\ge 2}=1 \label{eq:ALGT} \end{eqnarray} and the gauge transformation on the $A_4$ component of the gluon field is \begin{equation} A_4 \to A_4 + \frac{2 \pi T }{ N_c}{\textrm {diag}}(n_j)\,. \end{equation} Under these transformations the gluonic action, measure and boundary conditions are all invariant. The Polyakov loop, however, transforms as the fundamental representation of the $\mathbb{Z}(N_c)$ group, i.e. $\Omega \to z \Omega $, yielding $ \langle \Omega \rangle= z\langle \Omega \rangle $ and hence $\langle \Omega \rangle =0 $. More generally, in the center symmetric or confining phase \begin{eqnarray} \langle \Omega^n \rangle =0 \quad \text{for} \quad n \neq k N_c\,, \quad k\in\mathbb{Z} \,. \label{eq:1.9} \end{eqnarray} Actually, this center symmetry is spontaneously broken {\em above} a critical temperature, $T_D\approx 270\,$MeV for $N_c=3$~\cite{Karsch:2001cy}. The antiperiodic quark field boundary conditions are not preserved under non trivial center transformations since $q(\vec{x}, 1/T )\to g(\vec{x}, 1/T )q(\vec{x}, 1/T )=-z g(\vec{x},0)q(\vec{x},0)$ instead of $-g(\vec{x},0)q(\vec{x},0)$. A direct consequence of such a property is the vanishing of contributions to the quark bilinear of the form\footnote{In this formula $\langle \bar q ( n /T ) q( 0) \rangle$ denotes contributions to the quark propagator including only paths which wind $n$ times around the thermal cylinder. The average is for the quenched theory.} \begin{eqnarray} \langle \bar q ( n /T ) q( 0) \rangle =0 \quad\text{for} \quad n \neq kN_c, \quad k\in\mathbb{Z} \label{eq:1.10} \end{eqnarray} (in the confining phase) since under the large aperio\-dic transformations given by Eq.~(\ref{eq:ALGT1}) $ \bar q ( n /T ) q( 0) \to z^{-n} \bar q ( n /T ) q( 0) $. This generates an exact selection rule in quenched QCD.
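These transformation properties are easy to verify numerically. The following sketch (the numerical values of $A_4$ are arbitrary illustrative choices of ours) constructs $\Omega=e^{iA_4/T}$ in the Polyakov gauge and checks both the invariance under periodic large gauge transformations and the center rotation $\Omega\to z\Omega$ generated by Eq.~(\ref{eq:ALGT}).

\begin{verbatim}
import numpy as np

Nc, T = 3, 0.2                       # illustrative units
nu = np.array([0.13, -0.05, -0.08])  # arbitrary, traceless
A4 = 2 * np.pi * T * np.diag(nu)

def polyakov(A4, T):
    # Omega = exp(i A_4 / T) for diagonal A_4 (Polyakov gauge)
    return np.diag(np.exp(1j * np.diag(A4) / T))

Omega = polyakov(A4, T)

# periodic transformation: integer n_j with sum zero
n = np.diag([1, -2, 1])
print(np.allclose(polyakov(A4 + 2 * np.pi * T * n, T), Omega))

# center transformation: n_1 = 1 - Nc, n_j = 1, divided by Nc
nc = np.diag([1 - Nc, 1, 1]) / Nc
z = np.exp(2j * np.pi / Nc)
print(np.allclose(polyakov(A4 + 2 * np.pi * T * nc, T), z * Omega))
\end{verbatim}

Both checks print \texttt{True}: the first shift leaves $\Omega$ strictly invariant, while the aperiodic one multiplies it by the center element $z=e^{2\pi i/N_c}$.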
The center symmetry is explicitly broken by the presence of dynamical quarks and the choice of an order parameter for confinement is not obvious \cite{Fukushima:2002bk}. As a consequence the selection rule implied by Eq.~(\ref{eq:1.10}) is no longer fulfilled. Nevertheless, such a selection rule becomes relevant to chiral quark models in the large $N_c$ limit, and departures from it are found to be suppressed at low temperatures, due to the spontaneous breaking of chiral symmetry which generates heavy constituent quarks from light current quarks.\footnote{We emphasize that our use of the approximate rule is in contrast to the so-called canonical ensemble description of QCD where, upon projection, triality is assumed to be exact even in the presence of dynamical quarks. See e.g. the discussion in \cite{Fukushima:2002bk}.} This issue will be analyzed throughout this paper. \subsection{Chiral quark models at finite temperature} \label{sec:cqmod} Chiral quark models have been used in the past to provide some semiquantitative understanding of hadronic features in the low energy domain. At zero temperature chiral quark models are invariant under {\em global} $\text{SU}(N_c)$ transformations. There has always been the question of how the corresponding constituent quark fields transform under {\em local} color transformations or whether a physi\-cal gauge invariant definition can be attached to those fields~\cite{Lavelle:1995ty}. If we assume that they transform in the same way as bare quarks, it seems unavoidable to couple gluons to the model in the standard way to maintain gauge invariance as done in previous works (see e.g. Refs.~\cite{Espriu:1989ff,Bijnens:1992uz}). These gluon effects are treated within perturbation theory at $T=0$. This approximation induces some sub-leading corrections in the calculation of color singlet states, where the effects of confinement can be almost completely ignored for the low lying states \cite{Glozman:1995fu}. This perturbative gluon dressing also complies with the interpretation that the whole quark model is defined at a low renormalization scale, from which QCD perturbative evolution to high energy processes can be successfully applied~\cite{RuizArriola:2002wr}. When going to finite temperature, chiral quark models predict already at the one loop level a chiral phase transition~\cite{Bernard:1987ir,Christov:1991se} at realistic temperatures. However, even at low temperatures single quark states are excited, which is obviously not very realistic, for it means that the hot environment is in fact a hot plasma of quarks. On the other hand, since the constituent quark mass is about a factor of 2 larger than the pion mass, pion loops dominate at low temperatures~\cite{Florkowski:1996wf} (for a review see e.g. Ref.~\cite{Oertel:2000jp}), as expected from chiral perturbation theory~\cite{Gasser:1986vb,Gerber:1988tt}. In the present work we will deal with two chiral quark models, the Nambu--Jona-Lasinio (NJL) model~\cite{Klevansky:1992qe,Christov:1995vm,Alkofer:1994ph}, where quarks are cha\-racterized by a constant constituent mass in the propagator due to the spontaneous breaking of chiral symmetry, and the recently proposed spectral quark model (SQM)~\cite{RuizArriola:2001rr,RuizArriola:2003bs,RuizArriola:2003wi,Megias:2004uj}, where the notion of analytic confinement is explicitly verified. For completeness we review briefly the corresponding effective action below.
One common and attractive feature of chiral quark models is that there is a one-to-one relation between the large $N_c$ expansion and the saddle point approximation of a given path integral, both at zero and at finite temperature. \subsubsection{The NJL model} The NJL Lagrangian as will be used in this paper reads in Minkowski space\footnote{We use Bjorken-Drell convention throughout the paper.} \begin{eqnarray} {\cal L}_{\rm NJL} &=& \bar{q} (i\slashchar\partial - \hat{M}_0 )q \nonumber \\ &+& {G \over 2}\sum_{a=0}^{N_f^2-1} \left( (\bar{q}\lambda_a q)^2 +(\bar{q}\lambda_a i \gamma_5 q)^2 \right) \end{eqnarray} where $q=(u,d,s, \ldots )$ represents a quark spinor with $N_c $ colors and $N_f$ flavors. The $\lambda$'s are the Gell-Mann flavor matrices of the $U(N_f)$ group and $\hat M_0= {\textrm {diag}} (m_u, m_d, m_s,\ldots) $ stands for the current quark mass matrix. In the limiting case of vanishing current quark masses the classical NJL-action is invariant under the global $U(N_f)_R \otimes U(N_f)_L $ group of transformations. Using the standard bosonization procedure~\cite{Eguchi:1976iz} it is convenient to introduce auxiliary bosonic fields $(S,P,V,A)$ so that after formally integrating out the quarks one gets the effective action\footnote{Obviously at finite temperature the quark fields satisfy antiperiodic boundary conditions whereas the bosonized fields obey periodic boundary conditions.} \begin{eqnarray} \Gamma_{\rm NJL} [S,P] &=&-i N_c {\rm Tr} \log \left( i {\bf D} \right) \nonumber \\ &-& {1\over 4G } \int d^4 x \,{\rm tr}_f \left( S^2 + P^2 \right) \,. \label{eq:eff_ac_njl} \end{eqnarray} We use ${\rm Tr}$ for the full functional trace, ${\textrm {tr}}_f $ for the trace in flavor space, and ${\textrm {tr}}_c $ for the trace in color space. Here, the Dirac operator is given by \begin{eqnarray} i {\bf D} &=& i\slashchar{\partial} - {\hat M_0} - \left( S + i \gamma_5 P \right) \,. \label{eq:dirac_op_njl} \end{eqnarray} The divergences in Eq.~(\ref{eq:eff_ac_njl}) from the Dirac determinant can be regularized in a chiral gauge invariant manner by means of the Pauli-Villars method, although the issue of regularization is of little relevance at finite temperature~\cite{Christov:1991se} for $T \ll \Lambda $. This model is known not to confine and to produce a constituent quark mass $M \sim 300 {\rm MeV}$ due to the spontaneous breaking of chiral symmetry at zero temperature. The Goldstone bosons can be parameterized by taking \begin{eqnarray} S+ iP = \sqrt{U} \Sigma \sqrt{U} \end{eqnarray} with $U$ a unitary matrix (see Eq.~(\ref{eq:pionU})) and $ \Sigma^\dagger = \Sigma $; one can write $\Sigma = M + \phi $ with $\phi$ the scalar field fluctuation. The partition function for this model can be written as \begin{eqnarray} Z_{\rm NJL} = \int DU D\Sigma \, e^{i \Gamma_{\rm NJL} [ U , \Sigma ]} \,. \label{eq:Z_njl} \end{eqnarray} By minimizing $\Gamma_{\rm NJL}$ one gets $S=M$, which generates the spontaneous breaking of chiral symmetry, and one obtains the gap equation \begin{equation} \frac{1}{G} = -i4N_c \sum_i c_i \int \frac{d^4 k}{(2\pi)^4} \frac{1}{k^2-M^2-\Lambda_i^2} \,, \label{eq:gap_eq} \end{equation} where the Pauli-Villars regularization has been used. The Pauli-Villars regulators fulfill $c_0=1$, $\Lambda_0=0$ and the conditions $\sum_i c_i=0$, $\sum_i c_i \Lambda_i^2=0$, in order to render finite the logarithmic and quadratic divergences, respectively.
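As a minimal numerical sketch of how the Pauli-Villars conditions work (the two distinct regulator masses, the target constituent mass, and the Euclidean rewriting below, valid up to overall sign conventions, are our own illustrative assumptions), one can evaluate the regularized integral and extract the coupling $G$ needed to sustain a given $M$.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Nc, M = 3, 0.3                # constituent mass in GeV (illustrative)
L1, L2 = 1.0, 1.2             # two distinct PV regulators in GeV

# c_0 = 1, Lambda_0 = 0; impose sum c_i = 0 and sum c_i Lambda_i^2 = 0
c1 = -L2**2 / (L2**2 - L1**2)
c2 = L1**2 / (L2**2 - L1**2)
cs, Ls = [1.0, c1, c2], [0.0, L1, L2]

def integrand(k):
    # each term diverges, but the PV combination falls off as 1/k^3
    return sum(c * k**3 / (k**2 + M**2 + L**2) for c, L in zip(cs, Ls))

# Euclidean form of the gap equation (up to sign conventions):
# 1/G = (Nc / 2 pi^2) * int_0^inf dk (PV-summed integrand)
I, _ = quad(integrand, 0.0, np.inf)
print("1/G =", Nc * I / (2 * np.pi**2), "GeV^2  ->  G =",
      2 * np.pi**2 / (Nc * I), "GeV^-2")
\end{verbatim}

Removing either regulator condition makes the integral diverge, which exhibits numerically the role of the constraints quoted above.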
In practice it is common to take two cutoffs in the coincidence limit $\Lambda_1 \to \Lambda_2 = \Lambda$ and hence $\sum_i c_i f(\Lambda_i^2) = f(0)-f(\Lambda^2) + \Lambda^2 f^\prime (\Lambda^2)$. \subsubsection{The SQM model} In the SQM the effective action reads \begin{eqnarray} \Gamma_{\rm SQM}[U] =- i N_c \int d \omega \rho(\omega) {\rm Tr} \log \left( i {\bf D} \right), \label{eq:eff_ac_sqm} \end{eqnarray} where the Dirac operator is given by \begin{eqnarray} i {\bf D} &=& i\slashchar{\partial} - \omega U^{\gamma_5} - {\hat M_0} \label{eq:dirac_op_sqm} \end{eqnarray} and $\rho(\omega)$ is the spectral function of a generalized Lehmann representation of the quark propagator, with~$\omega$ the spectral mass defined on a suitable contour of the complex plane~\cite{RuizArriola:2001rr,RuizArriola:2003bs,RuizArriola:2003wi, Megias:2004uj}. The use of certain spectral conditions guarantees finiteness of the action. The matrix $ U = u^2 = e^{ { i} \sqrt{2} \Phi / f } $ ($f$ is the pion weak decay constant in the chiral limit) is the flavor matrix representing the pseudoscalar octet of mesons in the non-linear representation, \begin{eqnarray} \Phi = \left( \matrix{ \frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta & \pi^+ & K^+ \cr \pi^- & - \frac{1}{\sqrt{2}} \pi^0 + \frac{1}{\sqrt{6}} \eta & K^0 \cr K^- & \bar{K}^0 & - \frac{2}{\sqrt{6}} \eta } \right) . \label{eq:pionU} \end{eqnarray} A judicious choice of the spectral function based on vector meson dominance generates a quark propagator with no poles (analytic confinement). More details of the SQM at zero and finite temperature relevant for the present paper are further developed in Appendix~\ref{sec:sqm}. The partition function for the SQM can be written as \begin{eqnarray} Z_{\rm SQM} = \int DU e^{i \Gamma_{\rm SQM} [ U ]} \,. \label{eq:Z_sqm} \end{eqnarray} \subsection{The Polyakov-Chiral Quark Model} \label{sec:pcqm} As we will show in Sect.~\ref{sec:problem} there is a conflict between large gauge invariance at finite temperature, reviewed in the previous Sects. \ref{sec:large-gauge} and \ref{sec:center}, and the standard chiral quark models presented in Sect.~\ref{sec:cqmod}. The chiral quark model coupled to the Polyakov loop that will be motivated in Sect.~\ref{sec:polyakovcoupling} and analyzed in the rest of this paper synthesizes the solution and corresponds simply to making the replacement \begin{eqnarray} \partial_4 \to \partial_4 - i A_4 \end{eqnarray} in the Dirac operators, Eqs.~(\ref{eq:dirac_op_njl}) and (\ref{eq:dirac_op_sqm}), and integrating further over the $A_4$ gluon field in a gauge invariant manner~\cite{Reinhardt:1996fs}, yielding a generic partition function of the form \begin{eqnarray} Z = \int DU D\Omega \, e^{i \Gamma_G [\Omega]} e^{i \Gamma_Q [ U , \Omega ]} \label{eq:Z_pnjl} \end{eqnarray} where $DU$ is the Haar measure of the chiral flavor group $SU(N_f)_R \times SU(N_f)_L $ and $D\Omega$ the Haar measure of the color group $SU(N_c)$; $\Gamma_G $ is the effective gluon action whereas $\Gamma_Q$ stands for the quark effective action. If the gluonic measure is left out, i.e. $A_4=0$ and $\Omega=1$, we recover the original form of the corresponding chiral quark model, where there exists a one-to-one mapping between the loop expansion and the large $N_c$ expansion both at zero and finite temperature. Equivalently one can make a saddle point approximation and corrections thereof. In the presence of the Polyakov loop such a correspondence does not hold, and we will proceed by a quark loop expansion, i.e.
a saddle point approximation in the bosonic field $U$, keeping the integration over the Polyakov loop $\Omega$. The work of Ref.~\cite{Fukushima:2003fw} corresponds to making also a saddle point approximation in $\Omega$. In Section~\ref{sec:oneloop} we stick to the one loop approximation and keep the group integration. This is the minimal way to comply with center symmetry at low temperatures. Although in principle $\Omega(x)$ is a local variable, in what follows we will investigate the consequences of a spatially constant Polyakov loop. In this case the functional integration $D\Omega$ becomes a simple integration over the gauge group $d\Omega$. The issue of locality is reconsidered in Section \ref{sec:local}. \section{Unnaturalness of chiral quark models at finite temperature} \label{sec:problem} In this section we analyze the problem of chiral quark models at finite temperature, its interpretation in terms of thermal Boltzmann factors as well as the corresponding conflicts with Chiral Perturbation Theory at finite temperature. \subsection{The problem} As already mentioned, chiral quark models at finite temperature have a pro\-blem since, even at low temperatures, excited states with any number of quarks are involved, whether they can form a color singlet or not. This is hardly a new observation; the surprising thing is that nothing has been done about it so far, the failure being attributed to common diseases of the model, such as the lack of confinement. To illustrate this point in some more detail we will use a constituent quark model like the NJL model, where the quark propagator has a constant mass. To be specific, let us consider as an example the calculation of the quark condensate for a single flavor in cons\-tituent quark models with mass $M$. At finite temperature in the Matsubara formulation we have the standard rule \begin{eqnarray} \int \frac{d k_0}{2\pi} F(\vec k, k_0) \to i T \sum_{n=-\infty}^\infty F( \vec k,i \omega_n ) \end{eqnarray} with $\omega_n$ the fermionic Matsubara frequencies, $ \omega_n = 2 \pi T ( n+1/2 ) $. For the discussion in this and forthcoming sections it is convenient to elaborate this formula a bit further. Using Poisson's summation formula \begin{eqnarray} \sum_{m=-\infty}^\infty F ( m ) = \sum_{n=-\infty}^\infty \int_{-\infty}^\infty d x F ( x ) e^{ i 2\pi x n } \label{eq:poisson} \end{eqnarray} one gets the rule \begin{equation} \int \frac{d k_0}{2\pi} F( \vec k, k_0 ) \to i \sum_{n=-\infty}^\infty (-1)^n \int \frac{d k_4}{2\pi} F( \vec k, i k_4 ) e^{ i n k_4 / T } \,. \nonumber \label{eq:rule_pois} \end{equation} In terms of the Fourier transform, one obtains for a finite temperature fermionic propagator starting and ending at the same point, \begin{eqnarray} \tilde{F}(x; x) \to \sum_{n=-\infty}^\infty (-1)^n \tilde{F}( \vec x,x_0+in /T ;\vec x,x_0) \,. \label{eq:2.4} \end{eqnarray} Note that the zero temperature contribution corresponds to the term $n=0$ in the sum. From a path integral point of view, the zero temperature term comes from contractible closed paths whereas thermal contributions come from closed paths which wind $n$ times around the thermal cylinder. For fermions, each winding picks up a $-1$ factor. For the (single flavor) condensate we get\footnote{In what follows we use an asterisk as superscript for finite temperature quantities, i.e.
${\cal O}^* = {\cal O}_T $.} \begin{eqnarray} \langle \bar q q \rangle^* &=& -i N_c \sum_{n=-\infty}^\infty (-1)^n {\textrm {tr}}_{\text{Dirac}} S (x) \Big|_{x_0=i n /T } \nonumber \\ &=& -i 4 M N_c \sum_{n=-\infty}^\infty \int \frac{d^4 k}{(2\pi)^4} \frac{e^{ -i k \cdot x } (-1)^{n} }{k^2 - M^2 } \Big|_{x_0=i n /T } \nonumber \\ &=& \langle \bar q q \rangle - 2 \frac{N_c M^2 T }{\pi^2} \sum_{n = 1}^\infty \frac{(-1)^n}{n} K_1 ( n M /T ) \,. \label{eq:cond} \end{eqnarray} In writing the previous formula, finite cutoff corrections, which appear in chiral quark models such as the NJL model at finite temperature, have been neglected. This is not a bad approximation provided the temperature is low enough, $ T \ll \Lambda $ (typically one has $ \Lambda \approx 1\,$GeV so even for $ T \approx M \approx 300\,$MeV the approximation works). At low temperatures we may use the asymptotic form of the Bessel function \cite{Abramowitz:1970bk} \begin{eqnarray} K_n (z) \sim e^{-z} \sqrt{\frac{\pi}{2z}} \label{eq:bess_asy} \end{eqnarray} to get for the leading contribution \begin{eqnarray} \langle \bar q q \rangle^* &\sim& \langle \bar q q \rangle + 4N_c \left(\frac{ M T }{2\pi} \right)^{3/2} e^{-M / T} \, . \end{eqnarray} This means a rather flat dependence on temperature for $ T \lesssim M $. (Numerically, the correction is about $ 1 \% $ for $ T \approx 100\,$MeV for $ M = 300\,$MeV and $ \langle \bar q q \rangle \approx - (240 \,\text{MeV})^3 $). The strong attractive interaction which causes dynamical chiral symmetry breaking is reduced at finite temperature and the energy is decreased by a decreasing cons\-tituent quark mass $M^*$, eventually leading to a chiral phase transition~\cite{Bernard:1987ir,Christov:1991se}; the critical temperature is $ T\approx 200\,$MeV.\footnote{The minimization can be written as the equation $ \langle \bar q q \rangle^* (M^*) / M^* =\langle \bar q q \rangle (M) / M $, so one has to know the mass dependence of the condensate at zero temperature.} The coincidence of this number with lattice simulations has been considered a big success of chiral quark models and has triggered a lot of activity in the past (see e.g. Ref.~\cite{Oertel:2000jp} and references therein). We show below that this apparent success might be completely accidental, as it does not incorporate basic physical requirements. \subsection{Interpretation} An interpretation of the previous formula for the condensate is in terms of statistical Boltzmann factors. Using the definition of the quark propagator in coordinate space \begin{eqnarray} S(x) = \int \frac{d^4 k}{(2\pi)^4}\frac{e^{-i k \cdot x}}{\slashchar{k} - M} = \left( i \slashchar{\partial} + M \right) \Delta ( x ) \end{eqnarray} with \begin{eqnarray} \Delta(x ) = \int \frac{d^4 k}{(2\pi)^4}\frac{e^{-i k \cdot x}}{k^2 - M^2} = \frac{M^2}{4 \pi^2 i } \frac{K_1 ( \sqrt{-M^2 x^2} )}{\sqrt{-M^2 x^2}}\,, \end{eqnarray} at low temperature we get \begin{eqnarray} S( \vec x ,i /T ) \sim e^{-M /T } \,. \end{eqnarray} Thus, for $\langle{\bar q}q\rangle^*$ and up to prefactors, we have the exponential suppression for a single quark propagator at low temperature. Using Eq.~(\ref{eq:cond}) and Eq.~(\ref{eq:bess_asy}) the quark condensate can be written in terms of Boltzmann factors with a mass formula $ M_n = n M $ corresponding to any number of quark states.
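The quoted numerical estimate can be reproduced directly from Eq.~(\ref{eq:cond}) and its asymptotic form (a small sketch using the same parameter values as in the text).

\begin{verbatim}
import numpy as np
from scipy.special import kn

Nc, M, T = 3, 300.0, 100.0     # MeV, as quoted in the text
qq0 = 240.0 ** 3               # |<qq>| = (240 MeV)^3

# Bessel sum of Eq.(cond): -2 Nc M^2 T/pi^2 sum_n (-1)^n K_1(nM/T)/n
shift = -2 * Nc * M**2 * T / np.pi**2 * sum(
    (-1) ** n * kn(1, n * M / T) / n for n in range(1, 60))

# leading asymptotics: 4 Nc (MT/2pi)^{3/2} exp(-M/T)
asym = 4 * Nc * (M * T / (2 * np.pi)) ** 1.5 * np.exp(-M / T)

print(round(100 * shift / qq0, 2), "% (full sum)")
print(round(100 * asym / qq0, 2), "% (asymptotic)")
\end{verbatim}

Both evaluations give a relative shift of order one percent, confirming the flatness of the condensate for $T\lesssim M$ in the absence of any color projection.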
One might object against the previous interpretation by arguing that these factors only reflect in the Euclidean coordinate space the pole of the propagator in Minkowski momentum space, and hence that they are a natural consequence of the lack of confinement. While the former statement is true, in the sense that singularities in Minkowski momentum space can be seen at large Euclidean coordinate values, the conclusion drawn from there is incorrect. As shown in Ref.~\cite{RuizArriola:2003wi} (see Appendix~\ref{sec:sqm}) quark propagators with no poles but cuts can also produce a Boltzmann factor {\it without} prefactors, as it should be.\footnote{Actually, the previous counter example shows that the lack of confinement has more to do with the presence of the exponential prefactors which are related to the available phase space.} To the same level of approximation, i.e. one quark loop, in the SQM we get (see Appendix~\ref{sec:sqm} for details) \begin{eqnarray} \frac{\langle \bar q q \rangle^*}{\langle \bar q q \rangle} &=& \tanh \left( M_S /4T \right) \nonumber \\ &=& 1 - 2 e^{- M_S /2T } + 2 e^{- M_S/T } + \cdots \label{eq:cond_SQM_nopol} \end{eqnarray} where the ``Boltzmann'' constituent mass can be identified with half the scalar meson mass $ M = M_S / 2$.\footnote{This relation together with the large $N_c$ quark-hadron duality relation $M_S=M_V$ discussed in Ref.~\cite{Megias:2004uj} yields $M= M_V/2 \sim 385 {\rm MeV}$, a reasonable value.} These calculations illustrate our main point and can be extended to any observables which are color singlets in the zero temperature limit; quark model calculations at finite temperature in the one loop approximation gene\-rate {\it all} possible quark states, \begin{eqnarray} {\cal O}^* = {\cal O} + {\cal O}_q e^{-M/T} + {\cal O}_{qq} e^{-2 M/T} + \cdots \end{eqnarray} While there is no doubt that the leading term~${\cal O}_q$ has a Boltzmann factor corresponding to a single quark state, the term with mass $ 2 M $ could in principle be equally a $qq$ diquark state or a $\bar q q $ meson state. The latter possi\-bility should be discarded, however. At one loop a $\bar q q$ pair can only come from the quark line going upwards and then downwards in imaginary time propagation. Since such a path does not wind around the thermal cylinder it is already counted in the zero temperature term. The $qq$ contribution, instead, corresponds to the single quark line looping twice around the thermal cylinder and is a proper thermal contribution. This is confirmed below. These Boltzmann factors control the whole physics and temperature effects are sizeable for $ T \approx M $. \subsection{Conflicts with ChPT} Our observation on the Boltzmann factor is rather puzz\-ling because it seems hard to understand how it is possible to generate non singlet states by just increasing the temperature. The reason has to do with the fact that the condensate itself is not invariant under $\mathbb{Z}(N_c)$ transformations at finite temperature. For the example of the condensate we trivially obtain \begin{eqnarray} \langle \bar q q \rangle^* &=& \sum_{n=-\infty}^\infty (-1)^n \langle \bar q(x_0 ) q (0) \rangle \Big|_{x_0=i n /T } \end{eqnarray} i.e., the condensate at finite temperature can be written as a coherent sum of nonlocal quark condensates at zero temperature.
If we make a gauge transformation of the central type, we get \begin{eqnarray} \langle \bar q q \rangle^* &\to & \sum_{n=-\infty}^\infty (-z)^n \langle \bar q(x_0 ) q (0) \rangle \Big|_{x_0=i n /T } \label{eq:2.14} \end{eqnarray} i.e., the condensate can be decomposed as a sum of irreducible representations of a given triality $n$. Thus, the state with Boltzmann factor $e^{-n M /T }$ is indeed a multiquark state. This avoids the paradox, and suggests that in order to make a (centrally extended) gauge invariant definition of the condensate we could simply discard from the sum those terms which do not have zero triality, i.e. we would get \begin{eqnarray} \langle \bar q q \rangle^* \Big|_{\text{singlet}} &= & \sum_{n=-\infty}^\infty (-1)^{nN_c} \langle \bar q(x_0 ) q (0) \rangle \Big|_{x_0=i N_c n /T } \label{eq:2.15} \end{eqnarray} This would generate as a first thermal correction a term with a Boltzmann factor corresponding to mass $ N_c M $ (a baryon), which is obviously very much suppressed. Since a quark loop generates a dependence proportional to $N_c$ we would obtain a $ N_c e^{-M N_c / T } $ dependence. Another problem now comes from comparison with the expectations of chiral perturbation theory at finite temperature~\cite{Gasser:1986vb}. In the chiral limit, i.e., for $ m_\pi \ll 2 \pi T \ll 4 \pi f_\pi $, the leading thermal corrections to the quark condensate for $N_f=2$, for instance, are given by \begin{eqnarray} \langle \bar q q \rangle^* \Big|_{\text{ChPT}} &= & \langle \bar q q \rangle \left[ 1- \frac{T^2 } {8 f_\pi^2} - \frac{T^4 } {384 f_\pi^4} + \cdots \right] \label{eq:2.16} \end{eqnarray} Thus, the finite temperature correction is $N_c$-suppressed as compared to the zero temperature value, since $f_\pi^2$ scales as $N_c$. This feature remains for finite pion mass, and is generic to any thermal correction in ChPT; the dominant contribution comes from quantum pionic fluctuations and not from quark thermal excitations. Although the previous formula predicts a lowering of the quark condensate, it cannot describe the chiral phase transition, since ChPT assumes from the start a non vanishing chiral condensate. In this sense, the scaling behavior of the critical temperature with $f_\pi$ and therefore with $N_c$ suggested from direct extrapolation of the formula can only be regarded as an assumption. At this point we should recall that the mechanism by which chiral symmetry is restored at finite temperature in standard chiral quark models in the one quark loop approximation is quite different from the trend deduced from ChPT, based mainly on pion loops. While in the first case it is due to populating states of the Dirac levels with the Fermi-Dirac thermal factor and a sudden decrease of the constituent quark mass gap $2M$, in ChPT the ``phase transition'' is merely due to large quark-antiquark excitations with the lightest pion quantum numbers with a fixed gap (otherwise the ChPT method cannot be applied). These two pictures of the chiral symmetry restoration are not dual to each other; the $N_c$ behavior of the critical temperature is different, since in chiral quark models one has $T_c \sim M \sim N_c^0 $ while in ChPT the extrapolated value of the ``critical temperature'' is $T_c \sim 2 \sqrt{2} f_\pi \sim \sqrt{N_c}$. Quantum fluctuations have been included in chiral quark models at finite temperature~\cite{Florkowski:1996wf} (for a review see e.g. Ref.~\cite{Oertel:2000jp}) and they are known to be $1/N_c $ suppressed.
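To put numbers on this mismatch (a sketch; $f_\pi=93\,$MeV is our illustrative input, while $M$, $N_c$ and the condensate value are those used above), one may compare the relative thermal shifts predicted by Eq.~(\ref{eq:2.16}) and by the one quark loop Boltzmann factor.

\begin{verbatim}
import numpy as np

fpi, M, Nc = 93.0, 300.0, 3    # MeV; fpi is an assumed input
qq0 = 240.0 ** 3

for T in (50.0, 100.0, 150.0):
    chpt = T**2 / (8 * fpi**2) + T**4 / (384 * fpi**4)
    quark = 4 * Nc * (M * T / (2 * np.pi)) ** 1.5 \
            * np.exp(-M / T) / qq0
    print(int(T), round(chpt, 4), round(quark, 4))
\end{verbatim}

Already at $T=100\,$MeV the pionic correction is an order of magnitude larger than the quark loop one, in line with the dominance of pion loops at low temperatures discussed above.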
Actually, the sub-leading $1/N_c$ contribution reproduces the first term of ChPT, Eq.~(\ref{eq:2.16}), thus largely dominating at low temperatures. Taking into account that ChPT by itself and more refined approaches incorporating meson resonance effects~\cite{Pelaez:1998vx,Pelaez:2002xf} provide similar values of the ``critical temperature'' quite close to the lattice predictions~\cite{Karsch:1998hr} for dynamical fermions and extrapolated to the chiral limit, one may wonder what is the meaning of the mean field quark chiral phase transition predicted in the past~\cite{Bernard:1987ir,Christov:1991se,Oertel:2000jp} and which has become a justification for chiral quark models at finite temperatures. These problems are also common to models where quarks and mesons are regarded as independent degrees of freedom. We will see in the rest of the paper how a convenient $N_c$ suppression of quark thermal corrections arises naturally when a color source, the Polyakov loop, is coupled to the chiral quark model and a subsequent projection onto color neutral states is carried out. In this scenario one would have a large transition temperature $T_c \sim N_c M $ due to quarks, i.e. no symmetry restoration due to filling in the states above the Dirac levels in the absence of dynamical gluons and in the quenched approximation (Polyakov cooling). Non perturbative gluonic corrections modify this picture; they predict instead a critical temperature roughly equal to that of the deconfinement phase transition, $T_c = T_D$. Finally, pion loops are protected from additional suppressions, so that the final result will be fully compatible with the ChPT behavior at low temperature. \section{Coupling the Polyakov Loop in Chiral Quark Models} \label{sec:polyakovcoupling} \subsection{General considerations} As we have said, one can formally maintain gauge invariance at zero temperature by coupling gluons to the model. In the spirit of the model these degrees of freedom should be treated within perturbation theory, since the constituent quarks carry some information on non-perturbative gluon effects (see e.g. Refs.~\cite{Espriu:1989ff,Bijnens:1992uz} for explicit calculations in the low energy limit). At finite temperature the situation is radically different; a perturbative treatment of the $A_0$ component of the gluon field manifestly breaks gauge invariance (namely, under large gauge transformations). The consequences of treating such a coupling non-perturbatively in the case of a constant $A_0$ field are straightforward and enlightening (see below for a discussion of the $x$-dependent case). Actually, in a more general context, the Polyakov loop appears naturally in any finite temperature calculation in the presence of minimally coupled vector fields within a derivative expansion or a heat-kernel expansion approach. In this case, as shown in \cite{Garcia-Recio:2000gt,Megias:2002vr}, a local one loop quantity, such as the effective Lagrangian or an observable, takes the form \begin{equation} {\cal L}(x)=\sum_n {\textrm {tr}}\left[ f_n(\Omega(x)){\cal O}_n(x)\right] \,, \label{eq:3.1} \end{equation} where ${\textrm {tr}}$ acts on all internal degrees of freedom, $n$ labels all possible local gauge invariant operators ${\cal O}_n(x)$ (i.e. containing covariant derivatives), possibly with brea\-king of Lorentz symmetry down to rotational symmetry, and $f_n(\Omega(x))$ are temperature dependent functions of the Polyakov loop which replace the numerical coefficients present in the zero temperature case.
In this general context $\Omega(x)$ would be the local Polyakov loop of all mini\-mally coupled fields.\footnote{As noted below, in a model with vector mesons, there would be a corresponding flavor Polyakov loop. Such a contribution is expected to be much suppressed due to the large physical mass of the mesons.} In particular, a chemical potential would give a contribution $e^{\mu /T }$. Here we can see the necessity of the presence of $\Omega$ in (\ref{eq:3.1}): since $\mu$ is a constant, it gives no contribution to the covariant derivative and hence to ${\cal O}_n(x)$; therefore the chemical potential can only act through the presence of the Polyakov loop in the expression. This consideration also illustrates the breaking of gauge invariance in a perturbative treatment of $\Omega$: $e^{ \mu /T }$ depends periodically on the chemical potential, with period $2\pi i T$; this is a consequence of the coupling of $\mu$ to the integer quantized particle (or rather charge) number. Such periodicity would be spoiled in a perturbative treatment. Note that such periodicity is equivalent to one-valuedness of the functions $f_n$ in (\ref{eq:3.1}). \subsection{Coupling the Polyakov Loop} Coming back to chiral quark models with gluonic Polyakov loops, in fact, the analogy with the chemical potential has been invoked in a recent proposal of K. Fukushima~\cite{Fukushima:2003fw}~\footnote{After our work was sent for publication Refs.~\cite{Ratti:2005jh,Ghosh:2006qh} appeared, extending the results of Fukushima.}, which suggests coupling chiral quark mo\-dels to the Polyakov loop at finite temperature in this way. Our own approach is similar, except that, as in (\ref{eq:3.1}), we consider a {\em local} Polyakov loop $\Omega({\vec x})$ coupled to the quarks. This is what comes out of explicit one loop calculations within a derivative expansion approach at finite temperature \cite{Megias:2002vr,Megias:2003ui,Megias:2004bj}. In those calculations there is a loop momentum integration at each given $x$, and the Polyakov loop appears minimally coupled, i.e., through the modified fermionic Matsubara frequencies, \begin{eqnarray} {\widehat \omega}_n = 2 \pi T (n+1/2 + \hat \nu) \,, \label{eq:3.2} \end{eqnarray} which are shifted by the logarithm of the Polyakov loop \begin{eqnarray} \Omega = e^{i 2 \pi \hat \nu}\,, \end{eqnarray} i.e. $ \hat \nu(\vec x) = A_4(\vec x) /(2 \pi T) $. In our considerations, this is the only place where explicit dependence on color degrees of freedom appears, so it is useful to think of $\hat \nu $ as the corresponding eigenvalues. The effect of such a shift corresponds to changing Eq.~(\ref{eq:2.4}) into \begin{eqnarray} \tilde{F}(x; x) \to \sum_{n=-\infty}^\infty (-\Omega(\vec x))^n \tilde{F}( \vec x,x_0+in /T ;\vec x,x_0) \,. \label{eq:2.4a} \end{eqnarray} \begin{figure}[tbc] \begin{center} \epsfig{figure=loop.eps,height=4cm,width=6cm} \end{center} \caption{Typical one quark loop diagram with a non trivial Wilson line. For $n$ windings around the $U(1)$ compactified imaginary time the quarks get a topological factor $\Omega^n $ in addition to the Fermi-Dirac statistical factor $(-1)^n$. Wavy lines are external fields.
The total contribution to the diagram is obtained by summing over all windings and tracing over color degrees of freedom.} \label{fig:loop} \end{figure} The interpretation of this formula can be visualized in Fig.~\ref{fig:loop}; in a one quark loop with any number of external fields at finite temperature and with a non-trivial Polyakov line, the quarks pick up a phase $(-1)$ due to Fermi-Dirac statistics, and a non Abelian Aharonov-Bohm\footnote{This is an electric type of phase and not the standard magnetic one. The name is nonetheless appropriate since this electric phase was discussed first in the original AB paper.} factor $\Omega$ each time the quarks wind around the compactified Euclidean time. The total contribution to the diagram is obtained by summing over all windings and tracing over color degrees of freedom. \subsection{Dynamical Polyakov loop} \label{sec:dyn-pol} The above prescription gives the contribution for a given gluon field configuration, of which we have only retained the Polyakov loop.\footnote{In addition, gluons appear also perturbatively through the covariant derivative. This will produce perturbative gluon exchange contributions as in the zero temperature case. We will not consider those in this work.} The next step is to integrate the gluons according to the QCD dynamics. This implies an average over the local Polyakov loop with some normalized weight $\rho(\Omega;\vec{x}) d\Omega$. Here $d\Omega$ is the Haar measure of SU($N_c$) and $\rho(\Omega;\vec{x})$ the (temperature dependent) probability distribution of $\Omega(\vec{x})$ over the gauge group. The emergence of the Haar measure in the integral representation of the Yang-Mills partition function was explicitly shown in Ref.~\cite{Reinhardt:1996fs}. Due to gauge invariance, $\rho(\Omega)$ will be invariant under similari\-ty transformations, and hence it is really a function of the eigenvalues of $\Omega$. In this section we will mainly remain within a quenched approximation and so the weight follows from pure Yang-Mills dynamics; in particular the weight will be $\vec{x}$ independent, as we do not consider external fields coupled to the gluons.\footnote{In Sections~\ref{sec:local} and \ref{sec:unquenched} we will discuss some implications about local corrections in the Polyakov loop and unquenched results, respectively.} In Yang-Mills dynamics (in four dimensions and three colors) a first order transition is known to take place from a center symmetric phase to a broken symmetry or deconfining phase. Note that this is a rather peculiar phase transition where the symmetry is restored {\it below} the critical temperature, just the opposite of the standard case. Since the transition is discontinuous in observables such as the expectation value of the Polyakov loop, the probability distribution $\rho(\Omega)$ will also be discontinuous as a function of the temperature at the critical temperature. In the confining phase $\rho(\Omega)$ will be invariant under $\mathbb{Z}(N_c)$, $\rho(\Omega)=\rho(z\Omega)$. In the deconfining phase, such a symmetry is spontaneously broken and one expects the Polyakov loop to concentrate around one of the elements of the center, at random. The coupling of dynamical quarks favors the perturbative value $\Omega=1$ ($A_4=0$), as follows from computations of the effective potential at high temperature \cite{Weiss:1980rj,Weiss:1981ev,Gross:1980br}.
So in that phase we expect to have $\Omega$ concentrated near $\Omega=1$, which would be equivalent to no Polyakov loop in the calculation. Actually, one does not need the full distribution of $\Omega$ on SU($N_c$), but only the marginal distribution of eigenvalues. Denoting the Polyakov loop average by $\langle ~\rangle $, we have for a quark observable \begin{equation} {\cal L}(x)=\sum_n \langle {\textrm {tr}}_c f_n(\Omega) \rangle\, {\textrm {tr}}\, {\cal O}_n(x) \,. \label{eq:3.1a} \end{equation} Consistently with gauge invariance, the functions $f_n(\Omega)$ are just ordinary functions $f_n(z)$ evaluated at $z=\Omega$ (e.g. $e^\Omega$); hence, if $e^{i\phi_j}$, $j=1,\ldots,N_c$, are the eigenvalues of $\Omega$, \begin{eqnarray} \left\langle \frac{1}{N_c} {\textrm {tr}}_c f(\Omega) \right\rangle &=& \int_{\text{SU($N_c$)}}\!\!\! d\Omega \,\rho(\Omega) \frac{1}{N_c} \sum_{j=1}^{N_c}f(e^{i\phi_j}) \nonumber \\ &=& \int_{-\pi}^{\pi}\frac{d\phi}{2\pi}\widehat\rho(\phi) f(e^{i\phi}) \label{eq:one-body} \end{eqnarray} with \begin{eqnarray} \widehat\rho(\phi) &:=& \int_{\text{SU($N_c$)}}\!\!\! d\Omega \, \rho(\Omega) \frac{1}{N_c} \sum_{j=1}^{N_c}2\pi\delta(\phi-\phi_j) \,. \label{eq:4.7} \end{eqnarray} Equivalently, all that is needed is the set of moments of the distribution, $\langle{\textrm {tr}}_c(\Omega^n)\rangle$. \subsection{Group averaging} At sufficiently low temperature in the quenched theory we can go further on the analytical side, since the distribution of the Polyakov loop becomes just the Haar measure in this regime. As will be discussed in Section~\ref{sec:gluon}, this fact is justified by results based on strong coupling expansions and on the one massive gluon loop approximation. Actually, from Eq.~(\ref{eq:3.8}) we find that in observables such as the quark condensate, the effect of $\rho(\Omega)$ being different from unity is almost negligible for all temperatures below the transition, implying that a Haar measure distribution is an extremely good approximation in the confined phase. We elaborate further on gluonic corrections in Section~\ref{sec:gluon}. The corresponding density of eigenvalues of the SU($N_c$) group is given by \cite{Miller:1972bk,Gross:1983pk} \begin{eqnarray} \frac{1}{N_c!} 2\pi \delta\Big( \sum_{i=1}^{N_c} \phi_i \Big) \, \prod_{i < j }^{N_c} | e^{i\phi_i} - e^{i\phi_j} |^2 \prod_{i=1}^{N_c} \frac{d \phi_i}{2\pi}\,, \label{eq:average1} \end{eqnarray} so $\widehat\rho(\phi)$ of (\ref{eq:4.7}) is simply \begin{eqnarray} \widehat\rho(\phi)= 1 - \frac{2 (-1)^{N_c}}{N_c} \cos( N_c \phi ) \,. \end{eqnarray} Using this result one can easily deduce the following useful formulas for the average over the SU($N_c$) Haar measure \begin{eqnarray} \langle{\textrm {tr}}_c(-\Omega)^n\rangle_{\text{SU($N_c$)}} = \left\{\matrix{ N_c\,, & n=0 \label{eq:p1}\\ -1 \,, & n=\pm N_c \label{eq:p2} \\ 0 \,, & \text{otherwise} \label{eq:p3}\\ } \right. \nonumber \end{eqnarray} When this is inserted in, e.g., Eq.~(\ref{eq:3.1a}), one finds that the effect is not only to remove the triality breaking terms, as in Eq.~(\ref{eq:2.15}), but additionally the surviving thermal contributions are $N_c$ suppressed as compared to the naive expectation. This solves the second problem noted in Section \ref{sec:problem}. \subsection{Polyakov cooling mechanism} In an insightful work, Fukushima \cite{Fukushima:2003fw} has modeled the coupling of the Polyakov loop to chiral quarks, with emphasis on the description of the deconfining and chiral phase transitions (or rather, crossovers).
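These group averages can be verified numerically in a few lines (a sketch that uses only the eigenvalue density quoted above).

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Nc = 3

def rho_hat(phi):
    # one-body eigenvalue density of the SU(Nc) Haar measure
    return 1.0 - (2.0 * (-1) ** Nc / Nc) * np.cos(Nc * phi)

def moment(n):
    # < tr_c (-Omega)^n > = Nc int dphi/2pi rho_hat (-e^{i phi})^n
    re, _ = quad(lambda p: rho_hat(p) *
                 np.real((-np.exp(1j * p)) ** n), -np.pi, np.pi)
    return Nc * re / (2.0 * np.pi)

for n in range(7):
    print(n, round(moment(n), 6))
# output: Nc at n=0, -1 at n=Nc, 0 otherwise
\end{verbatim}

The only surviving thermal moments are those with winding number a multiple of $N_c$, and they come with the announced relative suppression ($-1$ instead of the naive $N_c$).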
The fact that the critical temperatures for both transitions are nearly equal, according to lattice calculations \cite{Karsch:2001cy}, finds a natural explanation in that model. This follows from what we will call the {\em Polyakov cooling} mechanism, namely, the observation that, upon introduction of coupling with the Polyakov loop, any quark observable at temperature $T$ (below $T_D$) roughly corresponds to the same observable in the theory without Polyakov loop but at a lower temperature, of the order of $T/N_c$, as already noted in \cite{Oleszczuk:1992yg}. This is a direct consequence of triality conservation. As discussed for Eqs. (\ref{eq:2.14}) and (\ref{eq:2.15}) at the end of Section \ref{sec:problem}, Boltzmann weights $e^{-M/T}$ are suppressed in favor of $e^{-N_cM/T}$. An extreme example of cooling would come from considering a U(1) gauge theory in a confined phase in such a way that $\Omega$ is a completely random phase coupled to the quark. This would be equivalent to a uniform average of the variable $\hat\nu$ in Eq. (\ref{eq:3.2}) in the interval $[0,1]$. Clearly, such an average completely removes the discretization of the Matsubara frequencies and gives back the continuum frequency of the zero temperature theory. The same extreme cooling would be obtained in a U($N_c$) gauge theory. In the SU($N_c$) case the average is not so effective, since the phases corresponding to each of the $N_c$ colors are not changed independently, owing to the restriction $\det\Omega=1$. The cooling mechanism will be substantially modified in the unquenched theory, since sea quark loops allow the creation of thermal (i.e., with $n$ different from zero in e.g. Eq. (\ref{eq:2.4})) color singlet quark-antiquark pairs which propagate without any direct influence of the Polyakov loop. The way Polyakov cooling brings the chiral and deconfining critical points to coincide is as follows. In the chiral theory without Polyakov loop, the critical temperature of the chiral transition is such that $T^{\Omega=1}_\chi<T_D$ yet $ N_c T_\chi^{\Omega=1}>T_D$. Hence, in the theory with coupling to the Polyakov loop, one finds that for $T<T_D$ Polyakov cooling acts, $\langle\bar{q}q\rangle^*$ becomes roughly that at temperature $T/N_c$, which is below $T_\chi^{\Omega=1}$, and chiral symmetry is broken. On the other hand, for $T>T_D$, Polyakov cooling no longer acts and $\Omega$ quickly becomes unity, as in the theory without Polyakov loop at the same temperature; since $T$ is above $T_\chi^{\Omega=1}$, chiral symmetry is restored. As a consequence the chiral transition is moved up and takes place at the same temperature as the deconfining transition, $T_\chi^{\langle\Omega\rangle}\approx T_D$. This result is consistent with \cite{Coleman:1980mx} where it is shown that, at least in the large $N_c$ limit, confinement implies chiral symmetry breaking. We note a difference between our treatment of the Polyakov loop coupling and that in \cite{Fukushima:2003fw}, namely, we use a local Polyakov loop subject to thermal and quantum fluctuations, as described by the distribution $\rho(\Omega;\vec{x}) d\Omega$. This is in contrast with \cite{Fukushima:2003fw} where $\Omega$ is global and does not fluctuate. Instead $\Omega$ is determined through a mean field minimization procedure plus a specific choice of the allowed values (orbit) of $\Omega$ on the group manifold. In this way a model is obtained which is simple and at the same time can be used to address key issues of QCD at finite temperature.
Nevertheless let us argue why such an approach needs to be improved. At sufficiently low temperature the model in Ref.~\cite{Fukushima:2003fw} for the gluon dynamics consists just of the invariant Haar measure on the gauge group, therefore any group element is just as probable as any other. If one takes some coordinates on the group manifold and makes a maximization of the resulting probability density, one is actually maximizing the Jacobian and the result will depend on the coordinates chosen. In the deconfined phase the local Polyakov loop is still subject to fluctuations (even in the thermodynamic limit). A different quantity is $\overline\Omega$, the spatial average of the local loop.\footnote{The quantity $\overline\Omega$ so defined does not lie on the group manifold, so some prescription should be devised to map it onto the group.} This is a global object by construction. Both quantities, $\Omega(x)$ and $\overline\Omega$, have the same expectation value, due to translational invariance, but $\overline\Omega$ does not fluctuate in the thermodynamic limit. The usual effective potential is so defined that its minimum gives the correct expectation value, and thus $\overline\Omega$, but it does not give information on the fluctuations of $\Omega(x)$. In the confining phase of the quenched theory triality is preserved, hence, after gluon average, Eq. (\ref{eq:2.4a}) becomes \begin{eqnarray} && \tilde{F}(x; x) \to \label{eq:2.4b} \\ &&\sum_{n=-\infty}^\infty \langle (-\Omega(\vec x))^{nN_c} \rangle \tilde{F}( \vec x,x_0+inN_c /T ;\vec x,x_0) \,, \nonumber \end{eqnarray} which is the quenching invoked in Section \ref{sec:problem}. \section{One quark loop results} \label{sec:oneloop} The calculations outlined above in Sect.~\ref{sec:polyakovcoupling} can be routinely applied to all observables. A more thorough and systematic study will be presented elsewhere. As an illustration we show here low temperature results (i.e. retaining only the Haar measure in the gluon averaging) for the quark condensate and the pion weak and electromagnetic anomalous decays, given their relevance to chiral symmetry breaking, both for the NJL model as well as for the SQM at the one quark loop level. In Section~\ref{sec:higher} we discuss the structure of higher order corrections due to quark loops, while in Section~\ref{sec:gluon} dynamical gluonic effects are considered. Corrections beyond the quenched approximation will be explicitly computed in Section~\ref{sec:unquenched}. In Ref.~\cite{Megias:2006prep} we compute the full chiral Lagrangian at finite temperature at the one quark loop level. \subsection{Results for Constituent Quark Models} To visualize the additional suppression we apply the previous result to the calculation of the condensate at finite temperature. At the one loop level we just make the substitution $ N_c(-1)^n \to {\textrm {tr}}_c \langle (-\Omega)^n \rangle $. We get \begin{eqnarray} \langle \bar q q \rangle^* &=& -i 4 M \sum_{n=-\infty}^\infty {\textrm {tr}}_c \langle(-\Omega)^n \rangle \int \frac{d^4 k}{(2\pi)^4} \frac{e^{ -i k x} }{k^2- M^2 } \Big|_{x_0=i n/T} \nonumber \\ \label{eq:qq_wpl} \end{eqnarray} This yields \begin{eqnarray} \langle \bar q q \rangle^* = \langle \bar q q \rangle + \frac{2 M^2 T }{ \pi^2 N_c} K_1 ( N_c M/T ) + \cdots \end{eqnarray} The dots indicate higher gluonic or sea quark effects. Because $T$ is small, using the asymptotic form $K_1(z) \simeq \sqrt{\pi/2z}\, e^{-z}$ for $z\gg 1$, we have further \begin{eqnarray} \langle \bar q q \rangle^* & \sim & \langle \bar q q \rangle + 4 \left(\frac{ M T }{2\pi N_c} \right)^{3/2} e^{-N_c M / T} \, .
\label{eq:cond_CQM} \end{eqnarray} When compared to the ChPT result Eq.~(\ref{eq:2.16}) we see that the $N_c$ suppression of the constituent quark loop model is consistent with the expectations. For the pion weak decay constant we obtain \begin{eqnarray} f_{\pi}^*{}^2 &=& -i 4 M^2 \nonumber \\ &\times& \sum_{n=-\infty}^\infty {\textrm {tr}}_c \langle(-\Omega)^n \rangle \int \frac{d^4 k}{(2\pi)^4} \frac{e^{ -i k \cdot x} }{[k^2- M^2 ]^2} \Big|_{x_0 = i n /T} \nonumber \\ \end{eqnarray} yielding \begin{eqnarray} \frac{f^*_\pi{}^2}{f_\pi^2} &=& 1 - \frac{M^2 }{ \pi^2 f_\pi^2} K_0 (N_c M / T) + \cdots \end{eqnarray} The $\pi^0 \to \gamma \gamma $ amplitude is given by \begin{eqnarray} F_{\pi \gamma \gamma }^* &=& i \frac{8 M^2 }{N_c f_\pi} \nonumber \\ &\times& \sum_{n=-\infty}^\infty {\textrm {tr}}_c \langle(-\Omega)^n \rangle \int \frac{d^4 k}{(2\pi)^4} \frac{e^{ -i k \cdot x} }{[k^2- M^2 ]^3} \Big|_{x_0 = i n /T} \,. \nonumber \\ \end{eqnarray} Using the value obtained at zero temperature, $ F_{\pi \gamma \gamma} = 1/ 4\pi^2 f_\pi $, consistent with the anomaly, we get \begin{eqnarray} \frac{F_{\pi \gamma \gamma }^*}{F_{\pi \gamma \gamma }} &=& 1-\frac{2M}{T} K_1 ( N_c M / T) + \cdots \end{eqnarray} This again complies with the fact that the leading low temperature corrections should be encoded in pionic thermal excitations rather than quark excitations. \subsection{Spectral Quark Model} In the spectral quark model one averages the previous result (\ref{eq:cond_SQM_nopol}) with a given spectral function; including the Polyakov loop average we get (see Appendix~\ref{sec:sqm} for details) \begin{equation} \frac{\langle \bar q q \rangle^*}{\langle \bar q q \rangle} = 1- \frac{2}{N_c} e^{-N_c M_S / 2T } + \cdots \label{eq:cond_SQM} \end{equation} For the pion weak decay constant we obtain \begin{eqnarray} \frac{f^*_\pi{}^2}{f_\pi^2} &=& 1- \frac{1}{N_c}\left(2+\frac{N_c M_V}{T}\right) e^{-N_c M_V/ 2T } + \cdots \nonumber \\ \label{eq:f_SQM} \end{eqnarray} and the $\pi^0 \to \gamma \gamma $ amplitude is given by \begin{widetext} \begin{eqnarray} \frac{F_{\pi \gamma \gamma }^*}{F_{\pi \gamma \gamma }} = 1- \frac{1}{6N_c} \left[ 12+\frac{ 6 N_c M_V}{T} +\left(\frac{N_c M_V}{T}\right)^2\right] e^{-N_c M_V/2T}+ \cdots \nonumber \\ \label{eq:F_SQM} \end{eqnarray} \end{widetext} \begin{figure*}[tbc] \begin{center} \epsfig{figure=3loop.eps,height=4cm,width=10cm} \end{center} \caption{Typical higher quark loop diagram for the quark condensate operator $ \bar q q $. Quark lines with independent momenta may wind $n$ times around the compactified Euclidean time, yielding a Fermi-Polyakov factor $ (-\Omega)^n $. Triality conservation allows the internal quark-antiquark lines to wind with opposite signs only once, yielding an exponential suppression $ e^{- 2 M /T }$ for diagram a). A similar suppression occurs for diagram b) if the quark-antiquark windings happen at any of the bubbles. Diagram c) corresponds to summing up all intermediate states with the same quantum numbers and can be interpreted as a meson line.} \label{fig:3loop} \end{figure*} \section{Corrections beyond one quark loop} \label{sec:corrections} In the previous sections we have restricted ourselves to the one quark loop approximation for observables. This corresponds to the quenched approximation within the model, and to some extent this provides an oversimplified picture. In the present section we discuss the kind of corrections that we expect to this approximation.
\subsection{Higher Quark Loop Corrections} \label{sec:higher} Going beyond the one quark loop approximation may require tedious calculations (see e.g. Refs.~\cite{Florkowski:1996wf,Oertel:2000jp} for explicit calculations in the standard NJL with no Polyakov loop). However, some general features based on $N_c$ counting rules at finite temperature can be deduced as follows. Let us, for instance, consider the contribution of the three loop diagram of Fig.~\ref{fig:3loop}a to the quark condensate in the NJL model in terms of quark propagators. Writing out for simplicity the Matsubara frequencies only we have \begin{eqnarray} {\rm Fig}.(\ref{fig:3loop} a) &=& \sum_{w^{(1)}, w^{(2)}, w^{(3)}} S ( w^{(1)} ) \otimes S( w^{(1)}) \\ &\otimes& S( w^{(2)}) \otimes S( w^{(3)}) \otimes S( w^{(1)}+ w^{(3)}- w^{(2)}) \nonumber \end{eqnarray} where $\otimes$ means tensor product in the Dirac and internal space sense. Using Poisson's summation formula, Eq.~(\ref{eq:poisson}), and going to Euclidean time space we get \begin{widetext} \begin{eqnarray} {\rm Fig}.(\ref{fig:3loop} a) &=& \sum_{n_1,n_2,n_3} \langle \Omega^{n_1+n_2+n_3} \rangle \nonumber \\ &\times& \int_{-\infty}^\infty d \tau_1 d \tau_3 S ( \tau_1 ) \otimes S( -\tau_1 - \tau_3 + n_1 /T + n_3 /T ) \otimes S( -\tau_3 + n_2 /T +n_3 /T ) \otimes S( \tau_3 ) \otimes S( \tau_3 - n_3 /T ) \nonumber \\ &\sim & \sum_{n_1,n_2,n_3} \langle \Omega^{n_1+n_2+n_3} \rangle e^{- M /T (|n_1|+|n_2|+|n_3|)} \end{eqnarray} \end{widetext} For this diagram triality conservation implies $ n_1+n_2+n_3 = k N_c $, and the minimum argument of the exponent corresponds to taking $n_1=n_2=n_3= 0$, which is the zero temperature contribution. The next thermal correction at low temperature is given by $n_1=0$, $ n_2=-n_3= 1$, so the three loop diagram of Fig.~\ref{fig:3loop}a is suppressed by a thermal factor $ e^{-2 M /T } $, to be compared to the one quark loop suppression $ e^{-N_c M /T } $. A similar thermal suppression is obtained by inserting the standard bubble summation which can be coupled to meson quantum numbers, transforming the argument of the exponent $ 2 M \to M_{\bar q q} $. Obviously, this contribution becomes most important for the lightest pion state. Actually, the quark-meson diagram in Fig.~\ref{fig:3loop}b looks like a two loop bosonized diagram, as shown in Fig.~\ref{fig:3loop}c. For such a bosonized diagram the previous argument becomes actually much simpler, since the number of loops equals the number of quark propagators. The pion polarization operator, proportional to the pion propagator, can then be taken at zero temperature, since the most important suppression comes from the quark lines not coupled to pion quantum numbers. For a bosonized diagram with $L$ quark loops we have to consider the $L$-fold Matsubara generalization of the previous one quark loop correction Eq.~(\ref{eq:2.4a}). Actually, the analysis becomes simpler in coordinate space. Regardless of the total number of quark propagators we may choose to apply the Poisson summation to $L$ quark propagators. This can be seen by just using the formula \begin{eqnarray} \sum_{n,m=-\infty}^\infty \int_0^{1/T} d x_4 F( x_4 + n /T + m /T ) \nonumber \\ = \sum_{n=-\infty}^\infty \int_{-\infty}^\infty d x_4 F( x_4 + n /T ) \end{eqnarray} and its multidimensional generalization both in the sum and in the integral sense. This effectively means that it is possible to remove as many Poisson summations as coordinate integrals appear in the expression.
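To spell out the tiling argument behind the formula above (assuming $F$ decays fast enough for the sums and integrals to be rearranged): for each fixed $n$,
\begin{equation}
\sum_{m=-\infty}^\infty \int_0^{1/T} d x_4\, F\big( x_4 + (n+m)/T \big) = \sum_{m=-\infty}^\infty \int_{m/T}^{(m+1)/T} d x_4\, F( x_4 + n/T ) = \int_{-\infty}^\infty d x_4\, F( x_4 + n/T ) \,,
\end{equation}
where in the first step the integration variable was shifted by $m/T$, so that the thermal cells $[m/T,(m+1)/T]$ tile the whole real line; summing over $n$ then gives the stated identity.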
Using $ L= I-(V-1)$ and $ 4 V = E + 2I$ we also have \begin{equation} {\prod}_{i=0}^L \int d^4 z_i G^{2L} \sum_{n_1, \ldots , n_L} {\prod}_{i=1}^L (-\Omega)^{n_i} S ( \vec x_i , t_i + i n_i /T ) \,. \end{equation} Actually, this rule does not depend on the precise form of the quark interaction. At low temperatures, each quark line with an independent Poisson index generates a constituent quark mass suppression. Thus, the contribution to an observable can schematically be decomposed as follows \begin{equation} {\cal O}^* = \sum_L \sum_{n_1 , \ldots , n_L } {\cal O}_{n_1 \ldots n_L } \langle \Omega^{n_1+ \cdots + n_L } \rangle e^{- ( |n_1| + \cdots + |n_L| ) M /T } \,. \label{eq:loop} \end{equation} Triality conservation (invariance of the measure under $ \Omega \to z \Omega $) at this level implies \begin{eqnarray} n_1 + \cdots + n_L = N_c k \label{eq:n-ality} \end{eqnarray} with $ k$ an integer. The dominant term in the previous expansion is the one for which $ n_1 = \ldots = n_L =0$, with any arbitrary number of quark loops $L$, and corresponds to the zero temperature contribution. One also sees that for $L=1$ we only have contributions from $n_1=k N_c$, which give the correction $e^{-N_c M /T } $, hence reproducing the results of Sect.~\ref{sec:oneloop}. According to Eq.~(\ref{eq:loop}), we can organize the thermal expansion at finite but low temperatures. The most important contributions come from minimizing $ \sum_{i=1}^L | n_i | $ subject to the triality constraint, Eq.~(\ref{eq:n-ality}). At finite $T$ and for $N_c \ge 3 $, the leading temperature dependent contribution is given by $L \ge 2$ and $ n_1 = -n_2 = 1 $ with $n_3= \cdots = n_L =0 $, which gives a factor $ e^{-2 M /T } $ and corresponds to a $\bar q q $ singlet meson state. This contribution has an additional $1/N_c$ power suppression, as compared to the zero temperature contribution. For $N_c=3$ the next term in the expansion would correspond to $L \ge 3$ and $ n_1= n_2 =n_3 =1 $ and yields a finite temperature suppression $ e^{-N_c M /T } $. For $N_c \ge 5 $ we would instead get $L \ge 4 $ and $ n_1=-n_2=n_3=-n_4 = 1 $ and $ n_5 = \cdots = n_L =0 $. Assuming $ N_c = 3$ we have\footnote{In the case without Polyakov loop one would have $ Z_{ q^{N_q} (\bar q q)^{N_m} } \sim \frac1{N_c^{N_m}} e^{-(2 N_m +N_q ) M / T} $ instead. So the leading contributions are those corresponding to one quark state.} \begin{eqnarray} Z_{\bar q q } &\sim& \frac1{N_c} e^{-2 M / T} \\ Z_{qqq} & \sim& e^{-N_c M / T} \\ Z_{qqq \bar q q } &\sim& \frac1{N_c} e^{-(2+N_c) M / T} \\ && \dots \\ Z_{ (\bar q q)^{N_M} (qqq)^{N_B} } &\sim& \frac1{N_c^{N_M}} e^{-(2 N_M +N_B N_c) M / T} \end{eqnarray} Obviously, for $N_c =3$ the meson loop contribution dominates over the baryon loop contribution. The previous argument completely ignores the quark binding effects, so we should actually consider the relevant meson mass $m$; in summary one would get \begin{eqnarray} {\cal O} = 1 + \sum_{m} {\cal O}_{m} \frac1{N_c}e^{-m/T} + \sum_{B} {\cal O}_B e^{-M_B/T } + \cdots \nonumber \\ \end{eqnarray} This is how quark-hadron duality works at finite temperature in chiral quark models. As we see, contributions of pion loops are the most important ones, even though they are $1/N_c$ suppressed. Higher meson states contribute next to the total observable at finite $T$.
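For orientation, a rough numerical illustration of this hierarchy (with the merely illustrative values $M\approx 300\,\text{MeV}$, $m_\pi\approx 140\,\text{MeV}$, $N_c=3$, at $T=100\,\text{MeV}$):
\begin{equation}
\frac{1}{N_c}\, e^{-m_\pi/T} \approx 0.08 \,, \qquad e^{-N_c M/T} \approx 1.2\times 10^{-4} \,,
\end{equation}
so the thermal pion contribution dominates the baryonic one by almost three orders of magnitude at this temperature.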
This is what one naively expects, and it is rewarding to see that such a feature arises as a consequence of including the Polyakov loop in the chiral quark model and subsequently projecting onto the gauge invariant color singlet sector. Thus, at finite temperature meson loops carry the standard power-like $1/N_c$ suppression accompanied by an exponential suppression $e^{-2 M/T}$, while baryon loops carry a finite temperature exponential $ e^{-N_c M/T} $. Obviously the most important contributions at large $N_c$ or low $T$ are those due to meson loops. We conclude from this discussion that thermal pion loops are protected. The previous discussion has concentrated on quark observables. For an observable like the Polyakov loop one would have instead \begin{equation} \sum_L \sum_{n_1 , \ldots , n_L } {\cal O}_{n_1 \ldots n_L } \langle \Omega^{1+n_1+ \cdots + n_L } \rangle e^{- ( |n_1| + \cdots + |n_L| ) M /T } \label{eq:loop1} \end{equation} and \begin{eqnarray} 1+n_1 + \cdots + n_L = N_c k \label{eq:n-ality1} \end{eqnarray} The leading low temperature contribution (in this case there is no zero temperature term) is then of the type $n_1=-1$, $n_2=\cdots=n_L=0$, corresponding to a single antiquark loop screening the charge of the test Polyakov loop. The leading term scales as $e^{-M /T }$ and is controlled by the constituent quark mass. Unlike the quark condensate case this behavior should remain unchanged by pionic loops. \subsection{Gluonic Corrections} \label{sec:gluon} Up to now we have chosen to represent the full dynamical gluonic measure by a simple group integration. Unfortunately, we do not know at present any general argument supporting the idea that there is a low temperature exponential suppression of gluon degrees of freedom, leaving the Haar measure as the only remnant of gluon dynamics. However, results based on strong coupling expansions~\cite{Polonyi:1982wz,Gross:1983pk} and on the one massive gluon loop approximation~\cite{Meisinger:2001cq,Meisinger:2003id} do provide such a suppression, and indeed recent lattice findings confirm a striking universality in all group representations, favoring the simple group averaging dominance mechanism in gluodynamics below the phase transition \cite{Dumitru:2003hp}. More specifically, one finds both from lattice calculations \cite{Dumitru:2003hp} and from the group measure that \begin{equation} \langle\widehat {\rm tr}_c \, \widehat \Omega\rangle = 0 \end{equation} in the confining phase for the Polyakov loop in the adjoint representation. (In the group integration case, the previous formula follows from (\ref{eq:7.4}) below.) We stress that this result is not a consequence of triality preservation, since $\widehat \Omega$ is invariant under 't Hooft transformations. The previous equation is equivalent to $\langle |\mathrm{tr_c} \Omega|^2 \rangle = 1 $. We note in passing that in the mean field approximation \cite{Fukushima:2003fw} $\langle |\mathrm{tr_c}\Omega|^2 \rangle$ vanishes instead, due to the absence of fluctuations. We analyze now the two above mentioned models. \subsubsection{Strong coupling expansion} \label{sec:strong-coupling} The gluon potential at leading order in the strong coupling expansion, for $N_c=3$, is taken as~\cite{Polonyi:1982wz,Gross:1983pk} \begin{equation} -i\Gamma_G[\Omega] = V_{\text{glue}}[ \Omega ]\cdot a^3/T=-2(d-1)\,\mathrm{e}^{-\sigma a/T} \bigl|\mathrm{tr_c} \Omega \bigr|^2 \label{eq:G_potential_sce} \end{equation} with the string tension $\sigma=(425\,\text{MeV})^2$.
At the mean field level $V_{\text{glue}}$ leads to a first order phase transition with the critical coupling $2(d-1)\mathrm{e}^{-\sigma a/T_D}=0.5153$. One can fix the deconfinement transition temperature to the empirical value $T_D=270\,\text{MeV}$ by choosing $a^{-1}=272\,\text{MeV}$ \cite{Fukushima:2003fw}. The corresponding mass is $m_G = \sigma a = 664\,\text{MeV}$. At low temperatures we may expand the exponential in powers of the gluon action, \begin{equation} e^{i\Gamma_G} = 1 + i \Gamma_G - \frac{1}{2} \Gamma_G^2 + \cdots \end{equation} which introduces an exponential suppression factor $ e^{- m_G /T }$ per power of the action. For a treatment based on an average over the Polyakov loop, the normalized weight $\rho(\Omega) d\Omega$ suggested by the strong coupling expansion will be \begin{equation} \rho(\Omega) = N\exp\left( 2(d-1)\, e^{-m_G/T}|{\textrm {tr}}_c\Omega|^2\right)\,, \label{eq:3.8} \end{equation} where $N$ is the normalization constant. Such a distribution preserves triality exactly. At low temperature $\rho(\Omega)$ is close to unity and the distribution coincides with the Haar measure, hence $\Omega$ is completely random, with equal probability of taking any group value. At higher temperature $\rho(\Omega)$ tends to favor concentration of $\Omega$ near the central elements of the group, with equal probability. This provides the following mass formula for the Boltzmann argument of the exponential (in the notation of subsection \ref{sec:higher}) \begin{eqnarray} {\cal M} = n N_c M_q + m M_{\bar q q } + l m_G \end{eqnarray} which clearly shows that the leading thermal contribution at low temperatures is, again, provided by pion thermal loops, corresponding to $n=l=0$ and $m=1$, due to $N_c M_q \gg m_G \gg M_{\bar q q }=m_\pi$. Note that numerically, even the two pion contribution would be more important than gluonic corrections. \subsubsection{One massive gluon loop approximation} In a series of recent works~\cite{Meisinger:2001cq,Meisinger:2003id} the equation of state has been deduced for a gas of massive gluons with a temperature dependent mass in the presence of the Polyakov loop, reproducing the lattice data quite accurately above the deconfinement phase transition. The vacuum energy density reads \begin{equation} V_{\text{glue}}[ \Omega ]= T \int \frac{d^3 k}{(2\pi)^3} \widehat{\rm tr}_{c} \log \left[ 1 - e^{-\omega_k /T } \widehat \Omega \right] \end{equation} where $ \omega_k = \sqrt{k^2 + m_G^2 } $, with $ m_G $ the gluon mass, and $\widehat \Omega $ and $\widehat {\rm tr}_c $ the Polyakov loop and the color trace in the adjoint representation, respectively. This expression was discussed with a temperature dependent mass in the deconfined phase, given by plugging in the Debye screening mass $m_G(T)= \sqrt{2}\, T g(T) $, which at the phase transition, $T=T_c$, takes the value $ m_G(T_c) = 1.2-1.3\, T_c $. It is worth noticing that, if one assumes a constant value for the gluon mass below the phase transition, one gets at low temperatures \begin{equation} V_{\text{glue}}[ \Omega ]=- T \sum_{n=1}^\infty \frac1{n} ( \bigl|\mathrm{tr_c} \Omega^n \bigr|^{2} -1) \int \frac{d^3 k}{(2\pi)^3} e^{-n \omega_k /T } \end{equation} where the identity \begin{eqnarray} \widehat {\rm tr}_c \, \widehat \Omega^n = \bigl|\mathrm{tr}_c \Omega^n \bigr|^{2}-1 \label{eq:pol-adjoint} \end{eqnarray} has been used. Using the asymptotic representation of the Bessel functions we see that, up to prefactors, a similar suppression of the sort described in the strong coupling limit, Sect.~\ref{sec:strong-coupling}, takes place.
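To make the last step explicit (a standard integral, quoted here for convenience): the momentum integral at order $n$ is the Boltzmann one,
\begin{equation}
\int \frac{d^3 k}{(2\pi)^3}\, e^{-n \omega_k /T } = \frac{m_G^2 T}{2\pi^2 n}\, K_2( n\, m_G/T ) \simeq \left( \frac{m_G T}{2\pi n} \right)^{3/2} e^{-n m_G /T} \,, \qquad T \ll m_G \,,
\end{equation}
using $K_2(z)\simeq\sqrt{\pi/2z}\,e^{-z}$ for $z\gg 1$, so each power of the gluonic action again carries a factor $e^{-m_G/T}$ at low temperature, as in the strong coupling model above.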
\subsection{Local corrections in the Polyakov loop} \label{sec:local} Up to now we have assumed a constant $\Omega$ field in space in our calculations. Quite generally, however, the Polyakov loop depends both on the Euclidean time and the space coordinate, as comes out of explicit one loop calculations within a derivative expansion approach at finite temperature \cite{Megias:2002vr,Megias:2003ui,Megias:2004bj}. In the Polyakov gauge the temporal dependence becomes simple, but there is still an unknown space coordinate dependence. In such a case, the previous rules have to be modified, since Polyakov loop insertions carry finite momentum, and the result depends on the ordering of these insertions. If we still assume that the Polyakov loop is the only color source in the problem, we are naturally led to consider Polyakov loop correlation functions. In the confining phase we expect a cluster decomposition property to hold for any pair of variables. A convenient model to account for Polyakov loop correlations is \begin{eqnarray} \langle {\textrm {tr}}_c \Omega (\vec{x}) \; {\textrm {tr}}_c \Omega^{-1} (\vec{y}) \rangle = e^{-\sigma |\vec{x}-\vec{y}| /T} \,, \label{eq:corr-func} \end{eqnarray} with $\sigma$ the string tension. This includes the correct screening of the color charge at large distances due to confinement and is consistent with (\ref{eq:7.5}) for two Polyakov loops at the same point. Thus, contributions from widely separated spatial points are suppressed, and it makes sense to make a sort of local approximation within the correlation length, and expand correlation functions in gradients in that limited region of space. Effectively, this corresponds to replacing the volume by that of a given confinement domain, by means of the rule \begin{equation} \frac{V}{T} = \frac{1}{T} \int d^3 x \to \frac{1}{T} \int d^3 x \, e^{-\sigma r /T} = \frac{8 \pi T^2}{\sigma^3} \,. \label{eq:rule-vol} \end{equation} In Ref.~\cite{Megias:2006prep} we will see explicitly that when computing the low energy chiral Lagrangian by expanding the effective action in derivatives of the meson fields there also appear gradients of the Polyakov loop. Actually, since we effectively couple the coordinate dependent Polyakov loop as an $\vec x$-dependent color chemical potential, our approach resembles a non-abelian generalization of the local density approximation of many body physics, as used in nuclear physics and condensed matter systems, very much in the spirit of density functional theory. \section{Results beyond the quenched approximation at low temperatures} \label{sec:unquenched} \subsection{General remarks} The full Polyakov-Chiral quark model is given in Sect.~\ref{sec:pcqm} by Eq.~(\ref{eq:Z_pnjl}). Therefore any expectation value is defined as \begin{eqnarray} \langle {\cal O}\rangle^* = \frac{1}{Z}\int DU D\Omega \, e^{i \Gamma_G [\Omega]} e^{i \Gamma_Q [ U , \Omega ]} \, {\cal O} \label{eq:O_pnjl} \end{eqnarray} with $\Gamma_G [\Omega]$ given in (\ref{eq:G_potential_sce}) and $\Gamma_Q[U,\Omega]$ the quark contribution to the full action, given by (\ref{eq:eff_ac_njl}) in the NJL model and (\ref{eq:eff_ac_sqm}) for the SQM case. In the latter model the full quark contribution coincides with the fermion determinant, while in the NJL model there is an additional term arising from the bosonization procedure, \begin{equation} e^{i\Gamma_Q[U,\Omega]} = {\textrm {Det}}(i{\bf D})_\Omega \exp \left( -\frac{i}{4G}\int d^4 x \, {\textrm {tr}}_f (M-\hat{M}_0)^2\right) \,.
\label{eq:det_b_Q} \end{equation} (Note that here we have included in ${\bf D}$ the color degrees of freedom.) In this section we gather all our results to provide an estimate of the Polyakov loop expectation value at low temperatures, as well as of the quark condensate. This is particularly interesting since in the quenched approximation $\langle {\textrm {tr}}_c \Omega \rangle =0$, due to triality conservation. The fermion determinant does not conserve triality, but we show below that at low temperatures the violation is exponentially suppressed, so that it remains an approximately good quantum number, and the Polyakov loop can be used as an order parameter for center symmetry in the same way as the chiral condensate provides a measure of chiral symmetry restoration away from the chiral limit. In order to go beyond the quenched approximation, we will evaluate the fermion determinant in the presence of a slowly varying Polyakov loop following the techniques developed in our previous work~\cite{Megias:2002vr}. According to our discussion of local corrections in Sect.~\ref{sec:local}, such an approximation makes sense in a confining region where there are very strong correlations between Polyakov loops. In the presence of the Polyakov loop the quark contribution can be generally written as \begin{equation} e^{i\Gamma_Q[U,\Omega]} = e^{i\int d^4 x {\cal L} (x, \Omega) } \end{equation} where ${\cal L}$ is the chiral Lagrangian as a function of the Polyakov loop, which will be computed at finite temperature in Ref.~\cite{Megias:2006prep} in chiral quark models for non-vanishing meson fields. For our purposes here only the vacuum contribution with vanishing meson fields will be needed. \subsection{SQM model} In our case it is simpler to consider first the SQM. We have \begin{eqnarray} e^{i\Gamma_Q[U,\Omega]}={\textrm {Det}} (i{\bf D})_\Omega = e^{ V B^*/T } \,, \end{eqnarray} where $V$ is the three dimensional volume and $-B^*$ the vacuum energy density at finite temperature in the presence of the Polyakov loop. The result for $B^*$ is quite simple and is listed in (\ref{eq:A.19}) in Appendix~\ref{sec:sqm}. At low temperatures we may expand to get, \begin{equation} e^{ V B^*/T } = e^{ V B/T } \left[ 1 - \frac{V B}{T} e^{-M/T} \frac1{N_c} {\textrm {tr}}_c \left( \Omega + \Omega^{-1} \right)+ \cdots \right] \end{equation} with $M=M_V/2$ the constituent quark mass in the SQM and $-B$ the vacuum energy density at zero temperature, $B= M_V^4 N_c N_f /(192 \pi^2) = (0.2\, {\rm GeV})^4 $ for three flavors (see Appendix \ref{sec:sqm}). The calculation of observables requires the group integration formula~\cite{Creutz:1984mg}, \begin{eqnarray} \int d\Omega \, \Omega_{ij} \Omega_{kl}^* = \frac{1}{N_c} \delta_{ik} \delta_{jl} \label{eq:7.4} \end{eqnarray} whence one gets for the constant Polyakov loop case \begin{eqnarray} \int d\Omega \, {\textrm {tr}}_c \Omega \, {\textrm {tr}}_c \Omega^{-1} =1 \label{eq:7.5} \end{eqnarray} Note that the effect of ignoring the Polyakov loop (i.e., setting $\Omega=1$) enhances this result by two powers of $N_c$. In this model the average over pion fields is trivial, since the vacuum energy density does not depend on $U$ at the one quark loop level. Neglecting momentarily the gluonic corrections $\Gamma_G$, using the previous formulas and (\ref{eq:O_pnjl}) we get the leading order result \begin{eqnarray} L = \left\langle\frac{1}{N_c} {\textrm {tr}}_c \Omega\right\rangle = -\frac{1}{N_c^2} \frac{B V}{T} e^{-M_V/2T} \,.
\label{eq:L_SQM_lowT} \end{eqnarray} Note that at this order the contribution from the denominator is trivial. As expected, triality is not preserved due to the presence of dynamical quarks, but the relevant scale is the constituent quark mass. In addition, note that since $B$ is proportional to $N_c$ there is an extra $1/N_c$ suppression. So the Polyakov loop can be effectively used as an order parameter. Actually, our calculation suggests that a low temperature calculation of the Polyakov loop in full QCD might provide a method of extracting a gauge invariant constituent quark mass. Proceeding in a similar way from the expression of the quark condensate (\ref{eq:A.18a}) we get the leading order contribution \begin{eqnarray} \frac{\langle \bar q q \rangle^*}{\langle \bar q q \rangle} = 1 + \frac{2BV}{N_c^2 T} e^{-(M_V + M_S) /2T} + \cdots \,. \end{eqnarray} It is noteworthy that the thermal correction scales as $1/N_c$ ($B$ scales as $N_c$), as in the ChPT case. This again is not just a consequence of triality, but requires the proper integration over the Polyakov loop manifold. The presence of the (infinite) four-volume factor $V/T$ has to do with our assumption of a constant Polyakov loop. As we have argued in Section~\ref{sec:local}, one has indeed a local Polyakov loop and the volume should be replaced according to the rule in Eq.~(\ref{eq:rule-vol}) by an effective confinement-domain volume.\footnote{For the expectation value of a local observable ${\cal O}(\vec{x})$, points outside the volume $V$ are not correlated and their contribution approximately cancels in numerator and denominator.} The first gluonic correction contributes in $L$ as $e^{-(M_V+2m_G)/2T}$, and in the quark condensate as $e^{-(M_V+M_S+2m_G)/2T}$. \subsection{NJL model} The previous computation can also be considered within the NJL model. In this model the fermion determinant can be obtained by means of a derivative expansion~\cite{Megias:2002vr,Megias:2003ui}. The result will be presented in Ref.~\cite{Megias:2006prep}. Retaining only the vacuum contribution, which coincides with the result given in Eq.~(3) of \cite{Fukushima:2003fw}, we have \begin{equation} {\textrm {Det}}(i{\bf D})_\Omega = \exp \left( i\int d^4x \, ({\cal L}_q(T=0) + {\cal L}_q(\Omega,T))\right) \,, \label{eq:det_NJL} \end{equation} where ${\cal L}_q(T=0)$ is the zero temperature contribution. At low temperature, the thermal correction reads \begin{equation} {\cal L}_q(\Omega,T) = N_f\sqrt{\frac{M^3 T^5}{2\pi^3}}e^{-M/T}{\textrm {tr}}_c\,(\Omega + \Omega^{-1}) + \cdots \,. \end{equation} Using the volume rule $\int d^4x\,{\cal L}_q \to (V/T){\cal L}_q$, expanding Eq.~(\ref{eq:det_NJL}) in powers of ${\cal L}_q(\Omega,T)$ and considering the group integration formula as above, we get the leading order result\footnote{Actually we find a negative value for the SQM and a positive one for the NJL model. While, based on color-charge conjugation symmetry, it can rigorously be shown that $L$ must be real, no proof exists to our knowledge that $L > 0$ at any temperature, although lattice data~\cite{Kaczmarek:2005ui} favor the positive case.} \begin{equation} L = \left\langle\frac{1}{N_c} {\textrm {tr}}_c \Omega\right\rangle = \frac{N_f}{N_c} \frac{V}{T} \sqrt{\frac{M^3 T^5}{2\pi^3}} e^{-M/T} \,. \label{eq:L_NJL_lowT} \end{equation} (Since the NJL bosonization term in (\ref{eq:det_b_Q}) cancels in the calculation of observables, it need not be included in this calculation. Also the gluonic corrections have been omitted; their effect is discussed below.)
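To sketch the group-integration step common to (\ref{eq:L_SQM_lowT}) and (\ref{eq:L_NJL_lowT}): writing the thermal part of the Boltzmann factor as $1 + c\,{\textrm {tr}}_c(\Omega+\Omega^{-1})+\cdots$, with $c$ the small low temperature prefactor of the corresponding model, the leading order result is
\begin{equation}
\left\langle \frac{1}{N_c} {\textrm {tr}}_c \Omega \right\rangle \simeq \frac{c}{N_c} \int d\Omega \; {\textrm {tr}}_c \Omega \; {\textrm {tr}}_c\left( \Omega + \Omega^{-1} \right) = \frac{c}{N_c} \,,
\end{equation}
since only the crossed term survives the Haar average by (\ref{eq:7.5}), $\int d\Omega\,{\textrm {tr}}_c\Omega\,{\textrm {tr}}_c\Omega$ vanishing because the product of two fundamental representations of SU($N_c$), $N_c\ge 3$, contains no singlet; the normalization denominator is $1+{\cal O}(c^2)$.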
For the quark condensate we use the analogue of Eq.~(\ref{eq:cond}) with $(-1)^n$ replaced by $(-\Omega)^n$, corresponding to the quark condensate at fixed Polyakov loop. Thus, including the leading fermion determinant contribution, using (\ref{eq:7.4}), and taking into account that $\langle {\textrm {tr}}_c \Omega\rangle = \langle {\textrm {tr}}_c \Omega^{-1}\rangle$, we get for the single flavor condensate \begin{equation} \langle \bar q q\rangle^* = \langle \bar q q\rangle + \frac{ N_f V}{\pi^3}(M T)^3 e^{-2M/T} \,. \label{eq:qq_NJL_lowT} \end{equation} Note that the $N_f$ factor comes from the fermion determinant. As in the spectral quark model, the first gluonic correction contributes in $L$ with $e^{-(M+m_G)/T}$, and in the quark condensate with~$e^{-(2M+m_G)/T}$. As we see, beyond the quenched approximation the Polyakov cooling persists, although it is a bit less effective than in the quenched case; for instance, the temperature dependence of the low energy constants of the tree level chiral effective Lagrangian becomes $ L_i^* - L_i \stackrel{{\rm low}\; T}{\sim} e^{- M_V /T} $~\cite{Megias:2006prep}. Finally, on top of this one must include higher quark loops, or equivalently mesonic excitations, of which the pions are the dominant ones. They yield exactly the results of ChPT~\cite{Florkowski:1996wf} for the chiral condensate $\langle \bar q q \rangle $; being the would-be Goldstone bosons, pions dominate at low temperatures. Thus, we see that when suitably coupled to chiral quark models the Polyakov loop provides a quite natural explanation of results found long ago on purely hadronic grounds~\cite{Gasser:1986vb}, as a direct consequence of the genuinely non-perturbative finite temperature gluonic effects. The expected leading correction effect on the Polyakov loop is also an additional exponential suppression ${\cal O} (e^{-m_\pi /T})$. \section{Implications for the phase transition} \label{sec:phase-tran} The inclusion of the Polyakov loop has the consequence that one changes the one quark state Boltzmann factor $N_c e^{-M/T} $ into $ \langle {\textrm {tr}}_c \Omega \rangle $ at low temperatures. In the quenched approximation one has $\langle {\textrm {tr}}_c \Omega \rangle =0$, whereas the first non-vanishing contribution stemming from the Dirac sea behaves as $\langle {\textrm {tr}}_c \Omega \rangle \sim e^{-M/T}$, due to the explicit breaking of the center symmetry induced by the fermion determinant. Likewise, for the quark condensate $\langle \bar q q \rangle $ the finite temperature correction changes $N_c e^{-M/T} \to e^{-2 M/T} $ after the Polyakov loop integration is considered. Taking into account the large number of approximations and possible sources of corrections, it is difficult to assess the accuracy of these Polyakov Chiral Quark Models, in spite of the phenomenological success achieved in Refs.~\cite{Fukushima:2003fw,Ratti:2005jh} within the mean field approach. Nevertheless, it is tempting to see how these results may be modified, not only at low temperatures but also in the region around the phase transition, when the proper quantum and local nature of the Polyakov loop is considered. This requires going beyond low temperature truncations like (\ref{eq:L_NJL_lowT}) and (\ref{eq:qq_NJL_lowT}). Clearly a proper description would demand a good knowledge of the Polyakov loop distribution as a function of the temperature.
Unfortunately, such a distribution is poorly known and lattice simulations are not designed to extract it, since a subtle renormalization issue is involved \cite{Kaczmarek:2002mc,Dumitru:2003hp}. As a first step to investigate the phase transition in the Polyakov chiral quark model beyond the mean field approximation, we just take the strong coupling model for the gluonic action of (\ref{eq:3.8}). Due to the rather large exponential suppression, this ansatz has the virtue of reducing to the Haar measure in the low temperature regime, and as a consequence the vanishing of the adjoint Polyakov loop expectation value observed in lattice calculations \cite{Dumitru:2003hp} follows. In our view this is a compelling reason to go beyond mean field by integrating over Polyakov loops. However, such a distribution preserves center symmetry and would not generate a phase transition per se in gluodynamics. This is unlike the mean field approximation, where the action is minimized by center symmetry breaking configurations. As discussed before, a side effect of the mean field approximation is to miss the fluctuations and also to introduce an explicit dependence on the coordinates chosen on the group manifold. In our model the breaking of the center symmetry is attributed only to quarks. As we will see, this explicit breaking is rather large precisely due to the simultaneous restoration of the chiral symmetry, since the constituent quark mass drops to zero. The qualitative agreement with lattice calculations in full QCD suggests that an important part of the physics has been retained by the model, leaving room for improvement in the Polyakov loop distribution. We will present calculations only for the NJL model. In practice, we use (\ref{eq:O_pnjl}), where the fermion determinant corresponds to Eq.~(3) of \cite{Fukushima:2003fw} plus the volume rule (\ref{eq:rule-vol}). The Polyakov loop integration is carried out numerically. Due to gauge invariance the Polyakov loop dependence is through its eigenvalues, and thus one may use the marginal distribution of eigenvalues (\ref{eq:average1}), which for $N_c=3$ amounts to two independent integration variables. Full details are given in Appendix~\ref{sec:njl_app}. \begin{figure}[ttt] \begin{center} \epsfig{figure=qq-pol.ps,height=6cm,width=8cm} \end{center} \caption{Temperature dependence of the chiral condensate $\langle \bar q q \rangle $ and Polyakov loop expectation value $ L= \langle {\textrm {tr}}_c \Omega \rangle /N_c $ in relative units. The standard result for $\langle \bar q q\rangle^*$ corresponds to the pure NJL model uncoupled to the Polyakov loop. The result of $L$ for gluodynamics within the strong coupling expansion is also displayed. We compare the mean field approach of Ref.~\cite{Fukushima:2003fw}, where the Polyakov loop is classical and coupled to the quarks, with the integration over the Polyakov loop $\Omega$.} \label{fig:phase_transition} \end{figure} In Fig.~\ref{fig:phase_transition} we show the effect on both the chiral condensate $\langle \bar q q \rangle $ and the Polyakov loop expectation value $ L= \langle {\textrm {tr}}_c \Omega \rangle /N_c $ within several schemes. In all cases we always minimize with respect to the quark mass and use $\rho(\Omega)$ in (\ref{eq:3.8}) for all temperatures. We compare the standard NJL model with no Polyakov loop with the mean field calculation of Ref.~\cite{Fukushima:2003fw}, which corresponds to minimizing the vacuum energy as a function of the constituent mass and a given choice of the Polyakov loop matrix.
We also compare with the result one obtains by instead integrating over the Polyakov loop and minimizing with respect to the quark mass afterwards. In these calculations we work with the two-flavor NJL model, $N_f=2$, and consider for the current quark mass matrix $\hat M_0={\textrm {diag}}(m_u,m_d)$ the isospin-symmetric limit with $m_u = m_d \equiv m_q = 5.5\,\text{MeV}$. The zero temperature part of the effective action of Eq.~(\ref{eq:eff_ac_njl}) is regulated by the Pauli-Villars method, with the cut-off $\Lambda_{\text{PV}}= 828\,\text{MeV}$, corresponding to a constituent quark mass $M=300\,\text{MeV}$. The coupling is $G=13.13\,\text{GeV}^{-2}$, which is obtained from the gap equation~(\ref{eq:gap_eq}). These parameters reproduce the empirical values of the pion weak-decay constant and the quark condensate at zero temperature. Aspects of locality have been considered in the treatment of the NJL model with the integration over the Polyakov loop, by introducing the volume rule~(\ref{eq:rule-vol}), where the string tension has been fixed to its zero temperature value~$\sigma=(425 \, \text{MeV})^2$. The figure also displays the expectation value $L$ in gluodynamics within the model of Eq.~(\ref{eq:G_potential_sce}) in the mean field approximation, which leads to a first order phase transition at $T_D=270\,\text{MeV}$. As we see, the net effect of the Polyakov loop integration is to displace the transition temperature to somewhat higher values. So, the method based on the integration provides an effective cooling at higher temperatures for fixed parameters. As we can see in Fig.~\ref{fig:susc}, the crossover transitions for the chiral condensate~$\langle \bar q q\rangle$ and for the Polyakov loop expectation value~$L$ coincide at the value~$T_c \simeq 256 \,\text{MeV}$. \begin{figure}[ttt] \begin{center} \epsfig{figure=sus.ps,height=6cm,width=8cm} \end{center} \caption{Temperature dependence of $\partial\langle \bar q q \rangle^*/\partial T $ and $ \partial L/\partial T$ in the NJL model when the integration over the Polyakov loop~$\Omega$ is carried out.} \label{fig:susc} \end{figure} We have checked that a temperature dependence of the string tension may accommodate the unquenched lattice results~\cite{Kaczmarek:2005ui}, as we can see in Fig.~\ref{fig:phase_transition2}. This yields a range of string tensions $\sigma = 0.181 \pm 0.085 \,\text{GeV}^2$, which provides a rough estimate of the uncertainty in the present model. In Fig.~\ref{fig:phase_transition2} the error band associated with such an uncertainty reflects a critical temperature of about $T_D = 250 \pm 50 \,\text{MeV}$. This is compatible with the large rescaling advocated in Ref.~\cite{Ratti:2005jh}. At present, and taking into account the many possible sources of corrections to our calculations, we do not see how more accurate predictions could reliably be made in the context of Polyakov-Chiral Quark Models. Nevertheless the semiquantitative success indicates that essential features of the center symmetry breaking phase transition are encapsulated by these models, and further attempts along these lines should be pursued. It should be borne in mind, however, that although the breaking of the center symmetry in this model is only attributed to the presence of quarks, one also has a contribution from gluons.
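As a rough orientation of the sizes involved (using the values quoted in Sect.~\ref{sec:strong-coupling}, $m_G=\sigma a\approx 664\,\text{MeV}$, $d=4$), the gluonic weight (\ref{eq:3.8}) deviates from the Haar measure through the exponent
\begin{equation}
2(d-1)\, e^{-m_G/T}\, \bigl|{\textrm {tr}}_c\Omega\bigr|^2 \approx 6\, e^{-664\,\text{MeV}/T}\, \bigl|{\textrm {tr}}_c\Omega\bigr|^2 \,,
\end{equation}
whose prefactor is only about $0.2$ at $T=200\,\text{MeV}$, reaching the critical value $0.5153$ quoted in Sect.~\ref{sec:strong-coupling} only at $T\approx 270\,\text{MeV}$.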
In this regard let us mention that ignoring the exponentially suppressed gluon action (\ref{eq:3.8}) in the averaging has almost no effect below the phase transition and shifts up the transition temperature by about $ 30\, \text{MeV} $, a value within our error estimate. Given the importance of quarks in the phase transition, one may wonder if the temperature dependent volume enhances the breaking of the center symmetry. In fact, the volume at the transition temperature is roughly equal to the gluon volume $a^3$ in (\ref{eq:G_potential_sce}). At low temperatures the exponential suppression dominates in the Polyakov loop expectation value, where the volume appears as a harmless prefactor, see e.g. (\ref{eq:L_NJL_lowT}). The effect of replacing the temperature dependent volume by a constant one can be seen in Fig.~\ref{fig:volume}. Again changes are within our expected uncertainties. \begin{figure}[ttt] \begin{center} \epsfig{figure=qq-pol-lat.ps,height=6cm,width=8cm} \end{center} \caption{Temperature dependence of the chiral condensate $\langle \bar q q \rangle $ and Polyakov loop expectation value $ L= \langle {\textrm {tr}}_c \Omega \rangle /N_c $ in relative units, in the NJL model when the integration over the Polyakov loop $\Omega$ is carried out. The error bands are associated with an uncertainty in the string tension of $\sigma = 0.181 \pm 0.085\,\text{GeV}^2$. We compare with lattice data corresponding to two-flavor QCD, taken from \cite{Kaczmarek:2005ui}.} \label{fig:phase_transition2} \end{figure} As we have argued, the expectation value of the Polyakov loop is rather small at temperatures well below the phase transition. The difference between the mean field and the direct integration can be best quantified at the level of the fluctuations. While at the mean field level the probability of finding a given Polyakov loop would be a delta function, one expects a spreading of such probability due to quantum effects. For $N_c=3$ the Polyakov loop contains two independent variables, which correspond to gluon fields in temperature units, \begin{eqnarray} \Omega = {\textrm {diag}} \left( e^{i \phi_1} , e^{i \phi_2} , e^{-i (\phi_1+\phi_2) }\right) \label{eq:Omega_param} \end{eqnarray} The joint distribution $\rho(\phi_1,\phi_2)$ can be factorized as a product of the purely gluonic and the quark determinant contributions (see Appendix~\ref{sec:njl_app}) \begin{eqnarray} \rho(\phi_1, \phi_2 ) = \rho_G (\phi_1, \phi_2 ) \rho_Q (\phi_1, \phi_2 ) \end{eqnarray} echoing the effective action displayed in Eq.~(\ref{eq:Z_pnjl}) in Euclidean space. Note that $\rho(\phi_1, \phi_2 )$ is not normalized to unity; instead, its integral gives the full partition function (see Appendix~\ref{sec:njl_app}). As noted in Sect.~\ref{sec:dyn-pol}, by gauge invariance the distribution is invariant under permutations of the three angles $\phi_1$, $\phi_2$ and $\phi_3=-\phi_1-\phi_2$. The usefulness of such a symmetry is that the trace of any arbitrary function of the Polyakov loop $ f(\Omega) $ (a one-body operator) can be averaged over the group by integrating out one angle, Eq.~(\ref{eq:one-body}). Thus one obtains an equivalent one-body distribution as \begin{eqnarray} \widehat\rho(\phi) \propto \frac1{2\pi} \int_{-\pi}^\pi d \phi^\prime \rho_G ( \phi, \phi^\prime ) \rho_Q ( \phi, \phi^\prime) \,. \label{eq:rhoG_rhoQ} \end{eqnarray} It is interesting to compare how this distribution evolves across the phase transition, and to look for the effects generated explicitly by the fermion determinant.
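As a reference point for this comparison: in the low temperature limit, where $\rho_G$ reduces to the Haar measure and $\rho_Q\to 1$, the one-angle distribution is just the $N_c=3$ case of the Haar measure result $\widehat\rho(\phi)=1-\frac{2(-1)^{N_c}}{N_c}\cos(N_c\phi)$ quoted above,
\begin{equation}
\widehat\rho(\phi) = 1 + \frac{2}{3} \cos (3 \phi) \,,
\end{equation}
with three equal maxima at $\phi=0,\pm 2\pi/3$, i.e., at the angles of the three center elements.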
In Fig.~\ref{fig:polyakov_prob} we present such a comparison. Below the phase transition, and as already advanced in Sect.~\ref{sec:dyn-pol}, the weighting function presents three maxima at equidistant values, as required by the center symmetry. In this case the quark determinant plays a negligible role, although a tiny, indeed exponentially small, center symmetry breaking can be observed. As we see, there appears an interesting concentration of angles in the region around the origin as the phase transition takes place. The quarks are very effective in suppressing contributions away from $\Omega=1$. As a consequence the lack of spontaneous breaking of the center symmetry in (\ref{eq:3.8}) becomes largely irrelevant for temperatures above the transition. \begin{figure*}[tbc] \begin{center} \epsfig{figure=dist1.ps,height=5cm,width=5cm} \epsfig{figure=dist2.ps,height=5cm,width=5cm} \epsfig{figure=dist3.ps,height=5cm,width=5cm} \end{center} \caption{Temperature dependence of the one-angle Polyakov loop distribution $\hat\rho(\phi)$ of (\ref{eq:rhoG_rhoQ}) as a function of the angle. Dash-dotted lines: quenched result ($\rho_G$ is included, $\rho_Q$ is not). Center symmetry is preserved. Solid lines: unquenched result (both factors $\rho_G$ and $\rho_Q$ are included) in the NJL model. Center symmetry is explicitly broken. Three temperatures near the transition (255$\,$MeV) are considered. For convenience all distributions have been normalized to unity.} \label{fig:polyakov_prob} \end{figure*} A further trace of fluctuations can be seen by considering higher group representations of the Polyakov loop. In Fig.~\ref{fig:fluctuation} we also show the expectation value of the Polyakov loop in the adjoint representation, $\langle \widehat {\rm tr}_c \, \widehat \Omega \rangle /(N_c^2-1) $. According to the lattice results of the matrix model in Ref.~\cite{Dumitru:2003hp}, one has a vanishing expectation value below the phase transition. As we have argued above, this feature is not preserved at the mean field level, where a non-vanishing value $-1/(N_c^2-1)$ is obtained instead (see Eq.~(\ref{eq:pol-adjoint}) for the case $n=1$). Considering the Polyakov loop integration, as we do, complies with the lattice expectations and indicates that further developments should consider these constraints. The full fluctuation of the Polyakov loop, $\delta$, is defined by \begin{eqnarray} \delta^2 &\equiv& \left(\langle |{\textrm {tr}}_c\Omega |^2\rangle - \langle {\textrm {tr}}_c \Omega \rangle^2 \right)/N_c^2 \,, \nonumber \\ &=& \left(1+ \langle \widehat{\textrm {tr}}_c \widehat\Omega\rangle - \langle {\textrm {tr}}_c \Omega\rangle^2 \right)/N_c^2 \,. \label{eq:fluctuation_L} \end{eqnarray} The fluctuation is also shown in Fig.~\ref{fig:fluctuation}. $\delta$ goes to zero in the large~$T$ regime, and this is compatible with the fact that the one-body distribution~$\widehat\rho(\phi)$ tends to concentrate near $\phi=0$ as the temperature increases. In the second equality of Eq.~(\ref{eq:fluctuation_L}) we have used the identity~(\ref{eq:pol-adjoint}) with $n=1$.
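The two limits of Eq.~(\ref{eq:fluctuation_L}) can be read off directly (a simple consistency check of the behavior seen in Fig.~\ref{fig:fluctuation}): deep in the confined phase, where $\langle\widehat{\textrm {tr}}_c\widehat\Omega\rangle \approx 0$ and $\langle{\textrm {tr}}_c\Omega\rangle \approx 0$, one gets
\begin{equation}
\delta \approx \frac{1}{N_c} \,,
\end{equation}
while at high temperature, where $\Omega\to 1$, both $\langle |{\textrm {tr}}_c\Omega|^2\rangle$ and $\langle{\textrm {tr}}_c\Omega\rangle^2$ tend to $N_c^2$, and hence $\delta\to 0$.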
\begin{figure}[ttt] \begin{center} \epsfig{figure=pol-adj.ps,height=6cm,width=8cm} \end{center} \caption{Temperature dependence of the Polyakov loop expectation value in the fundamental, $ \langle {\textrm {tr}}_c \,\Omega \rangle /N_c $, and adjoint, $\langle \widehat {\rm tr}_c \, \widehat \Omega \rangle /(N_c^2-1) $, representations and the total fluctuation $\delta$ of the Polyakov loop, in the NJL model when the integration over the Polyakov loop is carried out.} \label{fig:fluctuation} \end{figure} \section{Conclusions} \label{sec:concl} In the present work we have discussed how the problem of conventional chiral quark models at finite temperature may be overcome by introducing the Polyakov loop. In order to maintain gauge invariance at finite temperature some non-perturbative explicit gluonic degrees of freedom must be kept. In practice, and in particular in gauges such as the Polyakov gauge, the approach corresponds to treating the $A_0$ component of the gluon field as a color dependent chemical potential in the quark propagator. This introduces, however, a color source which generates all possible color non-singlet states, calling for a projection onto the physical color singlet states, or equivalently evaluating the path integral over the $A_0$ field in a gauge invariant fashion. As such, the average includes both the gluon action and the quark determinant. Models for the gluonic part have been discussed in the light of pure gluodynamics results on the lattice. The net result is that, contrary to standard chiral quark model calculations at finite temperature, no single quark excitations are allowed in physical observables. More generally, the leading thermal corrections at the one quark loop level start only at temperatures near the deconfinement transition. Given the fact that this strong suppression effect is triggered by a group averaging of Polyakov loops, we have named this effect Polyakov cooling of the quark excitations. Thus, and to a very good approximation, we do not expect any important finite temperature effect on quark observables below the deconfinement transition. In particular the chiral symmetry breaking transition cannot occur before the deconfinement transition. In such a situation the biggest change of observables such as the quark condensate should come from pseudoscalar loops at low temperatures and perhaps other higher meson resonances at intermediate temperatures. This is precisely what one expects from ChPT or unitarized approaches thereof, which effectively include these loops and resonances. It is rewarding to see how in practice the apparent contradiction between chiral quark models and ChPT in the intermediate temperature region may be resolved by a judicious implementation of the Polyakov loop. The extrapolation of these ideas to the phase transition is straightforward but more ingredients are needed. As an illustration we have investigated the kind of effects one might expect from such a schematic Polyakov-Chiral Quark Model when both the quantum and local nature of the Polyakov loop are taken into account. Several interesting features arise out of such an investigation. At low temperatures the Polyakov loop is suppressed exponentially in the constituent quark mass, suggesting that eventually more accurate lattice measurements might provide a method to extract the constituent quark mass in a gauge invariant fashion. According to our analysis corrections to this leading behavior are provided by pion loops.
It would be extremely helpful to find a general theoretical setup where these chiral corrections might be reliably computed. Moreover, we find that the explicit breaking of the center symmetry due to dynamical quarks at low temperature is $1/N_c $ suppressed. This is a direct consequence of averaging over gauge field configurations and confirms the current usage of the Polyakov loop as an order parameter in the unquenched case. In the light of the present findings one might conjecture that in the large $N_c$ limit the Polyakov loop becomes a true order parameter of full QCD. Another feature we find is that the contribution of the gluon dynamics below the phase transition does not seem to be crucial. This is welcome, since this is precisely the region where the least information can be deduced from lattice simulations besides the known preservation of the center symmetry. Nevertheless, it would be rather interesting for our understanding of the low temperature gluon dynamics to compute directly from the lattice the Polyakov loop probability distribution. From our results we deduce that although the qualitative features observed in more simplified treatments are confirmed by our calculations, one might expect large uncertainties in the determination of critical parameters, such as the critical temperature. Our estimate is $T_D = 250 \pm 50\,{\rm MeV}$ for $N_f=2$. Even given these large uncertainties, the very fact that a crossover between chiral symmetry restoration and center symmetry breaking takes place, with a minimal number of parameters, in the bulk of the region expected from lattice QCD simulations is very encouraging and motivates further studies along these lines. Finally, a more intriguing aspect regards what kind of model independent information could be inferred from these models, where quarks and Polyakov loops are coupled, in the regime around the phase transition. For instance, the low temperature behavior of the chiral condensate can be described using Chiral Perturbation Theory in terms of the zero temperature chiral condensate, with no explicit reference to the underlying quark and gluonic degrees of freedom, due to the dominance of pionic fluctuations. Given the fact that the Polyakov loop is a gauge invariant object which vanishes at zero temperature, it would be extremely helpful to isolate what physical states could equivalently describe such an observable and what specific zero temperature QCD operators drive its low temperature behavior. \begin{acknowledgments} We thank W. Broniowski for discussions. This work is supported in part by funds provided by the Spanish DGI and FEDER funds with grant no. FIS2005-00810, Junta de Andaluc\'{\i}a grant no. FM-225 and EURIDICE grant number HPRN-CT-2003-00311. \end{acknowledgments}
\input{Part1_Introduction01} \input{Part2_Preliminaries01} \input{Part3_PhaseLimitations01} \input{Part3a_Intervals} \input{Part4_Examples01} \section{Conclusion} We have presented a simple graphical test that can rule out the existence of suitable OZF multipliers. The test can be implemented efficiently and systematically. The graphical interpretations provide considerable insight into the frequency behaviour of the OZF multipliers. The test gives significantly improved results over those in the literature. The test can be derived either from the duality approach \cite{Jonsson:96,Jonsson96thesis,Jonsson97,Jonsson99} or from the frequency interval approach \cite{Megretski:95,Wang:18}. Guaranteeing there is no suitable OZF multiplier does not necessarily imply a Lurye system is not absolutely stable, although we have conjectured this to be the case \cite{Carrasco:EJC,Wang:18}. Khong and Su \cite{Khong20} show that the implication is true with a wider class of nonlinearity; for this case the results of this paper may be applied directly. For the discrete-time case, Seiler and Carrasco \cite{Seiler21} provide a construction, for certain phase limitations, of a nonlinearity within the class for which the discrete-time Lurye system has a periodic solution. However, the conjecture remains open for both continuous-time and discrete-time systems. More generally, results for discrete-time systems are quite different. For discrete-time systems an FIR search for multipliers is effective and outperforms others \cite{Wang:TAC}. With the interval approach it is possible to find a nontrivial threshold such that the phase of a multiplier cannot be above the threshold over a certain frequency interval \cite{Wang:18}. The duality approach leads to both a simple graphical test at simple frequencies and a condition at multiple frequencies that can be tested by linear program \cite{Zhang:20}. This paper's results are for the continuous-time single-input single-output multipliers of \cite{Zames68}. Although multivariable extensions of the OZF multipliers are considered in the literature \cite{Safonov2000, DAmato2001, Kulkarni2002,Mancera2005, Fetzer2017}, it remains open what restrictions there might be. Similarly, more general nonlinearities can be addressed with a reduced subset of the OZF multipliers \cite{Rantzer2001, Materassi11, Altshuller:13, Heath2021} and the analysis of this paper might be generalised to such cases. It also remains open whether a systematic procedure can be found with more points or intervals. \input{app_proofs} \bibliographystyle{IEEEtran} \section{Introduction} The continuous-time OZF (O'Shea-Zames-Falb) multipliers were discovered by O'Shea~\cite{OShea67} and formalised by Zames and Falb \cite{Zames68}. They preserve the positivity of monotone memoryless nonlinearities. Hence they can be used, via loop transformation, to establish the absolute stability of Lurye systems with slope-restricted memoryless nonlinearities. An overview is given in \cite{Carrasco:EJC}. Recent interest is largely driven by their compatibility with the integral quadratic constraint (IQC) framework of Megretski and Rantzer \cite{Megretski97} and the availability of computational searches \cite{Safonov:87, Gapski:94,Chang:12,Chen:95,Chen:96,Turner2009,Carrasco12,Turner:12,Carrasco:14}. A modification of the search proposed in \cite{Chen:95} is used in the Matlab IQC toolbox \cite{Kao:04} and analysed by Veenman and Scherer \cite{Veenman14}.
No single search method outperforms the others, and often a hand-tailored search outperforms an automated search \cite{Carrasco:14}. This motivates the analysis of conditions under which a multiplier cannot exist. There are two main approaches in the literature. J\"{o}nsson and Laiou \cite{Jonsson:96} give a condition that must be satisfied at a number of isolated frequencies. Their result is a particular case of a more general analysis based on duality in an optimization framework \cite{Jonsson96thesis,Jonsson97,Jonsson99}; we will refer to this as the ``duality approach.'' Their result requires a non-trivial search over a finite number of parameters. By contrast Megretski \cite{Megretski:95} gives a threshold such that the phase of a multiplier cannot be simultaneously above the threshold over a certain frequency interval and below its negative value on another. The idea is generalised in \cite{Wang:18}, where in particular the threshold for the second interval is allowed to have a different value. We will refer to this as the ``frequency interval approach.'' Both the duality approach and the frequency interval approach lead to powerful and useful results, but neither allows a systematic approach. With respect to the duality approach J\"{o}nsson states \cite{Jonsson96thesis} ``it is in most applications hard to find a suitable frequency grid for the application of the results.'' With respect to the interval approach, in \cite{Wang:18} we conclude that the most insightful choice of interval remains open. In this paper we present a simple phase condition on two frequencies whose ratio is rational. The condition can be tested systematically. At each frequency ratio the condition leads to a graphical criterion similar to the off-axis circle criterion \cite{Cho:68} in that it can be expressed as a bound on the phase of a transfer function. We derive the condition via the duality approach, but we also show that it is equivalent to a limiting case of the frequency interval approach. We illustrate the criterion on three examples: we show it gives significantly better results for the numerical example in \cite{Jonsson:96}; we show it gives new bounds for the gain with O'Shea's classical example \cite{OShea67,Carrasco:EJC}; we provide an example of a third-order transfer function with delay that does not satisfy the Kalman Conjecture. The structure of this paper is as follows. Section~\ref{Prem} provides the necessary background material and includes the following minor contribution: Theorems~\ref{Jthm_a} and~\ref{Jthm_b} provide frequency conditions similar in spirit to the duality approach of \cite{Jonsson:96}, but more widely applicable; specifically the conditions allow both the system transfer function and the multiplier to be irrational. The main results of the paper are presented in Section~\ref{Main}. Theorems~\ref{thm:2a} and~\ref{thm:2b} give a phase condition that has a simple graphical interpretation and can be implemented systematically. We prove Theorems~\ref{thm:2a} and~\ref{thm:2b} via the duality approach. We discuss both the graphical interpretation and the numerical implementation of Theorems~\ref{thm:2a} and~\ref{thm:2b}.
In Section~\ref{Int} we show that the results can also be derived via the frequency interval approach: Corollaries~\ref{m_corollary_a} and~\ref{m_corollary_b} provide a version of the interval approach \cite{Wang:18} for the limiting case where the length of the interval goes to zero; Theorems~\ref{Meg_equiv_a} and~\ref{Meg_equiv_b} state that these corollaries are respectively equivalent to Theorems~\ref{thm:2a} and~\ref{thm:2b}. Section~\ref{Exa} includes three examples: the first shows we achieve improved results over those reported in \cite{Jonsson:96}; the second is the benchmark problem of O'Shea~\cite{OShea67}, where we obtain improved results over those reported in \cite{Wang:18}; finally, in the third, we show that a third-order system with delay provides a counterexample to the Kalman Conjecture. All proofs, where not immediate, are given in the Appendix. \section{Preliminaries}\label{Prem} \subsection{Multiplier theory} We are concerned with the input-output stability of the Lurye system given by \begin{equation} y_1=Gu_1,\mbox{ } y_2=\phi u_2,\mbox{ } u_1=r_1-y_2 \mbox{ and }u_2 = y_1+r_2.\label{eq:Lurye} \end{equation} Let $\mathcal{L}_2$ be the space of finite energy Lebesgue integrable signals and let $\mathcal{L}_{2e}$ be the corresponding extended space (see for example \cite{desoer75}). The Lurye system is said to be stable if $r_1,r_2\in\mathcal{L}_2$ implies $u_1,u_2,y_1,y_2\in\mathcal{L}_2$. The Lurye system~(\ref{eq:Lurye}) is assumed to be well-posed with $G:\mathcal{L}_{2e}\rightarrow\mathcal{L}_{2e}$ linear time invariant (LTI) causal and stable, and with $\phi:\mathcal{L}_{2e}\rightarrow\mathcal{L}_{2e}$ memoryless and time-invariant. With some abuse of notation we will use $G(s)$ to denote the transfer function corresponding to $G$. The nonlinearity $\phi$ is assumed to be monotone in the sense that $(\phi u)(t_1)\geq (\phi u)(t_2)$ whenever $u(t_1)\geq u(t_2)$. It is also assumed to be bounded in the sense that there exists a $C\geq 0$ such that $|(\phi u)(t)|\leq C|u(t)|$ for all $u(t)\in\mathbb{R}$. We say $\phi$ is slope-restricted on $[0,k]$ if $0\leq ((\phi u)(t_1) -(\phi u)(t_2))/(u(t_1)-u(t_2))\leq k$ for all $u(t_1)\neq u(t_2)$. We say $\phi$ is odd if $(\phi u)(t_1)=-(\phi u)(t_2)$ whenever $u(t_1)=-u(t_2)$. \begin{definition}\label{def1} Let $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ be LTI. We say $M$ is a suitable multiplier for $G$ if there exists $\varepsilon>0$ such that \begin{align} \mbox{Re}\left \{ M(j\omega) G(j\omega) \right \} > \varepsilon\mbox{ for all } \omega \in \mathbb{R}. \end{align} \end{definition} \begin{remark}\label{rem_phase} Suppose $M$ is a suitable multiplier for $G$ and $\angle G(j\omega) \leq -\pi/2 -\theta$ for some $\omega$ and $\theta$. Then $\angle M(j\omega) > \theta$. Similarly if $\angle G(j\omega) \geq \pi/2 +\theta$ then $\angle M(j\omega) < -\theta$. \end{remark} \begin{subtheorem}{definition} \begin{definition}\label{def2a} Let $\mathcal{M}$ be the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by \begin{equation}\label{def_m} m(t) = m_0 \delta(t)-h(t)-\sum_{i=1}^{\infty}h_i \delta(t-t_i), \end{equation} with \begin{equation} \begin{split} h(t) & \geq 0 \mbox{ for all } t\mbox{, }h_i\geq 0 \mbox{ for all } i\\ & \mbox{and } \| h\|_1+\sum_{i=1}^{\infty} h_i \leq m_0.
\end{split} \end{equation} \end{definition} \begin{definition}\label{def2b} Let $\mathcal{M}_{\mbox{odd}}$ be the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by (\ref{def_m}) with \begin{equation} \| h\|_1+\sum_{i=1}^{\infty} |h_i| \leq m_0. \end{equation} \end{definition} \end{subtheorem} \begin{remark} $\mathcal{M}\subset\mathcal{M}_{\mbox{odd}}$. \end{remark} The Lurye system (\ref{eq:Lurye}) is said to be absolutely stable for a particular $G$ if it is stable for all $\phi$ in some class $\Phi$. In particular, if there is a suitable $M\in\mathcal{M}$ for $G$ then it is absolutely stable for the class of memoryless time-invariant monotone bounded nonlinearities; if there is a suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$ then it is absolutely stable for the class of memoryless time-invariant odd monotone bounded nonlinearities. Furthermore, if there is a suitable $M\in\mathcal{M}$ for $1/k+G$ then it is absolutely stable for the class of memoryless time-invariant slope-restricted nonlinearities in $[0,k]$; if there is a suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $1/k+G$ then it is absolutely stable for the class of memoryless time-invariant odd slope-restricted nonlinearities \cite{Zames68,Carrasco:EJC}. \subsection{Other notation} Let $x = [y]_{[z,w]}$ denote $y$ modulo the interval $[z,w]$: i.e. the unique number $x\in[z,w)$ such that there is an integer $n$ with $y = x + n(w-z)$. In our statement of results (i.e. Sections~\ref{Main},~\ref{Int} and~\ref{Exa}) phase is expressed in degrees. In the technical proofs (i.e. the Appendix) phase is expressed in radians. \subsection{Duality approach} The following result is similar in spirit to that in \cite{Jonsson:96}, where a proof is sketched for the odd case. Both results can be derived from the duality theory of J\"{o}nsson \cite{Jonsson96thesis,Jonsson97,Jonsson99}; see \cite{Zhang:21} for the corresponding derivation in the discrete-time case. Nevertheless, several details are different. In particular, in \cite{Jonsson:96} only rational plants $G$ and rational multipliers $M$ are considered; this excludes both plants with delay and so-called ``delay multipliers.'' Expressing the results in terms of single parameter delay multipliers also gives insight. We exclude frequencies $\omega=0$ and $\omega\rightarrow\infty$; it is immediate that we must have $\mbox{Re}\left \{M(0)G(0)\right \}\geq 0$; by contrast $M(\infty)$ need not be well-defined in our case. \begin{definition} Define the single parameter delay multipliers $M^-_\tau$ and $M^+_\tau$ as $M^-_\tau(s) = 1 -e^{-\tau s}$ and $M^+_\tau(s) = 1 +e^{-\tau s}$ with $\tau\in\mathbb{R}\backslash 0$. Let $\mathcal{M}^- \subset \mathcal{M}$ be the set $\mathcal{M}^- = \{M^-_{\tau} \,:\, \tau \in \mathbb{R} \backslash 0\}$. Let $\mathcal{M}^+ \subset \mathcal{M}_{\mbox{odd}}$ be the set $\mathcal{M}^+ = \{M^+_\tau\,:\, \tau \in \mathbb{R}\backslash 0\}$. \end{definition} \begin{subtheorem}{theorem} \begin{theorem}\label{Jthm_a} Let $G$ be causal, LTI and stable. Assume there exist $0<\omega_1<\cdots<\omega_N<\infty$, and non-negative $\lambda_1, \lambda_2, \ldots, \lambda_N$, where $\sum_{r=1}^N\lambda_r>0$, such that \begin{equation}\label{thm1_ineq} \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ M^-_{\tau}(j\omega_r) G(j\omega_r) \right \} \leq 0 \mbox{ for all }M^-_{\tau}\in\mathcal{M}^-. \end{equation} Then there is no suitable $M\in\mathcal{M}$ for $G$. \end{theorem} \begin{theorem}\label{Jthm_b} Let $G$ be causal, LTI and stable.
Assume, in addition to the conditions of Theorem~\ref{Jthm_a}, that \begin{equation}\label{thm1b_ineq} \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ M^+_{\tau}(j\omega_r) G(j\omega_r) \right \} \leq 0 \mbox{ for all }M^+_{\tau}\in\mathcal{M}^+. \end{equation} Then there is no suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$. \end{theorem} \end{subtheorem} \begin{remark} The observation is made in \cite{Chang:12} that by the Stone-Weierstrass theorem it is sufficient to characterise $\mathcal{M}$ in terms of delay multipliers: i.e. as the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by \begin{equation} m(t) = m_0 \delta(t)-\sum_{i=1}^{\infty}h_i \delta(t-t_i), \end{equation} with \begin{equation} h_i\geq 0 \mbox{ for all } i\mbox{ and } \sum_{i=1}^{\infty} h_i \leq m_0. \end{equation} Similarly $\mathcal{M}_{\mbox{odd}}$ can be characterised as the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by \begin{equation} m(t) = m_0\delta(t) -\sum_{i=1}^{\infty}h_i \delta(t-t_i), \end{equation} with \begin{equation} \sum_{i=1}^{\infty} |h_i| \leq m_0. \end{equation} Such delay multipliers are excluded entirely from \cite{Jonsson:96}, but in this sense both Theorems~\ref{Jthm_a} and~\ref{Jthm_b} follow almost immediately. \end{remark} \subsection{Frequency interval approach} In \cite{Wang:18} we presented the following phase limitation for the frequency intervals $[\alpha,\beta]$ and $[\gamma,\delta]$. \begin{subtheorem}{theorem} \begin{theorem}[\cite{Wang:18}]\label{Meg_a} Let $0<\alpha<\beta<\gamma<\delta$ and define \begin{equation} \rho^c = \sup_{t>0}\frac{|\psi(t)|}{\phi(t)}, \end{equation} with \begin{equation} \begin{split} \psi(t) & = \frac{\lambda \cos (\alpha t)}{t}-\frac{\lambda \cos (\beta t)}{t}- \frac{\mu \cos (\gamma t)}{t}+\frac{\mu \cos (\delta t)}{t},\\ \phi(t) & = \lambda(\beta-\alpha)+\kappa\mu(\delta-\gamma)+\phi_1(t),\\ \phi_1(t) & = \frac{\lambda \sin (\alpha t)}{t}-\frac{\lambda \sin (\beta t)}{t}+ \frac{\kappa\mu \sin (\gamma t)}{t}-\frac{\kappa\mu \sin (\delta t)}{t}, \end{split} \end{equation} and with $\lambda>0$ and $\mu>0$ satisfying \begin{equation} \frac{\lambda}{\mu} = \frac{\delta^2-\gamma^2}{\beta^2-\alpha^2}, \end{equation} and $\kappa>0$. Let $M$ be an OZF multiplier and suppose \begin{equation}\label{M_up} \mbox{Im}(M(j\omega))>\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\alpha,\beta], \end{equation} and \begin{equation}\label{M_dn} \mbox{Im}(M(j\omega))<-\kappa\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\gamma,\delta], \end{equation} for some $\rho>0$. Then $\rho<\rho^c$ if $M\in\mathcal{M}$. The result also holds if we replace (\ref{M_up}) and (\ref{M_dn}) with \begin{equation} \mbox{Im}(M(j\omega))<-\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\alpha,\beta], \end{equation} and \begin{equation} \mbox{Im}(M(j\omega))>\kappa\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\gamma,\delta]. \end{equation} \end{theorem} \begin{theorem}[\cite{Wang:18}]\label{Meg_b} In addition to the conditions of Theorem~\ref{Meg_a}, define \begin{equation} \rho^c_{\mbox{odd}} = \sup_{t>0}\frac{|\psi(t)|}{\tilde{\phi}(t)}, \end{equation} with \begin{equation} \tilde{\phi}(t) = \lambda(\beta-\alpha)+\kappa\mu(\delta-\gamma)-|\phi_1(t)|. \end{equation} Then $\rho<\rho^c_{\mbox{odd}}$ if $M\in\mathcal{M}_{\mbox{odd}}$.
\end{theorem} \end{subtheorem} \section{Main results: duality approach}\label{Main} Applying Theorem~\ref{Jthm_a} or~\ref{Jthm_b} with $N=1$ yields no significant result beyond the trivial statement that if $\mbox{Re}[G(j\omega)]<0$ and $\mbox{Im}[G(j\omega)]=0$ at any $\omega$ then there can be no suitable multiplier. This is in contrast with the discrete-time case where there are non-trivial phase limitations at single frequencies \cite{Zhang:21}. Even with $N=2$, it is not straightforward to apply Theorems~\ref{Jthm_a} or~\ref{Jthm_b} directly, as they require an optimization at each pair of frequencies. Nevertheless, setting $N=2$ yields the following phase limitations: \begin{subtheorem}{theorem} \begin{theorem}\label{thm:2a} Let $a, b \in \mathbb{Z}^+$ and let $G$ be causal, LTI and stable. If there exists $\omega_0\in\mathbb{R}$ such that \begin{align}\label{G_ineq} \left | \frac{ b\angle G(aj\omega_0 ) - a \angle G(bj\omega_0) } {a+b-p} \right |> 180^o, \end{align} with $p=1$ then there is no suitable $M\in\mathcal{M}$ for $G$. \end{theorem} \begin{theorem}\label{thm:2b} Let $a, b \in \mathbb{Z}^+$ and let $G$ be causal, LTI and stable. If there exists $\omega_0\in\mathbb{R}$ such that (\ref{G_ineq}) holds, where $p=1$ when both $a$ and $b$ are odd but $p=1/2$ if either $a$ or $b$ is even, then there is no suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$. \end{theorem} \end{subtheorem} Figs~\ref{test02_fig1} and~\ref{test02_fig2} illustrate Theorems~\ref{thm:2a} and~\ref{thm:2b} respectively for the specific case that $\angle G(j\omega_a) > 170^o$ for some frequency $\omega_a$. The results put limitations on the phase of $G$ at frequencies that are rational multiples of $\omega_a$ (i.e. at $b\omega_0$ where $\omega_a=a\omega_0$ and where $a$ and $b$ are coprime integers). \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test02_fig1} \caption{Forbidden regions for the phase of $G(j\omega)$ when the phase at some $\omega_a$ is greater than $170^o$. }\label{test02_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test02_fig2} \caption{Forbidden regions for the phase of $G(j\omega)$ when the phase at some $\omega_a$ is greater than $170^o$ (odd nonlinearity). }\label{test02_fig2} \end{center} \end{figure} The results may also be expressed as phase limitations on the multipliers themselves. Counterparts to Theorems~\ref{thm:2a} and~\ref{thm:2b} follow as corollaries and are equivalent results. \begin{subtheorem}{corollary} \begin{corollary}\label{cor:2a} Let $a, b \in \mathbb{Z}^+$ and let $M\in\mathcal{M}$. Then \begin{equation}\label{cor_ineq} \left |\frac{b\angle M(aj\omega )-a\angle M(bj\omega )}{a/2+b/2-p}\right | \leq 180^o, \end{equation} for all $\omega\in\mathbb{R}$ with $p=1$. \end{corollary} \begin{corollary}\label{cor:2b} Let $a, b \in \mathbb{Z}^+$ and let $M\in\mathcal{M}_{\mbox{odd}}$. Then inequality (\ref{cor_ineq}) holds for all $\omega\in\mathbb{R}$ where $p=1$ when both $a$ and $b$ are odd but $p=1/2$ if either $a$ or $b$ is even. \end{corollary} \begin{proof} Immediate: see Remark~\ref{rem_phase}. \end{proof} \end{subtheorem} Figs~\ref{test04_fig1} and~\ref{test04_fig2} are the counterparts to Figs~\ref{test02_fig1} and~\ref{test02_fig2} (if the phase of $G$ is greater than $170^o$ at some $\omega_a$ then any suitable multiplier $M$ must have phase less than $-80^o$ at $\omega_a$).
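The condition of Theorem~\ref{thm:2a} is straightforward to mechanise. The following minimal sketch in Python/NumPy (our own illustration, not part of the formal development; the plant and the frequency grid are assumptions chosen for demonstration) evaluates the left-hand side of (\ref{G_ineq}) on a grid of frequencies for given $a$ and $b$:
\begin{verbatim}
import numpy as np

def phase_deg(H):
    # principal-value phase in degrees, in (-180, 180]
    return np.angle(H, deg=True)

def violates_thm2a(G, a, b, w, p=1.0):
    # left-hand side of the phase condition at each grid frequency
    lhs = (b * phase_deg(G(1j * a * w))
           - a * phase_deg(G(1j * b * w))) / (a + b - p)
    return np.any(np.abs(lhs) > 180.0)

# illustrative plant (an assumption, not taken from the text)
G = lambda s: s**2 / (s**2 + 0.5 * s + 1)**2
w = np.logspace(-2, 2, 20000)
print(violates_thm2a(G, a=4, b=1, w=w))
\end{verbatim}
Since each phase is computed as a principal value, no phase unwrapping is required; this point is discussed further below.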
Corollaries~\ref{cor:2a} and~\ref{cor:2b} can also be visualised for specific values of $a$ and $b$ with plots of the phase of $M(bj\omega_0 )$ against the phase of $M(aj\omega_0 )$ as $\omega_0$ varies: see Figs~\ref{Fig03_1} to~\ref{Fig03_3}. Fig~\ref{Fig03_1} also shows boundary points parameterised by $\kappa$, which is associated with the frequency interval approach and discussed in Section~\ref{Int}. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test04_fig1a} \caption{Forbidden regions for the phase of $M\in\mathcal{M}$ when the phase at some $\omega_a$ is less than $-80^o$. }\label{test04_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test04_fig2a} \caption{Forbidden regions for the phase of $M\in\mathcal{M}_{\mbox{odd}}$ when the phase at some $\omega_a$ is less than $-80^o$. }\label{test04_fig2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test03_fig1c} \caption{Phase vs phase plot illustrating Corollary~\ref{cor:2a} with $a=2$, $b=3$. If $M\in\mathcal{M}$ then the pink regions are forbidden. The phase vs phase plots of elements of $\mathcal{M}^-$ are shown in magenta. Also shown are the points $(\arctan \overline{\rho}^c,-\arctan \kappa \overline{\rho}^c)$ when $a=2$ and $b=3$, when $\kappa$ takes the values $0.2$, $1$ and $5$ and when $\overline{\rho}^c$ is defined as in Corollary~\ref{m_corollary_a}. }\label{Fig03_1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test03_fig2a} \caption{Phase vs phase plot illustrating both Corollaries~\ref{cor:2a} and~\ref{cor:2b} with $a=1$, $b=3$. If $M\in\mathcal{M}$ or $M\in\mathcal{M}_{\mbox{odd}}$ then the pink regions are forbidden. The phase vs phase plots of elements of $\mathcal{M}^-$ and $\mathcal{M}^+$ coincide and are shown in magenta.}\label{Fig03_2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test03_fig3a} \caption{Phase vs phase plot illustrating Corollary~\ref{cor:2b} with $a=2$, $b=3$. If $M\in\mathcal{M}_{\mbox{odd}}$ then the pink regions are forbidden. The phase vs phase plots of elements of $\mathcal{M}^-$ are shown in magenta (compare Fig~\ref{Fig03_1}) while the phase vs phase plots of elements of $\mathcal{M}^+$ are shown in cyan.}\label{Fig03_3} \end{center} \end{figure} The bounds are tight in the sense that if $a$ and $b$ are coprime then there exist (many) $M_{\tau}^-\in\mathcal{M}^-$ such that $b\angle M_{\tau}^-(a j\omega_0 )-a\angle M_{\tau}^-(b j\omega_0) = (a/2+b/2-1)180^o$. Specifically this holds for any $\tau$ that satisfies $[a\omega_0\tau]_{[0,2\pi]}>2\pi-2\pi/b$ and $[b\omega_0\tau]_{[0,2\pi]} < 2\pi/a$. Similarly if $a$ and $b$ are coprime and either $a$ or $b$ is even there exist (many) $M_{\tau}^+\in\mathcal{M}^+$ such that $b\angle M_{\tau}^+(a j\omega_0)-a\angle M_{\tau}^+(b j\omega_0 ) = (a/2+b/2-1/2)180^o$. Specifically this holds for any $\tau$ that satisfies $\pi-\pi/b <[a\omega_0\tau]_{[0,2\pi]}<\pi$ and $\pi<[b\omega_0\tau]_{[0,2\pi]}<\pi+\pi/a$. In the examples below the phases of the objects $G(a j\omega )$ and $G(b j\omega )$ are computed separately. They each have phase on the interval $(-180^o, 180^o)$ and so may be easily computed without the possibility of phase wrapping ambiguity at local points or over local regions. Provided the transfer functions are sufficiently smooth they can be computed accurately.
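The tightness claim can be verified numerically. A minimal sketch (our own illustration; the value $\omega_0=1$ and the scan over $\tau$ are assumptions made for the check) locates a $\tau$ in the stated region and confirms that the modulus of the bound of Corollary~\ref{cor:2a} is attained:
\begin{verbatim}
import numpy as np

a, b, w0 = 2, 3, 1.0
M = lambda s, tau: 1.0 - np.exp(-tau * s)    # M_tau^- of the text

target = (a/2 + b/2 - 1) * 180.0             # 270 degrees for a=2, b=3
for tau in np.linspace(1e-3, 2*np.pi, 100000):
    ra = (a * w0 * tau) % (2*np.pi)
    rb = (b * w0 * tau) % (2*np.pi)
    if ra > 2*np.pi - 2*np.pi/b and rb < 2*np.pi/a:
        val = (b * np.angle(M(1j*a*w0, tau), deg=True)
               - a * np.angle(M(1j*b*w0, tau), deg=True))
        print(abs(val), target)              # the two agree
        break
\end{verbatim}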
Nevertheless, it is possible to write (\ref{G_ineq}) in terms of a single transfer function since \begin{equation} b\angle G(a j\omega )-a\angle G(b j\omega ) = \angle \bar{G}_{a,b}(j\omega) \end{equation} where \begin{equation} \bar{G}_{a,b}(s) = \frac{G( a s)^b}{G( b s)^a}. \end{equation} It thus requires, for given values of $a$ and $b$, the computation of the maximum (or minimum) phase of a single transfer function. In this sense the computational requirement is comparable to that of the off-axis circle criterion \cite{Cho:68}, a classical tool. It may also be necessary to compute the criterion for several positive integer values of $a$ and $b$. The number of different values is finite and can be bounded. Suppose the maximum phase of $G$ is $180^o-\phi_{\min}$ and the minimum phase is $-180^o+\theta_{\max}$, where $\phi_{\min}>0, \theta_{\max}>0$. Then $a \theta_{\max} +b\phi_{\min} < p\times 180^o$. So it is sufficient to choose (say) all $a<p/\theta_{\max} \times 180^o$ and $b<p/\phi_{\min} \times 180^o$ which yields a finite set of values. \section{Relation to the frequency interval approach}\label{Int} Corollaries \ref{cor:2a} and \ref{cor:2b} may be interpreted as saying that given an upper (or lower) threshold on the phase of a suitable multiplier $M$ at frequency $a\omega_0$ there is a lower (or upper) threshold on the phase of $M$ at frequency $b\omega_0$. It is natural to compare this with the frequency interval approach, where an upper (or lower) threshold on the phase of $M$ over an interval $[\alpha,\beta]$ implies a lower (or upper) threshold on the phase of $M$ over the interval $[\gamma,\delta]$. Let us begin by considering Theorems~\ref{Meg_a} and~\ref{Meg_b} in the limit as the length of the intervals becomes zero. We obtain the following corollaries. The results require the ratio of the limiting frequencies to be rational. \begin{subtheorem}{corollary} \begin{corollary}\label{m_corollary_a} { \everymath={\displaystyle} For $t>0$, define \begin{equation} q_-(t) = \left \{ \begin{array}{l} \frac{b\sin (a t)-a\sin (b t)}{ b+\kappa a- b \cos (a t)-\kappa a\cos(b t)} \mbox{ for }[t]_{[0,\pi]}\neq 0,\\ \\ 0 \mbox{ for } [t]_{[0,\pi]}=0, \end{array} \right .\\ \end{equation} where $a$ and $b$ are coprime and $\kappa>0$. }Define also \begin{equation} \overline{\rho}^c = \sup_{t>0} |q_-(t)|. \end{equation} Let $M$ be an OZF multiplier and suppose \begin{equation} \mbox{Im}(M(aj\omega_0))>\rho\mbox{Re}(M(aj\omega_0)), \end{equation} and \begin{equation} \mbox{Im}(M(bj\omega_0))<-\kappa\rho\mbox{Re}(M(bj\omega_0)), \end{equation} for some $\omega_0>0$ and $\rho>0$. Then $\rho<\overline{\rho}^c$ if $M\in\mathcal{M}$. \end{corollary} \begin{corollary}\label{m_corollary_b} { \everymath={\displaystyle} In addition to the conditions of Corollary~\ref{m_corollary_a}, define \begin{equation} q_+(t) = \left \{ \begin{array}{l} \frac{b\sin (a t)-a\sin (b t)}{ b+\kappa a+b \cos (a t)+\kappa a\cos(b t)} \mbox{ for }[t]_{[0,\pi]}\neq 0,\\ \\ 0 \mbox{ for } [t]_{[0,\pi]}=0, \end{array} \right . \end{equation} and \begin{equation} \overline{\rho}^c_{\mbox{odd}} = \max\left (\sup_{t>0}|q_-(t) |,\sup_{t>0}|q_+(t)|\right ) . \end{equation} Then $\rho<\overline{\rho}^c_{\mbox{odd}}$ if $M\in\mathcal{M}_{\mbox{odd}}$. } \end{corollary} \end{subtheorem} \begin{remark} Equivalently, we can say if $\angle M(aj\omega_0)>\arctan \rho$ and $\angle M(b j \omega_0)<-\arctan \kappa \rho$ then $\rho <\overline{\rho}^c$ if $M\in\mathcal{M}$ and $\rho <\overline{\rho}^c_{\mbox{odd}}$ if $M\in\mathcal{M}_{\mbox{odd}}$.
\end{remark} It turns out that this is equivalent to the phase condition derived via the duality approach. The inequality boundaries $\angle M(aj\omega_0)=\arctan \overline{\rho}^c$ and $\angle M(b j \omega_0)=-\arctan \kappa \overline{\rho}^c$ (or $\angle M(aj\omega_0)=\arctan \overline{\rho}^c_{\mbox{odd}}$ and $\angle M(b j\omega_0)=-\arctan \kappa \overline{\rho}^c_{\mbox{odd}}$) are the same as those for Corollary~\ref{cor:2a} (or~\ref{cor:2b}), as illustrated in Fig~\ref{Fig03_1}. Specifically we may say: \begin{subtheorem}{theorem} \begin{theorem}\label{Meg_equiv_a} Corollary~\ref{m_corollary_a} and Theorem~\ref{thm:2a} are equivalent results. \end{theorem} \begin{theorem}\label{Meg_equiv_b} Corollary~\ref{m_corollary_b} and Theorem~\ref{thm:2b} are equivalent results. \end{theorem} \end{subtheorem} \section{Examples}\label{Exa} We demonstrate the new condition with three separate examples. In Examples~1 and~2 below we test the criterion for a finite number of coprime integers $a$ and $b$, and for all $\omega>0$; we also search over the slope restriction~$k$. We run a bisection algorithm for~$k$ and, for each candidate value of $k$, $a$ and~$b$, check whether the condition is satisfied for any $\omega>0$. Provided the phase of $1/k+G$ is sufficiently smooth, this can be implemented efficiently and systematically, for example by gridding $\omega$ sufficiently finely. There are several possible ways to reorder the computation. \subsection{Example 1} J\"{o}nsson and Laiou \cite{Jonsson:96} consider the plant \begin{equation}\label{JL_G} G(s) = \frac{s^2}{(s^2+\alpha)(s^2+\beta)+10^{-4}(14s^3+21s)}, \end{equation} with $\alpha=0.9997$ and $\beta=9.0039$ and with positive feedback. They show that the rational multiplier \begin{equation}\label{JL_M} M(s) = 1 - \left (\frac{2.5}{s+2.5} \right )^2 \end{equation} is suitable for $1/k-G(s)$ when $k=0.0048$. Figure \ref{test05a_fig1} shows the phase of $M(j\omega)(1/k-G(j\omega))$ when $k=0.0048$. It can be seen to lie on the interval $[-90^o,90^o]$. They also show no rational multiplier in $\mathcal{M}_{\mbox{odd}}$ exists when $k=0.0061$ by applying their criterion with $N=2$ and the choice $\omega_1=1$ and $\omega_2=3$. Fig \ref{test05a_fig2} shows $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3$ when $k=0.0061$. It can be seen that the value drops below $-180^o$ near $\omega=1$. Thus Theorem~\ref{thm:2a} confirms there is no suitable multiplier in either $\mathcal{M}$ or $\mathcal{M}_{\mbox{odd}}$. J\"{o}nsson and Laiou \cite{Jonsson:96} state ``the choice of frequencies [...] is a delicate task.'' But a simple line search shows that there is an $\omega$ such that $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3<-180^o$ when $k = 0.0058926$ (see Fig \ref{test05a_fig3}) but $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3>-180^o$ for all $\omega$ when $k = 0.0058925$. By Theorem~\ref{thm:2a} there is no multiplier when $k=0.0058926$. By contrast, for this case the choice \begin{equation}\label{WPH_M} M(s)=1-0.99999e^{-0.93287s} \end{equation} is a suitable multiplier when $k=0.0058924$ (Fig \ref{test05a_fig4}). The various computed slopes $k$ are set out in Table~\ref{table_ex1}. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05a_fig1} \caption{Example 1. Phase of $M(j\omega)(1/k-G(j\omega))$ when $k=0.0048$ when $G$ is given by (\ref{JL_G}) and $M$ by (\ref{JL_M}).
The phase lies on the interval $[-90^o,90^o]$ so this choice of $M$ is a suitable multiplier for $1/k-G$.}\label{test05a_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05b_fig6} \caption{Example 1. The phase difference $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3$ when $G$ is given by (\ref{JL_G}) with $k=0.0061$. The value drops below $-180^o$ so by Theorem~\ref{thm:2a} there is no suitable multiplier.}\label{test05a_fig2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05b_fig7} \caption{Example 1. The phase difference $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3$ when $G$ is given by (\ref{JL_G}) with $k=0.0058926$. The value drops below $-180^o$ so by Theorem~\ref{thm:2a} there is no suitable multiplier.}\label{test05a_fig3} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05a_fig4} \caption{Example 1. Phase of $M(j\omega)(1/k-G(j\omega))$ when $k=0.0058924$ when $G$ is given by (\ref{JL_G}) and $M$ by (\ref{WPH_M}). The phase lies on the interval $[-90^o,90^o]$ so this choice of $M$ is a suitable multiplier for $1/k-G$.}\label{test05a_fig4} \end{center} \end{figure} \begin{table} \begin{center} \begin{tabular}{ | l | c | c |} \hline & \cite{Jonsson:96} & This paper\\ \hline Slope $k$ for which a multiplier & & \\ is found & 0.0048 & 0.0058924\\ \hline Slope $k$ for which there is & & \\ guaranteed to be no multiplier & 0.0061 & 0.0058926\\ \hline \end{tabular} \caption{Various slopes for Example 1}\label{table_ex1} \end{center} \end{table} \subsection{Example 2} Consider the plant \[G(s) = \frac{s^2}{(s^2+2\xi s + 1)^2}\mbox{ with }\xi>0.\] O'Shea \cite{OShea67} shows that there is a suitable multiplier in $\mathcal{M}$ for $1/k+G$ when $\xi>1/2$ and $k>0$. By contrast in \cite{Wang:18} we showed that there is no suitable multiplier in $\mathcal{M}$ when $\xi=0.25$ and $k$ is sufficiently large. Specifically the phase of $G(j\omega)$ is above $177.98^o$ on the interval $\omega\in [0.02249,0.03511]$ and below $-177.98^o$ on the interval $\omega\in [1/0.03511,1/0.02249]$. A line search yields that the same condition is true for the phase of $1/k+G(j\omega)$ with $k\geq 269,336.3$ (see Fig~\ref{test19e_fig1}). Hence there is no suitable multiplier $M\in\mathcal{M}$ for $1/k+G$ with $k\geq 269,336.3$. By contrast, Theorem~\ref{thm:2a} with $a=4$ and $b=1$ yields that there is no suitable multiplier $M\in\mathcal{M}$ for $1/k+G$ with $k\geq 32.61$. Specifically the phase $(4\angle (1/k+G(j\omega))-\angle(1/k+G(4j\omega)))/4$ exceeds $180^o$ when $k\geq 32.61$ (see Figs~\ref{test19e_fig3} and~\ref{test19f}). Similarly, Theorem~\ref{thm:2b} with $a=3$ and $b=1$ yields that there is no suitable multiplier $M\in\mathcal{M}_{\mbox{odd}}$ for $1/k+G$ with $k\geq 39.93$. Specifically the phase $(3\angle (1/k+G(j\omega))-\angle(1/k+G(3j\omega)))/3$ exceeds $180^o$ when $k\geq 39.93$. These results show a non-trivial improvement over those in \cite{Wang:18}. While it should be possible to achieve identical results using either the condition of \cite{Jonsson:96} or that of \cite{Wang:18} (see Appendix), the conditions of Theorems~\ref{thm:2a} and~\ref{thm:2b} can be applied in a systematic manner. Fig~\ref{test19d_fig1} shows the bounds for several other values of $\xi$ while Fig~\ref{test19d_fig2} shows the value of $a$ yielding the lowest bound for each test (the value of $b$ is $1$ for each case).
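As an indication of how the search over $k$ can be mechanised for this example, the following sketch (our own illustration; the frequency grid, the bisection bracket and the assumption that the violation is monotone in $k$ are all ours) bisects over $k$ using the $a=4$, $b=1$ condition of Theorem~\ref{thm:2a}:
\begin{verbatim}
import numpy as np

xi = 0.25
G = lambda s: s**2 / (s**2 + 2*xi*s + 1)**2
w = np.logspace(-2, 2, 200000)

def violated(k, a=4, b=1, p=1.0):
    H = lambda s: 1.0/k + G(s)
    lhs = (b * np.angle(H(1j*a*w), deg=True)
           - a * np.angle(H(1j*b*w), deg=True)) / (a + b - p)
    return np.any(np.abs(lhs) > 180.0)

klo, khi = 1.0, 1e4   # assumed bracket: no violation at klo, violation at khi
for _ in range(50):   # bisection for the threshold slope
    k = 0.5 * (klo + khi)
    klo, khi = (k, khi) if not violated(k) else (klo, k)
print("no multiplier in M for k >= about", khi)   # the text reports 32.61
\end{verbatim}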
\begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19e_fig1} \caption{Example 2. O'Shea's example with $\xi=0.25$. Application of the condition in \cite{Wang:18} yields there to be no suitable multiplier $M\in\mathcal{M}$ when $k\geq 270,000$.}\label{test19e_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19g_fig5} \caption{Example 2. O'Shea's example with $\xi=0.25$. Application of Theorem~\ref{thm:2a} with $a=4$ and $b=1$ yields there to be no suitable multiplier $M\in\mathcal{M}$ when $k\geq 32.61$.}\label{test19e_fig3} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19f} \caption{Example 2. O'Shea's example with $\xi=0.25$. The phase of $1/k+G(j\omega)$ with $k=32.61$ is shown. The phase of $1/k+G(j\omega_a)$ is $149.42^o$ at $\omega_a = 0.3938$ and the corresponding forbidden regions are shown (compare Fig~\ref{test02_fig1}). The phase touches the bound at $4\omega_a$.}\label{test19f} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19d_fig1} \caption{Example 2. Bounds on the slope above which Theorem~\ref{thm:2a} or~\ref{thm:2b} guarantee there can be no suitable multiplier as damping ratio~$\xi$ varies.}\label{test19d_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19d_fig2} \caption{Example 2. Values of $a$ used to find the slope bounds shown in Fig~\ref{test19d_fig1}. The value of $b$ is $1$ for all shown results.}\label{test19d_fig2} \end{center} \end{figure} \subsection{Example 3} In \cite{Wang:18} we argue that phase limitations are closely linked to the Kalman Conjecture. This plays an important role in the theory of absolute stability for Lurye systems. Barabanov \cite{Barabanov88} shows it to be true for third-order systems via a subclass of the OZF multipliers but fourth-order counterexamples are known \cite{Fitts66,Leonov13}. It is trivial that negative imaginary systems satisfy the Kalman Conjecture \cite{Carrasco17}. In \cite{Zhang18} we indicate via the tailored construction of OZF multipliers that second-order systems with delay satisfy the Kalman Conjecture. Until now it has remained an open question whether third-order systems with delay satisfy the Kalman Conjecture. Consider the third-order system with delay that has transfer function \begin{equation} G(s) = e^{-s} \frac{s^2+0.8s+1.5}{s^3+1.2s^2+1.12s+0.32}. \end{equation} The Nyquist gain is $k_N=2.0931$. That is to say for all $0\leq k < k_N$ the sensitivity function $\left [ \begin{array}{cc} 1 & G\\ -k & 1 \end{array} \right ]^{-1}$ is stable. Fig.~\ref{test22_fig_over} shows $(2\angle \left (1/2+G(j\omega)\right ) - \angle \left (1/2 + G(2j\omega)\right ))/2$ against frequency. The value drops significantly below $-180^o$, and hence by Theorem~\ref{thm:2a} there is no suitable $M\in\mathcal{M}$ for $1/2+G$. The phases of $1/2+G(j\omega)$ and of $1/2+G(2j\omega)$ are superimposed. Fig.~\ref{test22_fig3} shows a time response of a Lurye system with gain $2$, a step input at time $t=0$ and simple saturation. The response appears to be periodic. The stable linear response (i.e. without saturation) is superimposed. These results indicate that this is a (first) example of a third-order plant with delay which does not satisfy the Kalman Conjecture.
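The phase computation behind Fig.~\ref{test22_fig_over} is elementary to reproduce. A minimal sketch (our own illustration, with an assumed frequency grid) follows; the delay enters simply as the factor $e^{-s}$ evaluated at $s=j\omega$:
\begin{verbatim}
import numpy as np

def G(s):
    # third-order plant with unit delay from Example 3
    return np.exp(-s) * (s**2 + 0.8*s + 1.5) \
           / (s**3 + 1.2*s**2 + 1.12*s + 0.32)

H = lambda s: 0.5 + G(s)
w = np.logspace(-2, 2, 200000)
lhs = (2*np.angle(H(1j*w), deg=True) - np.angle(H(2j*w), deg=True)) / 2
print(lhs.min())  # drops below -180: no suitable multiplier for 1/2 + G
\end{verbatim}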
\begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test22_fig_over_c} \caption{Example 3. The value of $(2\angle \left (1/2+G(j\omega)\right )-\angle \left (1/2+G(2j\omega)\right ))/2$ drops significantly below $-180^o$ so by Theorem~\ref{thm:2a} there is no suitable multiplier. The phase of $1/2+G(j\omega)$ (blue dotted) and the phase of $1/2+G(2j\omega)$ (red dotted) are also shown.}\label{test22_fig_over} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test22_fig3} \caption{Example 3. Time response of the Lurye system, with and without saturation.}\label{test22_fig3} \end{center} \end{figure} \section{Proof of Theorems~\ref{thm:2a} and~\ref{thm:2b} via duality approach} In the following we apply Theorems~\ref{Jthm_a} and~\ref{Jthm_b} with $N=2$. Furthermore, we assume $\omega_2/\omega_1$ is rational, i.e. that there is some $\omega_0>0$ and integers $a$ and $b$ such that either $\omega_1=a \omega_0$ and $\omega_2=b\omega_0$ or $\omega_1=b \omega_0$ and $\omega_2=a\omega_0$. We begin with two technical lemmas. \begin{subtheorem}{lemma} \begin{lemma}\label{lem1a} Let $a$ and $b$ be coprime positive integers and \begin{equation}\label{def_f1} \begin{split} f_1(\omega) & = -b \sin\theta (\cos\phi-\cos(\phi-a\omega))\\ f_2(\omega) & = -a\sin\phi(\cos\theta-\cos(\theta+b\omega)) \end{split} \end{equation} with $\omega\in\mathbb{R}$ and $\theta,\phi\geq 0$. Then \begin{equation}f_1(\omega)+f_2(\omega) \leq 0 \mbox{ for all }\omega,\end{equation} provided \begin{equation}\label{new_p_pi} a\theta+b\phi < p \pi, \end{equation} with $p=1$. \end{lemma} \begin{lemma}\label{lem1b} Let $a$ and $b$ be coprime positive integers and \begin{equation}\label{def_f3} \begin{split} f_3(\omega) & = -b \sin\theta (\cos\phi+\cos(\phi-a\omega))\\ f_4(\omega) & = -a\sin\phi(\cos\theta+\cos(\theta+b\omega)) \end{split} \end{equation} with $\omega\in\mathbb{R}$ and $\theta,\phi\geq 0$. Then \begin{equation}f_3(\omega)+f_4(\omega) \leq 0 \mbox{ for all }\omega,\end{equation} provided (\ref{new_p_pi}) holds with $p=1$ when $a$ and $b$ are both odd and $p=1/2$ when either $a$ or $b$ is even. \end{lemma} \end{subtheorem} \begin{proof}[Proof of Lemma \ref{lem1a}] The term $f_1(\omega)$ is only positive when $[a\omega]_{2\pi}\in(0,2\phi)$. Similarly the term $f_2(\omega)$ is only positive when $[-b\omega]_{2\pi}\in(0,2\theta)$. When $p=1$ there is no $\omega$ such that $f_1(\omega)$ and $f_2(\omega)$ are simultaneously positive. Specifically, suppose $\omega$ is a frequency such that $a\omega+2m\pi\in(0,2\phi)$ and $-b\omega+2n\pi\in(0,2\theta)$ for some integers $m$ and $n$. Then $2(mb+na)\pi\in(0,2p\pi)$. This cannot be the case with $p\leq 1$; when $a$ and $b$ are coprime it can be satisfied with $p>1$ provided $m$ and $n$ are chosen such that $mb+na=1$. Hence, with $p=1$, it suffices to show that $f_1(\omega)+f_2(\omega)\leq 0$ when $f_1(\omega)\geq 0$, i.e. on the intervals $0\leq[a\omega]_{2\pi}\leq2\phi$. A similar argument will follow by symmetry for intervals where $f_2(\omega)\geq 0$. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig1} \caption{Illustration of Lemma~\ref{lem1a} with $a=2$, $b=3$, $\theta=\pi/15$ and $\phi =\pi/4$. The functions $f_1(\cdot)$ and $f_2(\cdot)$ are never simultaneously positive. We have the relations $f_1(\omega)=f_1(2\phi/a-\omega)$ when $\phi/a\leq\omega\leq2\phi/a$ and also $f_1(\omega)=f_1(\omega-\pi)$ when $\pi\leq\omega\leq\pi+2\phi/a$.
Similarly $f_2(\omega)\leq f_2(2\phi/a-\omega)$ when $\phi/a\leq\omega\leq2\phi/a$, $f_2(\omega)\leq f_2(\omega-\pi)$ when $\pi\leq\omega\leq\pi+\phi/a$ and $f_2(\omega)\leq f_2(\pi+2\phi/a-\omega)$ when $\pi +\phi/a \leq\omega\leq\pi+2\phi/a$. Hence to show $f_1(\omega)+f_2(\omega)\leq 0$ when $f_1(\omega)\geq 0$, it suffices to consider the interval $0\leq\omega\leq \phi/a$.} \end{center} \end{figure} Consider first the interval $a\omega\in[0,\phi]$. We have \begin{equation}\begin{split} \frac{df_1}{d\omega}(\omega) & = ab\sin\theta \sin(\phi-a\omega)\\ \frac{df_2}{d\omega}(\omega) & =-ab\sin\phi\sin(\theta+b\omega) \end{split}\end{equation} But \begin{equation}\begin{split} \sin (\phi-a\omega) & \leq \sin\phi -a\omega \cos \phi\mbox{ (by concavity), and}\\ \sin(\theta+b\omega) & \geq \sin\theta +\frac{a\omega}{\phi}\left [\sin\left ( \theta+\frac{b\phi}{a}\right )-\sin\theta\right ]\\ & \mbox{\hspace{3 cm} (by local concavity)}. \end{split}\end{equation} Hence \begin{equation}\begin{split} \frac{df_1}{d\omega}(\omega)+\frac{df_2}{d\omega}(\omega) \leq & -a^2b\omega\sin\theta \cos \phi\\ & -\frac{a^2b\omega}{\phi}\left [\sin\left ( \theta+\frac{b\phi}{a}\right )-\sin\theta\right ]\sin\phi\\ \leq & 0. \end{split}\end{equation} Since $f_1(0)=f_2(0)=0$ it follows that $f_1(\omega)+f_2(\omega)\leq 0$ on the interval $a\omega\in[0, \phi]$. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig2} \caption{Illustration of Lemma~\ref{lem1a} with $a=2$, $b=3$, $\theta=\pi/15$ and $\phi =\pi/4$. On the interval $0\leq\omega\leq \phi/a$ the derivative of $f_1(\cdot)$ is bounded above by its gradient at $\omega=0$ while the derivative of $f_2(\cdot)$ is bounded above by the chord joining its two end points. It follows that $f_1(\cdot)+f_2(\cdot)$ is non-positive on this interval. } \end{center} \end{figure} Consider next the interval $a\omega\in[\phi,2 \phi]$. By symmetry $f_1(\omega) = f_1(2\phi/a-\omega)$ on this interval. Since $f_2(\omega)\leq 0$ on this interval we must have $f_2(\omega) \leq f_2(2\phi/a-\omega)$ on this same interval. Hence $f_1(\omega)+f_2(\omega)\leq 0$ on the interval $a\omega\in[\phi, 2\phi]$. Similar arguments follow: firstly on the intervals $[a\omega]_{2\pi}\in[0,\phi]$ where $f_1(\omega) = f_1([a\omega]_{2\pi}/a)$ and $f_2(\omega) \leq f_2([a\omega]_{2\pi}/a)$; secondly on the intervals $[a\omega]_{2\pi}\in[\phi,2\phi]$ where $f_1(\omega) = f_1(2\phi/a-[a\omega]_{2\pi}/a)$ and $f_2(\omega) \leq f_2(2\phi/a-[a\omega]_{2\pi}/a)$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem1b}] The term $f_3(\omega)$ is only positive when $[a\omega]_{2\pi}\in(\pi,\pi+2\phi)$. Similarly the term $f_4(\omega)$ is only positive when $[-b\omega]_{2\pi}\in(\pi,\pi+2\theta)$. Let us consider conditions for which they are simultaneously positive. Suppose $\omega$ is a frequency such that $a\omega+2m\pi\in(\pi,\pi+2\phi)$ and $-b\omega+2n\pi\in(\pi,\pi+2\theta)$ for some integers $m$ and $n$. Then $2(mb+na)\pi\in((a+b)\pi,(a+b+2p)\pi)$. If $a$ and $b$ are both odd, then $a+b$ is even and hence this can only be true when $p>1$. By contrast, if either $a$ or $b$ is even (but not both, as they are coprime) then $a+b$ is odd and we can choose $m$ and $n$ such that $2(mb+na)=a+b+1$ when $p>1/2$. It then follows that $f_3(\omega)+f_4(\omega)\leq 0$ for all $\omega$ by an argument similar to that in the proof of Lemma~\ref{lem1a}.
\begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig3} \caption{Illustration of Lemma~\ref{lem1b} with $a=1$, $b=3$, $\theta=\pi/2$ and $\phi =\pi/7$. The functions $f_3(\cdot)$ and $f_4(\cdot)$ are never simultaneously positive. The function $f_3(\omega)$ is non-negative on the interval $\pi\leq\omega\leq\pi+2\phi/a$. The function $f_4(\omega)$ is non-negative on the interval $\pi-2\theta/b\leq \omega\leq\pi$.} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig4} \caption{Illustration of Lemma~\ref{lem1b} with $a=2$, $b=3$, $\theta=\pi/11$ and $\phi =\pi/11$. The functions $f_3(\cdot)$ and $f_4(\cdot)$ are never simultaneously positive. The function $f_3(\omega)$ is non-negative on the interval $\pi/2\leq\omega\leq\pi/2+2\phi/a$. The function $f_4(\omega)$ is non-negative on the interval $\pi-2\theta/b\leq \omega\leq\pi$.} \end{center} \end{figure} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:2a}] Without loss of generality suppose $a$ and $b$ are coprime, and consider the case where $ b\angle G(a j\omega_0 ) > a \angle G(b j\omega_0)$. Put \begin{align}\label{G_def} G(a j\omega_0 ) & = g_a e^{j(\pi-\phi)}\mbox{ and }\nonumber\\ G(b j\omega_0 ) & = g_b e^{j(-\pi+\theta)}\mbox{ with } \theta, \phi, g_a,g_b\in\mathbb{R}^+, \end{align} and \begin{equation}\label{p_ineq} a\theta+b\phi < p\pi, \end{equation} so that (\ref{G_ineq}) holds. Immediately we have \begin{equation}G(a j\omega_0 ) = -g_a e^{-j\phi}\mbox{ and }G(b j\omega_0 ) = -g_b e^{j\theta}.\end{equation} Theorem~\ref{Jthm_a} then states that if there exist non-negative $\lambda_a, \lambda_b$, with $\lambda_a+\lambda_b>0$, such that \begin{align}\label{N=2a} \lambda_a \mbox{Re} & \left \{ M^-_{\tau}(a j\omega_0) G(a j\omega_0) \right \}\nonumber\\ & + \lambda_b \mbox{Re}\left \{ M^-_{\tau}(b j\omega_0 ) G(b j\omega_0) \right \} \leq 0 \mbox{ for all }M^-_{\tau}\in\mathcal{M}^-, \end{align} then there is no suitable $M\in\mathcal{M}$ for $G$. If we set $\omega=\tau \omega_0$ we can write this as $f(\omega)\leq 0$ for all $\omega$ with \begin{align}\label{f_def1} f(\omega) = -\lambda_a g_a & (1-\cos a \omega)\cos\phi+\lambda_a g_a \sin a \omega \sin \phi\nonumber\\ & -\lambda_b g_b (1-\cos b \omega)\cos\theta-\lambda_b g_b \sin b \omega \sin \theta. \end{align} Choose \begin{equation}\label{def_lam} \lambda_a = g_b b \sin \theta\mbox{ and }\lambda_b = g_a a \sin \phi. \end{equation} Then \begin{equation}\label{def_f} f(\omega) = g_a g_b (f_1(\omega) + f_2(\omega)) \end{equation} with $f_1$ and $f_2$ given by (\ref{def_f1}). Hence by Lemma~\ref{lem1a} $f(\omega)\leq 0$ for all $\omega$ when $p=1$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:2b}] As with Theorem~\ref{thm:2a}, suppose without loss of generality that $a$ and $b$ are coprime, and consider the case where $ b\angle G(a j\omega_0 ) > a \angle G(b j\omega_0)$. Let $G(a j\omega_0 )$ and $G (b j\omega_0 )$ be given by (\ref{G_def}) with (\ref{p_ineq}) so that (\ref{G_ineq}) holds. Theorem~\ref{Jthm_b} then states that if there exist non-negative $\lambda_a, \lambda_b$, with $\lambda_a+\lambda_b>0$, such that (\ref{N=2a}) holds and in addition \begin{align}\label{N=2b} & \lambda_a \mbox{Re} \left \{ M^+_{\tau}(a j\omega_0) G(a j\omega_0) \right \}\nonumber\\ & + \lambda_b \mbox{Re}\left \{ M^+_{\tau}(b j\omega_0) G(b j\omega_0) \right \} \leq 0 \mbox{ for all }M^+_{\tau}\in\mathcal{M}^+, \end{align} then there is no suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$.
For condition (\ref{N=2a}) the analysis is the same as for Theorem~\ref{thm:2a}; hence we require $p\leq 1$. We can write condition (\ref{N=2b}) as $f(\omega)\leq 0$ for all $\omega$ with \begin{equation}\begin{split} f(\omega) = -\lambda_a g_a & (1+\cos a \omega)\cos\phi-\lambda_a g_a \sin a \omega \sin \phi\\ & -\lambda_b g_b (1+\cos b \omega)\cos\theta+\lambda_b g_b \sin b \omega \sin \theta. \end{split}\end{equation} where (\ref{p_ineq}) holds. As before, choose $\lambda_a$ and $\lambda_b$ according to (\ref{def_lam}). Then \begin{equation}\label{def_f_odd} f(\omega) = g_a g_b (f_3(\omega) + f_4(\omega)) \end{equation} with $f_3$ and $f_4$ given by (\ref{def_f3}). Hence by Lemma~\ref{lem1b} $f(\omega)\leq 0$ for all $\omega$ when $p=1$ if both $a$ and $b$ are odd and when $p=1/2$ if either $a$ or $b$ is even. \end{proof} \section{Proof of Theorems~\ref{thm:2a} and~\ref{thm:2b} via frequency interval approach} \begin{proof}[Proof of Theorem \ref{Meg_equiv_a}] Consider $q_-(t)$ on $t>0$. Since $q_-(t)$ is periodic it suffices to consider the interval $0< t \leq 2\pi$. Define \begin{equation} r_-(t) = b \arctan q_-(t) + a \arctan \kappa q_-(t). \end{equation} We will show that for each $\kappa$ all turning points of $r_-(t)$ are bounded by $\pm (a+b-2)\frac{\pi}{2}$ and that at least one turning point touches the bounds. This is sufficient to establish the equivalence between Corollary~\ref{m_corollary_a} and Corollary~\ref{cor:2a}, which is in turn equivalent to Theorem~\ref{thm:2a}. The turning points of $r_-(t)$ occur at the same values of $t$ as the turning points of $q_-(t)$. Specifically \begin{equation} \frac{d}{dt} r_-(t) = \left (\frac{b}{1+q_-(t)^2} + \frac{a\kappa}{1+\kappa^2q_-(t)^2}\right ) \frac{d}{dt}q_-(t). \end{equation} When $[t]_\pi\neq 0$ the derivative of $q_-(t)$ is given by \begin{equation} \frac{d}{dt}q_-(t) =ab \frac{m_-(t)n_-(t)}{d_-(t)^2} \end{equation} with \begin{equation} \begin{split} m_-(t) & = \sin \frac{at}{2} \cos \frac{bt}{2} + \kappa \sin\frac{bt}{2}\cos\frac{at}{2}\\ n_-(t) & = b\sin \frac{at}{2} \cos \frac{bt}{2} -a \sin\frac{bt}{2}\cos\frac{at}{2}\\ d_-(t) & = b\sin^2 \frac{at}{2} + \kappa a \sin^2 \frac{bt}{2} \end{split} \end{equation} On the interval $0<t\leq 2\pi$ with $[t]_\pi\neq 0$ the derivatives of both $q_-(t)$ and $r_-(t)$ are zero when either $m_-(t)=0$ or $n_-(t)=0$. We consider the two cases separately. In both cases we use the identity \begin{equation} q_-(t) = \frac{ b\tan\frac{at}{2}\left (1+\tan^2\frac{bt}{2}\right ) - a\tan\frac{bt}{2}\left (1+\tan^2\frac{at}{2}\right ) } { b \tan^2 \frac{at}{2}\left (1+\tan^2\frac{bt}{2}\right ) +\kappa a \tan^2\frac{bt}{2}\left (1+\tan^2\frac{at}{2}\right ) } \end{equation} \begin{description} \item[Case 1] Suppose $t_1$ satisfies $m_-(t_1)=0$. At these values \begin{equation}q_-(t_1) = \cot \frac{at_1}{2}\end{equation} and \begin{equation}\kappa q_-(t_1) =- \cot \frac{bt_1}{2}\end{equation} Hence if we define \begin{align}\label{rstar} r_-^*(t) = b\left [ \frac{\pi}{2}- \frac{at}{2}\right ]_{[-\pi/2,\pi/2]}+a\left [-\frac{\pi}{2}+ \frac{bt}{2}\right ]_{[-\pi/2,\pi/2]} \end{align} for $t \in [0,2\pi]$ we find $r_-(t_1) = r_-^*(t_1)$ for all $t_1$ satisfying $m_-(t_1)=0$. The function $r_-^*(\cdot)$ is piecewise constant, taking values $(-a-b+2\lambda)\pi/2$ with $\lambda = 1,\ldots,a+b-1$. On each piecewise constant interval there is a $t_1$ satisfying $m_-(t_1)=0$. Hence these turning points of $r_-(t)$ lie within the bounds $\pm(a+b-2)\frac{\pi}{2}$ with at least one on the bound.
\begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test23a1} \caption{Phase functions $r_-$ (blue), $r_-^*$ (red) and $r_-^\dagger$ (green) with $a=3$ and $b=10$. The turning points of $r_-$ where $m_-(t)=0$ take the value $(a+b-2\lambda)\pi/2$ with $\lambda$ an integer. The function $r_-^*(\cdot)$ is piecewise constant and takes these same values. The turning points of $r_-$ where $n_-(t)=0$ take the values of $r_-^\dagger $, whose bounds are also shown.}\label{test23a} \end{center} \end{figure} \item[Case 2] Define \begin{equation}q^\dagger_-(t) = \frac{ (b^2-a^2)\sin (at) } { a^2+b^2+\kappa a b -(b^2-a^2)\cos (at) } \end{equation} and \begin{equation}r^\dagger_-(t) = b \arctan q^\dagger_-(t) + a \arctan \kappa q^\dagger_-(t).\end{equation} Then $q_-(t_2)=q^\dagger_-(t_2)$ and $r_-(t_2)= r_-^\dagger(t_2)$ for all $t_2$ satisfying $n_-(t_2)=0$. It follows that $|r_-(t_2)|\leq |\bar{r}^\dagger|$ for all such $t_2$ where \begin{equation}\label{def_rbar} \begin{split} \bar{r}^\dagger & = b \arctan \bar{q}^\dagger + a \arctan \kappa \bar{q}^\dagger\\ \bar{q}^\dagger & = \frac{b^2-a^2}{2\sqrt{ab(a+\kappa b)(b+\kappa a)}} \end{split} \end{equation} With some abuse of notation, write $\bar{r}^\dagger=\bar{r}^\dagger(\kappa)$; i.e. consider $\bar{r}^\dagger$ as a function of $\kappa$. We find \begin{equation}\begin{split} \frac{d}{d\kappa}\bar{r}^\dagger(\kappa) = & \frac{-(a+b\kappa )(a^2-b^2)^2 } {(2ab+ (a^2+b^2)\kappa)(2ab\kappa +a^2+b^2) } \\ & \times \sqrt{ \frac{ab}{(a+b\kappa)(a\kappa+b)} }\end{split}\end{equation} Hence $|\bar{r}^\dagger(\kappa)| \leq \max(|\bar{r}^\dagger(0)|,\lim_{\kappa\rightarrow \infty}|\bar{r}^\dagger(\kappa)|)$. Furthermore \begin{equation}\begin{split} \bar{r}^\dagger(0) & = b\arctan \left (\frac{b^2-a^2}{2ab}\right )\\ \lim_{\kappa\rightarrow \infty} \bar{r}^\dagger(\kappa) & = a\arctan \left (\frac{b^2-a^2}{2ab}\right ) \end{split}\end{equation} Hence it suffices to show \begin{equation}\max(a,b) \arctan \left |\frac{b^2-a^2}{2ab}\right | \leq (a+b-2)\frac{\pi}{2}\end{equation} If $a$ and $b$ are both greater than 1 then this is immediate, since in this case $\max(a,b)\leq a+b-2$. Hence it suffices to show \begin{equation} b \arctan \frac{b^2-1}{2b} \leq (b-1)\frac{\pi}{2}\end{equation} or equivalently, with $b\geq 2$, that \begin{equation} \frac{b^2-1}{2b}\sin\left ( \frac{\pi}{2b}\right ) \leq \cos\left ( \frac{\pi}{2b}\right )\end{equation} We can quickly check \begin{equation}\frac{b^2-1}{2b}\sin\left ( \frac{\pi}{2b}\right ) \leq \frac{(b^2-1)\pi}{4b^2} \leq 1 - \frac{\pi^2}{8b^2} \leq \cos\left ( \frac{\pi}{2b}\right ) \end{equation} \end{description} \end{proof} \begin{proof}[Proof of Theorem \ref{Meg_equiv_b}] The proof is similar to that for Theorem~\ref{Meg_equiv_a}. We have already established appropriate bounds for $r_-(t)$. If we define \begin{equation}r_+(t) = b \arctan q_+(t) + a \arctan \kappa q_+(t)\end{equation} then we need to show it is also bounded appropriately. Similar to the previous case, the turning points of $r_+(t)$ occur at the same values of $t$ as the turning points of $q_+(t)$.
When $[t]_\pi\neq 0$ the derivative of $q_+(t)$ is given by \begin{equation}\frac{d}{dt}q_+(t) =ab \frac{m_+(t)n_+(t)}{d_+(t)^2}\end{equation} with \begin{equation}\begin{split} m_+(t) & = \kappa \sin \frac{at}{2} \cos \frac{bt}{2} + \sin\frac{bt}{2}\cos\frac{at}{2}\\ n_+(t) & = b \sin\frac{bt}{2}\cos\frac{at}{2} - a\sin \frac{at}{2} \cos \frac{bt}{2} \\ d_+(t) & = b\cos^2 \frac{at}{2} + \kappa a \cos^2 \frac{bt}{2} \end{split}\end{equation} We will consider the cases $m_+(t)=0$ and $n_+(t)=0$ separately. This time we use the identity \begin{equation}\begin{split} q_+(t) & = \frac{ b\tan\frac{at}{2}\left (1+\tan^2\frac{bt}{2}\right ) - a\tan\frac{bt}{2}\left (1+\tan^2\frac{at}{2}\right ) } { b \left (1+\tan^2\frac{bt}{2}\right ) +\kappa a \left (1+\tan^2\frac{at}{2}\right ) } \end{split}\end{equation} \begin{description} \item[Case 1] Suppose $t_1$ satisfies $m_+(t_1)=0$. Then \begin{equation}\begin{split} q_+(t_1) & = \tan \frac{at_1}{2} \end{split}\end{equation} and \begin{equation}\begin{split} \kappa q_+(t_1) & = - \tan \frac{bt_1}{2} \end{split}\end{equation} Hence if we define \begin{align}\label{rstarplus} r_+^*(t) = b\left [ \frac{at}{2}\right ]_{[-\pi/2,\pi/2]}-a\left [ \frac{bt}{2}\right ]_{[-\pi/2,\pi/2]} \end{align} for $t \in [0,2\pi]$ we find $r_+(t_1) = r_+^*(t_1)$ for all $t_1$ satisfying $m_+(t_1)=0$. The function $r_+^*(\cdot)$ is piecewise constant, taking values $(-a-b-1+2\lambda)\pi/2$ with $\lambda = 1,\ldots,a+b$ when either $a$ or $b$ is even, and values $(-a-b+2\lambda)\pi/2$ with $\lambda = 1,\ldots,a+b-1$ when $a$ and $b$ are both odd. On each piecewise constant interval there is a $t_1$ satisfying $m_+(t_1)=0$. Hence these turning points of $r_+(t)$ lie within the bounds $\pm(a+b-1)\frac{\pi}{2}$ (if either $a$ or $b$ is even) or $\pm(a+b-2)\frac{\pi}{2}$ (if $a$ and $b$ are both odd) with at least one on the bound. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test25_1} \caption{ Phase functions $r_+$ (blue), $r_+^*$ (red) and $r_+^\dagger$ (green) with $a=3$ and $b=10$. The turning points of $r_+$ where $m_+(t)=0$ take the value $(a+b+1-2\lambda)\pi/2$ with $\lambda$ an integer. The function $r_+^*(\cdot)$ is piecewise constant and takes these same values. The turning points of $r_+$ where $n_+(t)=0$ take the values of $r_+^\dagger $, whose bounds are also shown. }\label{test25} \end{center} \end{figure} \item[Case 2] Define \begin{equation}q^\dagger_+(t) = \frac{ (b^2-a^2)\sin (at) } { a^2+b^2+\kappa a b +(b^2-a^2)\cos (at) } \end{equation} and \begin{equation}r^\dagger_+(t) = b \arctan q^\dagger_+(t) + a \arctan \kappa q^\dagger_+(t).\end{equation} Then $q_+(t_2)=q^\dagger_+(t_2)$ and $r_+(t_2)= r_+^\dagger(t_2)$ for all $t_2$ satisfying $n_+(t_2)=0$. It follows that $|r_+(t_2)|\leq |\bar{r}^\dagger|$ for all such $t_2$ where $\bar{r}^\dagger$ is given by (\ref{def_rbar}). As we have the same bounds as before, the previous analysis establishes that these turning points lie within the bounds. \end{description} \end{proof} \subsection{Proofs of Theorems~\ref{Jthm_a} and~\ref{Jthm_b}} \begin{proof}[Proof of Theorem~\ref{Jthm_a}] Let $M\in\mathcal{M}$ take the form of Definition~\ref{def2a}.
Then \begin{equation} \begin{split} M(j\omega) & = m_0-\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\,dt-\sum_{i=1}^{\infty}h_ie^{-j\omega t_i}\\ & = \bar{m}_0 -\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\,dt+\sum_{i=1}^{\infty}h_iM^-_{t_i}(j\omega), \end{split} \end{equation} where \begin{equation}\label{barm_ineq} \bar{m}_0 = m_0-\sum_{i=1}^{\infty}h_i\geq \| h\|_1, \end{equation} and \begin{equation} \begin{split} \sum_{r=1}^N \lambda_r \mbox{Re} & \left \{M(j\omega_r) G(j\omega_r) \right \} = \bar{m}_0 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \} \\ & - \int_{-\infty}^{\infty}h(t) \sum_{r=1}^N \lambda_r \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \} \, dt\\ & + \sum_{i=1}^{\infty}h_i \sum_{r=1}^N \lambda_r \mbox{Re}\left \{M^-_{t_i}(j\omega_r) G(j\omega_r) \right \}. \end{split} \end{equation} Suppose the conditions of Theorem~\ref{Jthm_a} hold. Then, by (\ref{thm1_ineq}), \begin{equation} \begin{split} \sum_{r=1}^N \lambda_r \mbox{Re} & \left \{M(j\omega_r) G(j\omega_r) \right \} \leq \bar{m}_0 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \} \\ & - \int_{-\infty}^{\infty}h(t) \sum_{r=1}^N \lambda_r \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \} \, dt.\label{no_sum} \end{split} \end{equation} In addition, we can write (\ref{thm1_ineq}) as \begin{align}\label{thm1_ineq_alt} \sum_{r=1}^N\lambda_r & \mbox{Re}\left \{ G(j\omega_r) \right \} \nonumber\\ & \leq \sum_{r=1}^N\lambda_r \mbox{Re}\left \{e^{-j\omega_r\tau} G(j\omega_r) \right \} \mbox{ for all } \tau\in \mathbb{R}\backslash 0. \end{align} Averaging this expression over $\tau$ yields \begin{equation}\label{ineq2} \begin{split} \sum_{r=1}^N\lambda_r & \mbox{Re}\left \{ G(j\omega_r) \right \} =\lim_{T\rightarrow\infty}\frac{1}{T}\int_0^T \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \}\, d\tau\\ & \leq \lim_{T\rightarrow\infty}\frac{1}{T}\int_0^T\sum_{r=1}^N\lambda_r \mbox{Re}\left \{e^{-j\omega_r\tau} G(j\omega_r) \right \}\,d\tau \\ & = \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ \lim_{T\rightarrow\infty}\frac{1}{T}\int_0^Te^{-j\omega_r\tau}\, d\tau\, G(j\omega_r) \right \} \\ & = 0. \end{split} \end{equation} From (\ref{barm_ineq}) and (\ref{ineq2}) we obtain \begin{equation} \bar{m}_0 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r)\right \} \leq \|h\|_1 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \}. \end{equation} This, with (\ref{thm1_ineq_alt}), yields \begin{align}\label{no_sum2} \bar{m}_0 \sum_{r=1}^N \lambda_r & \mbox{Re}\left \{ G(j\omega_r)\right \}\nonumber\\ & \leq \int_{-\infty}^{\infty} h(t) \sum_{r=1}^N \lambda_r \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \}\,dt. \end{align} Together (\ref{no_sum}) and (\ref{no_sum2}) yield \begin{equation}\label{final_thm1a} \sum_{r=1}^N \lambda_r \mbox{Re}\left \{M(j\omega_r ) G(j\omega_r) \right \} \leq 0. \end{equation} It follows from Definition~\ref{def1} that $M$ is not suitable for $G$. \end{proof} \begin{proof}[Proof of Theorem~\ref{Jthm_b}] Let $M\in\mathcal{M}_{\mbox{odd}}$ take the form of Definition~\ref{def2b}. Define $\mathcal{H}^+=\{i\in\mathbb{Z}^+\mbox{ such that }h_i\geq 0\}$ and $\mathcal{H}^-=\{i\in\mathbb{Z}^+\mbox{ such that }h_i< 0\}$. Then \begin{equation} \begin{split} M(j\omega) = & \bar{m}_0 -\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\,dt\\ & +\sum_{i\in\mathcal{H}^+}h_iM^-_{t_i}(j\omega)+\sum_{i\in\mathcal{H}^-}|h_i|M^+_{t_i}(j\omega) \end{split} \end{equation} where this time \begin{equation}\label{barm_ineq_b} \bar{m}_0 = m_0-\sum_{i=1}^{\infty}|h_i|\geq \| h\|_1.
\end{equation} Suppose the conditions of Theorems~\ref{Jthm_a} and~\ref{Jthm_b} hold. Then (\ref{thm1_ineq}) and (\ref{thm1b_ineq}) yield (\ref{no_sum}) as before, but with $\bar{m}_0$ given by (\ref{barm_ineq_b}). Furthermore, we can write (\ref{thm1_ineq}) and (\ref{thm1b_ineq}) together as \begin{equation}\label{thm1_ineq_alt_b} \begin{split} \sum_{r=1}^N & \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \} \\ & \leq - \left | \sum_{r=1}^N\lambda_r \mbox{Re}\left \{e^{-j\omega_r\tau} G(j\omega_r) \right \} \right |\mbox{ for all } \tau\in \mathbb{R}\backslash 0. \end{split} \end{equation} Since (\ref{ineq2}) still holds, from (\ref{barm_ineq_b}) and (\ref{thm1_ineq_alt_b}) we obtain \begin{align}\label{no_sum2b} \bar{m}_0 \sum_{r=1}^N & \lambda_r \mbox{Re}\left \{ G(j\omega_r)\right \}\nonumber\\ & +\int_{-\infty}^{\infty} |h(t)| \left | \sum_{r=1}^N \lambda_r \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \} \right | \,dt \leq 0. \end{align} Together (\ref{no_sum}) and (\ref{no_sum2b}) yield (\ref{final_thm1a}) as before. It follows from Definition~\ref{def1} that $M$ is not suitable for $G$. \end{proof} \subsection{Proofs of Corollaries~\ref{m_corollary_a} and~\ref{m_corollary_b}} \begin{proof}[Proof of Corollary~\ref{m_corollary_a}] Without loss of generality let $a<b$. The result follows by setting the intervals \begin{align} [\alpha,\beta] =[a\omega_0 - \varepsilon, a\omega_0 + \varepsilon]\mbox{ and }[\gamma,\delta] =[b\omega_0 - \varepsilon, b\omega_0 + \varepsilon] \end{align} with $\varepsilon>0$ and taking the limit as $\varepsilon \rightarrow 0$. Specifically we find \begin{equation} \begin{split} \psi(t) & = \frac{2\lambda}{t}\sin(a\omega_0t)\sin(\varepsilon t) -\frac{2\mu}{t}\sin(b\omega_0t)\sin(\varepsilon t) \\ \phi(t) & = 2\varepsilon\lambda+2\varepsilon\kappa\mu+\phi_1(t)\\ \phi_1(t) & = -\frac{2\lambda}{t}\cos(a\omega_0t)\sin(\varepsilon t) -\frac{2\kappa\mu}{t}\cos(b\omega_0t)\sin(\varepsilon t), \end{split} \end{equation} with $a\lambda=b\mu$. Writing $\lambda=cb$ and $\mu=ca$ with $c>0$, and using $\sin(\varepsilon t)/(\varepsilon t)\rightarrow 1$ as $\varepsilon\rightarrow 0$, we find $|\psi(t)|/\phi(t)\rightarrow |q_-(\omega_0 t)|$. Hence \begin{equation} \overline{\rho}^c = \lim_{\varepsilon\rightarrow 0}\rho^c. \end{equation} \end{proof} \begin{proof}[Proof of Corollary~\ref{m_corollary_b}] In addition \begin{equation} \tilde{\phi}(t) = 2\varepsilon\lambda+2\varepsilon\kappa\mu-|\phi_1(t)|, \end{equation} and hence \begin{equation} \overline{\rho}^c_{\mbox{odd}} = \lim_{\varepsilon\rightarrow 0}\rho^c_{\mbox{odd}}. \end{equation} \end{proof} \section{Preliminaries}\label{Prem} \subsection{Multiplier theory} We are concerned with the input-output stability of the Lurye system given by \begin{equation} y_1=Gu_1,\mbox{ } y_2=\phi u_2,\mbox{ } u_1=r_1-y_2 \mbox{ and }u_2 = y_1+r_2.\label{eq:Lurye} \end{equation} Let $\mathcal{L}_2$ be the space of finite energy Lebesgue integrable signals and let $\mathcal{L}_{2e}$ be the corresponding extended space (see for example \cite{desoer75}). The Lurye system is said to be stable if $r_1,r_2\in\mathcal{L}_2$ implies $u_1,u_2,y_1,y_2\in\mathcal{L}_2$. The Lurye system~(\ref{eq:Lurye}) is assumed to be well-posed with $G:\mathcal{L}_{2e}\rightarrow\mathcal{L}_{2e}$ linear time invariant (LTI) causal and stable, and with $\phi:\mathcal{L}_{2e}\rightarrow\mathcal{L}_{2e}$ memoryless and time-invariant. With some abuse of notation we will use $G(s)$ to denote the transfer function corresponding to $G$.
The nonlinearity $\phi$ is assumed to be monotone in the sense that $(\phi u) (t_1)\geq (\phi u)(t_2)$ whenever $u(t_1)\geq u(t_2)$. It is also assumed to be bounded in the sense that there exists a $C\geq 0$ such that $|(\phi u)(t)|\leq C|u(t)|$ for all $t$. We say $\phi$ is slope-restricted on $[0,k]$ if $0\leq \left ( (\phi u)(t_1) -(\phi u) (t_2)\right )/(u(t_1)-u(t_2))\leq k$ for all $u(t_1)\neq u(t_2)$. We say $\phi$ is odd if $(\phi u)(t_1)=-(\phi u)(t_2)$ whenever $u(t_1)=-u(t_2)$. For example, the unit saturation $(\phi u)(t) = \max(-1,\min(1,u(t)))$ is monotone, bounded, odd and slope-restricted on $[0,1]$. \begin{definition}\label{def1} Let $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ be LTI. We say $M$ is a suitable multiplier for $G$ if there exists $\varepsilon>0$ such that \begin{align} \mbox{Re}\left \{ M(j\omega) G(j\omega) \right \} > \varepsilon\mbox{ for all } \omega \in \mathbb{R}. \end{align} \end{definition} \begin{remark}\label{rem_phase} Suppose $M$ is a suitable multiplier for $G$ and $\angle G(j\omega) \leq -\pi/2 -\theta$ for some $\omega$ and $\theta$. Then $\angle M(j\omega) > \theta$. Similarly if $\angle G(j\omega) \geq \pi/2 +\theta$ then $\angle M(j\omega) < -\theta$. \end{remark} \begin{subtheorem}{definition} \begin{definition}\label{def2a} Let $\mathcal{M}$ be the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by \begin{equation}\label{def_m} m(t) = m_0 \delta(t)-h(t)-\sum_{i=1}^{\infty}h_i \delta(t-t_i), \end{equation} with \begin{equation} \begin{split} h(t) & \geq 0 \mbox{ for all } t\mbox{, }h_i\geq 0 \mbox{ for all } i\\ & \mbox{and } \| h\|_1+\sum_{i=1}^{\infty} h_i \leq m_0. \end{split} \end{equation} \end{definition} \begin{definition}\label{def2b} Let $\mathcal{M}_{\mbox{odd}}$ be the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by (\ref{def_m}) with \begin{equation} \| h\|_1+\sum_{i=1}^{\infty} |h_i| \leq m_0. \end{equation} \end{definition} \end{subtheorem} \begin{remark} $\mathcal{M}\subset\mathcal{M}_{\mbox{odd}}$. \end{remark} The Lurye system (\ref{eq:Lurye}) is said to be absolutely stable for a particular $G$ if it is stable for all $\phi$ in some class $\Phi$. In particular, if there is a suitable $M\in\mathcal{M}$ for $G$ then it is absolutely stable for the class of memoryless time-invariant monotone bounded nonlinearities; if there is a suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$ then it is absolutely stable for the class of memoryless time-invariant odd monotone bounded nonlinearities. Furthermore, if there is a suitable $M\in\mathcal{M}$ for $1/k+G$ then it is absolutely stable for the class of memoryless time-invariant slope-restricted nonlinearities in $[0,k]$; if there is a suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $1/k+G$ then it is absolutely stable for the class of memoryless time-invariant odd slope-restricted nonlinearities \cite{Zames68,Carrasco:EJC}. \subsection{Other notation} Let $x = [y]_{[z,w]}$ denote $y$ modulo the interval $[z,w]$: i.e. the unique number $x\in[z,w)$ such that there is an integer $n$ with $y = x + n(w-z)$. For example $[5\pi/2]_{[-\pi/2,\pi/2]} = -\pi/2$. In our statement of results (i.e. Sections~\ref{Main},~\ref{Int} and~\ref{Exa}) phase is expressed in degrees. In the technical proofs (i.e. the Appendix) phase is expressed in radians. \subsection{Duality approach} The following result is similar in spirit to that in \cite{Jonsson:96} where a proof is sketched for the odd case. Both results can be derived from the duality theory of J\"{o}nsson \cite{Jonsson96thesis,Jonsson97,Jonsson99}; see \cite{Zhang:21} for the corresponding derivation in the discrete-time case.
Nevertheless, several details are different. In particular, in \cite{Jonsson:96} only rational plants $G$ and rational multipliers $M$ are considered; this excludes both plants with delay and so-called ``delay multipliers.'' Expressing the results in terms of single parameter delay multipliers also gives insight. We exclude frequencies $\omega=0$ and $\omega\rightarrow\infty$; it is immediate that we must have $\mbox{Re}\left \{M(0)G(0)\right \}\geq 0$; by contrast $M(\infty)$ need not be well-defined in our case. \begin{definition} Define the single parameter delay multipliers $M^-_\tau$ and $M^+_\tau$ as $M^-_\tau(s) = 1 -e^{-\tau s}$ and $M^+_\tau(s) = 1 +e^{-\tau s}$ with $\tau\in\mathbb{R}\backslash 0$. Let $\mathcal{M}^- \subset \mathcal{M}$ be the set $\mathcal{M}^- = \{M^-_{\tau} \,:\, \tau \in \mathbb{R} \backslash 0\}$. Let $\mathcal{M}^+ \subset \mathcal{M}_{\mbox{odd}}$ be the set $\mathcal{M}^+ = \{M^+_\tau\,:\, \tau \in \mathbb{R}\backslash 0\}$. \end{definition} \begin{subtheorem}{theorem} \begin{theorem}\label{Jthm_a} Let $G$ be causal, LTI and stable. Assume there exist $0<\omega_1<\cdots<\omega_N<\infty$, and non-negative $\lambda_1, \lambda_2, \ldots, \lambda_N$, where $\sum_{r=1}^N\lambda_r>0$, such that \begin{equation}\label{thm1_ineq} \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ M^-_{\tau}(j\omega_r) G(j\omega_r) \right \} \leq 0 \mbox{ for all }M^-_{\tau}\in\mathcal{M}^-. \end{equation} Then there is no suitable $M\in\mathcal{M}$ for $G$. \end{theorem} \begin{theorem}\label{Jthm_b} Let $G$ be causal, LTI and stable. Assume, in addition to the conditions of Theorem~\ref{Jthm_a}, that \begin{equation}\label{thm1b_ineq} \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ M^+_{\tau}(j\omega_r) G(j\omega_r) \right \} \leq 0 \mbox{ for all }M^+_{\tau}\in\mathcal{M}^+. \end{equation} Then there is no suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$. \end{theorem} \end{subtheorem} \begin{remark} The observation is made in \cite{Chang:12} that by the Stone-Weierstrass theorem it is sufficient to characterise $\mathcal{M}$ in terms of delay multipliers: i.e. as the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by \begin{equation} m(t) = m_0 \delta(t)-\sum_{i=1}^{\infty}h_i \delta(t-t_i), \end{equation} with \begin{equation} h_i\geq 0 \mbox{ for all } i\mbox{ and } \sum_{i=1}^{\infty} h_i \leq m_0. \end{equation} Similarly $\mathcal{M}_{\mbox{odd}}$ can be characterised as the class of LTI $M:\mathcal{L}_{2}\rightarrow\mathcal{L}_{2}$ whose impulse response is given by \begin{equation} m(t) = m_0\delta(t) -\sum_{i=1}^{\infty}h_i \delta(t-t_i), \end{equation} with \begin{equation} \sum_{i=1}^{\infty} |h_i| \leq m_0. \end{equation} Such delay multipliers are excluded entirely from \cite{Jonsson:96}, but in this sense both Theorems~\ref{Jthm_a} and~\ref{Jthm_b} follow almost immediately. \end{remark} \subsection{Frequency interval approach} In \cite{Wang:18} we presented the following phase limitation for the frequency intervals $[\alpha,\beta]$ and $[\gamma,\delta]$.
\begin{subtheorem}{theorem} \begin{theorem}[\cite{Wang:18}]\label{Meg_a} Let $0<\alpha<\beta<\gamma<\delta$ and define \begin{equation} \rho^c = \sup_{t>0}\frac{|\psi(t)|}{\phi(t)}, \end{equation} with \begin{equation} \begin{split} \psi(t) & = \frac{\lambda \cos (\alpha t)}{t}-\frac{\lambda \cos (\beta t)}{t}- \frac{\mu \cos (\gamma t)}{t}+\frac{\mu \cos (\delta t)}{t},\\ \phi(t) & = \lambda(\beta-\alpha)+\kappa\mu(\delta-\gamma)+\phi_1(t),\\ \phi_1(t) & = \frac{\lambda \sin (\alpha t)}{t}-\frac{\lambda \sin (\beta t)}{t}+ \frac{\kappa\mu \sin (\gamma t)}{t}-\frac{\kappa\mu \sin (\delta t)}{t}, \end{split} \end{equation} and with $\lambda>0$ and $\mu>0$ satisfying \begin{equation} \frac{\lambda}{\mu} = \frac{\delta^2-\gamma^2}{\beta^2-\alpha^2}, \end{equation} and $\kappa>0$. Let $M$ be an OZF multiplier and suppose \begin{equation}\label{M_up} \mbox{Im}(M(j\omega))>\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\alpha,\beta], \end{equation} and \begin{equation}\label{M_dn} \mbox{Im}(M(j\omega))<-\kappa\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\gamma,\delta], \end{equation} for some $\rho>0$. Then $\rho<\rho^c$ if $M\in\mathcal{M}$. The result also holds if we replace (\ref{M_up}) and (\ref{M_dn}) with \begin{equation} \mbox{Im}(M(j\omega))<-\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\alpha,\beta], \end{equation} and \begin{equation} \mbox{Im}(M(j\omega))>\kappa\rho\mbox{Re}(M(j\omega))\mbox{ for all } \omega\in[\gamma,\delta]. \end{equation} \end{theorem} \begin{theorem}[\cite{Wang:18}]\label{Meg_b} In addition to the conditions of Theorem~\ref{Meg_a}, define \begin{equation} \rho^c_{\mbox{odd}} = \sup_{t>0}\frac{|\psi(t)|}{\tilde{\phi}(t)}, \end{equation} with \begin{equation} \tilde{\phi}(t) = \lambda(\beta-\alpha)+\kappa\mu(\delta-\gamma)-|\phi_1(t)|. \end{equation} Then $\rho<\rho^c_{\mbox{odd}}$ if $M\in\mathcal{M}_{\mbox{odd}}$. \end{theorem} \end{subtheorem} \section{Examples}\label{Exa} We demonstrate the new condition with three separate examples. In Examples~1 and~2 below we test the criterion for a finite number of coprime integers $a$ and $b$, and for all $\omega>0$; we also search over the slope restriction~$k$. We run a bisection algorithm for~$k$ and, for each candidate value of $k$, $a$ and~$b$, check whether the condition is satisfied for any $\omega>0$. Provided the phase of $1/k+G$ is sufficiently smooth, this can be implemented efficiently and systematically, for example by gridding $\omega$ sufficiently finely. There are several possible ways to reorder the computation. \subsection{Example 1} J\"{o}nsson and Laiou \cite{Jonsson:96} consider the plant \begin{equation}\label{JL_G} G(s) = \frac{s^2}{(s^2+\alpha)(s^2+\beta)+10^{-4}(14s^3+21s)}, \end{equation} with $\alpha=0.9997$ and $\beta=9.0039$ and with positive feedback. They show that the rational multiplier \begin{equation}\label{JL_M} M(s) = 1 - \left (\frac{2.5}{s+2.5} \right )^2 \end{equation} is suitable for $1/k-G(s)$ when $k=0.0048$. Figure \ref{test05a_fig1} shows the phase of $M(j\omega)(1/k-G(j\omega))$ when $k=0.0048$. It can be seen to lie in the interval $[-90^o,90^o]$. They also show no rational multiplier in $\mathcal{M}_{\mbox{odd}}$ exists when $k=0.0061$ by applying their criterion with $N=2$ and the choice $\omega_1=1$ and $\omega_2=3$. Fig \ref{test05a_fig2} shows $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3$ when $k=0.0061$. It can be seen that the value drops below $-180^o$ near $\omega=1$.
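In terms of Theorem~\ref{thm:2a}, the choice $\omega_1=1$ and $\omega_2=3$ corresponds to $a=1$ and $b=3$, for which $a+b-p=3$ with $p=1$; moreover, since $a$ and $b$ are both odd, Theorem~\ref{thm:2b} applies with the same value $p=1$.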
Thus Theorems~\ref{thm:2a} and~\ref{thm:2b} confirm there is no suitable multiplier in either $\mathcal{M}$ or $\mathcal{M}_{\mbox{odd}}$. J\"{o}nsson and Laiou \cite{Jonsson:96} state ``the choice of frequencies [...] is a delicate task.'' But a simple line search shows that there is an $\omega$ such that $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3<-180^o$ when $k = 0.0058926$ (see Fig \ref{test05a_fig3}) but $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3>-180^o$ for all $\omega$ when $k = 0.0058925$. By Theorem~\ref{thm:2a} there is no multiplier when $k=0.0058926$. By contrast, for this case the choice \begin{equation}\label{WPH_M} M(s)=1-0.99999e^{-0.93287s} \end{equation} is a suitable multiplier when $k=0.0058924$ (Fig \ref{test05a_fig4}). The various computed slopes $k$ are set out in Table~\ref{table_ex1}. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05a_fig1} \caption{Example 1. Phase of $M(j\omega)(1/k-G(j\omega))$ when $k=0.0048$ when $G$ is given by (\ref{JL_G}) and $M$ by (\ref{JL_M}). The phase lies in the interval $[-90^o,90^o]$ so this choice of $M$ is a suitable multiplier for $1/k-G$.}\label{test05a_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05b_fig6} \caption{Example 1. The phase difference $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3$ when $G$ is given by (\ref{JL_G}) with $k=0.0061$. The value drops below $-180^o$ so by Theorem~\ref{thm:2a} there is no suitable multiplier.}\label{test05a_fig2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05b_fig7} \caption{Example 1. The phase difference $(3\angle(1/k-G(j\omega))-\angle(1/k-G(3j\omega)))/3$ when $G$ is given by (\ref{JL_G}) with $k=0.0058926$. The value drops below $-180^o$ so by Theorem~\ref{thm:2a} there is no suitable multiplier.}\label{test05a_fig3} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test05a_fig4} \caption{Example 1. Phase of $M(j\omega)(1/k-G(j\omega))$ when $k=0.0058924$ when $G$ is given by (\ref{JL_G}) and $M$ by (\ref{WPH_M}). The phase lies in the interval $[-90^o,90^o]$ so this choice of $M$ is a suitable multiplier for $1/k-G$.}\label{test05a_fig4} \end{center} \end{figure} \begin{table} \begin{center} \begin{tabular}{ | l | c | c |} \hline & \cite{Jonsson:96} & This paper\\ \hline Slope $k$ for which a multiplier & & \\ is found & 0.0048 & 0.0058924\\ \hline Slope $k$ for which there is & & \\ guaranteed to be no multiplier & 0.0061 & 0.0058926\\ \hline \end{tabular} \caption{Various slopes for Example 1}\label{table_ex1} \end{center} \end{table} \subsection{Example 2} Consider the plant \[G(s) = \frac{s^2}{(s^2+2\zeta s + 1)^2}\mbox{ with }\zeta>0.\] O'Shea \cite{OShea67} shows that there is a suitable multiplier in $\mathcal{M}$ for $1/k+G$ when $\zeta>1/2$ and $k>0$. By contrast in \cite{Wang:18} we showed that there is no suitable multiplier in $\mathcal{M}$ when $\zeta=0.25$ and $k$ is sufficiently large. Specifically the phase of $G(j\omega)$ is above $177.98^o$ on the interval $\omega\in [0.02249,0.03511]$ and below $-177.98^o$ on the interval $\omega\in [1/0.03511,1/0.02249]$. A line search yields that the same condition is true for the phase of $1/k+G(j\omega)$ with $k\geq 269,336.3$ (see Fig~\ref{test19e_fig1}). Hence there is no suitable multiplier $M\in\mathcal{M}$ for $1/k+G$ with $k\geq 269,336.3$.
By contrast, Theorem~\ref{thm:2a} with $a=4$ and $b=1$ yields that there is no suitable multiplier $M\in\mathcal{M}$ for $1/k+G$ with $k\geq 32.61$. Specifically the phase $(4\angle (1/k+G(j\omega))-\angle(1/k+G(4j\omega)))/4$ exceeds $180^o$ when $k\geq 32.61$ (see Figs~\ref{test19e_fig3} and~\ref{test19f}). Similarly, Theorem~\ref{thm:2b} with $a=3$ and $b=1$ yields that there is no suitable multiplier $M\in\mathcal{M}_{\mbox{odd}}$ for $1/k+G$ with $k\geq 39.93$. Specifically the phase $(3\angle (1/k+G(j\omega))-\angle(1/k+G(3j\omega)))/3$ exceeds $180^o$ when $k\geq 39.93$. These results show a non-trivial improvement over those in \cite{Wang:18}. While it should be possible to achieve identical results using either the condition of \cite{Jonsson:96} or that of \cite{Wang:18} (see Appendix), the conditions of Theorems~\ref{thm:2a} and~\ref{thm:2b} can be applied in a systematic manner. Fig~\ref{test19d_fig1} shows the bounds for several other values of $\zeta$ while Fig~\ref{test19d_fig2} shows the value of $a$ yielding the lowest bound for each test (the value of $b$ is $1$ for each case). \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19e_fig1} \caption{Example 2. O'Shea's example with $\zeta=0.25$. Application of the condition in \cite{Wang:18} yields there to be no suitable multiplier $M\in\mathcal{M}$ when $k\geq 270,000$.}\label{test19e_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19g_fig5} \caption{Example 2. O'Shea's example with $\zeta=0.25$. Application of Theorem~\ref{thm:2a} with $a=4$ and $b=1$ yields there to be no suitable multiplier $M\in\mathcal{M}$ when $k\geq 32.61$.}\label{test19e_fig3} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19f} \caption{Example 2. O'Shea's example with $\zeta=0.25$. The phase of $1/k+G(j\omega)$ with $k=32.61$ is shown. The phase of $1/k+G(j\omega_a)$ is $149.42^o$ at $\omega_a = 0.3938$ and the corresponding forbidden regions are shown (compare Fig~\ref{test02_fig1}). The phase touches the bound at $4\omega_a$.}\label{test19f} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19d_fig1} \caption{Example 2. Bounds on the slope above which Theorem~\ref{thm:2a} or~\ref{thm:2b} guarantee there can be no suitable multiplier as damping ratio~$\zeta$ varies.}\label{test19d_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test19d_fig2} \caption{Example 2. Values of $a$ used to find the slope bounds shown in Fig~\ref{test19d_fig1}. The value of $b$ is $1$ for all shown results.}\label{test19d_fig2} \end{center} \end{figure} \subsection{Example 3} In \cite{Wang:18} we argue that phase limitations are closely linked to the Kalman Conjecture. This plays an important role in the theory of absolute stability for Lurye systems. Barabanov \cite{Barabanov88} shows it to be true for third-order systems via a subclass of the OZF multipliers but fourth-order counterexamples are known \cite{Fitts66,Leonov13}. It is trivial that negative imaginary systems satisfy the Kalman Conjecture \cite{Carrasco17}. In \cite{Zhang18} we indicate via the tailored construction of OZF multipliers that second-order systems with delay satisfy the Kalman Conjecture. Until now it has remained an open question whether third-order systems with delay satisfy the Kalman Conjecture.
Consider the third-order system with delay that has transfer function \begin{equation} G(s) = e^{-s} \frac{s^2+0.8s+1.5}{s^3+1.2s^2+1.12s+0.32}. \end{equation} The Nyquist gain is $k_N=2.0931$. That is to say for all $0\leq k < k_N$ the sensitivity function $\left [ \begin{array}{cc} 1 & G\\ -k & 1 \end{array} \right ]^{-1}$ is stable. Fig.~\ref{test22_fig_over} shows $(2\angle \left (1/2+G(j\omega)\right ) - \angle \left (1/2 + G(2j\omega)\right ))/2$ against frequency. The value drops significantly below $-180^o$, and hence by Theorem~\ref{thm:2a} there is no suitable $M\in\mathcal{M}$ for $1/2+G$. The phases of $1/2+G(j\omega)$ and of $1/2+G(2j\omega)$ are superimposed. Fig.~\ref{test22_fig3} shows a time response of a Lurye system with gain $2$, a step input at time $t=0$ and simple saturation. The response appears to be periodic. The stable linear response (i.e. without saturation) is superimposed. These results indicate that this is a first example of a third-order plant with delay which does not satisfy the Kalman Conjecture. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test22_fig_over_c} \caption{Example 3. The value of $(2\angle \left (1/2+G(j\omega)\right )-\angle \left (1/2+G(2j\omega)\right ))/2$ drops significantly below $-180^o$ so by Theorem~\ref{thm:2a} there is no suitable multiplier. The phase of $1/2+G(j\omega)$ (blue dotted) and the phase of $1/2+G(2j\omega)$ (red dotted) are also shown.}\label{test22_fig_over} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test22_fig3} \caption{Example 3. Time response of the Lurye system, with and without saturation.}\label{test22_fig3} \end{center} \end{figure} \section{Conclusion} We have presented a simple graphical test that can rule out the existence of suitable OZF multipliers. The test can be implemented efficiently and systematically. The graphical interpretations provide considerable insight into the frequency behaviour of the OZF multipliers. The test yields significantly improved results over those in the literature. The test can be derived either from the duality approach \cite{Jonsson:96,Jonsson96thesis,Jonsson97,Jonsson99} or from the frequency interval approach \cite{Megretski:95,Wang:18}. Guaranteeing there is no suitable OZF multiplier does not necessarily imply a Lurye system is not absolutely stable, although we have conjectured this to be the case \cite{Carrasco:EJC,Wang:18}. Khong and Su \cite{Khong20} show that the implication is true with a wider class of nonlinearity; for this case the results of this paper may be applied directly. For the discrete-time case, Seiler and Carrasco \cite{Seiler21} provide a construction, for certain phase limitations, of a nonlinearity within the class for which the discrete-time Lurye system has a periodic solution. However the conjecture remains open for both continuous-time and discrete-time systems. More generally results for discrete-time systems are quite different. For discrete-time systems an FIR search for multipliers is effective and outperforms others \cite{Wang:TAC}. With the interval approach it is possible to find a nontrivial threshold such that the phase of a multiplier cannot be above the threshold over a certain frequency interval \cite{Wang:18}.
The duality approach leads to both a simple graphical test at simple frequencies and a condition at multiple frequencies that can be tested by linear programming \cite{Zhang:20}. This paper's results are for the continuous-time single-input single-output multipliers of \cite{Zames68}. Although multivariable extensions of the OZF multipliers are considered in the literature \cite{Safonov2000, DAmato2001, Kulkarni2002,Mancera2005, Fetzer2017}, it remains open what restrictions there might be. Similarly more general nonlinearities can be addressed with a reduced subset of the OZF multipliers \cite{Rantzer2001, Materassi11, Altshuller:13, Heath2021} and the analysis of this paper might be generalised to such cases. It also remains open whether a systematic procedure can be found with more points or intervals. \section{Proof of Theorems~\ref{thm:2a} and~\ref{thm:2b} via duality approach} In the following we apply Theorems~\ref{Jthm_a} and~\ref{Jthm_b} with $N=2$. Furthermore, we assume $\omega_2/\omega_1$ is rational, i.e. that there is some $\omega_0>0$ and integers $a$ and $b$ such that either $\omega_1=a \omega_0$ and $\omega_2=b\omega_0$ or $\omega_1=b \omega_0$ and $\omega_2=a\omega_0$. We begin with two technical lemmas. \begin{subtheorem}{lemma} \begin{lemma}\label{lem1a} Let $a$ and $b$ be coprime positive integers and \begin{equation}\label{def_f1} \begin{split} f_1(\omega) & = -b \sin\theta (\cos\phi-\cos(\phi-a\omega))\\ f_2(\omega) & = -a\sin\phi(\cos\theta-\cos(\theta+b\omega)) \end{split} \end{equation} with $\omega\in\mathbb{R}$ and $\theta,\phi\geq 0$. Then \begin{equation}f_1(\omega)+f_2(\omega) \leq 0 \mbox{ for all }\omega,\end{equation} provided \begin{equation}\label{new_p_pi} a\theta+b\phi < p \pi, \end{equation} with $p=1$. \end{lemma} \begin{lemma}\label{lem1b} Let $a$ and $b$ be coprime positive integers and \begin{equation}\label{def_f3} \begin{split} f_3(\omega) & = -b \sin\theta (\cos\phi+\cos(\phi-a\omega))\\ f_4(\omega) & = -a\sin\phi(\cos\theta+\cos(\theta+b\omega)) \end{split} \end{equation} with $\omega\in\mathbb{R}$ and $\theta,\phi\geq 0$. Then \begin{equation}f_3(\omega)+f_4(\omega) \leq 0 \mbox{ for all }\omega,\end{equation} provided (\ref{new_p_pi}) holds with $p=1$ when $a$ and $b$ are both odd and $p=1/2$ when either $a$ or $b$ is even. \end{lemma} \end{subtheorem} \begin{proof}[Proof of Lemma \ref{lem1a}] The term $f_1(\omega)$ is only positive when $[a\omega]_{2\pi}\in(0,2\phi)$. Similarly the term $f_2(\omega)$ is only positive when $[-b\omega]_{2\pi}\in(0,2\theta)$. When $p=1$ there is no $\omega$ such that $f_1(\omega)$ and $f_2(\omega)$ are simultaneously positive. Specifically, suppose $\omega$ is a frequency such that $a\omega+2m\pi\in(0,2\phi)$ and $-b\omega+2n\pi\in(0,2\theta)$ for some integers $m$ and $n$. Then $2(mb+na)\pi\in(0,2p\pi)$. This cannot be the case when $p\leq 1$, since $mb+na$ is an integer; when $a$ and $b$ are coprime it can be satisfied with $p>1$ provided $m$ and $n$ are chosen such that $mb+na=1$. Hence, with $p=1$, it suffices to show that $f_1(\omega)+f_2(\omega)\leq 0$ when $f_1(\omega)\geq 0$, i.e. on the intervals $0\leq[a\omega]_{2\pi}\leq2\phi$. A similar argument will follow by symmetry for intervals where $f_2(\omega)\geq 0$. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig1} \caption{Illustration of Lemma~\ref{lem1a} with $a=2$, $b=3$, $\theta=\pi/15$ and $\phi =\pi/4$. The functions $f_1(\cdot)$ and $f_2(\cdot)$ are never simultaneously positive.
We have the relations $f_1(\omega)=f_1(2\phi/a-\omega)$ when $\phi/a\leq\omega\leq2\phi/a$ and also $f_1(\omega)=f_1(\omega-\pi)$ when $\pi\leq\omega\leq\pi+2\phi/a$. Similarly $f_2(\omega)\leq f_2(2\phi/a-\omega)$ when $\phi/a\leq\omega\leq2\phi/a$, $f_2(\omega)\leq f_2(\omega-\pi)$ when $\pi\leq\omega\leq\pi+\phi/a$ and $f_2(\omega)\leq f_2(\pi+2\phi/a-\omega)$ when $\pi +\phi/a \leq\omega\leq\pi+2\phi/a$. Hence to show $f_1(\omega)+f_2(\omega)\leq 0$ when $f_1(\omega)\geq 0$, it suffices to consider the interval $0\leq\omega\leq \phi/a$.} \end{center} \end{figure} Consider first the interval $a\omega\in[0,\phi]$. We have \begin{equation}\begin{split} \frac{df_1}{d\omega}(\omega) & = ab\sin\theta \sin(\phi-a\omega)\\ \frac{df_2}{d\omega}(\omega) & =-ab\sin\phi\sin(\theta+b\omega) \end{split}\end{equation} But \begin{equation}\begin{split} \sin (\phi-a\omega) & \leq \sin\phi -a\omega \cos \phi\mbox{ (tangent bound, by concavity of $\sin$), and}\\ \sin(\theta+b\omega) & \geq \sin\theta +\frac{a\omega}{\phi}\left [\sin\left ( \theta+\frac{b\phi}{a}\right )-\sin\theta\right ]\\ & \mbox{\hspace{3 cm} (chord bound, by concavity of $\sin$)}. \end{split}\end{equation} Hence \begin{equation}\begin{split} \frac{df_1}{d\omega}(\omega)+\frac{df_2}{d\omega}(\omega) \leq & -a^2b\omega\sin\theta \cos \phi\\ & -\frac{a^2b\omega}{\phi}\left [\sin\left ( \theta+\frac{b\phi}{a}\right )-\sin\theta\right ]\sin\phi\\ \leq & 0. \end{split}\end{equation} Since $f_1(0)=f_2(0)=0$ it follows that $f_1(\omega)+f_2(\omega)\leq 0$ on the interval $a\omega\in[0, \phi]$. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig2} \caption{Illustration of Lemma~\ref{lem1a} with $a=2$, $b=3$, $\theta=\pi/15$ and $\phi =\pi/4$. On the interval $0\leq\omega\leq \phi/a$ the derivative of $f_1(\cdot)$ is bounded above by its gradient at $\omega=0$ while the derivative of $f_2(\cdot)$ is bounded above by the chord joining its two end points. It follows that $f_1(\cdot)+f_2(\cdot)$ is non-positive on this interval. } \end{center} \end{figure} Consider next the interval $a\omega\in[\phi,2 \phi]$. By symmetry $f_1(\omega) = f_1(2\phi/a-\omega)$ on this interval. Since $f_2(\omega)\leq 0$ on this interval we must have $f_2(\omega) \leq f_2(2\phi/a-\omega)$ on this same interval. Hence $f_1(\omega)+f_2(\omega)\leq 0$ on the interval $a\omega\in[\phi, 2\phi]$. Similar arguments follow: firstly on the intervals $[a\omega]_{2\pi}\in[0,\phi]$ where $f_1(\omega) = f_1([a\omega]_{2\pi}/a)$ and $f_2(\omega) \leq f_2([a\omega]_{2\pi}/a)$; secondly on the intervals $[a\omega]_{2\pi}\in[\phi,2\phi]$ where $f_1(\omega) = f_1((2\phi-[a\omega]_{2\pi})/a)$ and $f_2(\omega) \leq f_2((2\phi-[a\omega]_{2\pi})/a)$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem1b}] The term $f_3(\omega)$ is only positive when $[a\omega]_{2\pi}\in(\pi,\pi+2\phi)$. Similarly the term $f_4(\omega)$ is only positive when $[-b\omega]_{2\pi}\in(\pi,\pi+2\theta)$. Let us consider conditions for which they are simultaneously positive. Suppose $\omega$ is a frequency such that $a\omega+2m\pi\in(\pi,\pi+2\phi)$ and $-b\omega+2n\pi\in(\pi,\pi+2\theta)$ for some integers $m$ and $n$. Then $2(mb+na)\pi\in((a+b)\pi,(a+b+2p)\pi)$. If $a$ and $b$ are both odd, then $a+b$ is even and hence this can only be true when $p>1$. By contrast, if either $a$ or $b$ is even (but not both, as they are coprime) then $a+b$ is odd and we can choose $m$ and $n$ such that $2(mb+na)=a+b+1$ when $p>1/2$.
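For example, with $a=2$ and $b=3$, taking $m=1$ and $n=0$ gives $2(mb+na)=6=a+b+1$.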
It then follows that $f_3(\omega)+f_4(\omega)\leq 0$ for all $\omega$ by an argument similar to that in the proof of Lemma~\ref{lem1a}. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig3} \caption{Illustration of Lemma~\ref{lem1b} with $a=1$, $b=3$, $\theta=\pi/2$ and $\phi =\pi/7$. The functions $f_3(\cdot)$ and $f_4(\cdot)$ are never simultaneously positive. The function $f_3(\omega)$ is non-negative on the interval $\pi\leq\omega\leq\pi+2\phi/a$. The function $f_4(\omega)$ is non-negative on the interval $\pi-2\theta/b\leq \omega\leq\pi$.} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test01_fig4} \caption{Illustration of Lemma~\ref{lem1b} with $a=2$, $b=3$, $\theta=\pi/11$ and $\phi =\pi/11$. The functions $f_3(\cdot)$ and $f_4(\cdot)$ are never simultaneously positive. The function $f_3(\omega)$ is non-negative on the interval $\pi/2\leq\omega\leq\pi/2+2\phi/a$. The function $f_4(\omega)$ is non-negative on the interval $\pi-2\theta/b\leq \omega\leq\pi$.} \end{center} \end{figure} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:2a}] Without loss of generality suppose $a$ and $b$ are coprime, and consider the case where $ b\angle G(a j\omega_0 ) > a \angle G(b j\omega_0)$. Put \begin{align}\label{G_def} G(a j\omega_0 ) & = g_a e^{j(\pi-\phi)}\mbox{ and }\nonumber\\ G(b j\omega_0 ) & = g_b e^{j(-\pi+\theta)}\mbox{ with } \theta, \phi, g_a,g_b\in\mathbb{R}^+, \end{align} and \begin{equation}\label{p_ineq} a\theta+b\phi < p\pi, \end{equation} so that (\ref{G_ineq}) holds. Immediately we have \begin{equation}G(a j\omega_0 ) = -g_a e^{-j\phi}\mbox{ and }G(b j\omega_0 ) = -g_b e^{j\theta}.\end{equation} Theorem~\ref{Jthm_a} then states that if there exist non-negative $\lambda_a, \lambda_b$, with $\lambda_a+\lambda_b>0$, such that \begin{align}\label{N=2a} \lambda_a \mbox{Re} & \left \{ M^-_{\tau}(a j\omega_0) G(a j\omega_0) \right \}\nonumber\\ & + \lambda_b \mbox{Re}\left \{ M^-_{\tau}(b j\omega_0 ) G(b j\omega_0) \right \} \leq 0 \mbox{ for all }M^-_{\tau}\in\mathcal{M}^-, \end{align} then there is no suitable $M\in\mathcal{M}$ for $G$. If we set $\omega=\tau \omega_0$ we can write this as $f(\omega)\leq 0$ for all $\omega$ with \begin{align}\label{f_def1} f(\omega) = -\lambda_a g_a & (1-\cos a \omega)\cos\phi+\lambda_a g_a \sin a \omega \sin \phi\nonumber\\ & -\lambda_b g_b (1-\cos b \omega)\cos\theta-\lambda_b g_b \sin b \omega \sin \theta. \end{align} Choose \begin{equation}\label{def_lam} \lambda_a = g_b b \sin \theta\mbox{ and }\lambda_b = g_a a \sin \phi. \end{equation} Then \begin{equation}\label{def_f} f(\omega) = g_a g_b (f_1(\omega) + f_2(\omega)) \end{equation} with $f_1$ and $f_2$ given by (\ref{def_f1}). Hence by Lemma~\ref{lem1a} $f(\omega)\leq 0$ for all $\omega$ when $p=1$, so that (\ref{N=2a}) holds and there is no suitable $M\in\mathcal{M}$ for $G$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:2b}] As with Theorem~\ref{thm:2a}, suppose without loss of generality that $a$ and $b$ are coprime, and consider the case where $ b\angle G(a j\omega_0 ) > a \angle G(b j\omega_0)$. Let $G(a j\omega_0 )$ and $G (b j\omega_0 )$ be given by (\ref{G_def}) with (\ref{p_ineq}) so that (\ref{G_ineq}) holds.
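Note that (\ref{G_def}) and (\ref{p_ineq}) give
\begin{equation}
b\angle G(aj\omega_0) - a\angle G(bj\omega_0) = b(\pi-\phi)+a(\pi-\theta) = (a+b)\pi - (a\theta+b\phi) > (a+b-p)\pi,
\end{equation}
which is (\ref{G_ineq}) expressed in radians.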
Theorem~\ref{Jthm_b} then states that if there exist non-negative $\lambda_a, \lambda_b$, with $\lambda_a+\lambda_b>0$, such that (\ref{N=2a}) holds and in addition \begin{align}\label{N=2b} & \lambda_a \mbox{Re} \left \{ M^+_{\tau}(a j\omega_0) G(a j\omega_0) \right \}\nonumber\\ & + \lambda_b \mbox{Re}\left \{ M^+_{\tau}(b j\omega_0) G(b j\omega_0) \right \} \leq 0 \mbox{ for all }M^+_{\tau}\in\mathcal{M}^+, \end{align} then there is no suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$. For condition (\ref{N=2a}) the analysis is the same as for Theorem~\ref{thm:2a}; hence we require $p\leq 1$. We can write condition (\ref{N=2b}) as $f(\omega)\leq 0$ for all $\omega$, with \begin{equation}\begin{split} f(\omega) = -\lambda_a g_a & (1+\cos a \omega)\cos\phi-\lambda_a g_a \sin a \omega \sin \phi\\ & -\lambda_b g_b (1+\cos b \omega)\cos\theta+\lambda_b g_b \sin b \omega \sin \theta. \end{split}\end{equation} As before, choose $\lambda_a$ and $\lambda_b$ according to (\ref{def_lam}). Then \begin{equation} f(\omega) = g_a g_b (f_3(\omega) + f_4(\omega)) \end{equation} with $f_3$ and $f_4$ given by (\ref{def_f3}). Hence by Lemma~\ref{lem1b} $f(\omega)\leq 0$ for all $\omega$ when $p=1$ if both $a$ and $b$ are odd and when $p=1/2$ if either $a$ or $b$ is even, so that (\ref{N=2b}) holds and there is no suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$. \end{proof} \section{Main results: duality approach}\label{Main} Applying Theorem~\ref{Jthm_a} or~\ref{Jthm_b} with $N=1$ yields no significant result beyond the trivial statement that if $\mbox{Re}[G(j\omega)]<0$ and $\mbox{Im}[G(j\omega)]=0$ at any $\omega$ then there can be no suitable multiplier. This is in contrast with the discrete-time case where there are non-trivial phase limitations at single frequencies \cite{Zhang:21}. Even with $N=2$, it is not straightforward to apply Theorems~\ref{Jthm_a} or~\ref{Jthm_b} directly, as they require an optimization at each pair of frequencies. Nevertheless, setting $N=2$ yields the following phase limitations: \begin{subtheorem}{theorem} \begin{theorem}\label{thm:2a} Let $a, b \in \mathbb{Z}^+$ and let $G$ be causal, LTI and stable. If there exists $\omega_0\in\mathbb{R}$ such that \begin{align}\label{G_ineq} \left | \frac{ b\angle G(aj\omega_0 ) - a \angle G(bj\omega_0) } {a+b-p} \right |> 180^o, \end{align} with $p=1$ then there is no suitable $M\in\mathcal{M}$ for $G$. \end{theorem} \begin{theorem}\label{thm:2b} Let $a, b \in \mathbb{Z}^+$ and let $G$ be causal, LTI and stable. If there exists $\omega_0\in\mathbb{R}$ such that (\ref{G_ineq}) holds where $p=1$ when both $a$ and $b$ are odd but $p=1/2$ if either $a$ or $b$ is even, then there is no suitable $M\in\mathcal{M}_{\mbox{odd}}$ for $G$. \end{theorem} \end{subtheorem} Figs~\ref{test02_fig1} and~\ref{test02_fig2} illustrate Theorems~\ref{thm:2a} and~\ref{thm:2b} respectively for the specific case that $\angle G(j\omega_a) > 170^o$ for some frequency $\omega_a$. The results put limitations on the phase of $G$ at frequencies that are rational multiples of $\omega_a$ (i.e. at $b\omega_0$ where $\omega_a=a\omega_0$ and where $a$ and $b$ are coprime integers). \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test02_fig1} \caption{Forbidden regions for the phase of $G(j\omega)$ when the phase at some $\omega_a$ is greater than $170^o$.
}\label{test02_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test02_fig2} \caption{Forbidden regions for the phase of $G(j\omega)$ when the phase at some $\omega_a$ is greater than $170^o$ (odd nonlinearity). }\label{test02_fig2} \end{center} \end{figure} The results may also be expressed as phase limitations on the multipliers themselves. Counterparts to Theorems~\ref{thm:2a} and~\ref{thm:2b} follow as corollaries and are equivalent results. \begin{subtheorem}{corollary} \begin{corollary}\label{cor:2a} Let $a, b \in \mathbb{Z}^+$ and let $M\in\mathcal{M}$. Then \begin{equation}\label{cor_ineq} \left |\frac{b\angle M(aj\omega )-a\angle M(bj\omega )}{a/2+b/2-p}\right | \leq 180^o, \end{equation} for all $\omega\in\mathbb{R}$ with $p=1$. \end{corollary} \begin{corollary}\label{cor:2b} Let $a, b \in \mathbb{Z}^+$ and let $M\in\mathcal{M}_{\mbox{odd}}$. Then inequality (\ref{cor_ineq}) holds for all $\omega\in\mathbb{R}$ where $p=1$ when both $a$ and $b$ are odd but $p=1/2$ if either $a$ or $b$ is even. \end{corollary} \begin{proof} Immediate: see Remark~\ref{rem_phase}. \end{proof} \end{subtheorem} Figs~\ref{test04_fig1} and~\ref{test04_fig2} are the counterparts to Figs~\ref{test02_fig1} and~\ref{test02_fig2} (if the phase of $G$ is greater than $170^o$ at some $\omega_a$ then any suitable multiplier $M$ must have phase less than $-80^o$ at $\omega_a$). Corollaries~\ref{cor:2a} and~\ref{cor:2b} can also be visualised for specific values of $a$ and $b$ with plots of the phase of $M(bj\omega_0 )$ against the phase of $M(aj\omega_0 )$ as $\omega_0$ varies: see Figs~\ref{Fig03_1} to~\ref{Fig03_3}. Fig~\ref{Fig03_1} also shows boundary points parameterised by $\kappa$, which is associated with the frequency interval approach and discussed in Section~\ref{Int}. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test04_fig1a} \caption{Forbidden regions for the phase of $M\in\mathcal{M}$ when the phase at some $\omega_a$ is less than $-80^o$. }\label{test04_fig1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test04_fig2a} \caption{Forbidden regions for the phase of $M\in\mathcal{M}_{\mbox{odd}}$ when the phase at some $\omega_a$ is less than $-80^o$. }\label{test04_fig2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test03_fig1c} \caption{Phase vs phase plot illustrating Corollary~\ref{cor:2a} with $a=2$, $b=3$. If $M\in\mathcal{M}$ then the pink regions are forbidden. The phase vs phase plots of elements of $\mathcal{M}^-$ are shown in magenta. Also shown are the points $(\arctan \overline{\rho}^c,-\arctan \kappa \overline{\rho}^c)$ when $a=2$ and $b=3$, when $\kappa$ takes the values $0.2$, $1$ and $5$ and when $\overline{\rho}^c$ is defined as in Corollary~\ref{m_corollary_a}. }\label{Fig03_1} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test03_fig2a} \caption{Phase vs phase plot illustrating both Corollaries~\ref{cor:2a} and~\ref{cor:2b} with $a=1$, $b=3$. If $M\in\mathcal{M}$ or $M\in\mathcal{M}_{\mbox{odd}}$ then the pink regions are forbidden. The phase vs phase plots of elements of $\mathcal{M}^-$ and $\mathcal{M}^+$ coincide and are shown in magenta.}\label{Fig03_2} \end{center} \end{figure} \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test03_fig3a} \caption{Phase vs phase plot illustrating Corollary~\ref{cor:2b} with $a=2$, $b=3$.
If $M\in\mathcal{M}_{\mbox{odd}}$ then the pink regions are forbidden. The phase vs phase plots of elements of $\mathcal{M}^-$ are shown in magenta (compare Fig~\ref{Fig03_1}) while the phase vs phase plots of elements of $\mathcal{M}^+$ are shown in cyan.}\label{Fig03_3} \end{center} \end{figure} The bounds are tight in the sense that if $a$ and $b$ are coprime then there exist (many) $M_{\tau}^-\in\mathcal{M}^-$ such that $b\angle M_{\tau}^-(a j\omega_0 )-a\angle M_{\tau}^-(b j\omega_0) = (a/2+b/2-1)180^o$. Specifically this holds for any $\tau$ that satisfies $[a\omega_0\tau]_{[0,2\pi]}>2\pi-2\pi/b$ and $[b\omega_0\tau]_{[0,2\pi]} < 2\pi/a$. Similarly if $a$ and $b$ are coprime and either $a$ or $b$ is even there exist (many) $M_{\tau}^+\in\mathcal{M}^+$ such that $b\angle M_{\tau}^+(a j\omega_0)-a\angle M_{\tau}^+(b j\omega_0 ) = (a/2+b/2-1/2)180^o$. Specifically this holds for any $\tau$ that satisfies $\pi-\pi/b <[a\omega_0\tau]_{[0,2\pi]}<\pi$ and $\pi<[b\omega_0\tau]_{[0,2\pi]}<\pi+\pi/a$. In the examples below the phases of the objects $G(a j\omega )$ and $G(b j\omega )$ are computed separately. Each lies in the interval $(-180^o, 180^o)$ and so may be computed without phase-wrapping ambiguity at local points or over local regions. Provided the transfer functions are sufficiently smooth they can be computed accurately. Nevertheless, it is possible to write (\ref{G_ineq}) in terms of a single transfer function since \begin{equation} b\angle G(a j\omega )-a\angle G(b j\omega ) = \angle \bar{G}_{a,b}(j\omega) \end{equation} where \begin{equation} \bar{G}_{a,b}(s) = \frac{G( a s)^b}{G( b s)^a}. \end{equation} It thus requires, for given values of $a$ and $b$, the computation of the maximum (or minimum) phase of a single transfer function. In this sense the computational requirement is comparable to that of the off-axis circle criterion \cite{Cho:68}, a classical tool. It may also be necessary to compute the criterion for several positive integer values of $a$ and $b$. The number of different values is finite and can be bounded. Suppose the maximum phase of $G$ is $180^o-\phi_{\min}$ and the minimum phase is $-180^o+\theta_{\max}$, where $\phi_{\min}>0, \theta_{\max}>0$. Then (\ref{G_ineq}) can only be satisfied if $a \theta_{\max} +b\phi_{\min} < p\times 180^o$. So it is sufficient to test (say) all $a<p/\theta_{\max} \times 180^o$ and $b<p/\phi_{\min} \times 180^o$, which yields a finite set of values. \section{Relation to the frequency interval approach}\label{Int} Corollaries \ref{cor:2a} and \ref{cor:2b} may be interpreted as saying that given an upper (or lower) threshold on the phase of a suitable multiplier $M$ at frequency $a\omega_0$ there is a lower (or upper) threshold on the phase of $M$ at frequency $b\omega_0$. It is natural to compare this with the frequency interval approach, where an upper (or lower) threshold on the phase of $M$ over an interval $[\alpha,\beta]$ implies a lower (or upper) threshold on the phase of $M$ over the interval $[\gamma,\delta]$. Let us begin by considering Theorems~\ref{Meg_a} and~\ref{Meg_b} in the limit as the length of the intervals becomes zero. We obtain the following corollaries. The results require the ratio of the limiting frequencies to be rational.
\begin{subtheorem}{corollary} \begin{corollary}\label{m_corollary_a} { \everymath={\displaystyle} For $t>0$, define \begin{equation} q_-(t) = \left \{ \begin{array}{l} \frac{b\sin (a t)-a\sin (b t)}{ b+\kappa a- b \cos (a t)-\kappa a\cos(b t)} \mbox{ for }[t]_{[0,\pi]}\neq 0,\\ \\ 0 \mbox{ for } [t]_{[0,\pi]}=0, \end{array} \right . \end{equation} where $a$ and $b$ are coprime and $\kappa>0$. }Define also \begin{equation} \overline{\rho}^c = \sup_{t>0} |q_-(t)|. \end{equation} Let $M$ be an OZF multiplier and suppose \begin{equation} \mbox{Im}(M(aj\omega_0))>\rho\,\mbox{Re}(M(aj\omega_0)), \end{equation} and \begin{equation} \mbox{Im}(M(bj\omega_0))<-\kappa\rho\,\mbox{Re}(M(bj\omega_0)), \end{equation} for some $\omega_0>0$ and $\rho>0$. Then $\rho<\overline{\rho}^c$ if $M\in\mathcal{M}$. \end{corollary} \begin{corollary}\label{m_corollary_b} { \everymath={\displaystyle} In addition to the conditions of Corollary~\ref{m_corollary_a}, define \begin{equation} q_+(t) = \left \{ \begin{array}{l} \frac{b\sin (a t)-a\sin (b t)}{ b+\kappa a+b \cos (a t)+\kappa a\cos(b t)} \mbox{ for }[t]_{[0,\pi]}\neq 0,\\ \\ 0 \mbox{ for } [t]_{[0,\pi]}=0, \end{array} \right . \end{equation} and \begin{equation} \overline{\rho}^c_{\mbox{odd}} = \max\left (\sup_{t>0}|q_-(t) |,\sup_{t>0}|q_+(t)|\right ) . \end{equation} Then $\rho<\overline{\rho}^c_{\mbox{odd}}$ if $M\in\mathcal{M}_{\mbox{odd}}$. } \end{corollary} \end{subtheorem} \begin{remark} Equivalently, we can say if $\angle M(aj\omega_0)>\arctan \rho$ and $\angle M(b j \omega_0)<-\arctan \kappa \rho$ then $\rho <\overline{\rho}^c$ if $M\in\mathcal{M}$ and $\rho <\overline{\rho}^c_{\mbox{odd}}$ if $M\in\mathcal{M}_{\mbox{odd}}$. \end{remark} It turns out that this is equivalent to the phase condition derived via the duality approach. The inequality boundaries $\angle M(aj\omega_0)=\arctan \overline{\rho}^c$ and $\angle M(b j \omega_0)=-\arctan \kappa \overline{\rho}^c$ (or $\angle M(aj\omega_0)=\arctan \overline{\rho}^c_{\mbox{odd}}$ and $\angle M(b j\omega_0)=-\arctan \kappa \overline{\rho}^c_{\mbox{odd}}$) are the same as those for Corollary~\ref{cor:2a} (or~\ref{cor:2b}), as illustrated in Fig~\ref{Fig03_1}. Specifically we may say: \begin{subtheorem}{theorem} \begin{theorem}\label{Meg_equiv_a} Corollary~\ref{m_corollary_a} and Theorem~\ref{thm:2a} are equivalent results. \end{theorem} \begin{theorem}\label{Meg_equiv_b} Corollary~\ref{m_corollary_b} and Theorem~\ref{thm:2b} are equivalent results. \end{theorem} \end{subtheorem} \section{Introduction} The continuous-time OZF (O'Shea-Zames-Falb) multipliers were discovered by O'Shea~\cite{OShea67} and formalised by Zames and Falb \cite{Zames68}. They preserve the positivity of monotone memoryless nonlinearities. Hence they can be used, via loop transformation, to establish the absolute stability of Lurye systems with slope-restricted memoryless nonlinearities. An overview is given in \cite{Carrasco:EJC}. Recent interest is largely driven by their compatibility with the integral quadratic constraint (IQC) framework of Megretski and Rantzer \cite{Megretski97} and the availability of computational searches \cite{Safonov:87, Gapski:94,Chang:12,Chen:95,Chen:96,Turner2009,Carrasco12,Turner:12,Carrasco:14}. A modification of the search proposed in \cite{Chen:95} is used in the Matlab IQC toolbox \cite{Kao:04} and analysed by Veenman and Scherer \cite{Veenman14}. No single search method outperforms the others, and often a hand-tailored search outperforms an automated search \cite{Carrasco:14}. This motivates the analysis of conditions where a multiplier cannot exist.
There are two main approaches in the literature. J\"{o}nsson and Laiou \cite{Jonsson:96} give a condition that must be satisfied at a number of isolated frequencies. Their result is a particular case of a more general analysis based on duality in an optimization framework \cite{Jonsson96thesis,Jonsson97,Jonsson99}; we will refer to this as the ``duality approach.'' Their result requires a non-trivial search over a finite number of parameters. By contrast Megretski \cite{Megretski:95} gives a threshold such that the phase of a multiplier cannot be simultaneously above the threshold over a certain frequency interval and below its negative value on another. The idea is generalised in \cite{Wang:18}, where in particular the threshold for the second interval is allowed to have a different value. We will refer to this as the ``frequency interval approach.'' Both the duality approach and the frequency interval approach lead to powerful and useful results, but neither allows a systematic approach. With respect to the duality approach J\"{o}nsson states \cite{Jonsson96thesis} ``it is in most applications hard to find a suitable frequency grid for the application of the results.'' With respect to the interval approach, in \cite{Wang:18} we conclude that the most insightful choice of interval remains open. In this paper we present a simple phase condition on two frequencies whose ratio is rational. The condition can be tested systematically. At each frequency ratio the condition leads to a graphical criterion similar to the off-axis circle criterion \cite{Cho:68} in that it can be expressed as a bound on the phase of a transfer function. We derive the condition via the duality approach, but we also show that it is equivalent to a limiting case of the frequency interval approach. We illustrate the criterion on three examples: we show it gives significantly better results for the numerical example in \cite{Jonsson:96}; we show it gives new bounds for the gain with O'Shea's classical example \cite{OShea67,Carrasco:EJC}; we provide an example of a third-order transfer function with delay that does not satisfy the Kalman Conjecture. The structure of this paper is as follows. Section~\ref{Prem} provides the necessary background material and includes the following minor contribution: Theorems~\ref{Jthm_a} and~\ref{Jthm_b} provide frequency conditions similar in spirit to the duality approach of \cite{Jonsson:96}, but more widely applicable; specifically the conditions allow both the system transfer function and the multiplier to be irrational. The main results of the paper are presented in Section~\ref{Main}. Theorems~\ref{thm:2a} and~\ref{thm:2b} give a phase condition that has a simple graphical interpretation and can be implemented systematically. We prove Theorems~\ref{thm:2a} and~\ref{thm:2b} via the duality approach. We discuss both the graphical interpretation and the numerical implementation of Theorems~\ref{thm:2a} and~\ref{thm:2b}. In Section~\ref{Int} we show that the results can also be derived via the frequency interval approach: Corollaries~\ref{m_corollary_a} and~\ref{m_corollary_b} provide a version of the interval approach \cite{Wang:18} for the limiting case where the length of interval goes to zero; Theorems~\ref{Meg_equiv_a} and~\ref{Meg_equiv_b} state these corollaries are respectively equivalent to Theorems~\ref{thm:2a} and~\ref{thm:2b}.
Section~\ref{Exa} includes three examples: the first shows we achieve improved results over those reported in \cite{Jonsson:96}; the second is the benchmark problem of O'Shea~\cite{OShea67} where we obtain improved results over those reported in \cite{Wang:18}; finally, in the third, we show that a third-order system with delay provides a counterexample to the Kalman Conjecture. All proofs, where not immediate, are given in the Appendix.
The turning points of $r_-$ where $n_-(t)=0$ take the values of $r_-^\dagger $, whose bounds are also shown.}\label{test23a} \end{center} \end{figure} \item[Case 2] Define \begin{equation}q^\dagger_-(t) = \frac{ (b^2-a^2)\sin at } { a^2+b^2+\kappa a b -(b^2-a^2)\cos at } \end{equation} and \begin{equation}r^\dagger_-(t) = b \arctan q^\dagger_-(t) + a \arctan \kappa q^\dagger_-(t).\end{equation} Then $q_-(t_2)=q^\dagger_-(t_2)$ and $r_-(t_2)= r_-^\dagger(t_2)$ for all $t_2$ satisfying $n_-(t_2)=0$. It follows that $|r_-(t_2)|\leq |\bar{r}^\dagger|$ for all such $t_2$ where \begin{equation}\label{def_rbar} \begin{split} \bar{r}^\dagger & = b \arctan \bar{q}^\dagger + a \arctan \kappa \bar{q}^\dagger\\ \bar{q}^\dagger & = \frac{b^2-a^2}{2\sqrt{ab(a+\kappa b)(b+\kappa a)}}. \end{split} \end{equation} With some abuse of notation, write $\bar{r}^\dagger=\bar{r}^\dagger(\kappa)$; i.e., consider $\bar{r}^\dagger$ as a function of $\kappa$. We find \begin{equation}\begin{split} \frac{d}{d\kappa}\bar{r}^\dagger(\kappa) = & \frac{-(a+b\kappa )(a^2-b^2)^2 } {(2ab+ (a^2+b^2)\kappa)(2ab\kappa +a^2+b^2) } \\ & \times \sqrt{ \frac{ab}{(a+b\kappa)(a\kappa+b)} }.\end{split}\end{equation} Hence $|\bar{r}^\dagger(\kappa)| \leq \max(|\bar{r}^\dagger(0)|,\lim_{\kappa\rightarrow \infty}|\bar{r}^\dagger(\kappa)|)$. Furthermore \begin{equation}\begin{split} \bar{r}^\dagger(0) & = b\arctan \left (\frac{b^2-a^2}{2ab}\right )\\ \lim_{\kappa\rightarrow \infty} \bar{r}^\dagger(\kappa) & = a\arctan \left (\frac{b^2-a^2}{2ab}\right ). \end{split}\end{equation} Hence it suffices to show \begin{equation}\max(a,b) \arctan \left |\frac{b^2-a^2}{2ab}\right | \leq (a+b-2)\frac{\pi}{2}.\end{equation} If $a$ and $b$ are both greater than 1 then this is immediate, since in this case $\max(a,b)\leq a+b-2$. Otherwise $\min(a,b)=1$ and, since the left hand side is symmetric in $a$ and $b$, we may take $a=1$; it then suffices to show \begin{equation} b \arctan \frac{b^2-1}{2b} \leq (b-1)\frac{\pi}{2}\end{equation} or equivalently, with $b\geq 2$, that \begin{equation} \frac{b^2-1}{2b}\sin\left ( \frac{\pi}{2b}\right ) \leq \cos\left ( \frac{\pi}{2b}\right ).\end{equation} We can quickly check \begin{equation}\frac{b^2-1}{2b}\sin\left ( \frac{\pi}{2b}\right ) \leq \frac{(b^2-1)\pi}{4b^2} \leq 1 - \frac{\pi^2}{8b^2} \leq \cos\left ( \frac{\pi}{2b}\right ). \end{equation} A numerical verification of this chain is also included at the end of the Appendix. \end{description} \end{proof} \begin{proof}[Proof of Theorem \ref{Meg_equiv_b}] The proof is similar to that for Theorem~\ref{Meg_equiv_a}. We have already established appropriate bounds for $r_-(t)$. If we define \begin{equation}r_+(t) = b \arctan q_+(t) + a \arctan \kappa q_+(t)\end{equation} then we need to show it is also bounded appropriately. Similar to the previous case, the turning points of $r_+(t)$ occur at the same values of $t$ as the turning points of $q_+(t)$. When $[t]_\pi\neq 0$ the derivative of $q_+(t)$ is given by \begin{equation}\frac{d}{dt}q_+(t) =ab \frac{m_+(t)n_+(t)}{d_+(t)^2}\end{equation} with \begin{equation}\begin{split} m_+(t) & = \kappa \sin \frac{at}{2} \cos \frac{bt}{2} + \sin\frac{bt}{2}\cos\frac{at}{2}\\ n_+(t) & = b \sin\frac{bt}{2}\cos\frac{at}{2} - a\sin \frac{at}{2} \cos \frac{bt}{2} \\ d_+(t) & = b\cos^2 \frac{at}{2} + \kappa a \cos^2 \frac{bt}{2}. \end{split}\end{equation} We will consider the cases $m_+(t)=0$ and $n_+(t)=0$ separately.
This time we use the identity \begin{equation}\begin{split} q_+(t) & = \frac{ b\tan\frac{at}{2}\left (1+\tan^2\frac{bt}{2}\right ) - a\tan\frac{bt}{2}\left (1+\tan^2\frac{at}{2}\right ) } { b \left (1+\tan^2\frac{bt}{2}\right ) +\kappa a \left (1+\tan^2\frac{at}{2}\right ) }. \end{split}\end{equation} \begin{description} \item[Case 1] Suppose $t_1$ satisfies $m_+(t_1)=0$. Then \begin{equation}\begin{split} q_+(t_1) & = \tan \frac{at_1}{2} \end{split}\end{equation} and \begin{equation}\begin{split} \kappa q_+(t_1) & = - \tan \frac{bt_1}{2}. \end{split}\end{equation} Hence if we define \begin{align}\label{rstarplus} r_+^*(t) = b\left [ \frac{at}{2}\right ]_{[-\pi/2,\pi/2]}-a\left [ \frac{bt}{2}\right ]_{[-\pi/2,\pi/2]} \end{align} for $t \in [0,2\pi]$, we find $r_+(t_1) = r_+^*(t_1)$ for all $t_1$ satisfying $m_+(t_1)=0$. The function $r_+^*(\cdot)$ is piecewise constant, taking values $(-a-b-1+2\lambda)\pi/2$ with $\lambda = 1,\ldots,a+b$ when either $a$ or $b$ is even, and values $(-a-b+2\lambda)\pi/2$ with $\lambda = 1,\ldots,a+b-1$ when $a$ and $b$ are both odd. On each piecewise constant interval there is a $t_1$ satisfying $m_+(t_1)=0$. Hence these turning points of $r_+(t)$ lie within the bounds $\pm(a+b-1)\frac{\pi}{2}$ (if either $a$ or $b$ is even) or $\pm(a+b-2)\frac{\pi}{2}$ (if $a$ and $b$ are both odd) with at least one on the bound. \begin{figure}[htbp] \begin{center} \includegraphics[width = 0.9\linewidth]{test25_1} \caption{ Phase functions $r_+$ (blue), $r_+^*$ (red) and $r_+^\dagger$ (green) with $a=3$ and $b=10$. The turning points of $r_+$ where $m_+(t)=0$ take the value $(a+b+1-2\lambda)\pi/2$ with $\lambda$ an integer. The function $r_+^*(\cdot)$ is piecewise constant and takes these same values. The turning points of $r_+$ where $n_+(t)=0$ take the values of $r_+^\dagger $, whose bounds are also shown. }\label{test25} \end{center} \end{figure} \item[Case 2] Define \begin{equation}q^\dagger_+(t) = \frac{ (b^2-a^2)\sin at } { a^2+b^2+\kappa a b +(b^2-a^2)\cos at } \end{equation} and \begin{equation}r^\dagger_+(t) = b \arctan q^\dagger_+(t) + a \arctan \kappa q^\dagger_+(t).\end{equation} Then $q_+(t_2)=q^\dagger_+(t_2)$ and $r_+(t_2)= r_+^\dagger(t_2)$ for all $t_2$ satisfying $n_+(t_2)=0$. It follows that $|r_+(t_2)|\leq |\bar{r}^\dagger|$ for all such $t_2$ where $\bar{r}^\dagger$ is given by (\ref{def_rbar}). As we have the same bounds as before, the previous analysis establishes that these turning points lie within the bounds. \end{description} \end{proof} \subsection{Proofs of Theorems~\ref{Jthm_a} and~\ref{Jthm_b}} \begin{proof}[Proof of Theorem~\ref{Jthm_a}] Let $M\in\mathcal{M}$ take the form of Definition~\ref{def2a}. Then \begin{equation} \begin{split} M(j\omega) & = m_0-\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\,dt-\sum_{i=1}^{\infty}h_ie^{-j\omega t_i},\\ & = \bar{m}_0 -\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\,dt+\sum_{i=1}^{\infty}h_iM^-_{t_i}(j\omega), \end{split} \end{equation} where \begin{equation}\label{barm_ineq} \bar{m}_0 = m_0-\sum_{i=1}^{\infty}h_i\geq \| h\|_1, \end{equation} and \begin{equation} \begin{split} \sum_{r=1}^N \lambda_r \mbox{Re} & \left \{M(j\omega_r) G(j\omega_r) \right \} = \bar{m}_0 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \} \\ & - \int_{-\infty}^{\infty}h(t) \sum_{r=1}^N \lambda_r \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \} \, dt\\ & + \sum_{i=1}^{\infty}h_i \sum_{r=1}^N \lambda_r \mbox{Re}\left \{M^-_{t_i}(j\omega_r) G(j\omega_r) \right \}.
\end{split} \end{equation} Suppose the conditions of Theorem~\ref{Jthm_a} hold. Then, by (\ref{thm1_ineq}), \begin{equation} \begin{split} \sum_{r=1}^N \lambda_r \mbox{Re} & \left \{M(j\omega_r) G(j\omega_r) \right \} \leq \bar{m}_0 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \} \\ & - \int_{-\infty}^{\infty}h(t) \sum_{r=1}^N \lambda_r \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \} \, dt.\label{no_sum} \end{split} \end{equation} In addition, we can write (\ref{thm1_ineq}) as \begin{align}\label{thm1_ineq_alt} \sum_{r=1}^N\lambda_r & \mbox{Re}\left \{ G(j\omega_r) \right \} \nonumber\\ & \leq \sum_{r=1}^N\lambda_r \mbox{Re}\left \{e^{-j\omega_r\tau} G(j\omega_r) \right \} \mbox{ for all } \tau\in \mathbb{R}\backslash 0. \end{align} Averaging this expression over $\tau$ yields \begin{equation}\label{ineq2} \begin{split} \sum_{r=1}^N\lambda_r & \mbox{Re}\left \{ G(j\omega_r) \right \} =\lim_{T\rightarrow\infty}\frac{1}{T}\int_0^T \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \}\, d\tau\\ & \leq \lim_{T\rightarrow\infty}\frac{1}{T}\int_0^T\sum_{r=1}^N\lambda_r \mbox{Re}\left \{e^{-j\omega_r\tau} G(j\omega_r) \right \}\,d\tau \\ & = \sum_{r=1}^N\lambda_r \mbox{Re}\left \{ \lim_{T\rightarrow\infty}\frac{1}{T}\int_0^Te^{-j\omega_r\tau}\, d\tau\, G(j\omega_r) \right \} \\ & = 0. \end{split} \end{equation} From (\ref{barm_ineq}) and (\ref{ineq2}) we obtain \begin{equation} \bar{m}_0 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r)\right \} \leq \|h\|_1 \sum_{r=1}^N \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \}. \end{equation} This, with (\ref{thm1_ineq_alt}), yields \begin{align}\label{no_sum2} \bar{m}_0 \sum_{r=1}^N \lambda_r & \mbox{Re}\left \{ G(j\omega_r)\right \}\nonumber\\ & \leq \int_{-\infty}^{\infty} h(t) \sum_{r=1}^N \lambda_r \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \}\,dt. \end{align} Together (\ref{no_sum}) and (\ref{no_sum2}) yield \begin{equation}\label{final_thm1a} \sum_{r=1}^N \lambda_r \mbox{Re}\left \{M(j\omega_r ) G(j\omega_r) \right \} \leq 0. \end{equation} It follows from Definition~\ref{def1} that $M$ is not suitable for $G$. \end{proof} \begin{proof}[Proof of Theorem~\ref{Jthm_b}] Let $M\in\mathcal{M}$ take the form of Definition~\ref{def2b}. Define $\mathcal{H}^+=\{i\in\mathbb{Z}^+\mbox{ such that }h_i\geq 0\}$ and $\mathcal{H}^-=\{i\in\mathbb{Z}^+\mbox{ such that }h_i< 0\}$. Then \begin{equation} \begin{split} M(j\omega) = & \bar{m}_0 -\int_{-\infty}^{\infty}h(t)e^{-j\omega t}\,dt\\ & +\sum_{i\in\mathcal{H}^+}h_iM^-_{t_i}(j\omega)+\sum_{i\in\mathcal{H}^-}|h_i|M^+_{t_i}(j\omega) \end{split} \end{equation} where this time \begin{equation}\label{barm_ineq_b} \bar{m}_0 = m_0-\sum_{i=1}^{\infty}|h_i|\geq \| h\|_1. \end{equation} Suppose the conditions of both Theorems~\ref{Jthm_a} and~\ref{Jthm_b} hold. Then (\ref{thm1_ineq}) and (\ref{thm1b_ineq}) yield (\ref{no_sum}) as before, but with $\bar{m}_0$ given by (\ref{barm_ineq_b}). Furthermore, we can write (\ref{thm1_ineq}) and (\ref{thm1b_ineq}) together as \begin{equation}\label{thm1_ineq_alt_b} \begin{split} \sum_{r=1}^N & \lambda_r \mbox{Re}\left \{ G(j\omega_r) \right \} \\ & \leq - \sum_{r=1}^N\lambda_r \left | \mbox{Re}\left \{e^{-j\omega_r\tau} G(j\omega_r) \right \} \right |\mbox{ for all } \tau\in \mathbb{R}\backslash 0.
\end{split} \end{equation} Since (\ref{ineq2}) still holds, from (\ref{barm_ineq_b}), (\ref{thm1_ineq_alt_b}) and (\ref{ineq2}) we obtain \begin{align}\label{no_sum2b} \bar{m}_0 \sum_{r=1}^N & \lambda_r \mbox{Re}\left \{ G(j\omega_r)\right \}\nonumber\\ & +\int_{-\infty}^{\infty} |h(t)| \sum_{r=1}^N \lambda_r \left | \mbox{Re}\left \{e^{-j\omega_r t} G(j\omega_r) \right \} \right | \,dt \leq 0. \end{align} Together (\ref{no_sum}) and (\ref{no_sum2b}) yield (\ref{final_thm1a}) as before. It follows from Definition~\ref{def1} that $M$ is not suitable for $G$. \end{proof} \subsection{Proofs of Theorems~\ref{thm:2a} and~\ref{thm:2b}} \input{Part5_Jonsson01} \subsection{Proofs of Corollaries~\ref{m_corollary_a} and~\ref{m_corollary_b}} \begin{proof}[Proof of Corollary~\ref{m_corollary_a}] Without loss of generality let $a<b$. The result follows by setting the intervals \begin{align} [\alpha,\beta] =[a\omega_0 - \varepsilon, a\omega_0 + \varepsilon]\mbox{ and }[\gamma,\delta] =[b\omega_0 - \varepsilon, b\omega_0 + \varepsilon] \end{align} with $\varepsilon>0$ and taking the limit as $\varepsilon \rightarrow 0$. Specifically we find \begin{equation} \begin{split} \psi(t) & = \frac{2\lambda}{t}\sin(a\omega_0t)\sin(\varepsilon t) -\frac{2\mu}{t}\sin(b\omega_0t)\sin(\varepsilon t) \\ \phi(t) & = 2\varepsilon\lambda+2\varepsilon\kappa\mu+\phi_1(t)\\ \phi_1(t) & = -\frac{2\lambda}{t}\cos(a\omega_0t)\sin(\varepsilon t) -\frac{2\kappa\mu}{t}\cos(b\omega_0t)\sin(\varepsilon t), \end{split} \end{equation} with $a\lambda=b\mu$. Hence \begin{equation} \overline{\rho}^c = \lim_{\varepsilon\rightarrow 0}\rho^c. \end{equation} \end{proof} \begin{proof}[Proof of Corollary~\ref{m_corollary_b}] Following the proof of Corollary~\ref{m_corollary_a}, in addition we find \begin{equation} \tilde{\phi}(t) = 2\varepsilon\lambda+2\varepsilon\kappa\mu-|\phi_1(t)| \end{equation} and hence \begin{equation} \overline{\rho}^c_{\mbox{odd}} = \lim_{\varepsilon\rightarrow 0}\rho^c_{\mbox{odd}}. \end{equation} \end{proof} \subsection{Proofs of Theorems~\ref{Meg_equiv_a} and~\ref{Meg_equiv_b}} \input{Part6_Megretski01}
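As a numerical sanity check of the inequality chain used in Case 2 of the proof of Theorem~\ref{Meg_equiv_a}, the following sketch (Python; an informal aid, not part of the formal argument) evaluates each term of the chain over a range of $b$:
\begin{verbatim}
import math

# Check: (b^2-1)/(2b) sin(pi/(2b)) <= (b^2-1) pi/(4 b^2)
#        <= 1 - pi^2/(8 b^2) <= cos(pi/(2b)) for b >= 2.
for b in range(2, 1001):
    t1 = (b**2 - 1) / (2 * b) * math.sin(math.pi / (2 * b))
    t2 = (b**2 - 1) * math.pi / (4 * b**2)
    t3 = 1 - math.pi**2 / (8 * b**2)
    t4 = math.cos(math.pi / (2 * b))
    assert t1 <= t2 <= t3 <= t4, b
print("inequality chain verified for b = 2, ..., 1000")
\end{verbatim}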
\section{Introduction}\label{sec:introduction} \subsection{Background}\label{sec:background} \IEEEPARstart{W}{e} study regression (prediction, learning and similar) in time series settings where we receive a data sequence related to a target signal and estimate the signal's next values in an online manner. This problem is extensively studied in the machine learning, computational learning theory and signal processing literatures under different names \cite{sayeed2021deep}, \cite{body_electric}, \cite{gharehbaghi2017deep}. An effective way in such problems, as shown in many recent competitions \cite{makridakis2020m4, makridakis2022m5}, is using ensemble models for improved performance \cite{ensemble_is_better}, where many base predictor models are first trained separately or successively and then inputted to a merging strategy to get the final prediction. Since there seldom exists a true data generating process in practical time series applications, the ensembling of individual predictors tends to produce better performance than the individual models by exploiting diversity, as is well known in the machine learning literature \cite{ensemble_is_better_2}. Common strategies for ensembling include boosting \cite{gbdt_friedman} and bootstrap aggregation \cite{breiman1996bagging} where the base models are intentionally kept weak either by constraining them by design to underfit or by restricting the amount of data samples and/or features used by each base model. The resultant ensembled prediction, however, is a direct average or sum of the predictions of the base models, e.g., taking the mean of the weak predictions in regression, or taking the majority vote in classification. Furthermore, the base models in these ``built-in'' ensemblers are often chosen to be the same predictor in practice, e.g., in random forests \cite{breiman2001random}, gradient-boosted decision trees \cite{gbdt_friedman} and extra-trees \cite{extra_trees}, hard decision trees \cite{cart_friedman} are the base predictors. This not only prevents them from using diverse base regressors but also from providing online prediction, as the hard decision trees are not differentiable and therefore require periodic re-fitting against a real-time stream of data, which might be time-consuming and/or computationally infeasible. Apart from these built-in ensembling mechanisms, a ``meta learner'' approach is also used for combining predictions \cite{meta_learner_generic}. In that approach, a separate machine learning model, the meta learner, is trained either over the predictions of the base models or the features extracted from the target sequence, where the output of the meta learner is commonly tailored to mimic a weighted average, i.e., the output vector has all of its values nonnegative and summing up to 1; the resultant prediction is then a linear combination of the base predictions. When using the predictions of the base models as the features to the meta machine learning model, however, the diversity of the base predictions should be ensured, as otherwise the correlated features might hinder (and, as we show in our simulations, do hinder) the learning process of the meta learner \cite{correlated_faetures_hinder_learning}. In practice, it might be hard to find diverse-enough base predictors, which renders this approach less practical \cite{diversity_matters_book}.
Moreover, even when using other features as input to the ensemble-learning algorithm in addition to the predictions of the base learners, e.g., those extracted from the time series, the base predictions themselves tend to dominate the other features and the correlation issue re-arises \cite{base_predictions_dominate_others}. Additionally, the ensembling models that do not consider the features the base predictors use tend to produce nonoptimal predictions as they lack the proper context in which these predictions are produced, i.e., some base models can work better than others in certain states; however, this information is lost in the ensembling process, since it is oblivious to this state information. The weights in such combinations therefore tend to be biased towards the prediction of the ``best'' base predictor, which in turn causes the meta learner to overfit to the said base model. Another important aspect in linear combinations of the predictions is the constraint space of the ensembling weight vector. In the general case, the weight vector is \emph{unconstrained}, i.e., its entries can take any real value. When the statistics of the combined predictions as well as the target signal are known, naturally, the unconstrained weights achieve the lowest possible error \cite{unconstrained_best}. In practice, however, those statistics are rarely known, and whether the unconstrained combinations remain unsurpassable depends on the learning procedure, i.e., on whether all the parameters are correctly learnt; it is therefore not guaranteed. In fact, since the parameter space is the entire Euclidean space of appropriate dimension, unconstrained combinations might lead to overfitting in ensembling. To this end, we also consider two more combining strategies over the weight vector in order to generate a ``regularization'' effect. These strategies are \emph{affine} combinations, where the weight vector is required to have its components sum up to 1, and \emph{convex} combinations, which build on the affine combination by also requiring nonnegativity of the ensembling weights. We emphasize that nondifferentiable ensembler models, e.g., a direct average or boosted trees, are not flexible enough to satisfy the latter two constraints in an automated manner \cite{boosting_isnt_diffable_by_default}. Here, we effectively combine the base predictions in a context-dependent and base model-agnostic manner. To this end, we propose a machine learning-based meta learner that uses a superset of features in training, which equals or extends the concatenated feature sets of the base models and does not include the predictions themselves as the features to the ensemble-learner. For the linear combination of the base predictions, the ensembler outputs a weight vector which is either unconstrained or constrained to satisfy the affine or convex constraints, all in an online manner. Our motivation is to use the superset feature vector in order to make the combining model ``context aware'' and less prone to overfitting to a prominent base model, while exploring various constraint spaces to allow for a regularization effect.
In particular, we employ a LightGBM (a gradient boosting machine \cite{ke2017lightgbm}) and a neural network (a multilayer perceptron \cite{hornik1989multilayer}) as the meta learners to show the efficacy of the proposed ensembling approach while emphasizing that the framework is generic enough such that any machine learning model capable of minimizing a custom differentiable loss can be used. \subsection{Related Work}\label{sec:relatedwork} Ensemble models are heavily investigated in machine learning, computational learning theory, statistics and signal processing \cite{yang2013effective}, \cite{jacobs1991adaptive}, since they provide superior performance due to their use of diversity. There are several methods for ensembling; for example, one can train base models independently from each other in parallel and then linearly combine their predictions by a deterministic process, which is called bagging \cite{breiman1996bagging}. In boosting \cite{gbdt_friedman}, base models are trained sequentially, with each model solely focusing on correcting the errors made by the previous ones. In contrast, the stacking method \cite{jacobs1991adaptive} relies on a separate machine learning model to estimate the linear combination weights of the base model predictions. The main issue with conventional ensembling methods is the process of combining predictions linearly or with some other final meta machine learning algorithm. In most cases, predictions are blended using a simple averaging process, and even if a separate machine learning model is used for learning the combination weights, these weights are learned according to the errors of the base models in the training dataset. Thus, ensemble models are prone to suffer from the underfitting and overfitting issues present in the base models \cite{yang2013effective}. As a remedy, several adjustments to the ensemble methods have been proposed, such as employing different preprocessing techniques for each base model, mixing the training data with independent noise for each model, using a different partition of the training data for each model, etc. \cite{zhang2007neural}. All of these approaches are sub-optimal solutions, as the ensemble learner still only considers the errors of the base models, which, as we show in our simulations, is inadequate for the combination weight vector to converge to the optimal weights. The linear combination weights should be produced with respect to the data specific side information vector at each sample to exploit the diversity and avoid the learning difficulties due to high correlations, which we support with our simulations. This is natural since real life time series data often contain different patterns depending on the sample time or some data specific side information vector \cite{xiao2019learning}. For example, daily sales data, such as the M5 Forecasting Dataset \cite{makridakis2022m5}, may show different patterns for weekdays and weekends. Thus, the linear combination of the base model predictions should be learned according to the day of the week information. A study of the short term forecasting of gas demand in Italy \cite{fabbiani2019ensembling} illustrates that the daily industrial gas demand consumption shows different patterns depending on the weather temperature; thus, the weather temperature should also be fed to the ensemble learner.
Therefore, we introduce a scheme where the ensemble model learns to combine base model predictions at each time sample by learning the relation between the errors of the base models and the data specific side information vector. There have been previous attempts to linearly combine base model predictions based on a certain side information vector. The second place winners of the M4 Forecasting Competition \cite{montero2020fforma} use the LightGBM model \cite{ke2017lightgbm} as their ensemble model, which learns to produce weights at each sample time by solely relying on the given side information vector. However, even though the ensemble model predictions rely on the data specific side information vector, the model is still trained based on the errors of the base models in the training dataset. Thus, possible training problems of the base models, such as overfitting, can mislead the ensemble learner to produce nonoptimal combination weights on the test dataset. In order to eliminate any training related issues, we introduce a novel training scheme, where the training dataset is partitioned into two distinct sets. The base models are trained on the first partition, and then the ensemble learner is trained on the second partition of the training dataset using their features, the state of the problem and their errors. Thus, we alleviate training-specific issues of the base models, as these issues are not leaked to the ensemble model. Another problem with the given model is that each weight assigned to any base model must be nonnegative and the base model weights for each sample must sum up to $1$, which is the convex constraint. The motivation behind this approach is to reduce the learning complexity of the ensemble learner. However, as we illustrate in our simulations, the convex constraint is not always the best solution for every time series dataset. Hence, we solve the linear weight combination problem under three different constraints, where the weights can be either affine constrained, convex constrained or unconstrained. Therefore, we introduce a more comprehensive solution, where the weight constraint employed can be selected according to the given dataset. As a result, with our novel ensemble approach, we significantly improve the prediction performance compared to the base models and conventional ensembling methods for various datasets, as illustrated in our simulations, which include both well-known competition datasets and artificially generated datasets. \subsection{Contributions}\label{sec:contributions} Our contributions are as follows: \begin{itemize} \item For the first time in the literature, we tackle the problem of finding the optimal combination weight vectors to ensemble base predictors based on a sequential data specific side information vector, under three different weight constraints. To learn the optimal time dependent and side-information dependent weights, we present two novel and generic ensembling algorithms, based on boosted decision trees \cite{gbdt_friedman} and neural networks \cite{hornik1989multilayer}, where we derive all the related equations. Note that our approach is generic as any such universal learner can be used accordingly. \item We introduce a novel ensemble training scheme, where we alleviate any training related issues of the base models, such as co-linearity, high correlation and overfitting.
\item With various experiments containing synthetic and real-life sequential data from well-known competitions, we illustrate the superiority of our approach over the base models used in the experiments and the conventional ensembling methods. \item We publicly share our code for model design, comparisons and experimental reproducibility\footnote{https://github.com/ardafazla/context-aware-ensemble}. \end{itemize} \subsection{Organization}\label{sec:organization} The rest of the paper is organized as follows. In Section \ref{sec:preliminaries}, we introduce the problem of finding the optimal base model combination weights based on the side information vector, under different constraints. In Section \ref{sec:theproposedmodel}, we first analyze the associated learning costs of the optimal combination weight vectors, under all three constraints. Next, we introduce two novel and generic machine learning algorithms to find the optimal weights, along with a new ensemble training scheme. We then illustrate the performance of our proposed algorithms via extensive experiments involving real life and synthetic data in Section \ref{sec:sims}. Finally, we conclude our paper with remarks in Section \ref{sec:conclusion}. \section{Preliminaries}\label{sec:preliminaries} \subsection{Problem Statement}\label{sec:problemstatement} In this paper, all vectors are real column vectors and are presented by boldface lowercase letters. Matrices are denoted by boldface uppercase letters. $x^{(k)}$ and $x_{t}^{(k)}$ denote the $k\textsuperscript{th}$ element of the vectors $\boldsymbol{x}$ and $\boldsymbol{x}_t$, respectively. $\boldsymbol{x}^T$ represents the ordinary transpose of $\boldsymbol{x}$. ${X}_{i, j}$ represents the entry at the $i\textsuperscript{th}$ row and the $j\textsuperscript{th}$ column of the matrix $\boldsymbol{X}$. We study the online prediction of sequential data using a linear mixture of different learning models. The main sequence we aim to predict is $\{y_t\}$; to this end, we employ $M$ (online) learning models, called base models, each of which uses a side information sequence $\{\boldsymbol{s}_k^{(i)}\}$ for $k \leq t$ and $ i = 1, \ldots, M$. We emphasize that each base model is free to use a different side information vector, e.g., $\{\boldsymbol{s}_k^{(i)}\}$ could include the observed past information $\{y_k\}$ along with model-dependent features. At each time $t$, all the base models produce predictions $\hat{y}_{t}^{(i)}$. We combine these $M$ predictions using a linear scheme such that the ensemble prediction $\hat{y}_{t}^E$ of $y_{t}$ is given as \begin{equation}\label{eq:linear_comb} \hat{y}_{t}^E = \boldsymbol{w}_{t}^T \boldsymbol{\hat{y}}_{t}, \end{equation} where $\boldsymbol{\hat{y}}_{t} = [\hat{y}_{t}^{(1)}, \ldots, \hat{y}_{t}^{(M)}]^T$ is the base prediction vector of size $M$ consisting of the individual scalar predictions of the base models, and $\boldsymbol{w}_{t} \in \mathbb{R}^M$ is the ensembling weight vector at time $t$. We note that even though the ensembling scheme is linear, the adaptive learning procedure that produces $\boldsymbol{w}_{t}$ could be highly nonlinear, as we show in Section \ref{sec:algorithmicdesctription}. Note that other highly nonlinear combination methods, including MLPs and decision trees, are inadequate in learning from highly correlated base predictions, as demonstrated in our experiments.
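As a minimal illustration of the combination step in \eqref{eq:linear_comb}, consider the following sketch (Python with numpy; the numerical values are placeholders rather than outputs of actual base models):
\begin{verbatim}
import numpy as np

# Linear ensembling step: combine M base predictions with a
# time-varying weight vector; all numbers are illustrative.
M = 3
y_hat = np.array([1.2, 0.9, 1.1])  # base predictions at time t
w_t = np.array([0.5, 0.2, 0.3])    # ensembling weights at time t
y_ens = w_t @ y_hat                # ensemble prediction of y_t
print(y_ens)                       # 0.5*1.2 + 0.2*0.9 + 0.3*1.1
\end{verbatim}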
When we observe $y_{t}$, we suffer the loss \begin{align}\label{eq:loss_generic} L_t = \ell(y_{t}, \hat{y}_{t}^E) &= \ell(y_{t}, \boldsymbol{w}_{t}^T \boldsymbol{\hat{y}}_{t})\\\nonumber &= \ell\big(y_{t}, \sum_{i=1}^M w_{t}^{(i)}\,\hat{y}_{t}^{(i)}\big), \end{align} where $\ell$, for example, can be the squared error loss. Conventionally, the learning of the weight vector $\boldsymbol{w}_{t}$ depends only on the base models' predictions, i.e., $\boldsymbol{w}_{t} = f(\boldsymbol{\hat{y}}_{t})$ for some function $f$ representing a learning model \cite{conventional_w_learning_1,conventional_w_learning_2}. However, a disadvantage with this approach is that when the base predictions are highly correlated (e.g., due to similar algorithms or outputs), the learning procedure is hindered \cite{correlated_faetures_hinder_learning}. Therefore, highly diverse and independent base models are a prerequisite for such ensembling. Furthermore, this approach does not use any side information in the ensembling model either, which could result in nonoptimal combinations, as, naturally, regardless of any learning issues, \begin{equation}\label{eq:side_is_<=} \mathbb{E}[\ell(y_{t}, \hat{y}_{t}^E) | \boldsymbol{s}_t^E] \leq \mathbb{E}[\ell(y_{t}, \hat{y}_{t}^E)], \end{equation} where $\boldsymbol{s}_t^E$ is the side information vector the ensembling model uses at time step $t$. We note that the loss function $\ell$ in \eqref{eq:side_is_<=} is generic; therefore, \eqref{eq:side_is_<=} implies that the expected loss at time $t$ cannot go higher when using the extra side information vector. To this end, we propose to use \emph{not} the base predictions themselves as inputs to the ensembling model but instead a superset of the side information features of the individual models gathered in $\boldsymbol{s}_t^E$, i.e., it is at least \begin{equation}\label{eq:superset_side_information} \boldsymbol{s}_t^E = \bigcup_{i = 1}^M \boldsymbol{s}_t^{(i)}, \end{equation} where the union of vectors $\{\boldsymbol{s}_t^{(i)}\}$ corresponds to concatenating them while not allowing for duplicate features. We note that $\boldsymbol{s}_t^E$ may have more features than this union. Our approach compared to the conventional method is illustrated in Fig. \ref{fig:conventional_vs_new}. We also emphasize that, as shown in Fig. \ref{fig:conventional_vs_new}, we do \emph{not} use the base predictions next to $\boldsymbol{s}_t^E$ as inputs to the ensembling model. The rationale behind this is twofold. Firstly, the base predictions might be highly correlated and, as a result, the combining model might struggle in learning the weights \cite{correlated_faetures_hinder_learning}. Secondly, when used along with other features, i.e., $\boldsymbol{s}_t^E$ in this case, the base predictions tend to dominate the other features and the ensemble model would mostly ignore the contributions from possibly valuable side information vectors for ensembling \cite{base_predictions_dominate_others}. In order to emphasize that the learned ensembling weight vector in our framework is side information dependent, we denote it via $\boldsymbol{w}_{\boldsymbol{s}_t^E}$. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figures/conventional_vs_new} \caption{Comparison of the ensembling approaches. The conventional approach is shown where the ensembling model is fed with the base predictions only to produce a weight vector at each time $t$.
The proposed approach, on the other hand, uses a superset of side information vectors to adaptively learn the combining weights at each time step under a given constraint.} \label{fig:conventional_vs_new} \end{figure} As an example of the process, in hourly wind power nowcasting, one could employ two parallel-working base models. The first model could use the past 7 hours of wind power, current wind speed and wind direction, i.e., 3 features in total, while the second model could use past 24 hours of wind power and current wind direction, i.e., 2 features in total, as side information. The ensembling model, then, would use 4 features in total, which are the past 7 and 24 hours of the target signal, wind speed and wind direction. Each base model as well as the ensembling model updates itself as soon as new observations of the next hour become available, i.e., they work in an online manner. We investigate three different mixture approaches which are identified by how they constrain the combining weight vector $\boldsymbol{w}_{\boldsymbol{s}_t^E}$; these are the \emph{unconstrained} linear combination where each $w_{\boldsymbol{s}_t^E}^{(i)}$ is free to take any real value, the \emph{affine} linear combination where the components of the weight vector sum up to 1, i.e., $\boldsymbol{w}_{\boldsymbol{s}_t^E}^T \boldsymbol{1} = 1$, and the \emph{convex} linear combination where the weight vector not only has its components sum to 1 but also has them all nonnegative, i.e., $\boldsymbol{w}_{\boldsymbol{s}_t^E}^T \boldsymbol{1} = 1$ and $w_{\boldsymbol{s}_t^E}^{(i)} \geq 0 \,\forall i$; all of these constraints apply to all time steps. Hence, in a purely online setting, we aim to adaptively find the ``best'' weights that linearly combine the predictions of $M$ parallel running base models to minimize a given loss $\ell$ at each time step $t$ using the super feature set $\boldsymbol{s}_t^E$ under three different constraints imposed on the weight vector. In the next section, we present an analytical overview on the constraints as well as various ensemble algorithms to find the corresponding ensembling weights. \section{Ensemble Learning}\label{sec:theproposedmodel} In this section, we first analyze the optimal ensembling vectors in the expectation sense as well as the associated costs for all three constraints on the weight vector; namely, unconstrained, affine and convex combinations. Next, we introduce novel ensembling machine learning algorithms including the mathematical derivations based on boosted decision trees \cite{gbdt_friedman} and neural networks \cite{hornik1989multilayer} in a generic framework to find the optimal weights. \subsection{Analysis on Constraints} We first analyze the optimal weight vectors for ensembling as well as the associated costs under known statistics for each of the three constraints imposed on the ensembling weight vector $\boldsymbol{w}_{\boldsymbol{s}_t^E} \in \mathbb{R}^M$ where $M$ is the number of base models in the ensemble.
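As a brief programmatic summary of the three feasible sets (the unconstrained set being all of $\mathbb{R}^M$), the following minimal sketch (Python with numpy; the helper names are ours) checks membership of a candidate weight vector:
\begin{verbatim}
import numpy as np

# Membership tests for the affine and convex feasible sets.
def is_affine(w, tol=1e-9):
    return abs(w.sum() - 1.0) < tol            # components sum to 1

def is_convex(w, tol=1e-9):
    return is_affine(w, tol) and bool(np.all(w >= -tol))

w = np.array([1.4, -0.4])
print(is_affine(w), is_convex(w))  # True False: affine, not convex
\end{verbatim}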
Let $\boldsymbol{C}(\boldsymbol{s}_t)$ be the conditional autocorrelation matrix at time $t$ of the base prediction vector $\boldsymbol{\hat{y}}_{t} = [\hat{y}_{t}^{(1)}, \ldots, \hat{y}_{t}^{(M)}]^T$ given the superset of the side information vectors used by all the base algorithms and specific to the ensemble, and let $\boldsymbol{a}(\boldsymbol{s}_t)$ be the conditional cross-correlation vector at time $t$ between the target signal $y_{t}$ and the base prediction vector given the superset side information, i.e., \begin{align} \boldsymbol{C}(\boldsymbol{s}_t) &= \mathbb{E}[\boldsymbol{\hat{y}}_{t} \boldsymbol{\hat{y}}_{t}^T\,|\,\boldsymbol{s}_t^E]\\ \boldsymbol{a}(\boldsymbol{s}_t) &= \mathbb{E}[{y}_{t} \boldsymbol{\hat{y}}_{t}\,|\,\boldsymbol{s}_t^E]. \end{align} In the unconstrained case, the components of $\boldsymbol{w}_{\boldsymbol{s}_t^E}$ are free to take any real number; therefore, the corresponding optimization problem given $\boldsymbol{s}_t^E$ for a given differentiable loss function $\ell$ is \begin{equation} \begin{aligned} \min_{\boldsymbol{w}_{\boldsymbol{s}_t^E} \in \mathbb{R}^M} \quad & \ell\big(y_{t}, \sum_{i=1}^M w_{\boldsymbol{s}_t^E}^{(i)}\,\hat{y}_{t}^{(i)}\big), \end{aligned} \end{equation} where the feasible region is the entire $M$-dimensional Euclidean space. If the statistics $\boldsymbol{C}(\boldsymbol{s}_t)$ and $\boldsymbol{a}(\boldsymbol{s}_t)$ are known at all times, the optimal unconstrained ensembling vector at time $t$ under the expected squared error loss, i.e., when $\ell(y_t, \hat{y}_t) = \mathbb{E}[(y_t - \hat{y}_t)^2 | \boldsymbol{s}_t^E] = \mathbb{E}[(y_t - \sum_{i=1}^M w_{\boldsymbol{s}_t^E}^{(i)}\,\hat{y}_{t}^{(i)})^2 | \boldsymbol{s}_t^E]$, is the solution to the normal equations and given as \begin{align}\label{eq:weight_eq_1} \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^* = \boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t). \end{align} Furthermore, assuming the target signal has the conditional variance $\sigma(\boldsymbol{s}_t)^2$ at time $t$, the conditional mean squared error of the unconstrained ensembling is given by \begin{align}\label{eq:weight_eq_2} L_{t, \text{unc}}^* := \sigma(\boldsymbol{s}_t)^2 - \boldsymbol{a}(\boldsymbol{s}_t)^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t). \end{align} In the affine constrained case, the components of the weight vector are required to sum up to 1. This translates to the optimization problem given as \begin{equation}\label{eq:aff_orig} \begin{aligned} \min_{\boldsymbol{w}_{\boldsymbol{s}_t^E} \in \mathbb{R}^M} \quad & \ell\big(y_{t}, \sum_{i=1}^M w_{\boldsymbol{s}_t^E}^{(i)}\,\hat{y}_{t}^{(i)}\big)\\ \textrm{subject to} \quad & \boldsymbol{w}_{\boldsymbol{s}_t^E}^T \boldsymbol{1} = 1, \end{aligned} \end{equation} where the feasible region is now an $(M-1)$-dimensional hyperplane in $\mathbb{R}^M$. In fact, this problem could be cast as an $(M-1)$-dimensional unconstrained optimization over $\boldsymbol{\tilde{w}}_{\boldsymbol{s}_t} \in \mathbb{R}^{M-1}$ such that the $M^{\textsuperscript{th}}$ component of the weight vector complements the sum of the entire vector to be 1, i.e., \begin{equation}\label{eq:aff_trans} \begin{aligned} \min_{\boldsymbol{\tilde{w}}_{\boldsymbol{s}_t^E} \in \mathbb{R}^{M-1}} \quad & \ell\big(y_{t}, \sum_{i=1}^{M-1} w_{\boldsymbol{s}_t^E}^{(i)}\,\hat{y}_{t}^{(i)}\big)\\ \textrm{subject to} \quad & w_{\boldsymbol{s}_t^E}^{(M)} = 1 - \boldsymbol{\tilde{w}}_{\boldsymbol{s}_t^E}^T \boldsymbol{1}.
\end{aligned} \end{equation} With the transformation of the constrained problem from \eqref{eq:aff_orig}, which is a linearly constrained quadratic optimization problem under the squared loss, i.e., $\ell(y_t, \hat{y}_t) = \mathbb{E}[(y_t - \hat{y}_t)^2 | \boldsymbol{s}_t^E]$, to \eqref{eq:aff_trans}, which is an unconstrained problem, the optimal affine ensembling weight vector admits a closed form solution given as \begin{align*} \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^* = \boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t) - \frac{\boldsymbol{1}^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t) - 1}{\boldsymbol{1}^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1}\boldsymbol{1}}\boldsymbol{C}(\boldsymbol{s}_t)^{-1}\boldsymbol{1}. \end{align*} We note that the affine combination preserves the unbiasedness of the base predictors. If each base predictor provides unbiased predictions of $y_{t}$, then we have \begin{align*} \mathbb{E}[\boldsymbol{w}_{\boldsymbol{s}_t^E}^T \boldsymbol{\hat{y}_{t}}] &= \sum_{i=1}^M w_{\boldsymbol{s}_t^E}^{(i)} \mathbb{E}[\hat y_{t}^{(i)}] = \sum_{i=1}^M w_{\boldsymbol{s}_t^E}^{(i)} \mathbb{E}[y_{t}]\\ &= \mathbb{E}[y_{t}] \sum_{i=1}^M w_{\boldsymbol{s}_t^E}^{(i)} = \mathbb{E}[y_{t}]. \end{align*} For the expected squared error loss, the loss attained by the optimal affine-constrained weights is given as \begin{align} L_{t, \text{aff}}^* := \sigma(\boldsymbol{s}_t)^2 &- \boldsymbol{a}(\boldsymbol{s}_t)^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t)\\ &+ \frac{(\boldsymbol{1}^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t) - 1)^2}{\boldsymbol{1}^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1}\boldsymbol{1}}. \end{align} As for the convex constraint case, we not only require that the components of the weight vector sum up to 1 but also they should be nonnegative, i.e., the optimization problem becomes \begin{equation}\label{eq:convex_prob} \begin{aligned} \min_{\boldsymbol{w}_{\boldsymbol{s}_t^E} \in \mathbb{R}^M} \quad & \ell\big(y_{t}, \sum_{i=1}^M w_{\boldsymbol{s}_t^E}^{(i)}\,\hat{y}_{t}^{(i)}\big)\\ \textrm{subject to} \quad & \boldsymbol{w}_{\boldsymbol{s}_t^E}^T \boldsymbol{1} = 1\\ &w_{\boldsymbol{s}_t^E}^{(i)}\geq0, \quad i = 1, \ldots, M. \\ \end{aligned} \end{equation} Under a convex loss $\ell$, problem \eqref{eq:convex_prob} is a convex quadratic minimization problem, and the feasible region is the unit simplex in $\mathbb{R}^M$. Unlike the previous two constraints, \eqref{eq:convex_prob} does not admit a closed form solution. We can, however, project (unconstrained) weights to the unit simplex iteratively to find the optimal weight vector. For brevity, we let $M = 2$, which is a common case in practice, and derive the procedure under squared error loss.
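When estimates of $\boldsymbol{C}(\boldsymbol{s}_t)$ and $\boldsymbol{a}(\boldsymbol{s}_t)$ are available, the closed forms above are straightforward to evaluate; a minimal sketch (Python with numpy; the statistics below are illustrative) is:
\begin{verbatim}
import numpy as np

# Optimal weights under known statistics (squared error loss).
# C: conditional autocorrelation matrix of the base predictions;
# a: conditional cross-correlation with the target.
C = np.array([[1.0, 0.6],
              [0.6, 1.0]])
a = np.array([0.9, 0.7])
ones = np.ones(2)

w_unc = np.linalg.solve(C, a)        # unconstrained: C^{-1} a
C_inv_1 = np.linalg.solve(C, ones)
w_aff = w_unc - ((ones @ w_unc - 1.0) / (ones @ C_inv_1)) * C_inv_1
print(w_unc, w_aff, w_aff.sum())     # w_aff sums to 1
\end{verbatim}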
To this end, we begin by noting that the squared loss at time $t$ as a function of the ensembling weight vector $\boldsymbol{w}_{\boldsymbol{s}_t^E}$ is \begin{align}\label{eq:generic_mse} &L_t(\boldsymbol{w}_{\boldsymbol{s}_t^E}) =\sigma(\boldsymbol{s}_t)^2 - \boldsymbol{a}(\boldsymbol{s}_t)^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t)\\\nonumber &\hspace{0.6cm}+(\boldsymbol{w}_{\boldsymbol{s}_t^E} - \boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t))^T \boldsymbol{C}(\boldsymbol{s}_t)(\boldsymbol{w}_{\boldsymbol{s}_t^E} - \boldsymbol{C}(\boldsymbol{s}_t)^{-1} \boldsymbol{a}(\boldsymbol{s}_t))\\ &\hspace{1.21cm}= L_{t, \text{unc}}^* + (\boldsymbol{w}_{\boldsymbol{s}_t^E} - \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^*)^T\boldsymbol{C}(\boldsymbol{s}_t)(\boldsymbol{w}_{\boldsymbol{s}_t^E} - \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^*).\nonumber \end{align} We can rewrite \eqref{eq:generic_mse} in terms of the optimal affine weights $\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*$ as \begin{align*} L_t(\boldsymbol{w}_{\boldsymbol{s}_t^E}) &= L_{t, \text{unc}}^*\\ &\hspace{0.3cm}+((\boldsymbol{w}_{\boldsymbol{s}_t^E} - \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*) + (\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^* - \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^*))^T\boldsymbol{C}(\boldsymbol{s}_t)\\ &\hspace{0.9cm}((\boldsymbol{w}_{\boldsymbol{s}_t^E} - \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*) + (\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^* - \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^*))\\ &=L_{t, \text{unc}}^*\\ &\hspace{0.4cm}+(\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^*)^T\boldsymbol{C}(\boldsymbol{s}_t)(\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^*)\\ &\hspace{0.4cm}+(\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)^T\boldsymbol{C}(\boldsymbol{s}_t)(\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)\\ &\hspace{0.4cm}-2(\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)^T\boldsymbol{C}(\boldsymbol{s}_t)(\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^*)\\ &= L_{t, \text{aff}}^* + (\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)^T\boldsymbol{C}(\boldsymbol{s}_t)(\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)\\ &\hspace{1.2cm}-2\frac{(\boldsymbol{1}^T\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{unc}}^* - 1)}{\boldsymbol{1}^T\boldsymbol{C}(\boldsymbol{s}_t)^{-1}\boldsymbol{1}}(\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)^T\boldsymbol{1}\\ &=L_{t, \text{aff}}^* + (\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)^T\boldsymbol{C}(\boldsymbol{s}_t)(\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*), \end{align*} where the last line follows from $(\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*)^T\boldsymbol{1} = 0$ due to the unit simplex constraint.
Further ignoring the constant $L_{t, \text{aff}}^*$, the optimization problem reduces to minimizing the $\boldsymbol{C}(\boldsymbol{s}_t)$-weighted $\ell_2$-norm of $\boldsymbol{w}_{\boldsymbol{s}_t^E}-\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*$ subject to $\boldsymbol{w}_{\boldsymbol{s}_t^E}$ being on the unit simplex. For the assumed $M = 2$, we let $\Delta_2$ be the unit simplex in $\mathbb{R}^2$. There are two cases to consider for this problem: \begin{itemize} \item $\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^* \in \Delta_{2}$: The optimal affine combination is within the unit simplex and hence a closed form solution can be written for the optimal weight vector: $\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{con}}^* = \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^*$. Accordingly, $L_{t, \text{con}}^* := L_{t, \text{aff}}^*$. \item $\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^* \notin \Delta_{2}$: This is the case when the combination weights have opposite signs. For simplicity, we assume the weight that is assigned to the first base predictor is negative and the other one is positive. Then, we can write $\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{con}}^* = \boldsymbol{w}_{\boldsymbol{s}_t^E, \text{aff}}^* + \alpha[1\:\:\:-1]^T$ for some $\alpha \in \mathbb{R}$. Consequently, the cost function to minimize becomes the $\boldsymbol{C}(\boldsymbol{s}_t)$-weighted $\ell_2$-norm of $\alpha[-1\:\:\:1]^T$. This implies that the cost is proportional to the magnitude of $\alpha$. The optimal choice would be the smallest value that keeps $\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{con}}^*$ within $\Delta_{2}$, which is $-\,w_{\boldsymbol{s}_t^E, \text{aff}}^{(1)}$. In this case, $\boldsymbol{w}_{\boldsymbol{s}_t^E, \text{con}}^*$ is the corner point of $\Delta_{2}$: $[0\:\:\:1]^T$. This leads to $L_{t, \text{con}}^* := L_{t, \text{aff}}^* + [-1\:\:\:1]\,\boldsymbol{C}(\boldsymbol{s}_t)\,[-1\:\:\:1]^T(w_{\boldsymbol{s}_t^E, \text{aff}}^{(1)})^2.$ \end{itemize} \begin{remark} From the inclusion order of the feasible regions of the constrained problems as well as the derived optimal squared error losses as the special cases, we naturally have \begin{equation}\label{eq:loss_order} L_{t, \text{unc}}^* \leq L_{t, \text{aff}}^* \leq L_{t, \text{con}}^* \end{equation} for all times $t$. This inequality holds in theory when the correlation statistics of the target signal and the base models are known at all times. In practice, however, those statistics are rarely known and therefore there is a learning cost associated with each constraint, which renders the relation in \eqref{eq:loss_order} of little use in practice. Hence, even though the unconstrained case seems to promise the least error, the associated learning procedure might lead to overfitting, i.e., in essence, the affine and convex constrained cases act as ``regularizers'' in that they offer a tradeoff between bias and variance of the ensembling model. In fact, as shown in our simulations in Section \ref{sec:sims}, the unconstrained ensembling scheme does not always achieve the lowest error. \end{remark} Since $\boldsymbol{C}(\boldsymbol{s}_t)$ and $\boldsymbol{a}(\boldsymbol{s}_t)$ are rarely known in practice, next, we present novel and generic ensembling algorithms to find the optimal ensembling weight vectors under three constraint schemes.
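The $M = 2$ case analysis above can be summarized compactly; the following sketch (Python with numpy; the helper name is ours) maps the optimal affine weights to the optimal convex weights:
\begin{verbatim}
import numpy as np

# Optimal convex weights from optimal affine weights for M = 2.
def convex_from_affine_2d(w_aff):
    if np.all(w_aff >= 0):           # already in the unit simplex
        return w_aff.copy()
    # the components sum to 1, so exactly one entry is negative;
    # the optimum is the corresponding corner of the simplex
    return np.array([0.0, 1.0]) if w_aff[0] < 0 else np.array([1.0, 0.0])

print(convex_from_affine_2d(np.array([0.3, 0.7])))   # unchanged
print(convex_from_affine_2d(np.array([-0.2, 1.2])))  # corner [0, 1]
\end{verbatim}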
\begin{figure*}[!t] \centering \includegraphics[width=0.8\linewidth]{figures/alg_flow} \caption{Flowchart of the proposed ensemble training scheme.} \label{fig:alg_flow} \end{figure*} \subsection{Introduced Approach}\label{sec:algorithmicdesctription} In order to linearly combine the predictions of different base models by assigning weights to each model, we use ensembling algorithms. Our ensemble algorithms learn to combine the predictions of different base algorithms by minimizing the loss function given in \eqref{eq:loss_generic}, where $y_{t}$ is the observed sequence at time $t$, $\ell$ is a given differentiable loss function such as the squared error loss, $\hat y^{E}_{t}$ is the prediction of the ensemble algorithm, $M$ is the number of base models, $\hat y^{(i)}_{t}$ are the predictions of the base models and $w_{\boldsymbol{s}^E_{t}}^{(i)}$ are the weights assigned to the base models, by utilizing the side information sequence $\boldsymbol{s}^E_{t}$. In order to train the ensemble algorithm, we introduce a novel ensemble training scheme, illustrated in Fig. \ref{fig:alg_flow} and explained thoroughly in \textbf{Algorithm 1}. Our training scheme consists of two phases: the offline phase and the online phase. In the offline phase, first, the base models are independently trained on a certain partition of the training dataset, given as $1 \leq t \leq T_{1}$. Then, the predictions of the base models are gathered on the remaining partition of the training data, given as $T_{1}+1 \leq t \leq T$, where these predictions are then used to train the ensemble algorithm. Hence, as the predictions are gathered on unseen data, we overcome the possible overfitting issue of the base models, a well-known issue for conventional ensembling approaches \cite{yang2013effective}. Each base model $i$ makes predictions based on a side information vector ${\boldsymbol{s}_{t}^{(i)}}$, which may consist of the past observations of the sequence $y_{t}$ along with some additional related information specific to the observed sequence or the base model. Our ensemble model is trained with the errors of the base models, while employing the side information sequence $\boldsymbol{s}^E_{t}$, which may consist of the past observations of the sequence $y_{t}$, side information related to base models, e.g., errors of the base models, and some additional information specific to the observed sequence. The side information vector of the ensemble model is created as explained in \eqref{eq:superset_side_information}. In the online phase, we use our pre-trained ensemble algorithm from the offline phase to produce combination weights for the base algorithms on the new data, which, in our case, are the new observations following the training samples of the sequence $y_{t}$. First, we retrain the base models on the whole training dataset, given as $1 \leq t \leq T$. Then, we produce the base model predictions for the test dataset, $T+1 \leq t \leq T_{2}$. We then update the ensemble side information vector $\boldsymbol{s}^E_{t}$ for the test duration. Finally, we produce the ensemble prediction $\hat y^{E}_{t} = \sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} \hat y^{{(i)}}_{t}$, for $T+1 \leq t \leq T_{2}$. A minimal code skeleton of this two-phase scheme is given below, followed by the formal procedure in \textbf{Algorithm 1}.
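The following skeleton (Python with numpy) sketches the two phases; the toy base predictors and the fixed weight vector are illustrative stand-ins for trained base models and the learned, side information-dependent ensembler:
\begin{verbatim}
import numpy as np

# Skeleton of the two-phase training scheme described above.
rng = np.random.default_rng(0)
T1, T, T2 = 700, 1000, 1200            # offline split, test horizon
y = np.sin(np.arange(T2) / 10) + 0.1 * rng.standard_normal(T2)

def base1(t):                          # toy base model 1: last value
    return y[t - 1]

def base2(t):                          # toy base model 2: 5-step mean
    return y[t - 5:t].mean()

# Offline phase: base models fit on t <= T1 (the toy bases need no
# fitting); ensemble training data gathered on T1 < t <= T.
preds = np.array([[base1(t), base2(t)] for t in range(T1, T)])
targets = y[T1:T]
# ... fit the ensemble learner on (s_t^E, preds, targets) here ...

# Online phase: produce w_{s_t^E} and combine on t > T; a fixed
# vector stands in for the learned, time-varying weights.
w = np.array([0.5, 0.5])
y_ens = np.array([w @ [base1(t), base2(t)] for t in range(T, T2)])
print(np.mean((y[T:T2] - y_ens) ** 2))  # test MSE of the ensemble
\end{verbatim}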
\\ ---------------------------------------------------------------------------\\ \textbf{Algorithm 1} Ensemble Learning Procedure \\ --------------------------------------------------------------------------- \begin{itemize} \item \textbf{Offline Phase:} Training the Base and Ensemble Algorithms \\\textbf{inputs}: \begin{itemize} \item ${\{{y_t}\}}\triangleq{\{{y_t}\}}_{t=1}^{T}$: the training data that consists of $T$ sequential samples from the time series sequence $y_{t}$. \item $M$ base models. \item ${\{\boldsymbol{s}_{t}^{(i)}\}}\triangleq{\{\boldsymbol{s}_{t}^{(i)}\}}_{t=1}^{T}$: time series features for the $i\textsuperscript{th}$ base predictor up to time $T$. \end{itemize} \textbf{outputs}: \begin{itemize} \item ensemble learner, which produces the weight vector $\boldsymbol{w}_{\boldsymbol{s}^E_{t}}$ for combining base predictions. \item ${\{\boldsymbol{s}_{t}^{E}\}}\triangleq{\{\boldsymbol{s}_{t}^{E}\}}_{t=1}^{T}$: time series features for the ensemble learner. \end{itemize} \textit{\textbf{prepare ensemble data}}: \begin{itemize} \item split ${\{{y_t}\}}\triangleq{\{{y_t}\}}_{t=1}^{T}$ into training and test periods for the base algorithms, where $1 \leq t \leq T_{1}$ is the training period and $T_{1} + 1 \leq t \leq T$ is the test period. \item train all of the base algorithms over the training period. \item generate forecasts for the test period for all of the base models, i.e., form ${\{{\hat{y}_t}^{(i)}\}}_{t= T_{1}+1}^{T},\,\, {1 \leq i \leq M} $. \item construct the ensemble side information vector ${\boldsymbol{s}_{t}^{E}}$ by following the procedure in \eqref{eq:superset_side_information}. \end{itemize} \textit{\textbf{train the ensemble learner}}: \begin{itemize} \item train the ensemble learner using ${\boldsymbol{s}_{t}^{E}}$ to minimize the loss \begin{equation} \label{eq:lgbm_loss} \arg \min_{\boldsymbol{w}_{\boldsymbol{s}^E_{t}}} \sum_{t=T_{1}+1}^{T} \ell(y_{t} , \hat y^{E}_{t}) \end{equation} where $\hat y^{E}_{t} = \sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} \hat y^{(i)}_{t}$, $\ell$ is a given differentiable loss function such as the squared error loss, $\boldsymbol{w}_{\boldsymbol{s}^E_{t}} = [\,{w_{\boldsymbol{s}^E_{t}}^{(1)}},\,{w_{\boldsymbol{s}^E_{t}}^{(2)}},\,\dots,\,{w_{\boldsymbol{s}^E_{t}}^{(M)}}\,]^T$ is the weight vector for the linear combination of the base learners, and $w_{\boldsymbol{s}^E_{t}}^{(i)}$ is the combination weight assigned to the $i\textsuperscript{th}$ base learner at time $t$.\\ \item train all of the base algorithms over the entire training data ${1 \leq t \leq T}$. \end{itemize} \item \textbf{Online Phase:} Predicting the Test Data \\\textbf{inputs}: \begin{itemize} \item ${\{{y_t}\}}_{t=T+1}^{T_{2}}$: the test data that consists of sequential data samples of the time series sequence $y_{t}$, from $T+1$ to $T_{2}$. \item the trained ensemble learner from the offline phase. \item $M$ base models, trained on the training data. \item ${\{\boldsymbol{s}_{t}^{(i)}\}}_{t=T+1}^{T_{2}}$: time series features for the $i\textsuperscript{th}$ base predictor over the test period $T+1 \leq t \leq T_{2}$. \item ${\{\boldsymbol{s}_{t}^{E}\}}_{t=T+1}^{T_{2}}$: time series features for the ensemble learner. \end{itemize} \textbf{output}: \begin{itemize} \item combination weights $w_{\boldsymbol{s}^E_{t}}^{(i)}$ for each base learner ${1 \leq i \leq M}$, for ${T+1 \leq t \leq T_{2}}$. \item ensemble predictions $\hat y^{E}_{t}$ for ${T+1 \leq t \leq T_{2}}$.
\end{itemize} \textit{\textbf{generate combination weights}}: \begin{itemize} \item generate forecasts for the test data from all of the base models, i.e., generate ${\{{\hat{y}_t}^{(i)}\}}_{t= T+1}^{T_{2}},\,\, {1 \leq i \leq M} $. \item construct the ensemble side information vector ${\boldsymbol{s}_{t}^{E}}$ for the test dataset by following the procedure in \eqref{eq:superset_side_information}. \item feed the side information vector ${\boldsymbol{s}_{t}^{E}}$ to the ensemble learner to generate the weight vector $\boldsymbol{w}_{\boldsymbol{s}^E_{t}} = [\,{w_{\boldsymbol{s}^E_{t}}^{(1)}},\,{w_{\boldsymbol{s}^E_{t}}^{(2)}},\,\dots,\,{w_{\boldsymbol{s}^E_{t}}^{(M)}}\,]^T$, for $T+1 \leq t \leq T_{2}$. \item linearly combine the weights given to each base learner with the corresponding predictions to predict $\hat y^{E}_{t} = \sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} \hat y^{{(i)}}_{t}$, for $T+1 \leq t \leq T_{2}$. \end{itemize} \end{itemize} We next introduce two novel and generic ensemble algorithms which will achieve the optimal weights under different constraints given in \eqref{eq:weight_eq_1}, \eqref{eq:weight_eq_2} and \eqref{eq:aff_orig}. \subsection{LightGBM Ensemble}\label{sec:lgbm_ens} Gradient boosting decision trees (GBDT) are sequentially trained decision tree ensembles. In each iteration, GBDT minimizes the residual error made by the previous trees in the sequence. LightGBM is a state-of-the-art GBDT model that differs in how it finds split points to minimize a given loss function over the whole data during training, and is efficient in terms of memory consumption and training speed compared to its counterparts \cite{ke2017lightgbm}. We employ LightGBM to search for the optimal ensemble combination weights, as described in \textbf{Algorithm 1}. LightGBM requires the gradient and hessian of the objective function \eqref{eq:loss_generic} in order to minimize the loss with respect to the combination weight vector $\boldsymbol{w}_{\boldsymbol{s}^E_{t}}$. We study the linear combination of the base predictions under unconstrained, convex constrained and affine constrained conditions, which were studied in Section \ref{sec:theproposedmodel}. For each base learner $i$, LightGBM produces an output value $p_{\boldsymbol{s}^E_{t}}^{(i)}$, where we then apply the necessary transformation to achieve $w_{\boldsymbol{s}^E_{t}}^{(i)}$ under the given weight constraint. Therefore, for all three cases, we study and provide the gradient and hessian of \eqref{eq:loss_generic} with respect to $p_{\boldsymbol{s}^E_{t}}^{(i)}$, for each base learner $i$. For all constraints, we set the loss function $\ell$ as the squared error loss, although any differentiable loss function can be selected, and assume we have the base predictions for $T_{1}+1 \leq t \leq T$, as explained in Fig. \ref{fig:alg_flow} and \textbf{Algorithm 1}. Therefore, at any given time $t$ from the training dataset, the loss to be minimized by the LightGBM ensemble is \begin{equation*} L_{t} = \ell(y_{t}, \hat y^{E}_{t}) = (y_{t} - \hat y^{E}_{t})^2 = (y_{t} - \sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} \hat y^{(i)}_{t})^2, \end{equation*} where $y_{t}$ is the observed sequence, and $\hat y^{E}_{t}$ is the prediction of the ensemble algorithm, at time $t$. The transformation from $p_{\boldsymbol{s}^E_{t}}^{(i)}$ to $w_{\boldsymbol{s}^E_{t}}^{(i)}$ is given as $w_{\boldsymbol{s}^E_{t}}^{(i)} = \tau(p_{\boldsymbol{s}^E_{t}}^{(i)})$, where $\tau$ indicates the transformation operation and is specific to the given weight constraint.
The gradient and Hessian to be calculated at any time $t$, for the base learner $i$, are given as \begin{equation} \label{eq:gradient_initial} G_{i} = \frac{\partial{L_{t}}}{\partial{p_{\boldsymbol{s}^E_{t}}^{(i)}}}, H_{i} = \frac{\partial{G_{i}}}{\partial{p_{\boldsymbol{s}^E_{t}}^{(i)}}}. \end{equation} Next, we explain the necessary transformations, and provide the gradient and Hessian calculations for all three constraints. \subsubsection{Unconstrained Case}\label{lgbm_unconstrained} For the unconstrained case, there is no restriction on the value of $w_{\boldsymbol{s}^E_{t}}^{(i)}$. Therefore, we can simply write the transformation relation as $w_{\boldsymbol{s}^E_{t}}^{(i)} = p_{\boldsymbol{s}^E_{t}}^{(i)}$. The gradient and Hessian equations in \eqref{eq:gradient_initial} for the base learner $i$, at time $t$ become \begin{align*} G_{i} &= 2 (y_{t} - \hat y^{E}_{t}) (-\frac{\partial{\hat y^{E}_{t}}}{\partial{p_{\boldsymbol{s}^E_{t}}^{(i)}}})\\ &= 2 (y_{t} - \hat y^{E}_{t}) (-\sum_{m=1}^M \frac{\partial w_{\boldsymbol{s}^E_{t}}^{(m)}}{\partial p_{\boldsymbol{s}^E_{t}}^{(i)}}\hat y^{(m)}_{t})\\ &= 2 \hat y^{(i)}_{t} (\hat y^{E}_{t} - y_{t})\\ H_{i} &= 2 \hat y^{(i)}_{t} (\frac{\partial{\hat y^{E}_{t}}}{\partial{p_{\boldsymbol{s}^E_{t}}^{(i)}}})\\ &= 2 (\hat y^{(i)}_{t})^2. \end{align*} \subsubsection{Affine Constrained Case}\label{affine_constraint} For the affine constrained case, the restriction is that the combination weights should sum to one, i.e., $\sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} = 1$. In order to satisfy this relation, we apply the normalization $w_{\boldsymbol{s}^E_{t}}^{(i)} = \frac{p_{\boldsymbol{s}^E_{t}}^{(i)}}{\sum_{m=1}^M p_{\boldsymbol{s}^E_{t}}^{(m)}}$. Hence, the gradient and Hessian equations in \eqref{eq:gradient_initial} for the base learner $i$, at time $t$ become \begin{align*} G_{i} &= 2 (y_{t} - \hat y^{E}_{t}) (-\frac{\partial{\hat y^{E}_{t}}}{\partial{p_{\boldsymbol{s}^E_{t}}^{(i)}}})\\ &= 2 (y_{t} - \hat y^{E}_{t}) (-\sum_{m=1}^M \frac{\partial w_{\boldsymbol{s}^E_{t}}^{(m)}}{\partial p_{\boldsymbol{s}^E_{t}}^{(i)}}\hat y^{(m)}_{t})\\ &= 2 (y_{t} - \hat y^{E}_{t}) (\frac{1}{\sum_{m=1}^M p_{\boldsymbol{s}^E_{t}}^{(m)}}) (\hat y^{E}_{t} - \hat y^{(i)}_{t}) \\ H_{i} &= 2 (\frac{1}{\sum_{m=1}^M p_{\boldsymbol{s}^E_{t}}^{(m)}})^2 (\hat y^{E}_{t} - \hat y^{(i)}_{t}) (3 \hat y^{E}_{t} - 2 y_{t} - \hat y^{(i)}_{t}). \end{align*} \subsubsection{Convex Constrained Case}\label{convex_constraint} For the convex constrained case, the restriction is again that the combination weights should sum to one, $\sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} = 1$, and, in addition, $0 \leq w_{\boldsymbol{s}^E_{t}}^{(i)} \leq 1$ should also be satisfied. Therefore, we apply the softmax transformation given as $w_{\boldsymbol{s}^E_{t}}^{(i)} = \frac{e^{p_{\boldsymbol{s}^E_{t}}^{(i)}}}{\sum_{m=1}^M e^{p_{\boldsymbol{s}^E_{t}}^{(m)}}}$.
Therefore, the gradient and Hessian equations in \eqref{eq:gradient_initial} for the base learner $i$, at time $t$ become \begin{align*} G_{i} &= 2 (y_{t} - \hat y^{E}_{t}) (-\frac{\partial{\hat y^{E}_{t}}}{\partial{p_{\boldsymbol{s}^E_{t}}^{(i)}}})\\ &= 2 (y_{t} - \hat y^{E}_{t}) (-\sum_{m=1}^M \frac{\partial w_{\boldsymbol{s}^E_{t}}^{(m)}}{\partial p_{\boldsymbol{s}^E_{t}}^{(i)}}\hat y^{(m)}_{t})\\ &= 2 (y_{t} - \hat y^{E}_{t}) w_{\boldsymbol{s}^E_{t}}^{(i)} (\hat y^{E}_{t} - \hat y^{(i)}_{t}) \\ H_{i} &= G_{i} (1 - 2w_{\boldsymbol{s}^E_{t}}^{(i)}) + 2(w_{\boldsymbol{s}^E_{t}}^{(i)})^2(\hat y^{E}_{t} - \hat y^{(i)}_{t})^2. \end{align*} \subsection{MLP Ensemble}\label{sec:mlp_ens} Artificial Neural Networks (ANN) are models inspired by the processing structure of the brain cells, and are used to model complex patterns and problems \cite{jain1996artificial}. As our ensemble algorithm, we employ a certain class of ANN called the Multilayer Perceptron (MLP), which is an ANN with more than two layers. MLPs are heavily used in machine learning due to their ability to learn complex and non-linear relations with high training speed compared to their counterparts \cite{gardner1998artificial}. As a specific example, we provide the equations for a two-layer network architecture, which consists of an input layer, a hidden layer and an output layer. Our derivations extend straightforwardly to networks with more layers. At the end of the output layer, we apply a transformation operation which maps the outputs to the weights of the ensemble algorithm according to the given weight constraint. We study the linear combination of the base predictions under the unconstrained, affine constrained and convex constrained conditions introduced in Section \ref{sec:theproposedmodel}. The feed-forward equations of our model are given as \begin{align*} \boldsymbol{v} &= \boldsymbol{U}^{(1)}\boldsymbol{x}\\ \boldsymbol{z} &= f(\boldsymbol{v})\\ \boldsymbol{p} &= \boldsymbol{U}^{(2)}\boldsymbol{z}\\ \boldsymbol{w} &= \tau (\boldsymbol{p}), \end{align*} where $\boldsymbol{x} \in \mathbb{R}^K$ is the input vector, $\boldsymbol{U}^{(1)} \in \mathbb{R}^{L\times K}$ are the layer weights connecting the input layer to the hidden layer, and $\boldsymbol{z} \in \mathbb{R}^L$ is the output of the hidden layer, obtained by applying the rectified linear unit (ReLU) activation function $f$ to $\boldsymbol{v} \in \mathbb{R}^L$. $\boldsymbol{U}^{(2)} \in \mathbb{R}^{M\times L}$ are the layer weights connecting the hidden layer to the output layer, $\boldsymbol{p} \in \mathbb{R}^M$ is the output vector of the model, and $\boldsymbol{w} \in \mathbb{R}^M$ is the weight vector for the linear combination of the base models, obtained by applying the transformation operation $\tau$ to $\boldsymbol{p}$. $K, L, M$ are the lengths of the input, hidden and output layers, respectively. Our ensemble algorithm learns to produce linear combination weights by minimizing the loss function \eqref{eq:loss_generic}. For all constraints, we set $\ell$ as the squared error loss, although any differentiable loss function can be selected, and assume we have the base predictions for $1 \leq t \leq T$, as explained in Fig. \ref{fig:alg_flow} and \textbf{Algorithm 1}.
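A minimal numpy sketch of this forward pass is given below; the function names are our own illustrative choices, and the transformation $\tau$ is kept abstract here (its three concrete forms are given in the following subsections).
\begin{verbatim}
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def forward(x, U1, U2, tau):
    # x: input (length K), U1: L x K, U2: M x L
    v = U1 @ x    # pre-activations of the hidden layer
    z = relu(v)   # hidden-layer output
    p = U2 @ z    # raw output vector (length M)
    w = tau(p)    # combination weights under the constraint
    return v, z, p, w

# Ensemble prediction from base forecasts yhat_base (length M):
# _, _, _, w = forward(x, U1, U2, tau)
# y_ens = w @ yhat_base
\end{verbatim}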
At any given time $t$ from the training data, the loss to be minimized by the MLP ensemble is given as \begin{equation*} L_{t} = \ell(y_{t}, \hat y^{E}_{t}) = (y_{t} - \hat y^{E}_{t})^2 = (y_{t} - \sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} \hat y^{(i)}_{t})^2, \end{equation*} where $y_{t}$ is the observed sequence, and $\hat y^{E}_{t}$ is the prediction of the ensemble algorithm, at time $t$. Here, we employ the notations $x_i \triangleq x_{\boldsymbol{s}^E_{t}}^{(i)}$, $w_i \triangleq w_{\boldsymbol{s}^E_{t}}^{(i)}$ and $p_i \triangleq p_{\boldsymbol{s}^E_{t}}^{(i)}$, for simplicity. The loss is minimized by iteratively updating the layer weights $\boldsymbol{U}^{(1)}$ and $\boldsymbol{U}^{(2)}$ with backpropagation. The backpropagation equations are given as \begin{align*} \frac{\partial{L_{t}}}{\partial{U^{(1)}_{l,k}}} &= \sum_{j=1}^{M}\sum_{i=1}^{M} \frac{\partial{L_{t}}}{\partial{w_i}} \frac{\partial{w_i}}{\partial{p_j}} \frac{\partial{p_j}}{\partial{z_l}} \frac{\partial{z_l}}{\partial{v_l}} \frac{\partial{v_l}}{\partial{U^{(1)}_{l,k}}},\\ \frac{\partial{L_{t}}}{\partial{U^{(2)}_{m,l}}} &= \sum_{i=1}^M \frac{\partial{L_{t}}}{\partial{w_i}} \frac{\partial{w_i}}{\partial{p_m}} \frac{\partial{p_m}}{\partial{U^{(2)}_{m,l}}}, \end{align*} where $1 \leq l \leq L$, $1 \leq k \leq K$, $1 \leq m \leq M$. We then update $\boldsymbol{U}^{(1)}$ and $\boldsymbol{U}^{(2)}$ using stochastic gradient descent (SGD). The update equations are given as \begin{align*} U^{(1)}_{l,k} = U^{(1)}_{l,k} - \alpha \frac{\partial{L_{t}}}{\partial{U^{(1)}_{l,k}}}, \;\; U^{(2)}_{m,l} = U^{(2)}_{m,l} - \alpha \frac{\partial{L_{t}}}{\partial{U^{(2)}_{m,l}}}, \end{align*} where $\alpha$ is the learning rate hyperparameter of SGD. Next, we explain the necessary transformations, and provide the closed-form backpropagation equations under the given constraints. \subsubsection{Unconstrained Case}\label{mlp_unconstrained} For the unconstrained case, there is no restriction on the value of $w_{\boldsymbol{s}^E_{t}}^{(i)}$. Therefore, we can simply write the transformation relation as $w_{\boldsymbol{s}^E_{t}}^{(i)} = p_{\boldsymbol{s}^E_{t}}^{(i)}$. Hence, the backpropagation equations become \begin{align*} \frac{\partial{L_{t}}}{\partial{U^{(1)}_{l,k}}} &= \sum_{m=1}^M 2 (\hat y^{E}_{t} - y_{t}) \hat y^{(m)}_{t} U^{(2)}_{m,l} f'(v_l) x_k,\\ \frac{\partial{L_{t}}}{\partial{U^{(2)}_{m,l}}} &= 2 (\hat y^{E}_{t} - y_{t}) \hat y^{(m)}_{t} z_l, \end{align*} where $f'(v_{l})$ is the derivative of ReLU, which is the piece-wise function \begin{equation*} f'(v_{l}) = \begin{cases} 0, & \text{if } {v_{l} \leq 0} \\ 1, & \text{if } {v_{l} > 0}. \end{cases} \end{equation*} \subsubsection{Affine Constrained Case}\label{affine_mlp} For the affine constrained case, the restriction is that the combination weights should sum to one, i.e., $\sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} = 1$. In order to satisfy this relation, we apply the normalization $w_{\boldsymbol{s}^E_{t}}^{(i)} = \frac{p_{\boldsymbol{s}^E_{t}}^{(i)}}{\sum_{m=1}^M p_{\boldsymbol{s}^E_{t}}^{(m)}}$.
Hence, the backpropagation equations become \begin{align*} \frac{\partial{L_{t}}}{\partial{U^{(1)}_{l,k}}} &= \sum_{m=1}^M 2 c (\hat y^{E}_{t} - y_{t})(\hat y^{(m)}_{t} - \hat y^{E}_{t}) U^{(2)}_{m,l} f'(v_l) x_k,\\ \frac{\partial{L_{t}}}{\partial{U^{(2)}_{m,l}}} &= 2 c (\hat y^{E}_{t} - y_{t})(\hat y^{(m)}_{t} - \hat y^{E}_{t}) z_l, \end{align*} where $c = 1/\sum_{m=1}^M p_{\boldsymbol{s}^E_{t}}^{(m)}$ is a constant term. \subsubsection{Convex Constrained Case}\label{mlp_convex} In the convex constrained case, the restriction is again that the combination weights should sum to one, $\sum_{i=1} ^{M} w_{\boldsymbol{s}^E_{t}}^{(i)} = 1$, and, in addition, $0 \leq w_{\boldsymbol{s}^E_{t}}^{(i)} \leq 1$ should also be satisfied. Therefore, we apply the softmax transformation $w_{\boldsymbol{s}^E_{t}}^{(i)} = \frac{e^{p_{\boldsymbol{s}^E_{t}}^{(i)}}}{\sum_{m=1}^M e^{p_{\boldsymbol{s}^E_{t}}^{(m)}}}$. Hence, the backpropagation equations become \begin{align*} \frac{\partial{L_{t}}}{\partial{U^{(1)}_{l,k}}} &= \sum_{m=1}^M 2 (\hat y^{E}_{t} - y_{t}) w_m (\hat y^{(m)}_{t} - \hat y^{E}_{t}) U^{(2)}_{m,l} f'(v_l) x_k,\\ \frac{\partial{L_{t}}}{\partial{U^{(2)}_{m,l}}} &= 2 (\hat y^{E}_{t} - y_{t}) w_m (\hat y^{(m)}_{t} - \hat y^{E}_{t}) z_l. \end{align*} We now have the update equations to train both the MLP ensemble and LightGBM ensemble algorithms according to Fig. \ref{fig:alg_flow} and \textbf{Algorithm 1}. \begin{remark} Note that, although we provide two different models to use as our ensemble learners, any machine learning model, such as random forests \cite{breiman2001random} or linear regression \cite{seber2012linear}, can be employed. \end{remark} The following section presents simulations of our ensemble models under different constraints, and also shows their superiority over the base models that we employ. \section{Simulations}\label{sec:sims} In this section, we first verify our model using synthetically generated data, showing the learning procedure of our algorithms with respect to the conventional ensemble methods widely used in the literature \cite{ren2016ensemble}. Then, we showcase the performance of our ensemble models under different constraints using real-life data, namely the well-known M5 prediction competition \cite{makridakis2022m5} and the daily residential natural gas demand in Turkey. As the conventional ensemble models, we employ two different models. First, we use a conventional linear ensemble model where linear regression is used to learn the relation between the base model predictions and the observed sequence $y_{t}$. Next, we use an MLP ensemble with two layers to learn the relation between the base model predictions and the observed sequence $y_{t}$. Thus, neither of the conventional models employs any side information related to the sequence; they make use of only the base model predictions while learning the combination weights. For both our MLP ensemble introduced in Section \ref{sec:mlp_ens} and LightGBM ensemble introduced in Section \ref{sec:lgbm_ens}, we use two base models and search for the optimal linear combination weights $w_{\boldsymbol{s}^E_{t}}^{(1)}$ and $w_{\boldsymbol{s}^E_{t}}^{(2)}$, given the side information vector $\boldsymbol{s}^E_{t}$, under unconstrained, affine constrained and convex constrained conditions. However, our results can be straightforwardly extended to cases with more than two base models.
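Both ensembles rely on the same three weight transformations $\tau$; for reference, a compact numpy sketch of the three maps (with our own function names) is given below.
\begin{verbatim}
import numpy as np

def tau_unconstrained(p):
    return p                    # w = p

def tau_affine(p):
    return p / np.sum(p)        # weights sum to one

def tau_convex(p):
    e = np.exp(p - np.max(p))   # numerically stabilized softmax
    return e / np.sum(e)        # weights in [0, 1], summing to one
\end{verbatim}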
During all our simulations, the data is divided into two partitions, the training and test datasets, according to Fig. \ref{fig:alg_flow} and \textbf{Algorithm 1}. We only consider one-step-ahead forecasting and compare models in terms of the total loss \begin{equation*} \sum_{t=T+1}^{T_{2}}L_{t} = \sum_{t=T+1}^{T_{2}}\ell(y_{t} , \hat y_{t}), \end{equation*} where $y_{t}$ is the observed sequence at time $t$, $\ell$ is selected as the squared error loss, $T+1 \leq t \leq T_{2}$ is the duration of the test dataset, $1 \leq t \leq T$ is the duration of the training dataset, and $\hat y_{t}$ is the prediction of the corresponding (base or ensemble) model. We also illustrate the cumulative normalized total error for all of the models used in the experiments, given as \begin{equation*} \sum_{k=T+1}^t \frac{(y_k-\hat{y}_k)^2}{t}, \quad t=T+1,\ldots,T_{2}, \end{equation*} where ${y}_k$ is the current sample of the signal to be predicted and $\hat{y}_k$ is the prediction of the corresponding model. \subsection{Synthetic Data} The synthetic data consists of two manually generated and independent data components. The first component is generated using the autoregressive integrated moving average (ARIMA) model \cite{box2015time} given by \begin{equation*} y^{\{1\}}_{t} = 0.2y^{\{1\}}_{t-1} - 0.1y^{\{1\}}_{t-2} + 0.3e^{\{1\}}_{t-1} - 0.1e^{\{1\}}_{t-2} + v^{\{1\}}_{t}, \end{equation*} where $v^{\{1\}}_{t}$ is a sample function from a stationary white Gaussian process with unit variance and $e^{\{1\}}_{t}$ is the error term given by $(y^{\{1\}}_{t} - 0.2y^{\{1\}}_{t-1} + 0.1y^{\{1\}}_{t-2})$. The second component is generated by a highly complex and hard-to-model piecewise structure, and is given as \begin{align*} y^{\{2\}}_{t} &= 30 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} > 50 \text{,} \;\; y^{\{2\}}_{t-1} > 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} > 50 \;\; \\ y^{\{2\}}_{t} &= 35 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} > 50 \text{,} \;\; y^{\{2\}}_{t-1} > 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} < 50 \;\; \\ y^{\{2\}}_{t} &= 40 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} > 50 \text{,} \;\; y^{\{2\}}_{t-1} < 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} > 50 \;\; \\ y^{\{2\}}_{t} &= 45 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} > 50 \text{,} \;\; y^{\{2\}}_{t-1} < 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} < 50 \;\; \\ y^{\{2\}}_{t} &= 56 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} < 50 \text{,} \;\; y^{\{2\}}_{t-1} > 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} > 50 \;\; \\ y^{\{2\}}_{t} &= 61 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} < 50 \text{,} \;\; y^{\{2\}}_{t-1} > 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} < 50 \;\; \\ y^{\{2\}}_{t} &= 66 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} < 50 \text{,} \;\; y^{\{2\}}_{t-1} < 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} > 50 \;\; \\ y^{\{2\}}_{t} &= 71 + v^{\{2\}}_{t} \text{, if} \;\; y^{\{2\}}_{t-7} < 50 \text{,} \;\; y^{\{2\}}_{t-1} < 50 \text{,} \;\; \frac{1}{7}\sum_{k=1}^{7}y^{\{2\}}_{t-k} < 50, \end{align*} where $v^{\{2\}}_{t}$ is a sample function from a stationary white Gaussian process with unit variance. For our synthetic data experiments, we combine $y^{\{1\}}_{t}$ and $y^{\{2\}}_{t}$ with different weighting schemes to form different ensemble datasets. We create three different sets of data, denoted $y^{\{a\}}_{t}$, $y^{\{b\}}_{t}$ and $y^{\{c\}}_{t}$.
These datasets are formed as: \begin{align*} y^{\{a\}}_{t} &= 0.333y^{\{1\}}_{t} + 0.667y^{\{2\}}_{t} \text{, if} \;\; t\Mod{2}= 0 \;\; \\ y^{\{a\}}_{t} &= 0.666y^{\{1\}}_{t} + 0.334y^{\{2\}}_{t} \text{, if} \;\; t\Mod{2}= 1, \end{align*} \begin{align*} y^{\{b\}}_{t} &= 0.200y^{\{1\}}_{t} + 0.800y^{\{2\}}_{t} \text{, if} \;\; t\Mod{4}= 0 \\ y^{\{b\}}_{t} &= 0.400y^{\{1\}}_{t} + 0.600y^{\{2\}}_{t} \text{, if} \;\; t\Mod{4}= 1 \\ y^{\{b\}}_{t} &= 0.600y^{\{1\}}_{t} + 0.400y^{\{2\}}_{t} \text{, if} \;\; t\Mod{4}= 2 \\ y^{\{b\}}_{t} &= 0.800y^{\{1\}}_{t} + 0.200y^{\{2\}}_{t} \text{, if} \;\; t\Mod{4}= 3, \end{align*} \begin{align*} y^{\{c\}}_{t} &= 0.059y^{\{1\}}_{t} + 0.941y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 0 \\ y^{\{c\}}_{t} &= 0.118y^{\{1\}}_{t} + 0.882y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 1 \\ y^{\{c\}}_{t} &= 0.176y^{\{1\}}_{t} + 0.824y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 2 \\ y^{\{c\}}_{t} &= 0.235y^{\{1\}}_{t} + 0.765y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 3 \\ y^{\{c\}}_{t} &= 0.294y^{\{1\}}_{t} + 0.706y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 4 \\ y^{\{c\}}_{t} &= 0.353y^{\{1\}}_{t} + 0.647y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 5 \\ y^{\{c\}}_{t} &= 0.412y^{\{1\}}_{t} + 0.588y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 6 \\ y^{\{c\}}_{t} &= 0.471y^{\{1\}}_{t} + 0.529y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 7 \\ y^{\{c\}}_{t} &= 0.529y^{\{1\}}_{t} + 0.471y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 8 \\ y^{\{c\}}_{t} &= 0.588y^{\{1\}}_{t} + 0.412y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 9 \\ y^{\{c\}}_{t} &= 0.647y^{\{1\}}_{t} + 0.353y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 10 \\ y^{\{c\}}_{t} &= 0.706y^{\{1\}}_{t} + 0.294y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 11 \\ y^{\{c\}}_{t} &= 0.765y^{\{1\}}_{t} + 0.235y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 12 \\ y^{\{c\}}_{t} &= 0.824y^{\{1\}}_{t} + 0.176y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 13 \\ y^{\{c\}}_{t} &= 0.882y^{\{1\}}_{t} + 0.118y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 14 \\ y^{\{c\}}_{t} &= 0.941y^{\{1\}}_{t} + 0.059y^{\{2\}}_{t} \text{, if} \;\; t\Mod{16}= 15.\\ \end{align*} For all synthetic data experiments, our data samples are of length $730$, where the last $100$ samples are taken as the test data, and the remaining are taken as the training data. We evaluate the model accuracies on the test dataset. We train our ensemble models according to Fig. \ref{fig:alg_flow} and \textbf{Algorithm 1}, and for the base models we directly use $y^{\{1\}}_{t}$ and $y^{\{2\}}_{t}$. Therefore, we expect the ensemble models to learn the combination weights $w_{\boldsymbol{s}^E_{t}}^{(1)}$ and $w_{\boldsymbol{s}^E_{t}}^{(2)}$ used in forming all three ensemble datasets. As the side information vector $\boldsymbol{s}^E_{t}$, we provide the modulo information that splits the data weights into different regions. Note that the importance of the side information vector to the data as well as the data complexity varies among the three datasets, as there is a different number of distinct weight combinations for each dataset. For each of the three datasets, we fit our MLP ensemble model introduced in Section \ref{sec:mlp_ens} under all three constraints (unconstrained, affine constrained and convex constrained). Note that, as $w_{\boldsymbol{s}^E_{t}}^{(1)} + w_{\boldsymbol{s}^E_{t}}^{(2)} = 1$ is satisfied for all weight combinations, our ensemble algorithm should theoretically be able to capture all of the combination weights under any constraint.
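The three mixing schedules above share a common pattern: with $n$ regions ($n=2,4,16$), the weight of $y^{\{1\}}_{t}$ in region $k = t \bmod n$ is $(k+1)/(n+1)$. A short numpy sketch reproducing this construction (our own parameterization of the schedules above) is:
\begin{verbatim}
import numpy as np

def mix(y1, y2, n_regions):
    # Weight of y1 cycles through (k+1)/(n_regions+1),
    # k = t mod n_regions, matching the schedules above.
    t = np.arange(len(y1))
    w1 = ((t % n_regions) + 1) / (n_regions + 1)
    return w1 * y1 + (1.0 - w1) * y2

# y_a = mix(y1, y2, 2)
# y_b = mix(y1, y2, 4)
# y_c = mix(y1, y2, 16)
\end{verbatim}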
Table \ref{table:synthetic} shows the results in terms of the final cumulative error for all three ensemble datasets, for all three constraints. We also illustrate the cumulative normalized total error for the predictions of $y^{\{b\}}_{t}$ and $y^{\{c\}}_{t}$ in Figs. \ref{fig:cumsum_error_b} and \ref{fig:cumsum_error_c}. When the data does not contain many distinct weight combinations related to the side information vector ($y^{\{a\}}_{t}$), all three ensembles are able to perfectly capture the weight combinations. However, as we increase the number of distinct weight combinations, the ensemble models under unconstrained and affine constrained conditions tend to make more errors compared to the convex constrained model. This is because these models have more parameters to learn compared to the convex constrained model, as the search space for the optimal combination weights is larger and therefore more complex. Consequently, the models may not converge to the optimal weights, as in our example. Hence, even though the unconstrained ensemble model has the least expected error among all three constraints, as explained in Section \ref{sec:theproposedmodel}, convex and affine models can be preferable in complex problems. In addition, the MLP ensemble model under all three constraints outperforms both the conventional linear ensemble and the conventional MLP ensemble. Table \ref{table:synthetic} illustrates that both of the conventional methods are unable to capture the regional change in the structure of the observed sequence $y_{t}$, and hence produce large errors, as opposed to our MLP ensemble, which directly employs a side information vector containing the regional switching information. Thus, our results showcase the need for employing a data-specific side information vector while learning the associated combination weights. \begin{table}[!t] \begin{center} \begin{tabular}{ |c|c|c|c| } \hline Model & $y^{\{a\}}_{t}$ & $y^{\{b\}}_{t}$ & $y^{\{c\}}_{t}$ \\ \hline \makecell{MLP Ensemble \\ Unconstrained} & 0.0 & 0.10248 & 6.64695 \\ \hline \makecell{MLP Ensemble \\ Affine Constrained} & 0.0 & 0.06670 & 2.72202 \\ \hline \makecell{MLP Ensemble \\ Convex Constrained} & 0.0 & $\boldsymbol{0.00603}$ & $\boldsymbol{0.21027}$ \\ \hline \makecell{Conventional \\ Linear Ensemble} & 136.69053 & 256.78693 & 367.59445 \\ \hline \makecell{Conventional \\ MLP Ensemble} & 124.90278 & 262.02210 & 140.68292 \\ \hline \end{tabular} \end{center} \caption{Final cumulative error on the test set for all ensemble data, under all constraints for the MLP ensemble.} \label{table:synthetic} \end{table} \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{figures/cumsum_error_b} \caption{Comparison of the cumulative error for the prediction of the ensemble data $y^{\{b\}}_{t}$. The performance of the MLP ensemble model under unconstrained, affine constrained and convex constrained conditions is shown.} \label{fig:cumsum_error_b} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.5\textwidth]{figures/cumsum_error_c} \caption{Comparison of the cumulative error for the prediction of the ensemble data $y^{\{c\}}_{t}$.
The performance of the MLP ensemble model under unconstrained, affine constrained and convex constrained conditions is shown.} \label{fig:cumsum_error_c} \end{figure} \subsection{Total Residential Natural Gas Demand in Turkey}\label{sec:botas} The data used in this section consists of the total residential natural gas demand in Turkey for 1000 days during the years 2018-2020, and each day is considered a single sample. We train both our LightGBM ensemble introduced in Section \ref{sec:lgbm_ens} and MLP ensemble introduced in Section \ref{sec:mlp_ens}, under all weight constraints, according to Fig. \ref{fig:alg_flow} and \textbf{Algorithm 1}. For the base algorithms, we use the Seasonal Auto-Regressive Integrated Moving Average with eXogenous factors (SARIMAX) model, which is a linear model commonly used in time series forecasting \cite{box2015time}, and LightGBM, which is a gradient boosting framework that uses tree based learning algorithms \cite{ke2017lightgbm}. We take the last 300 samples of the data as the test dataset, and the remaining as the training dataset. We evaluate the model accuracies on the test dataset. Table \ref{table:botas} illustrates our results in terms of the final cumulative error for both the base algorithms and the ensemble algorithms under all three constraints. We also illustrate the cumulative normalized total error for all models in Fig. \ref{fig:cumsum_error_botas}. Note that the results for the LightGBM ensemble model under the unconstrained weight condition are not shown, as the model could not converge to produce reasonable weights. Our results in Table \ref{table:botas} and Fig. \ref{fig:cumsum_error_botas} illustrate that both our LightGBM ensemble and MLP ensemble algorithms outperform the two base algorithms under the affine and convex weight constraints. For both ensemble models, when the produced weights are unconstrained, the models could not improve on the predictions of the base models. Therefore, as also illustrated in the synthetic data experiments, even though the expected error for the unconstrained ensemble models is less than that of the ensemble models with constrained weights, convex and affine models can be preferable, as the search space for their combination weights is smaller than in the unconstrained case. In addition, our ensemble models under affine and convex constraints outperform the conventional ensemble methods, as illustrated in Table \ref{table:botas} and Fig. \ref{fig:cumsum_error_botas}. \begin{table}[!h] \begin{center} \begin{tabular}{ |c|c| } \hline Model Name & Final Cumulative Error (1e10) \\ \hline SARIMAX Base & 50.16 \\ \hline LightGBM Base & 28.67 \\ \hline \makecell{LightGBM Ensemble \\ Affine Constrained} & 27.51 \\ \hline \makecell{LightGBM Ensemble \\ Convex Constrained} & $\boldsymbol{24.54}$ \\ \hline MLP Ensemble Unconstrained & 35.25 \\ \hline MLP Ensemble Affine Constrained & 25.92 \\ \hline MLP Ensemble Convex Constrained & 28.47 \\ \hline Conventional MLP Ensemble & 29.43 \\ \hline Conventional Linear Ensemble & 35.86 \\ \hline \end{tabular} \end{center} \caption{The final cumulative error for both the base and ensemble models used in predicting total residential natural gas demand in Turkey, under all constraints.} \label{table:botas} \end{table} \begin{figure*}[!h] \centering \includegraphics[width=0.65\textwidth]{figures/cumsum_error_botas.png} \caption{Comparison of the cumulative error for the prediction of the total residential natural gas demand in Turkey.
The performance of our base and ensemble models under different constraints is shown. Note that the error plot for the unconstrained LightGBM ensemble is not provided here, as the errors produced by the model are high compared to the other models.} \label{fig:cumsum_error_botas} \end{figure*} \subsection{M5 Forecasting Dataset} The M5 Forecasting dataset \cite{makridakis2022m5} involves the unit sales of the products sold by the retail company Walmart in the USA. The products are classified into 3 categories and 7 departments, and are sold in 10 stores located in 3 states. For our experiments, we examine the total number of unit sales in store CA\_3, department HOUSEHOLD\_1, thereby reducing the possibility of error in the data. The data length is 1941 days, where each day is considered a single sample. We consider the first 1841 days as the training dataset, and the last 100 days as the test dataset. We train both our LightGBM ensemble introduced in Section \ref{sec:lgbm_ens} and MLP ensemble introduced in Section \ref{sec:mlp_ens}, under all weight constraints, according to Fig. \ref{fig:alg_flow} and \textbf{Algorithm 1}. For the base algorithms, we use the SARIMAX and LightGBM models, as in Section \ref{sec:botas}. We evaluate both the base and ensemble model accuracies on the test dataset. Table \ref{table:m5} illustrates our results in terms of the final cumulative error for both the base algorithms and the ensemble algorithms under the affine and convex constraints. We also give the cumulative normalized total error for all models in Fig. \ref{fig:cumsum_error_m5}. We do not show the results of the ensemble algorithms under the unconstrained case, as the models could not converge to produce reasonable weights. Our results in Table \ref{table:m5} and Fig. \ref{fig:cumsum_error_m5} illustrate that both our MLP ensemble and LightGBM ensemble models outperform our base models and also the conventional ensemble models, under affine and convex constrained weights. \begin{table}[!h] \begin{center} \begin{tabular}{ |c|c| } \hline Model Name & Final Cumulative Error \\ \hline SARIMAX Base & 10377.02 \\ \hline LightGBM Base & 7811.49 \\ \hline LightGBM Ensemble Affine Constrained & $\boldsymbol{7489.49}$ \\ \hline LightGBM Ensemble Convex Constrained & 7509.36 \\ \hline MLP Ensemble Affine Constrained & 7582.25 \\ \hline MLP Ensemble Convex Constrained & 7688.94 \\ \hline Conventional MLP Ensemble & 8485.79 \\ \hline Conventional Linear Ensemble & 9130.51 \\ \hline \end{tabular} \end{center} \caption{The final cumulative error for both the base and ensemble models used in predicting the total unit sales in store CA\_3 department HOUSEHOLD\_1 of Walmart, USA, under affine and convex constraints.} \label{table:m5} \end{table} \begin{figure*}[!h] \centering \includegraphics[width=0.65\textwidth]{figures/cumsum_error_m5.png} \caption{Comparison of the cumulative error for the prediction of the total unit sales in store CA\_3 department HOUSEHOLD\_1 of Walmart, USA. The performance of our base and ensemble models under different constraints is shown.
Note that the error plots for the unconstrained LightGBM ensemble and unconstrained MLP ensemble are not provided here, as the errors produced by these models are high compared to the other models.} \label{fig:cumsum_error_m5} \end{figure*} \section{Conclusion}\label{sec:conclusion} We studied the problem of predicting sequential time series data by combining the predictions of multiple machine learning models using a novel ensembling approach. For the first time in the literature, we tackled the problem of finding the optimal combination weight vectors under unconstrained, affine constrained and convex constrained conditions, while considering a data-specific side information vector. We analyzed the associated costs of learning under all three constraints given the side information vector, and then introduced two novel and generic ensembling algorithms to find the optimal combination weight vectors under the given constraints. In addition, we presented a novel training scheme for ensemble models, which mitigates possible issues related to the training of the base models, such as collinearity, high correlation and overfitting by the base algorithms. With various experiments containing synthetic and well-known real-life sequential data, we illustrated the superiority of our ensemble models over both the base models used in the experiments and the conventional ensembling methods in the literature. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} Models of planetary formation that involve either core accretion or fragmentation of protoplanetary discs predict that the orbit of the planets should lie in the disc \citep{Pollacketal1996, Mayeretal2002}. There is a large body of work on the tidal interaction between a planet and the protoplanetary disc assuming that the planet is orbiting in the midplane of the disc \citep{Lin1993, Brydenetal1999, Varniereetal2004, Armitage2010, Kley2012, Baruteau2013}. The density perturbations in the protoplanetary disc exert a tidal torque on the planet, so it may migrate radially. Low mass planets (with masses below a few to a few tens of Earth masses) induce linear perturbations in the structure of the disc, whereas more massive planets produce non-linear perturbations. In the latter case, the transfer of angular momentum from the planet to the disc may lead to the opening of a gap in the disc. Interestingly, the existence of gaps in discs around very young stars (such as HL Tau) has been recently confirmed in submillimeter observations with the Atacama Large Millimeter-Submillimeter Array (ALMA) \citep[e.g.,][]{Carrasco-Gonzalezetal2016, Hsi-WeiYenetal2016}. Whether these gaps are created by massive planets or not is still under debate. To date, about $3300$ extrasolar planets have been detected through either radial velocity or transit measurements. Using the Rossiter-McLaughlin effect \citep{Fabrycky2009} it is possible to calculate the tilt angle between the sky projection of the stellar spin axis and the orbital axis of the planet. It was found that $40\%$ of the massive planets observed have a non-zero tilt angle \citep{Triaudetal2010,Albrecht2012}. Different hypotheses have been suggested to explain how planets can have misaligned orbits \citep[e.g.,][]{Xiang2013,Picogna2015}. The evolution of the orbital parameters of a planet on an inclined orbit due to its interaction with the protoplanetary disc through tidal torques has been investigated by several authors. For low-mass planets on orbits with eccentricity and inclination smaller than the disc's aspect ratio, \citet{TanakaWard2004} performed linear calculations and predicted a rapid exponential decay of the inclination $i$ and eccentricity $e$ of the planetary orbit \citep[see also][]{CresswellNelson2006}. For larger initial values of $e$ and $i$, the orbital evolution of a $20$ Earth-mass planet was studied numerically by \citet{Cresswelletal2007}, who found that the time scales for eccentricity and inclination damping, albeit longer than given by the linear analysis of \citet{TanakaWard2004}, are still shorter than the migration time scale. The orbital evolution of massive planets is more complex. \citet{MarzariNelson2009} studied the orbital evolution of a Jupiter mass ($M_{J}$) planet with an initial inclination of $20^{\circ}$ and initial eccentricities ranging from $0$ to $0.4$. For an isothermal disc with a local surface density at the planetary orbit of $242$ g cm$^{-2}$, they found that the inclination and eccentricity are rapidly damped on a timescale of the order of $10^{3}$ years. \citet{Xiang2013} considered the orbital evolution of planets between $1M_{J}$ and $6M_{J}$, initialized with zero eccentricity and a wide range of inclinations. They showed that the inclination decay rate decreases drastically with the initial inclination.
For instance, for a Jupiter mass planet with an initial inclination of $80^{\circ}$, the time required for the inclination to decay by $10^{\circ}$ is of the order of $10^{6}$ years \citep[see also][]{Rein2012}. \citet{Bitsch2013} also investigated numerically the evolution of inclination and eccentricity for planets above $1M_{J}$ and provided empirical formulae for $di/dt$ and $de/dt$ by fitting the results of their simulations. \citet{Lubow2015} and \citet{Miranda2015} investigated the tidal truncation of misaligned discs in binary systems by computing the Lindblad torques. Many of the previous studies focused mainly on determining whether inclined planets can, or cannot, realign with the protoplanetary disc within the lifetime of the disc. The three-dimensional structure of the disc also changes because tidal torques by an inclined planet can open gaps in the disc, excite bending waves or warps, and can make the disc eccentric. Here we are interested in the gap clearing by massive inclined planets. Gap opening has been studied thoroughly in the coplanar case because the planet migration and the mass accretion rates are both sensitive to the existence of a gap. Less studied is the gap opening by inclined planets. Simulations indicate that planets with low inclinations produce much wider and deeper gaps than planets with large inclinations \citep{Xiang2013, Bitsch2013}. However, there is no physical description of the width and shape of these gaps. As occurs in the coplanar case, \citet{Xiang2013} noticed that for small and intermediate inclinations, the rate of inclination damping depends on gap formation; it decreases as soon as the gap is formed because the strength of the interaction with the disc depends on the local disc density. Given the recent observations of gaps in circumstellar discs and given the importance of gaps for understanding the orbital decay of planets and the gas accretion onto giant protoplanets, we study the gap formation by a Jupiter mass planet on an inclined orbit relative to the initial midplane of the disc, when the inclination is $30^{\circ}$ or lower. This paper is organized as follows. In Section \ref{sec:cop_case}, we review the basics of gap opening in the coplanar case. In Section \ref{sec:differential_ring}, a model based on the impulse approximation is presented for calculating the torque between the disc and the planet. Section \ref{sec:gapsmethods} describes the methods to derive the gap profile. In Section \ref{sec:num_sim} we present our simulations, show the three-dimensional (3D) structure of the disc and compare the resulting gap profile to our analytical model. Finally, our main conclusions are given in Section \ref{sec:conclusions}. \section{The coplanar case: Torques and gap formation criteria} \label{sec:cop_case} Consider a thin disc with a smooth surface density $\Sigma (R)$ rotating with Keplerian angular frequency $\Omega(R)$ around a star of mass $M_{S}$, and a planet of mass $M_{p}$ on a circular orbit with radius $R_{p}$. If the mass of the planet is sufficiently high, the tidal torque on the disc can open a gap in the vicinity of the planet's orbit. In the coplanar case, if the disc has a gap, there are four relevant scale lengths: the orbital radius $R_{p}$, the thickness of the disc ($H$), the Hill radius $r_{H}\equiv (M_{p}/3M_{S})^{1/3}R_{p}$, and the distance between the orbit of the planet and the edge of the gap $\Delta_{0}$.
From simple physical grounds, one expects the following ordering between these scales: \begin{equation} \Delta_{0}\gtrsim H \hskip 0.2cm {\rm and} \hskip 0.2cm \Delta_{0}\gtrsim r_{H} \label{eq:ordering} \end{equation} \citep[e.g.,][]{Lin1993}. The one-sided torque between the protoplanet and the disc when they are coplanar, denoted by $T_{g}$, has been derived using different approaches \citep[see][for a review]{Lin1993}. Using the impulse approximation, \citet{LinPapaloizou1979} obtained \begin{equation} T_{g}=C_{T} q^2\Sigma R_p^4 \omega^2 \left(\frac{R_p}{\Delta_{0}}\right)^3, \label{eq:Tg_impulse_approx_coplanar} \end{equation} where $q$ is the planet-to-star mass ratio ($q\equiv M_{p}/M_{S}$), $\omega\equiv \Omega(R_{p})$ is the angular frequency of the planet, and $C_{T}=8/27$. Alternatively, $T_{g}$ can also be calculated by adding the contribution of the torques exerted on the disc at all the Lindblad resonances \citep[e.g.,][]{GoldreichTremaine1980,Ward1986, Lin1993}. In this formalism, Equation (\ref{eq:Tg_impulse_approx_coplanar}) is recovered with $C_{T}=(32/243)[2K_{0}(2/3)+K_{1}(2/3)]^{2} \simeq 0.84$ (where $K_{0}$ and $K_{1}$ are modified Bessel functions; see, e.g., equation (21) in Lin \& Papaloizou 1993). \citet{Papaloizou1984} obtained $T_{g}$ by computing the angular momentum transferred between fluid elements and the planet, using the WKB approximation and taking into account the truncated disc structure. They found that the gravitational torque is maximum when $\Delta_{0}\simeq H$, and that the maximum value is \begin{equation} T_{g} = 0.23 q^2\Sigma R_p^4 \omega^2 \left(\frac{R_p}{H}\right)^3, \label{eq:H_tot} \end{equation} where $\Sigma$ is the surface density outside the gap (in practice, it is usually taken as the unperturbed density at the planet radius, which will be denoted by $\Sigma_{0}$). Note that the above equation is in agreement with Equation (\ref{eq:Tg_impulse_approx_coplanar}) with $C_{T}$ as derived in the impulse approximation, provided that $H\simeq \Delta_{0}$. In terms of the aspect ratio $h\equiv H/R_{p}$, we can write $T_{g} = 0.23 q^2\Sigma_{0} R_p^4 \omega^2 h^{-3}$. On the other hand, the angular momentum flux due to viscous stresses in a Keplerian disc with constant viscosity $\nu$ is given by \begin{equation} T_{\nu} =3\pi\Sigma\nu R^2\Omega, \label{eq:Tensor_visc} \end{equation} \citep[e.g.,][]{Lin1993}. Equating $T_{g}$ and $T_{\nu}$, and assuming the ordering given in Equation (\ref{eq:ordering}), the viscous condition for gap formation is given as \begin{equation} q\gtrsim q_{\rm crit}\equiv \frac{40\nu}{\omega R^2_p}. \label{eq:criteria_LP93} \end{equation} \citet{Brydenetal1999} found through numerical simulations that a {\it clean, deep} gap forms if $q>q_{\rm crit}$ \citep[see also][]{Lin1993}. For a typical disc with $h=0.05$, simulations showed that even for $q=q_{\rm crit}$, the surface density at the bottom of the gap is $\sim 0.2\Sigma_{0}$ \citep[e.g.,][]{hos07}. \citet{Cridaetal2006}, based on a semi-analytic study, obtained a more general gap opening criterion by considering a pressure torque in addition to the viscosity and gravity torques. This criterion involves simultaneously the planet mass, viscosity and scale height of the disc in the form \begin{equation} \frac{1.1 H}{q^{1/3}R_{p}}+ \frac{1}{q}\frac{50\nu}{\omega R_{p}^{2}}\leq 1.
\label{eq:gapcrida} \end{equation} Equation (\ref{eq:gapcrida}) gives an estimate of the minimum planet-to-star mass ratio for which a planet clears at least $90\%$ of the gas initially in its coorbital region. In more recent studies, \citet{Fungetal2014} and \citet{Duffell2015} performed numerical experiments that suggest that the one-sided torque $T_{g}$ due to the planet is approximately \begin{equation} T_{g}= f_{0}q^{2}\Sigma_{\rm gap} R_{p}^{4}\omega^{2} h^{-3}, \label{eq:fungduffell} \end{equation} where $\Sigma_{\rm gap}$ is the surface density in the gap when a steady-state has been reached and $f_{0}\simeq 0.45\pm 0.543h$ \citep[][and references therein]{Duffell2015}. Note that Equation (\ref{eq:fungduffell}) is similar (except for a numerical factor) to Equation (\ref{eq:H_tot}) in which $\Sigma_{0}$ is replaced by $\Sigma_{\rm gap}$. The condition $T_{g}\simeq T_{\nu}$ provides the surface density in the gap \citep{Fungetal2014,Duffell2015}: \begin{equation} \frac{\Sigma_{\rm gap}}{\Sigma_{0}}\simeq \frac{3\pi \nu h^{3}}{f_{0}q^{2}R_{p}^{2}\omega}. \end{equation} If our criterion for gap formation is that $\Sigma_{\rm gap}\lesssim 0.2 \Sigma_{0}$, this implies that a gap forms if \begin{equation} q\gtrsim 10\left(\frac{\nu}{\omega R_{p}^{2}}\right)^{1/2} h^{3/2}, \end{equation} where we have used $f_{0}=0.45$. We see that the critical value of $q$ for gap formation exhibits a strong dependence on the aspect ratio $h$. The above criteria for gap opening assume that the planet does not migrate radially from its initial orbit. Therefore, they are only valid if the gap opening rate is faster than the radial migration rate of the planet \citep{Lin1986b,Ward1989}. For typical circumstellar discs, this condition is satisfied \citep[e.g.,][]{Malik2015}. \begin{figure} \includegraphics[width=\columnwidth]{Tg_vR.pdf} \caption{Excitation torque density for $l>0$ and different inclinations of the planet's orbit.} \label{fig:deltaTg} \end{figure} \section{Torques by planets on inclined orbits: the impulse approximation} \label{sec:differential_ring} We consider a thin protoplanetary disc that initially lies in the plane $z=0$ (hereafter equatorial plane). We assume that the planet describes a circular orbit with radius $R_{p}$ and that its orbital plane is inclined by an angle $i(t)$ with respect to the midplane of the disc. Due to the gravitational interaction of the planet with the disc, tidal torques lead to a damping of the planetary inclination, implying that $di/dt<0$. For realistic protoplanetary discs, the damping timescale $i/(2|di/dt|)$ is much larger than the orbital period of the planet. Thus, the planet performs many orbits before the change in inclination is significant. We assume the disc to be pressure-less, so that it consists of test particles, and calculate the exchange of angular momentum between the disc and the planet as a result of the gravitational interaction of the particles with the planet. Treating the disc particles as being pressure-less is adequate as long as the velocity of the planet relative to the disc particles is supersonic \citep[e.g.,][]{Canto2013,Xiang2013}. This condition is valid for planetary inclinations larger than the disc's aspect ratio\footnote{In fact, the relative velocity between the planet and the disc particles in the vicinity of the planet is $2 \omega R_{p} \sin (i/2)$, and the local Mach number is $2h^{-1}\sin (i/2)$.}. In particular, for a typical value of $h=0.05$, the planet crosses the disc supersonically for inclinations $i\geq 3^{\circ}$.
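This threshold can be checked directly; a minimal Python sketch (our own helper name) evaluating the local Mach number $2h^{-1}\sin(i/2)$ quoted in the footnote is:
\begin{verbatim}
import numpy as np

def mach_number(i_deg, h=0.05):
    # Local Mach number of the planet relative to the gas,
    # M = 2 sin(i/2) / h (see the footnote above).
    return 2.0 * np.sin(np.radians(i_deg) / 2.0) / h

# mach_number(3.0) -> 1.05, so the crossing is (marginally)
# supersonic for i >= 3 deg when h = 0.05.
\end{verbatim}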
\subsection{Torques in the impulse approximation} Without any planet, a certain disc particle will describe circular orbits with radius $R_{d}$ around the central star. In the presence of a planet, the trajectory of this fluid element will be deflected due to successive gravitational encounters with the planet. We take the $x$-axis to be in the direction of the ascending line of nodes of the planet, and take $t=0$ when the planet passes through this axis, so that its position vector is \begin{equation} \mbox{\boldmath $R$} {}_{p}(t)=R_{p}(\cos\phi_{p}, \cos i\sin\phi_{p}, \sin i \sin\phi_{p}), \end{equation} where $\phi_{p}=\omega t$ and $\omega=\sqrt{GM_{S}/R_{p}^{3}}$. The planet reaches its maximum height at the Cartesian points $(0, R_{p}\cos i, R_{p}\sin i)$ and $(0, -R_{p}\cos i,-R_{p}\sin i)$, i.e. at the azimuthal angles $\pi/2$ and $3\pi/2$. The velocity of the planet, $\mbox{\boldmath $V$} {}_{p}$, is \begin{equation} \mbox{\boldmath $V$} {}_{p}=\omega R_{p}(-\sin \phi_{p}, \cos i\cos\phi_{p},\sin i\cos\phi_{p}). \end{equation} Consider a differential volume element of gas orbiting at a radius $R_{d}$ around the central star. The angular frequency of this disc particle is $\mbox{\boldmath $\Omega$} {}=\Omega (R_{d}) \hat{\mbox{\boldmath $e$} {}}_{z}$, where $\Omega=\varepsilon\sqrt{GM_{S}/R_{d}^{3}}$, and $\varepsilon=1$ if the disc rotates counter-clockwise, whereas $\varepsilon=-1$ if the disc rotates clockwise. Note that the planet has a prograde motion with respect to the disc if $-\pi/2< i <\pi/2$ and $\varepsilon=1$, whereas its orbit is retrograde if $-\pi/2< i <\pi/2$ and $\varepsilon=-1$. The separation vector at the minimum distance between this fluid particle and the planet is \begin{equation} \mbox{\boldmath $d$} {}_{\rm min}=\begin{pmatrix} [R_{d}-R_{p}]\cos \phi_{p} \\ [R_{d}-R_{p}\cos i]\sin \phi_{p} \\ -R_{p}\sin i\sin\phi_{p} \end{pmatrix}. \label{eq:dist_min} \end{equation} Its modulus is \begin{equation} d_{\rm min}=\left[\Delta^{2}+4 R_{d}R_{p}\sin^{2}(i/2)\sin^{2}\phi_{p}\right]^{1/2}, \end{equation} where $\Delta\equiv R_{d}-R_{p}$. For streamlines passing close enough to the perturber, $d_{\rm min}\ll R_{p}$ (which requires that $\sin i\ll 1$), and the relative velocity between the disc particle and the planet is \begin{equation} \mbox{\boldmath $v$} {}_{\rm rel}=R_{p}\begin{pmatrix} [\omega -\Omega]\sin\phi_{p} \\ [\Omega -\omega \cos i] \cos \phi_{p} \\ -\omega \sin i \cos\phi_{p}\end{pmatrix}. \label{eq:vecv_rel} \end{equation} In the impulse approximation, we assume that the close encounter between the disc particle and the perturber occurs with impact parameter $d_{\rm min}$ and velocity $v_{\rm rel}$, and that the deflection angle $\delta_{e}$ is small enough so that the trajectory of the disc particle is approximately rectilinear. In the planet frame, the deflection angle is \begin{equation} \cot^{2}\left(\frac{\delta_{e}}{2}\right)=\frac{v_{\rm rel}^{4}d_{\rm min}^{2}}{G^{2}M_{p}^{2}}, \label{eq:cotdeltae} \end{equation} where we recall that $M_{p}$ is the mass of the planet. From Equation (\ref{eq:vecv_rel}), we have \begin{equation} v_{\rm rel}^{2}=R_{p}^{2}\left[(\Omega-\omega)^{2}+4\omega \Omega \sin^{2}(i/2)\cos^{2}\phi_{p} \right].
\label{eq:vrel_modulus} \end{equation} The velocity of the fluid element immediately after one gravitational scattering, in the system of reference of a nonrotating observer, is \begin{equation} \mbox{\boldmath $V$} {}_{f}={\mathcal{R}}\mbox{\boldmath $v$} {}_{\rm rel}+\mbox{\boldmath $V$} {}_{p}, \end{equation} where ${\mathcal{R}}$ is the rotation matrix of angle $\delta_{e}$ around the axis parallel to the vector $\mbox{\boldmath $d$} {}_{\rm min}\times \mbox{\boldmath $v$} {}_{\rm rel}$. The disc particle remains orbiting in the $z=0$ plane only if $\mbox{\boldmath $d$} {}_{\rm min}\times \mbox{\boldmath $v$} {}_{\rm rel}$ is parallel to the $z$-axis, which occurs when $i=0$. In the general case, disc particles may be scattered to a tilted plane. After one encounter, the specific (orbital) angular momentum of a disc particle $\mbox{\boldmath $L$} {}$ will change from its unperturbed value $\mbox{\boldmath $L$} {}_{i}$ to a value $\mbox{\boldmath $L$} {}_{f}$. A change in the direction of the angular momentum corresponds to a warp, whereas a change in the magnitude of $\mbox{\boldmath $L$} {}$ corresponds to a change in $R_{d}$. In the coplanar case ($i=0$), $\mbox{\boldmath $L$} {}$ is always parallel to $\hat{\mbox{\boldmath $e$} {}}_{z}$ and the planetary torques lead to a redistribution of the mass in the plane of the disc. In those encounters for which $\mbox{\boldmath $L$} {}_{f}$ and $\mbox{\boldmath $L$} {}_{i}$ have different directions but the same magnitude, the fluid element is scattered to another plane but will have the same orbital radius. Fluid elements will move radially outwards or inwards from the planet's position when $\mbox{\boldmath $L$} {}$ changes its magnitude during the encounter. Since we are interested in the radial redistribution of the mass in the disc, our aim is to calculate the rate at which $L_{f}-L_{i}\equiv |\mbox{\boldmath $L$} {}_{f}|-|\mbox{\boldmath $L$} {}_{i}|$ changes due to successive encounters with the planet. Initially, the specific angular momentum of a fluid element about the central star is: \begin{equation} \mbox{\boldmath $L$} {}_{i}=\Omega R_{d}^{2}\hat{\mbox{\boldmath $e$} {}}_{z}. \end{equation} After the gravitational deflection with the planet, the specific angular momentum is \begin{equation} \mbox{\boldmath $L$} {}_{f}=\mbox{\boldmath $R$} {}_{d}\times ({\mathcal{R}}\mbox{\boldmath $v$} {}_{\rm rel}+\mbox{\boldmath $V$} {}_{p}), \end{equation} where $\mbox{\boldmath $R$} {}_{d}=\mbox{\boldmath $R$} {}_{p}+\mbox{\boldmath $d$} {}_{\rm min}$. The change in the magnitude of the angular momentum can be written in terms of ${\mathcal{R}}_{1}\equiv {\mathcal{R}}-{\mathcal{I}}$, where ${\mathcal{I}}$ is the identity matrix, as \begin{eqnarray} L_{f}-L_{i} \simeq \varepsilon \hat{\mbox{\boldmath $e$} {}}_{z}\cdot [\mbox{\boldmath $R$} {}_{d}\times ({\mathcal{R}}_{1}\mbox{\boldmath $v$} {}_{\rm rel})] =\varepsilon R_{d} ({\mathcal{R}}_{1} \mbox{\boldmath $v$} {}_{\rm rel})\cdot \mbox{\boldmath $e$} {}_{\phi}, \label{eq:lf_minus_li} \end{eqnarray} where $\mbox{\boldmath $e$} {}_{\phi}$ is the unitary vector in the azimuthal direction of the particles during the encounter: $\mbox{\boldmath $e$} {}_{\phi}=(-\sin\phi_{p},\cos\phi_{p},0)$. The (gravitational) torque acting upon an elementary ring of radius $R$ and width $\delta R$ is \begin{equation} \delta T_{g} (R) =\delta R\int_{0}^{2\pi} \varepsilon (L_{f}-L_{i})\Sigma v_{\rm rel} \frac{d\phi_{p}}{2\pi}.
\label{eq:deltaTg_prev} \end{equation} Hereafter we specialize to the prograde case ($\varepsilon=1$), but the retrograde case does not pose any additional complication. Substituting Eqs (\ref{eq:vrel_modulus}) and (\ref{eq:lf_minus_li}) into Eq.~(\ref{eq:deltaTg_prev}), the radial torque density in the impulse approximation is \begin{equation} \begin{aligned} &\frac{d T_{g}(l)}{dR}=\frac{\Sigma (R) \omega^{2} R_{p}^{3}}{2\pi} \times\\ &\int_{0}^{2\pi}[( {\mathcal{R}}_{1} \tilde{\mbox{\boldmath $v$} {}}_{\rm rel})\cdot \mbox{\boldmath $e$} {}_{\phi}] \left(\frac{9}{4}l^{2}+2 \left(2-3l\right)\sin^{2}(i/2)\cos^{2} \phi_{p}\right)^{1/2} d\phi_{p}, \label{eq:delta_inc} \end{aligned} \end{equation} where we have introduced the dimensionless variables $l\equiv (R-R_{p})/R_{p}$ and $\tilde{\mbox{\boldmath $v$} {}}_{\rm rel}\equiv \mbox{\boldmath $v$} {}_{\rm rel}/(\omega R_{p})$. In the Appendix, we show that the expression for $T_{g}$ derived by \citet{LinPapaloizou1979} given in Equation (\ref{eq:Tg_impulse_approx_coplanar}) is recovered for $i=0$. In order to derive Eq.~(\ref{eq:delta_inc}), we have assumed small inclination angles. Therefore, our approximation is inaccurate for large inclinations. Figure \ref{fig:deltaTg} shows the radial torque density $dT_{g}/dR$ on the outer disc (note that $l>0$ implies that the ring lies in the outer disc). As expected, the torque on a given ring decays for larger inclination angles of the planetary orbit. We see that as $i$ increases, the profile of $dT_{g}/dR$ vs $l$ flattens at low $l$. In Section \ref{sec:cop_case} we mentioned that the impulse approximation in the coplanar case accounts for the scalings of $T_{g}$ and gives its magnitude {\em to within a factor of $2$}. It is convenient to introduce a constant factor of the order of unity, $\xi$, such that the corrected gravitational torque density $dT_{g}^{\rm cor}/dR$ becomes \begin{equation} \frac{dT_{g}^{\rm cor}}{dR}= \xi \frac{dT_{g}}{dR}. \end{equation} By comparing with numerical simulations, we will verify whether the impulse approximation also predicts the correct scaling for inclined planetary orbits, and if it does, we will determine the value of $\xi$ that best matches the simulation results. \subsection{The minimum and maximum values of $l$} \label{sec:cutoffs} The radial torque density $dT_{g}/dR$ derived in the last section is not valid for $|l|<l_{\rm min}$ or for $|l|>l_{\rm max}$, where $l_{\rm min}$ is the minimum impact parameter for which the assumptions of small deflection and null thickness of the disc are still valid, and $l_{\rm max}< 1$ because the impulse approximation breaks down for encounters with large impact parameters as the orbits cannot be assumed rectilinear. For our purposes, the exact value of $l_{\rm max}$ is not relevant because the torque density decays very quickly with $|l|$, so we will take $l_{\rm max}=1$. The value for $l_{\rm min}$ is a more delicate issue and is discussed in the following. As we have ignored the thickness of the disc in our derivation, $l_{\rm min}$ should be comparable to or larger than $h$. In addition, the deflection angle in encounters with an impact parameter of $l_{\rm min}$ should be small. In the coplanar case, the deflections are large in the coorbital region, i.e. at distances $\sim R_{H}$ from the planet. For planets in inclined orbits, the relative velocity between the planet and the disc particles in the vicinity of the planet is $2 \omega R_{p} \sin (i/2)$.
Since the deflection angle decreases with the relative velocity (see Eq.~\ref{eq:cotdeltae}), the condition of small deflections could be fulfilled for impact parameters less than $R_{H}$ for inclined planets. For illustration, consider the extreme case in which the orbit of the planet is coplanar with the disc but retrograde. In this case, deflections are only large within the planetary accretion radius $r_{\rm acc}$ defined as $2GM_{p}/V_{\rm rel}^{2}$. Since the relative velocity in this case is $2\omega R_{p}$, we obtain that $r_{\rm acc}/R_{H}=0.7q^{2/3}$, which implies that $r_{\rm acc}$ is a factor of $100$ smaller than $R_{H}$ for $q=10^{-3}$. In general, we can state that if $r_{\rm acc}\ll R_{H}$ or, equivalently, when $i\gg i_{\rm crit}\equiv 2\arcsin(0.85 q^{1/3})$, the minimum impact parameter is given by $r_{\rm acc}$ and not by the Hill radius, as there is no coorbital region at all. For the values of $i$, $q$ and $h$ explored in this paper, the values for $R_{H}$, $H$ and $r_{\rm acc}$ are all of the same magnitude within a factor of $2$. In view of this, we take $l_{\rm min}={\rm max}\{(q/3)^{1/3},h\}$, unless otherwise stated. Finally, we assume that the gravitational torque density is null (i.e. $dT_{g}/dR=0$) at $|l|<l_{\rm min}$ and at $|l|>l_{\rm max}$. This simple cutoff in the torque density is commonly adopted in the coplanar case \citep[e.g.][]{Cridaetal2006,Kanagawaetal2015}. \section{Gaps by planets on inclined orbits} \label{sec:gapsmethods} \subsection{Steady-state gaps by planets in fixed orbits} \subsubsection{Viscous criterion for the formation of a deep gap by planets in fixed orbits} \label{sec:criterion_i} It is possible to derive a criterion for gap formation similar to that of Equation (\ref{eq:criteria_LP93}) but for a planet that is forced to move on an orbit with non-zero inclination, i.e. ignoring the damping of inclination. Such a criterion is useful if the damping timescale is much larger than the timescale for opening the gap. This situation may occur when the planet has acquired its inclination after the gas is well depleted, or if the relative inclination of the planet with respect to the disc is maintained by some external source such as accretion of mass (which may change the orientation of the disc) or through resonant inclination excitation by a second giant planet \citep{Thommes2003}. This criterion may also be useful to interpret simulations that are started after a stage where the disc is evolved with the planet on a fixed inclined orbit \citep[e.g.,][]{Bitsch2013}. As in the coplanar case (see \S \ref{sec:cop_case}), to derive the gap opening criterion we need to calculate the one-sided gravitational torque. To do so, we estimate the torque on the external disc by assuming that $\xi=1$, $l_{\rm max}=1$ and $l_{\rm min}=q^{1/3}$ (see \S \ref{sec:cutoffs} and Lin \& Papaloizou 1979). Having fixed $l_{\rm min}$ and $l_{\rm max}$ we can compute numerically the total torque acting on the external side of the disc given by \begin{equation} T_{g}(q,i)=R_{p}\int_{l_{\rm min}}^{l_{\rm max}} \frac{dT_{g}}{dR} dl, \label{eq:Tg_ext} \end{equation} as a function of $q$ and $i$, using Equation (\ref{eq:delta_inc}). For $0\leq i\leq 30^{\circ}$ and $5\times 10^{-5}\leq q\leq 2\times 10^{-2}$, we provide an empirical fit of the resultant $T_{g}(q,i)$ with an error less than $12\%$. The criterion is derived by imposing that a gap forms if $T_{g}\geq T_{\nu}$, where $T_{\nu}$ is given in Equation (\ref{eq:Tensor_visc}).
\section{Gaps by planets on inclined orbits} \label{sec:gapsmethods} \subsection{Steady-state gaps by planets in fixed orbits} \subsubsection{Viscous criterion for the formation of a deep gap by planets in fixed orbits} \label{sec:criterion_i} It is possible to derive a criterion for gap formation similar to that of Equation (\ref{eq:criteria_LP93}), but for a planet that is forced to move on an orbit with non-zero inclination, i.e. ignoring the damping of the inclination. Such a criterion is useful if the damping timescale is much larger than the timescale for opening the gap. This situation may occur when the planet has acquired its inclination after the gas is well depleted, or if the relative inclination between the planet and the disc is maintained by some external source, such as accretion of mass (which may change the orientation of the disc) or resonant inclination excitation by a second giant planet \citep{Thommes2003}. This criterion may also be useful to interpret simulations that are started after a stage where the disc is evolved with the planet on a fixed inclined orbit \citep[e.g.,][]{Bitsch2013}. As in the coplanar case (see \S \ref{sec:cop_case}), deriving the gap-opening criterion requires the one-sided gravitational torque. To derive a gap criterion, we estimate the torque on the external disc by assuming that $\xi=1$, $l_{\rm max}=1$ and $l_{\rm min}=q^{1/3}$ (see \S \ref{sec:cutoffs} and Lin \& Papaloizou 1979). Having fixed $l_{\rm min}$ and $l_{\rm max}$, we can numerically compute the total torque acting on the external side of the disc, given by \begin{equation} T_{g}(q,i)=R_{p}\int_{l_{\rm min}}^{l_{\rm max}} \frac{dT_{g}}{dR} dl, \label{eq:Tg_ext} \end{equation} as a function of $q$ and $i$, using Equation (\ref{eq:delta_inc}). For $0\leq i\leq 30^{\circ}$ and $5\times 10^{-5}\leq q\leq 2\times 10^{-2}$, we provide an empirical fit of the resultant $T_{g}(q,i)$ with an error smaller than $12\%$. The criterion is derived by imposing that a gap forms if $T_{g}\geq T_{\nu}$, where $T_{\nu}$ is given in Equation (\ref{eq:Tensor_visc}). In the following, we write the gap criterion in terms of $q_{\textsc{\tiny -3}}\equiv q/10^{-3}$ and $i_{\textsc{\tiny 10}}\equiv i/10$ ($i$ in degrees). A deep gap is predicted to form if: \begin{equation} \tilde{C}(q,i) q\geq \frac{32\nu}{\omega R_{p}^{2}}, \label{eq:i10} \end{equation} where \begin{equation} \tilde{C}(q,i) = \left\{ \begin{array}{ll} \frac{q_{\textsc{\tiny -3}}}{(1+i_{\textsc{\tiny 10}}^{3.5})(q_{\textsc{\tiny -3}}+ 0.18i_{\textsc{\tiny 10}}^{2})^{\beta (i)}} & \quad i < 17^{\circ}, \\[1.5ex] 0.22 q_{\textsc{\tiny -3}} \exp\left[-\Psi (q,i)-\Pi (i)\right] & \quad i \geq 17^{\circ}, \end{array} \right. \end{equation} and \begin{equation} \beta(i)=1-0.26i_{\textsc{\tiny 10}}, \end{equation} \begin{equation} \Psi(q,i)=\left(\frac{q_{\textsc{\tiny -3}}}{1.3i_{\textsc{\tiny 10}}+0.2}\right)^{0.34}, \end{equation} and \begin{equation} \Pi(i)=\frac{1}{2}(i_{\textsc{\tiny 10}}-1.7)^{2}. \label{eq:fu_Pi} \end{equation} In the particular case $i=0$, $\tilde{C}(q,0)=1$, and the well-known viscosity criterion $q\geq 32\nu/(\omega R_{p}^{2})$ is recovered [see, e.g., Equation (23) in \citet{Lin1993}]. Given the disc viscosity and the inclination $i$ of the planet, we can obtain the value of $q_{\rm crit}$ for gap opening in the surface density (a numerical sketch is given below). For a typical value of the effective viscosity of $10^{-5}\omega R_{p}^{2}$ and for $i=0$, $10^{\circ}$, $20^{\circ}$ and $30^{\circ}$, we find $q_{\rm crit}=0.5\times 10^{-3}$, $0.8\times 10^{-3}$, $1.8\times 10^{-3}$ and $3.0\times 10^{-3}$, respectively. We expect that for values of $q$ larger than $q_{\rm crit}$, the surface density at the bottom of the gap should be $\leq 0.25\Sigma_{0}$ (see \S \ref{sec:cop_case}). In Section \ref{sec:num_sim}, we present numerical experiments to test whether this prediction is correct. For a planet that undergoes inclination damping, we define $i_{\rm open}$ as the inclination of the planet's orbit at the time at which the surface density in the gap is $\sim 0.2\Sigma_{0}$. Given a value of $i_{\rm open}$, Equations (\ref{eq:i10})-(\ref{eq:fu_Pi}) then provide a lower limit on $q$, simply by replacing $i$ with $i_{\rm open}$.
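As an illustration of how Equations (\ref{eq:i10})-(\ref{eq:fu_Pi}) can be used in practice, the following Python sketch (our own illustrative implementation) evaluates $\tilde{C}(q,i)$ and solves the marginal condition $\tilde{C}(q,i)\,q = 32\nu/(\omega R_{p}^{2})$ for $q_{\rm crit}$ by root bracketing. Since the underlying fit carries an error of up to $12\%$, values obtained this way need not coincide exactly with the $q_{\rm crit}$ figures quoted above.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def C_tilde(q, i_deg):
    # Piecewise fit of Eqs (i10)-(fu_Pi); i_deg is the inclination in degrees
    q3 = q / 1e-3            # q_{-3}
    i10 = i_deg / 10.0       # i_{10}
    if i_deg < 17.0:
        beta = 1.0 - 0.26 * i10
        return q3 / ((1.0 + i10**3.5) * (q3 + 0.18 * i10**2)**beta)
    psi = (q3 / (1.3 * i10 + 0.2))**0.34
    Pi = 0.5 * (i10 - 1.7)**2
    return 0.22 * q3 * np.exp(-psi - Pi)

def q_crit(i_deg, nu=1e-5):
    # Smallest q satisfying C_tilde * q = 32 nu (code units, omega = R_p = 1)
    return brentq(lambda q: C_tilde(q, i_deg) * q - 32.0 * nu, 5e-5, 2e-2)

for i in (0.0, 10.0, 20.0, 30.0):
    print(i, q_crit(i))
\end{verbatim}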
\begin{figure*} \centering \includegraphics[width=1\textwidth]{disks2_int} \caption{Perturbation of the volume density $(\rho-\rho_i)$ at $z=0$ after $t=200$ orbits, for different inclinations. In all cases, $q=10^{-3}$, $h=0.05$, and $\nu=10^{-5}$ (Runs 1 to 6). In all the plots, the planet is at $x=\cos i$, $y=0$ and $z=\sin i$, i.e.~it is at its maximum height above the disc. Note that the scale is linear (not logarithmic).} \label{fig:disc_integer} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\textwidth]{slice_R_z} \caption{Same as Figure \ref{fig:disc_integer} but along vertical cross sections in the plane $y=0$.} \label{fig:disc1} \end{figure*} \begin{figure*} \centering \includegraphics[width=\columnwidth]{wake_out} \caption{Perturbation of the volume density along the crest of the outer spiral wave, at $z=0$.} \label{fig:wake} \end{figure*} \subsubsection{Stationary gap profile in the approximation of local deposition of the torque} \label{sec:method1} Under the assumption that the gravitational torque is deposited locally (instantaneously) in the disc (\emph{i.e.} ignoring the propagation of waves before damping), a steady state is reached when the gravitational torque is balanced by the viscous torque at every ring of the disc \citep[see, for instance,][]{Varniereetal2004, Cridaetal2006}. The radial densities of the viscous torque, $dT_{\nu}/dR$, and of the gravitational torque, $dT_g/dR$, can be obtained from Equations (\ref{eq:Tensor_visc}) and (\ref{eq:delta_inc}). Equating these torque densities, we obtain a differential equation that describes the gap structure: \begin{equation} \frac{1}{\Sigma}\frac{d\Sigma}{dR} =\frac{\xi}{3\pi\nu R^2\Omega\Sigma}\frac{dT_g}{dR}-\frac{1}{2R}. \label{eq:Sigma_diff} \end{equation} To solve Equation (\ref{eq:Sigma_diff}), we need to choose a boundary condition. In the outer parts of the disc, far away from the planet, we expect the surface density to remain essentially unperturbed. Thus, it is natural to integrate Equation (\ref{eq:Sigma_diff}) from a certain point $R_{\rm max}\gg R_{p}$ inwards, down to $R_{p}+r_{\rm min}$, where $r_{\rm min}$ is the distance from the planet at which the impulse approximation breaks down (see the sketch below). We may continue the integration in the inner disc by adopting a reasonable value for $\Sigma$ at $R_{p}-r_{\rm min}$. For instance, \citet{Cridaetal2006} assumed that $\Sigma(R)\propto R^{-1/2}$ between $R_{p}-r_{\rm min}$ and $R_{p}+r_{\rm min}$, and adopted $r_{\rm min}=2R_{H}$, with $R_{H}$ the Hill radius.
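The inward integration just described can be sketched in a few lines. Note that $dT_{g}/dR$ is proportional to $\Sigma$, so the specific torque density $(1/\Sigma)\,dT_{g}/dR$ is independent of $\Sigma$, and Equation (\ref{eq:Sigma_diff}) can be integrated directly for $\ln\Sigma$. In the Python fragment below, the specific torque density is a simple stand-in with a coplanar-like $1/l^{3}$ scaling and the cutoffs of \S\ref{sec:cutoffs}; the actual profile follows from Eq.~(\ref{eq:delta_inc}).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

xi, nu, q, Rp = 2.0, 1e-5, 1e-3, 1.0
l_min = (q / 3.0)**(1.0 / 3.0)

def tg_specific(R):
    # Stand-in for (1/Sigma) dT_g/dR: zero inside the cutoffs, rapid
    # decay with |l|, negative in the inner disc (illustrative only)
    l = (R - Rp) / Rp
    if abs(l) < l_min or abs(l) > 1.0:
        return 0.0
    return 0.3 * np.sign(l) * q**2 / abs(l)**3

def rhs(R, y):
    # Eq. (Sigma_diff) written for y = ln(Sigma / Sigma_0)
    Omega = R**-1.5                     # Keplerian, code units
    return [xi * tg_specific(R) / (3.0 * np.pi * nu * R**2 * Omega)
            - 0.5 / R]

# integrate inwards from R_max = 2.5, where Sigma = Sigma_0 / sqrt(2.5)
sol = solve_ivp(rhs, (2.5, Rp + l_min), [np.log(2.5**-0.5)],
                max_step=2e-3, dense_output=True)
R = np.linspace(2.5, Rp + l_min, 500)
Sigma = np.exp(sol.sol(R)[0])           # stationary Sigma(R) / Sigma_0
\end{verbatim}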
\begin{figure*} \centering \includegraphics[scale=0.55]{disks_zoom} \caption{Zoom of the perturbed density $\rho-\rho_{i}$ at $z=0$ and $t=200$ orbits, for inclinations $i=4^{\circ}$ (Run 2) and $i=20^{\circ}$ (Run 5). The planet is at $x=0.9975$, $y=0$ and $z=0.0697$ in the left panel, and at $x=0.9396$, $y=0$ and $z=0.3420$ in the right panel.} \label{fig:disk_zoom} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{Ideg} \caption{Inclination angle of the disc, $i_{D}$, as a function of $r$ after $200$ orbits, for $q=10^{-3}$, $h=0.05$ and different $i$. The rms in the measurement of $i_{D}$ is $0.05^{\circ}$; thus, $i_{D}$ values below $\sim 0.05^{\circ}$ are not significant.} \label{fig:inclination} \end{figure} In the coplanar case, the gap profile resulting from the instantaneous damping approximation (i.e. from Eq.~\ref{eq:Sigma_diff}) has been extensively studied. It was found that the predicted gap profile is consistent with the simulated gaps only for high disc viscosities. At lower viscosities, the predicted gaps are wider than those observed in numerical simulations \citep[for instance, see][]{Varniereetal2004, Cridaetal2006}. Moreover, at these low viscosities the predicted scaling relation between $q$ and the surface density averaged over the bottom of the gap ($\Sigma_{\rm gap}$) is also incorrect; it cannot explain why $\Sigma_{\rm gap}$ scales as a power law with $q$, as found in numerical simulations \citep[e.g.,][]{Duffelletal2014, Fungetal2014}. \subsubsection{Gap depth in a zero-dimensional analysis} \label{sec:method2} In order to reproduce the dependence of the gap depth on $q$, viscosity and $h$ observed in hydrodynamical simulations of the coplanar case, \citet{Fungetal2014}, \citet{Kanagawaetal2015} and \citet{Duffell2015} have invoked a ``zero-dimensional'' approximation, which assumes that the torque is deposited only within the width of the gap. Under this approximation, the total one-sided torque can be written as \begin{equation} T_{g}=\hat{f}_{0}(i,h) q^{2}\Sigma_{\rm gap} R_{p}^{4}\omega^{2} h^{-3}. \end{equation} By choosing reasonable values for $l_{\rm min}$ and $l_{\rm max}$, the scaling of the prefactor $\hat{f}_{0}$ with $i$ and $h$ can be computed from Equations (\ref{eq:delta_inc}) and (\ref{eq:Tg_ext}). As discussed in \S\ref{sec:cutoffs}, we take $l_{\rm min}={\rm max}\{(q/3)^{1/3},h\}$ and $l_{\rm max}=1$. For a fixed aspect ratio $h_{0}$, we can determine how $\hat{f}_{0}$ depends on the inclination $i$. To do so, we have evaluated the integral in Equation (\ref{eq:Tg_ext}) for $l_{\rm min}=1.1h$, $l_{\rm max}=1$, and different inclinations. For the particular value $h=0.05$, we find that $\hat{f}_{0}(i,h=0.05)$ can be fitted as \begin{equation} \begin{aligned} \hat{f}_{0}(i,h=0.05)=&0.744\exp \left(-\frac{i}{3}\right)-0.450\exp\left(-0.7i\right)\\ &+0.155\exp\left(-\frac{i^{0.9}}{5.6}\right)+0.0004, \end{aligned} \label{eq:hatfo} \end{equation} where $i$ is the inclination angle in degrees. This fit is valid for $i<35^{\circ}$, with a fractional error lower than $4\%$. For the calibration of the magnitude of $\hat{f}_{0}$ (i.e., to fix the value of $\xi$), we have used the condition that $\hat{f}_{0}=0.45$ for $i=0$ (see \S \ref{sec:cop_case}). It is worth noting that $\hat{f}_{0}$ decays by a factor of $\sim 100$ between $i=0$ and $i=30^{\circ}$. Once $\hat{f}_{0}$ is determined, Duffell's model predicts $\Sigma_{\rm gap}$ through the formula \begin{equation} \frac{\Sigma_{\rm gap}}{\Sigma_{0}}= \left(1+\frac{\hat{f}_{0} q^{2}R_{p}^{2}\omega}{3\pi\nu h^{3}}\right)^{-1}. \label{eq:duffell_eq} \end{equation} \citet{Duffell2015} also provides a recipe to obtain the profile of the gap. However, the calculation of the profile requires knowledge of the angular momentum flux due to the damping of the planetary wake, which is uncertain for planets on inclined orbits. Therefore, we only use the ``zero-dimensional'' analysis to predict the gap depth.
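The zero-dimensional prediction is easy to evaluate. The following Python sketch implements the fit of Equation (\ref{eq:hatfo}) and the gap depth of Equation (\ref{eq:duffell_eq}) in code units ($R_{p}=\omega=1$); it is illustrative only, and the calibration of $\hat{f}_{0}$ depends on the adopted $l_{\rm min}$, as discussed below.
\begin{verbatim}
import numpy as np

def f0_hat(i_deg):
    # Fit of Eq. (hatfo), valid for h = 0.05 and i < 35 degrees
    return (0.744 * np.exp(-i_deg / 3.0) - 0.450 * np.exp(-0.7 * i_deg)
            + 0.155 * np.exp(-i_deg**0.9 / 5.6) + 0.0004)

def sigma_gap_ratio(q, i_deg, h=0.05, nu=1e-5):
    # Eq. (duffell_eq): Sigma_gap / Sigma_0
    K = f0_hat(i_deg) * q**2 / (3.0 * np.pi * nu * h**3)
    return 1.0 / (1.0 + K)

for i in (0.0, 10.0, 20.0, 30.0):
    print(i, f0_hat(i), sigma_gap_ratio(1e-3, i))
\end{verbatim}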
\subsection{The formation of the gap in the local approximation: time evolution and inclination damping} \label{sec:timevolution} The steady-state gap formed by a coplanar planet has been studied in great detail because the timescale to reach the steady state is shorter than the migration timescale (see \S \ref{sec:cop_case}). For a planet on an inclined orbit, the inclination damping timescale may be comparable to or smaller than the timescale for gap opening if the disc is sufficiently massive \citep{MarzariNelson2009,Xiang2013,Bitsch2013}. Under these circumstances, it is necessary to consider the time evolution of the disc surface density in order to include the dependence of the planet's inclination on time. Suppose that $i(t)$ is known. Then, the torque density $dT_{g}/dR$ depends implicitly on time through $i(t)$. Assuming that the evolution of the disc is axisymmetric, $\Sigma (R,t)$ can be computed by solving the continuity equation \begin{equation} \frac{\partial\Sigma}{\partial t}+\frac{1}{R} \frac{\partial}{\partial R}(R\Sigma v_{\mbox{\tiny $R$}})=0, \end{equation} the radial momentum equation \begin{equation} \frac{\partial v_{\mbox{\tiny $R$}}}{\partial t}+v_{\mbox{\tiny $R$}}\frac{\partial v_{\mbox{\tiny $R$}}}{\partial R} =-\frac{GM_{S}}{R^{2}}+\frac{v_{\phi}^{2}}{R} -\frac{1}{\Sigma}\frac{\partial}{\partial R}(\Sigma c_{s}^{2}), \end{equation} and the conservation of angular momentum \begin{equation} \frac{\partial L}{\partial t}+\frac{1}{R}\frac{\partial}{\partial R}(Rv_{\mbox{\tiny $R$}}L)= \frac{\nu}{R}\frac{\partial}{\partial R}(\Sigma R^{3}\Omega')+\frac{\xi}{2\pi R} \frac{dT_{g}}{dR}, \label{eq:momentum} \end{equation} where $\Omega\equiv v_{\phi}/R$, $\Omega'\equiv d\Omega/dR$ and $L\equiv \Sigma\Omega R^{2}$ is the angular momentum of a differential ring in the disc (e.g., Pringle 1981). In Equation (\ref{eq:momentum}), we have employed the local deposition approximation. As a particular case, one can derive the time evolution of $\Sigma(R,t)$ in the presence of a planet on a fixed orbit, i.e. $i(t)=i_{0}={\rm const}$, in a disc that initially has no gap. To test whether the present 1D model is successful, it is sufficient to check whether it correctly reproduces the $\Sigma(R,t)$ obtained in full 3D hydrodynamical simulations for any value of $i_{0}$. If so, the formalism should also satisfactorily predict $\Sigma(R,t)$ for an arbitrary function $i(t)$. \begin{figure} \includegraphics[width=\columnwidth]{Sig_gap_profile} \caption{Temporal evolution of $\Sigma_{\rm gap}$ for different inclinations. In all cases $q=10^{-3}$ and $h=0.05$.} \label{fig:gapsito} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{dens2d} \caption{Radial profiles of the azimuthally averaged surface density $\Sigma$ at the time when $\Sigma_{\rm gap}=\Sigma_{0}/2$. The corresponding time depends on the inclination $i$ and is quoted in the corner of the figure.} \label{fig:S2} \end{figure} \begin{figure*} \centering \includegraphics[scale=0.9]{Sigma_qfix.pdf} \caption{ Gap profiles using the local damping approximation (see \S \ref{sec:timevolution}) with $\xi=1$ (dotted lines) and $\xi=2$ (dashed lines), together with those from numerical simulations (solid lines), after $45$ and $200$ orbits and for different inclinations. The stationary gap profiles using the local damping approximation are displayed in the right column. In all cases $q=10^{-3}$ and $h=0.05$.} \label{fig:sigma1} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.85]{diff_q.eps} \caption{ Comparison of the gap profiles in the simulations with $i=20^{\circ}$ and different $q$ (solid lines) with those using the local damping approximation with $\xi=2$ (dashed lines). In all cases $h=0.05$.} \label{fig:diffm} \end{figure*} \begin{table} \centering \caption{Parameters of the simulations. $q_{\rm crit}$ is the critical mass ratio for gap formation according to the viscous criterion, Eqs (\ref{eq:i10})-(\ref{eq:fu_Pi}). In all cases, the kinematic viscosity is $10^{-5}$ (in units of $\omega R_{p}^{2}$). Recall that $q$ denotes the planet-to-star mass ratio. } \label{tab:simulations} \begin{tabular}{cccccc} Run & $i$ & $h$ & $q_{\rm crit}/10^{-3}$ & $q/10^{-3}$ & $\Sigma_{\rm gap}/\Sigma_0$\\ & deg & & & & at $200$ orbits\\ \hline 1 & 0 & 0.05 & $0.5$ &$1$ & 0.093\\ 2 & 4 & 0.05 & $0.55$ & $1$& 0.097\\ 3 & 10 & 0.05 & $0.8$ & $1$ & 0.148\\ 4 & 15 & 0.05 & $1.2$ & $1$& 0.273\\ 5 & 20 & 0.05 & $1.8$ & $1$ & 0.445\\ 6 & 30 & 0.05 & $3.0$ & $1$ & 0.664\\ 7 & 15 & 0.05 & $1.2$ & $3$ & 0.011\\ 8 & 20 & 0.05 & $1.8$ & $3$ & 0.037\\ 9 & 20 & 0.05 & $1.8$ &$0.3$ & 0.927\\ 10 & 20 & 0.025 & $1.8$ & $1$ & 0.264\\ \hline \end{tabular} \end{table} \section{NUMERICAL SIMULATIONS} \label{sec:num_sim} \subsection{The code and initial conditions} \label{sec:fargo3d} In our study of the gap opening by a planet on an inclined orbit, we use a spherical coordinate system $(r,\theta,\phi)$, where $r$ is the radial coordinate, $\theta$ is the polar angle, and $\phi$ is the azimuthal angle.
The hydrodynamical equations describing the flow are the continuity equation \begin{equation} \frac{\partial\rho}{\partial t}+ \mbox{\boldmath $\nabla$}{}\cdot (\rho\mbox{\boldmath $v$} {})=0 \label{eq:hydrodynamics} \end{equation} and the momentum equation \begin{equation} \frac{\partial \mbox{\boldmath $v$} {}}{\partial t}+ (\mbox{\boldmath $v$} {}\cdot\nabla)\mbox{\boldmath $v$} {}=-\frac{1}{\rho}\mbox{\boldmath $\nabla$}{} P-\mbox{\boldmath $\nabla$}{}\Phi+\mathbf{f_{\nu}}. \label{eq:hydrodynamics1} \end{equation} Here $\rho$ is the density, $\mbox{\boldmath $v$} {}$ the velocity, $\Phi$ the gravitational potential, and $\mathbf{f_\nu}$ the viscous force per unit volume. The disc is assumed to be locally isothermal, i.e. the pressure is given by \begin{equation} P=\rho c_s^2, \label{eq:pressure} \end{equation} where $c_s(r)$ is the isothermal sound speed. We use the hydrodynamic code FARGO3D \citep{Benitez-LlambayMasset2016}, the successor of the FARGO code. Both codes use the orbital advection algorithm of \citet{Masset2000}, which significantly increases the allowed timestep in thin protoplanetary discs. We use a reference frame centred on the star and corotating with the planet. The gravitational potential $\Phi$ due to the central star and the planet is given by \begin{equation} \Phi=\Phi_S+\Phi_p, \label{eq:potential} \end{equation} where \begin{equation} \Phi_S=-\frac{GM_S}{r}, \label{eq:star_p} \end{equation} and \begin{equation} \Phi_p=-\frac{GM_p}{\sqrt{r_p^2+\epsilon^2}}+\frac{GM_pr\cos\phi}{R_p^2}, \label{eq:planet_p} \end{equation} where $r_{p}\equiv |\mbox{\boldmath $r$} {}-\mbox{\boldmath $R$} {}_{p}|$ is the distance from the planet, and $\epsilon$ is a softening length used to avoid numerical problems arising from the divergence of the potential in the vicinity of the planet. We use $\epsilon=0.6H_{p}$, where $H_{p}$ is the disc scale height at $R=R_{p}$, but we also performed simulations with $\epsilon=0.3 H_{p}$ to check that the results are not sensitive to the exact value of $\epsilon$. The last term in Equation (\ref{eq:planet_p}) is the indirect term, which appears because the reference frame is non-inertial and centred on the star. The self-gravity of the disc is ignored in our simulations. In Runs 1 to 10, the planet, whose orbit is tilted by an angle $i$ with respect to the initial midplane of the disc, is forced to describe a circular orbit of radius $R_{p}$ around the central star. Thus, we ignore the changes in the orbital parameters of the planet caused by tidal torques (see \S\ref{sec:freeplanet} for a simulation of a planet that is left free to migrate under the tidal torques). No accretion of gas by the planet is considered here. The aspect ratio, $h\equiv H/r$, is assumed to be constant across the disc, where $H$ is the vertical scale height of the disc. The initial density of the disc, $\rho_{i}(R,z)$, is derived by assuming a power-law surface density \begin{equation} \Sigma_{i}(R)=\Sigma_{0}\left(\frac{R_{p}}{R}\right)^{1/2}, \end{equation} and by imposing hydrostatic equilibrium, which implies that $c_s(r)=hr\Omega$, where $\Omega(r)$ is the Keplerian angular velocity around the star (a minimal sketch of this initial condition is given below). Note that $\Sigma_{0}$ denotes the initial surface density at $R=R_{p}$. Our distance unit is $R_p$ and our time unit is $\omega^{-1}$ (as defined in Section~\ref{sec:cop_case}, $\omega$ is the angular velocity of the planet). The period of the planet is therefore $2\pi$.
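For reference, the initial condition can be written down explicitly. Below is a minimal sketch assuming the standard Gaussian vertical structure of a vertically isothermal thin disc; the structure actually imposed in the code follows from the exact hydrostatic balance described above, to which the Gaussian profile is the thin-disc approximation.
\begin{verbatim}
import numpy as np

h, Sigma0, Rp = 0.05, 1.0, 1.0

def Sigma_i(R):
    # power-law surface density, Sigma_i = Sigma_0 (R_p / R)^(1/2)
    return Sigma0 * np.sqrt(Rp / R)

def rho_i(R, z):
    # Gaussian vertical structure with scale height H = h R (thin-disc
    # approximation to the hydrostatic equilibrium quoted above)
    H = h * R
    return Sigma_i(R) / (np.sqrt(2.0 * np.pi) * H) * np.exp(-0.5 * (z / H)**2)

def c_s(r):
    # locally isothermal sound speed, c_s = h r Omega, Omega Keplerian
    return h * r * r**-1.5
\end{verbatim}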
The domain of the simulations extends radially from $r=0.4$ to $r=2.5$. The polar angle $\theta$ covers $14^{\circ}$ (from $83^{\circ}$ to $97^{\circ}$) for the simulations with $h=0.05$, and $8^{\circ}$ (from $86^{\circ}$ to $94^{\circ}$) for discs with $h=0.025$. All the simulations have the same grid size, $(N_{r},N_{\phi},N_{\theta})=(266, 768, 64)$. In the radial direction, we implemented damping boundary conditions for the radial component of the velocity, $v_{r}$ \citep{Deval2006}. More specifically, $v_{r}$ is artificially damped in the regions $r\in [0.4, 0.5]$ and $r\in [2.1, 2.5]$ by solving the equation \begin{equation} \frac{\partial v_r}{\partial t}=-\frac{\Omega}{2\pi}\left(v_r-v_{r0}\right)\chi(r) \label{eq:Stockholm} \end{equation} after each time-step. Here $v_{r0}$ is the radial velocity component at $t=0$, and $\chi(r)$ is a parabolic function that takes the value $1$ at the domain boundary and $0$ at the interior edge of the damping region \citep[e.g.,][]{Deval2006}. For the other two velocity components and for the density, we use reflecting boundary conditions without any damping. \subsection{Results} In this Section, we investigate numerically the interaction of an inclined planet with the disc. The parameters of the simulations for planets on fixed orbits are given in Table \ref{tab:simulations}. In \S \ref{sec:freeplanet} we present a simulation of a freely moving planet. In most of our simulations, we take $q=10^{-3}$, $h=0.05$ and a kinematic viscosity $\nu=10^{-5}$. \subsubsection{Evolution of the disc: planets on fixed orbits} Here we study the formation and structure of the gap carved by a planet on a fixed, inclined, circular orbit, for inclinations of $0^{\circ}$, $4^{\circ}$, $10^{\circ}$, $15^{\circ}$, $20^{\circ}$, and $30^{\circ}$ with respect to the initial midplane of the disc, which corresponds to the plane $\theta = 90^\circ$. The simulations were run over $200$ planetary orbits. The strength of the interaction between the planet and the disc depends on the inclination of the planetary orbit. For $h=0.05$, the half-opening angle of the disc is $2.9^{\circ}$. For inclinations larger than $2.9^{\circ}$, the planet spends a fraction of its orbital period within the disc that decreases as the inclination increases. Figures \ref{fig:disc_integer} and \ref{fig:disc1} display the perturbation of the volume density after $200$ orbits on equatorial and meridional cross sections, for $q=10^{-3}$, $h=0.05$, $\nu=10^{-5}$ and different inclinations. As in the coplanar case, the planet triggers a wake with two spiral arms that emanate from the planet (see Figures \ref{fig:disc_integer}$a$ and \ref{fig:disc1}$a$). A gap in the surface density is also apparent. The gap divides the disc into two regions: $(i)$ the inner disc, $r<R_p$, and $(ii)$ the outer disc, $r>R_{p}$. In the inner (outer) disc, the spiral arm is leading (trailing), as in the non-inclined case. The amplitude of the spiral arms depends on the inclination of the planetary orbit, as can be observed in panels $a)$--$f)$ of Figure \ref{fig:disc_integer}. Figure \ref{fig:wake} shows the perturbed density along the crest of the outer spiral arm. It can be seen that for $i=4^{\circ}$ the density enhancement along the crest is three times greater than for $i=20^{\circ}$. Since the timescale for gap opening is much larger than the dynamical timescale, the structure of the gap is rather axisymmetric (i.e. it does not depend on the phase of the planet), except in the vicinity of the planet's position.
A magnification of the density map in a box centred at $x=X_{p}$, $y=Y_{p}$ and $z=0$ is shown in Figure \ref{fig:disk_zoom}. Note that the planet is at its maximum distance from the disc in these snapshots. We see that the higher-inclination case has a less clean gap and finer substructure. For $i=20^{\circ}$, the spiral waves do not emanate from the projected position of the planet. The appearance of the gap of an inclined planet is somewhat reminiscent of the HL~Tau gap structures \citep[][]{ALMAPartnershipetal2015, ALMAPartnershipetal2015a, ALMAPartnershipetal2015b, ALMAPartnershipetal2015c}, in which no local, conspicuous density enhancements, such as could potentially be produced by planets, are seen at any particular azimuth. A large number of mechanisms can account for the existence of the gap structures observed in HL~Tau \citep[][]{Carrasco-Gonzalezetal2009, Flocketal2015, Gonzalezetal2015, Zhangetal2015, Carrasco-Gonzalezetal2016, Okuzumietal2016, Rugeetal2016, Hsi-WeiYenetal2016}. We simply note that the lack of localized structures within the gap does not rule out planetary torques as the mechanism responsible for their existence, since a mildly inclined planet does not trigger a large density enhancement at its projected location in the gap. The tidal perturbation of a planet with non-zero inclination may lead to the excitation of vertical disturbances (bending waves or warps) in the disc \citep[for a binary star, see, e.g.,][]{PapaloizouTerquem1995, Larwoodetal1996}. For massive enough planets, the disc will tend to realign with the orbital plane of the planet, reducing the relative inclination between the planet's orbit and the disc. In order to quantify the excitation of vertical modes (warps) in the disc, we calculated the inclination of the disc at different radii using the expression \begin{equation} i_{\textsc{\tiny D}}(r)=\arccos{\frac{L_z(r)}{\abs{\mbox{\boldmath $L$} {}(r)}}}, \label{eq:discinc} \end{equation} where $\mbox{\boldmath $L$} {} (r)$ is the angular momentum vector of a differential ring of the disc: \begin{equation} \mbox{\boldmath $L$} {} (r)= \int \rho (\mbox{\boldmath $r$} {} \times \mbox{\boldmath $v$} {}) \,d\theta \,d\phi. \end{equation} For the adopted values of $q$, the disc hardly changes its inclination (Figure \ref{fig:inclination}). The largest values of $i_{\textsc{\tiny D}}$ occur for the case $i=4^{\circ}$ and for rings with radii $\simeq R_{p}$. In particular, for $q=10^{-3}$, $h=0.05$ and $i=4^{\circ}$, these rings are tilted by an angle $\simeq 2^{\circ}$. For $i\geq 10^{\circ}$, $i_{\textsc{\tiny D}}$ is very small compared to $i$, and hence it is a good approximation to ignore the bending of the disc.
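For reproducibility, the following Python sketch shows how Equation (\ref{eq:discinc}) can be evaluated on the spherical grid of the simulations. The array names are ours, and the integration measure follows the expression for the ring angular momentum as written above; the radial velocity drops out of $\mbox{\boldmath $r$} {}\times\mbox{\boldmath $v$} {}$.
\begin{verbatim}
import numpy as np

def disc_inclination_deg(rho, v_theta, v_phi, r, theta, phi):
    # rho, v_theta, v_phi: arrays of shape (Nr, Ntheta, Nphi)
    dth, dph = theta[1] - theta[0], phi[1] - phi[0]
    R, TH, PH = np.meshgrid(r, theta, phi, indexing='ij')
    # cartesian components of r x v = r (v_theta phi_hat - v_phi theta_hat)
    cx = R * (-v_theta * np.sin(PH) - v_phi * np.cos(TH) * np.cos(PH))
    cy = R * ( v_theta * np.cos(PH) - v_phi * np.cos(TH) * np.sin(PH))
    cz = R * v_phi * np.sin(TH)
    Lx = np.sum(rho * cx, axis=(1, 2)) * dth * dph
    Ly = np.sum(rho * cy, axis=(1, 2)) * dth * dph
    Lz = np.sum(rho * cz, axis=(1, 2)) * dth * dph
    return np.degrees(np.arccos(Lz / np.sqrt(Lx**2 + Ly**2 + Lz**2)))
\end{verbatim}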
\subsubsection{Scaling relations for the gap} \label{sec:scaling} As expected, the depth of the gap depends on the planetary inclination $i$. At low inclinations ($i<10^{\circ}$), the time for gap opening is shorter than for more inclined orbits. As a measure of the depth, we determine $\Sigma_{\rm gap}$ from our simulations by calculating the surface density averaged over azimuth and over the radial direction between $R_p-\sqrt{2}R_H$ and $R_p+\sqrt{2}R_H$. Figure \ref{fig:gapsito} displays $\Sigma_{\rm gap}$ as a function of time for Runs 1 to 6. The gap density $\Sigma_{\rm gap}(t)$ converges toward a constant value at late times. In order to compare the rates at which gas is evacuated from the gap, we write \begin{equation} \Sigma_{\rm gap}(t)=\left[1-f(t)\right]\Sigma_{0}+f(t) \Sigma_{200}, \end{equation} where $\Sigma_{200}$ denotes the surface density in the gap at $t=200$ orbits and $f(t)$ is an auxiliary function satisfying $f(0)=0$ and $f(200)=1$. If $f(t)$ becomes flat, the asymptotic value of $\Sigma_{\rm gap}$ has essentially been reached. As judged from Figure \ref{fig:gapsito}, the rate of gas depletion in the gap is very low after $200$ orbits, indicating that $\Sigma_{200}$ may be considered representative of the asymptotic value, except perhaps for the simulation with $i=30^{\circ}$. In fact, the gap cleaning proceeds slightly more slowly for high $i$ (lower panel in Figure \ref{fig:gapsito}). In order to compare how the process of gap opening proceeds for different planetary inclinations, we plot the azimuthally averaged surface density of the gap at the time when the condition $\Sigma_{\rm gap}=\Sigma_0/2$ is satisfied (Figure \ref{fig:S2}). For $i=4^{\circ}, 10^{\circ}, 15^{\circ}$, and $20^{\circ}$ this occurs at $25$, $40$, $50$, and $109$ orbits, respectively. In the case $i=30^{\circ}$, the planet is unable to carve such a deep gap during the time of the simulation ($200$ orbits); at the end of this simulation, $\Sigma_{\rm gap}=0.67\Sigma_{0}$. We see that at the time when $\Sigma_{\rm gap}=\Sigma_{0}/2$, the profiles of the gaps are clearly different. For $i=4^{\circ}$, the planet satisfies the condition $\Sigma_{\rm gap}=\Sigma_{0}/2$ on a shorter timescale and, moreover, depletes more material, leading to a wider gap, than planets with larger inclinations. This means that gap cleaning is more efficient, over a wider radial range, for low-inclination planets. Also, the local density maxima that appear near the edges of the evacuated gap have more time to spread by viscous diffusion, since the plots for larger $i$ are made at later times. For each simulation, Table \ref{tab:simulations} lists the value of $q_{\rm crit}$ derived in \S \ref{sec:criterion_i}. We see that the viscous criterion for gap formation is roughly satisfied, in the sense that when $q>q_{\rm crit}$ it holds that $\Sigma_{200}\lesssim 0.2\Sigma_{0}$. Figure \ref{fig:sigma1} shows the radial profile of the azimuthally averaged surface density after $45$ and $200$ planetary orbits for different inclination angles (but the same kinematic viscosity and aspect ratio). It is apparent that the depth of the gap decreases with the inclination angle of the planet's orbit. The surface density bumps at $r=1.4$ at $t=45$ and $t=200$ orbits appear because the profiles of the surface density are not completely relaxed at $t\leq 200$ orbits, given that the viscous timescale is $\sim 2.5\times 10^{3}$ orbits. We have compared the resultant gap profiles in the simulations with those predicted using the 1D model (the method described in \S \ref{sec:timevolution}), which assumes that the wake's torque is deposited locally in the disc. We use $l_{\rm min}=(q/3)^{1/3}$ (see \S \ref{sec:cutoffs}) and explore two values of $\xi$ ($\xi=1$ and $\xi=2$). We also plot the steady-state gap profile obtained by integrating numerically Equation (\ref{eq:Sigma_diff}), with the boundary condition $\Sigma (2.5)=\Sigma_{0}/\sqrt{2.5}$, which is the unperturbed surface density at $R=2.5R_p$, and using $r_{\rm min}=R_{H}$. To be fully satisfactory, the models should be able to reproduce the width and depth of the gap at any time.
From Figure \ref{fig:sigma1}, we see that the width of the gaps is fairly well reproduced for both $\xi=1$ and $\xi=2$. However, models with $\xi=1$ predict shallower gaps than those found in the simulations for inclinations $\geq 10^{\circ}$. It is remarkable that the depth of the gaps at $200$ orbits in the full 3D simulations is larger than the depth of the stationary gaps in the 1D model, i.e. the `steady state' curves in Figure \ref{fig:sigma1}. This suggests that the value of $\xi$ is larger than $1$. This is not unexpected if one recalls that the torque calculated by summing the contributions from all Lindblad resonances in the coplanar case is a factor of $2.8$ larger than the torque calculated with the impulse approximation [see \S \ref{sec:cop_case} and \citet{Lin1986a}]. Adopting $\xi=2$, there is a good level of agreement between the gap profiles derived using the 1D model and those found in the simulations for inclinations between $10^{\circ}$ and $20^{\circ}$ (see Figure \ref{fig:sigma1}). For lower inclinations ($i<10^{\circ}$), the 1D model overestimates the depth of the gap when assuming our fiducial value of $l_{\rm min}$ (not shown). In order to reproduce the gap profile found in the simulation with $i=4^{\circ}$, we need $l_{\rm min}=1.8(q/3)^{1/3}$. For $i=30^{\circ}$, the 1D model clearly underestimates the depth of the gap at all times (Figure \ref{fig:sigma1}), indicating that at least one of our assumptions is not fully correct. It is plausible that for inclinations as large as $30^{\circ}$, the impulse approximation underestimates the torque. Moreover, the local damping approximation is less justified as the inclination increases, because the perturbed density in the wake decreases (Figure \ref{fig:wake}). However, it is unclear whether the resulting angular momentum flux driven by the spiral arms would enhance gap clearing in the vicinity of the planet. To explore the 1D model a bit further, Figure \ref{fig:diffm} compares the gap profiles in our full 3D simulations with those in 1D models for $i=20^{\circ}$ and two different values of $q$. We see that for $q=3\times 10^{-3}$, the 1D model is still successful in reproducing the gap profile. However, for $q=3\times 10^{-4}$, the evacuation of gas from the gap is more efficient in the full 3D simulations than in the 1D models. It is worthwhile to consider the predictions of the `zero-dimensional' approximation. Following the procedure described in Section \ref{sec:method2}, we have calculated $\hat{f}_{0}(i,h)$ for $l_{\rm min}=0.87h$, $1.1h$ and $1.2h$. Figure \ref{fig:duffell_i} compares $\Sigma_{\rm gap}$ calculated using Equation (\ref{eq:duffell_eq}) with the values obtained in the simulations. The general trend that the gap is shallower for increasing $i$ is consistent with the simulations. A value of $l_{\rm min}=1.2h$ is required to reproduce the gap density values for a disc with aspect ratio $h=0.05$. It is worthwhile to look at Run 10. This simulation has $h=0.025$ and lies in the deeply nonlinear regime. For this run, the zero-dimensional approximation with $l_{\rm min}=1.2h$ predicts $\Sigma_{200}/\Sigma_{0}=0.38$, while the simulated disc has a gap with $\Sigma_{200}/\Sigma_{0}=0.26$. This illustrates that hydrodynamical effects may be important for very thin discs.
\begin{figure} \includegraphics[width=\columnwidth]{mod_duffell.pdf} \caption{Predicted values of $\Sigma_{200}/\Sigma_{0}$ using the zero-dimensional approach (see \S \ref{sec:method2}) with $l_{\rm min}=0.87h$ (dotted lines), $l_{\rm min}=1.1h$ (dashed lines) and $l_{\rm min}=1.2h$ (solid lines), together with the values from the numerical simulations (symbols), for simulations with different inclinations (Runs 1 to 6; top panel) and for simulations with different $q$ (Runs 5, 8 and 9; bottom panel). } \label{fig:duffell_i} \end{figure} \subsection{Radial migration and inclination damping for free planets} \label{sec:freeplanet} In the impulse approximation, it is possible to estimate the characteristic timescales for radial migration and inclination damping for planets with inclinations large enough that they cross the disc at supersonic velocities. \citet{Rein2012} finds that the inclination and semimajor axis damping timescales, measured in units of the orbital period, are \begin{equation} \tau_{\rm inc}\equiv i/(2|di/dt|)= \frac{M_{S}} {2\pi q\Sigma_{0}R_{p}^{2}} \frac{i\sin^{3}(i/2)}{\ln\Lambda}, \label{eq:tau_inc} \end{equation} \begin{equation} \tau_{\rm R}\equiv R_{p}/(2|\dot{R}_{p}|)=\frac{M_{S}}{8\pi q\Sigma_{0}R_{p}^{2}} \frac{\sin(i/2)\sin(i)}{\ln\Lambda}, \label{eq:tau_R} \end{equation} where $\Sigma_{0}$ is the surface density of the disc at the intersection of the planetary orbit with the disc, and $\ln\Lambda$ is the Coulomb logarithm of the interaction. More specifically, $\Lambda$ is the ratio between the upper ($r_{\rm max}$) and lower ($r_{\rm min}$) cut-off length scales of the interaction. A similar formula (except for a factor of $2$) for the timescale over which the orbit changes was derived by \citet{Xiang2013}. The timescales $\tau_{\rm inc}$ and $\tau_{\rm R}$ depend on the unperturbed local surface density of the disc, $\Sigma_{0}$, because it was assumed that the timescale for gap opening is larger than both $\tau_{\rm inc}$ and $\tau_{\rm R}$. If the planet is able to open a gap, the depletion of material in the planet's vicinity should be taken into account. Once the planet has carved a gap, the rates of damping in $i$ and $R$ are expected to decrease \citep[e.g.,][]{Xiang2013}. \begin{figure} \includegraphics[width=\columnwidth]{damping} \caption{Top: semimajor axis $R_p$ as a function of time for a planet with $q=10^{-3}$ and an initial inclination of $i_0=20^{\circ}$. The disc has $\Sigma_{0}=210$ g cm$^{-2}$. Bottom: temporal evolution of the planetary inclination $i$.} \label{fig:damp} \end{figure} The timescale for inclination damping may be comparable to or even smaller than the timescale for gap opening if the disc is sufficiently massive. For illustration, consider a planet-star system with $q=10^{-3}$, $M_{S}=1M_{\odot}$ and $i=20^{\circ}$. For these parameters, $\tau_{\rm inc}$ is given by \begin{equation} \tau_{\rm inc}=\frac{450}{\ln\Lambda} \left(\frac{\Sigma_{0}}{200\,{\rm g \,cm}^{-2}}\right)^{-1} \left(\frac{R_{p}}{5.2\,{\rm AU}}\right)^{-2} \,\,{\rm orbits}. \label{eq:tau_inc20} \end{equation} Since the timescale to form a gap with a depth $\Sigma_{\rm gap}=0.5\Sigma_{0}$ is $128$ orbits (see Figure \ref{fig:S2}), the inclination damping timescale is comparable to or smaller than the gap opening timescale if $\Sigma_{0}\gtrsim 600/\ln\Lambda$ g cm$^{-2}$ (assuming $R_{p}=5.2$ AU, $i=20^{\circ}$ and $q=10^{-3}$).
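Equation (\ref{eq:tau_inc20}) can be reproduced directly. The following Python sketch evaluates Equations (\ref{eq:tau_inc}) and (\ref{eq:tau_R}) in cgs units; we read the factor $i\sin^{3}(i/2)$ with $i$ in radians, which approximately recovers the normalization of Eq.~(\ref{eq:tau_inc20}).
\begin{verbatim}
import numpy as np

Msun, AU = 1.989e33, 1.496e13            # cgs

def damping_timescales(q, i_deg, Sigma0, Rp_AU=5.2, lnLambda=1.0):
    # Eqs (tau_inc) and (tau_R); Sigma0 in g/cm^2, result in orbital periods
    i = np.radians(i_deg)
    pref = Msun / (q * Sigma0 * (Rp_AU * AU)**2)
    tau_inc = pref / (2.0 * np.pi) * i * np.sin(i / 2.0)**3 / lnLambda
    tau_R = pref / (8.0 * np.pi) * np.sin(i / 2.0) * np.sin(i) / lnLambda
    return tau_inc, tau_R

# tau_inc ~ 450 orbits for ln(Lambda) = 1 and Sigma0 = 200 g/cm^2,
# cf. Eq. (tau_inc20)
print(damping_timescales(1e-3, 20.0, 200.0))
\end{verbatim}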
The critical surface density is expected to increase linearly with $q$, because $\tau_{\rm inc}\propto q^{-1}$, whereas the timescale for gap opening scales as $\sim q^{-2}$. In order to test these estimates, we performed one simulation in which the planet feels the tidal torques from the disc from the beginning of the simulation. The planet is initially on a circular orbit with $R_{p}=5.2$ AU and $i_{0}=20^{\circ}$. It has $q=10^{-3}$ and a softening radius $\epsilon=0.3H_{p}$. The initial surface density at $5.2$ AU is $210$ g cm$^{-2}$. Figure \ref{fig:damp} shows the temporal evolution of $R_{p}$ and $i$. In $100$ orbits, the semimajor axis decays from $5.2$ AU to $4.8$ AU and the inclination from $20^{\circ}$ to $13^{\circ}$, corresponding to $di/dt=-0.07$ deg/orbit and $dR_{p}/dt=-4\times 10^{-3}$ AU/orbit. These rates of inclination damping and radial migration are consistent with those found in previous studies \citep[][]{MarzariNelson2009,Xiang2013}. For instance, for a planet with $i_{0}=20^{\circ}$, $\Sigma_{0}=76$ g cm$^{-2}$, and $q=10^{-3}$, \citet{Xiang2013} find $di/dt=-0.028$ deg/orbit and $dR_{p}/dt=-0.9\times 10^{-3}$ AU/orbit. In order to compare with the predictions of Equations (\ref{eq:tau_inc})-(\ref{eq:tau_R}), we need to estimate $\ln\Lambda$ in our simulation. To do so, we use the facts that $r_{\rm min}$ depends on the softening radius $\epsilon$ as $r_{\rm min}\simeq 2.25\epsilon = 0.67H_{p}$ \citep{Bernal2013}, and that $r_{\rm max}\simeq 2.1\sqrt{2} H_{p}$ in a disc \citep{Canto2013,Xiang2013}. Hence, we find $\ln \Lambda\simeq 1.5$. Using this value, Equations (\ref{eq:tau_inc}) and (\ref{eq:tau_R}) predict $di/dt=-0.027$ deg/orbit and $dR_{p}/dt=-0.9\times 10^{-3}$ AU/orbit. These values are a factor of $3$-$4$ smaller than those found in the simulations. Nevertheless, it is likely that the accuracy is better at larger inclinations \citep{Rein2012,Xiang2013}. Given that the inclination damping timescale is comparable to the timescale for gap clearing, the process is dynamical in the sense that the inclination cannot be assumed constant. Figure \ref{fig:freeSigma} plots the surface density profile after $82$ orbits, that is, when the planet has an inclination of $15^{\circ}$, together with the surface density profiles in simulations where the planets are on fixed orbits with inclinations of $15^{\circ}$ and $20^{\circ}$. As expected, the minimum of the surface density lies at a smaller radius when the planet is allowed to migrate. In addition, for the planet starting with $i_{0}=20^{\circ}$, the gap is deeper when the planet is free to migrate than when it is forced to orbit at a constant inclination of $20^{\circ}$. However, the planet forced to orbit at a constant inclination of $15^{\circ}$ opens, after $82$ orbits, a deeper gap than that produced by the migrating planet. We conclude that for planets with masses of the order of $M_{J}$ and for $\Sigma_{0}\gtrsim 100$ g cm$^{-2}$, the inclination damping timescale is comparable to or shorter than the gap-clearing timescale and, therefore, it is necessary to solve the time-dependent 1D model described in \S\ref{sec:timevolution}. For such values of $\Sigma_{0}$, the inclination is damped to zero on a timescale much shorter than the lifetime of the disc. Therefore, the scattering processes that place planets on inclined orbits should occur when the surface density of the disc is significantly smaller than $100$ g cm$^{-2}$ \citep[see also][]{Bitsch2013}.
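The Coulomb logarithm and the implied damping rates used in this comparison can be checked with a few lines (a sketch only; small differences with respect to the quoted rates reflect rounding):
\begin{verbatim}
import numpy as np

Msun, AU = 1.989e33, 1.496e13
q, i, Sigma0, Rp = 1e-3, np.radians(20.0), 210.0, 5.2 * AU

# r_min ~ 2.25 eps = 0.675 H_p (for eps = 0.3 H_p); r_max ~ 2.1 sqrt(2) H_p
lnL = np.log(2.1 * np.sqrt(2.0) / 0.675)       # ~1.5

# Eq. (tau_inc) in orbital periods, then |di/dt| = i / (2 tau_inc)
tau_inc = Msun * i * np.sin(i / 2.0)**3 / (2.0 * np.pi * q * Sigma0
                                           * Rp**2 * lnL)
print(lnL, np.degrees(i) / (2.0 * tau_inc))    # deg per orbit
\end{verbatim}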
\begin{figure} \includegraphics[width=\columnwidth]{density2dGAPS_new} \caption{Comparison of the radial profiles of the surface density for migrating and non-migrating planets, after $82$ orbits.} \label{fig:freeSigma} \end{figure} \section{Conclusions} \label{sec:conclusions} We have developed a model to understand the dynamical response of a protoplanetary disc to the presence of a planet on an inclined orbit. We considered planets massive enough to open a gap, but not so massive as to warp the disc significantly. Given that the impulse approximation for non-inclined planets yields the correct scalings and estimates of the torque accurate to better than a factor of $2$ \citep[e.g.][]{Lin1993,Armitage2010}, we have computed the excitation torque density of inclined planets on circular orbits in the impulse approximation. Using this simple approach, we have derived a viscous criterion for the formation of gaps by mildly inclined planets ($i\leq 30^{\circ}$) [see \S \ref{sec:criterion_i}]. Such a criterion may be useful when the planet has acquired its inclination after the gas in the disc is well depleted, or to interpret simulations that are started after the disc has evolved with the planet at a fixed, constant inclination. For planets that are forced to describe fixed circular orbits, we have calculated the temporal evolution of the gap profile in the impulse approximation, under the hypothesis of local damping. We have compared these radial gap profiles with those derived from 3D hydrodynamical simulations. The simple model underestimates the depth of the gap for $i\geq 10^{\circ}$ when compared with the results of our simulations. Introducing a correction factor of $2$ in the torque allows us to successfully reproduce the temporal evolution of the gap profile for inclinations between $10^{\circ}$ and $20^{\circ}$ and planetary masses $\geq 1M_{J}$. For planetary inclinations larger than $20^{\circ}$, the simple model underestimates the depth of the gap, probably because one assumption on which the impulse approximation is based, namely that the interaction is local, is only marginally satisfied at large inclinations. We have also computed the depth of the stationary gap in the so-called zero-dimensional approximation and find that it correctly accounts for the trend of the gap depth with the inclination and mass of the planet. In order to check the validity of the approximations made in our approach, we have mainly focused on planets on fixed circular orbits. This approximation is strictly valid only if the inclination damping timescale is larger than the timescale for gap opening. Nevertheless, for given, arbitrary functions $i(t)$, our formalism allows the gap profile to be derived as a function of time. The results are expected to be most accurate for $10^{\circ}\leq i(t) \leq 20^{\circ}$. \section*{Acknowledgements} We thank the referee for useful comments which improved the paper appreciably. The computer Tycho 2 (Posgrado en Astrof\'{\i}sica-UNAM, Instituto de Astronom\'{\i}a-UNAM and PNPC-CONACyT) has been used in this research. This work has been partially supported by CONACyT grant 165584, SIP 20161416 and UNAM's DGAPA grant PAPIIT IN101616. \bibliographystyle{mnras}
\section*{} \vspace{-1cm} \footnotetext{\textit{$^{a}$~German Aerospace Center (DLR), Pfaffenwaldring 38, 70569 Stuttgart, Germany. Fax: +49 (0)731 5034011 ; Tel: +49 (0)711 68628254; E-mail: [email protected]}} \footnotetext{\textit{$^{b}$~Helmholtz Institute Ulm (HIU), Helmholtzstr. 11, 89081 Ulm, Germany. }} \footnotetext{\textit{$^{c}$~SINTEF Industry, Richard Brikelands vei 2b, 7034 Trondheim, Norway. }} \footnotetext{\textit{$^{d}$~CIDETEC Energy Storage, P$^\circ$ Miram\'on, 196, Donostia-San Sebasti\'an 20014, Spain. }} \footnotetext{\textit{$^{e}$~Ulm University (UUlm), Albert-Einstein-Allee 47, 89081 Ulm, Germany. }} \footnotetext{\dag~Electronic Supplementary Information (ESI) available. See DOI: 10.1039/b000000x/} \section{Introduction} Rechargeable zinc-air batteries (ZABs) are a promising post-lithium-ion battery technology\cite{Fu2017,Pang2017,Li2017b} for applications ranging from renewable energy storage to electric vehicles\cite{Cano2018} and flexible electronics\cite{Tan2017b}. Current state-of-the-art ZABs feature an alkaline electrolyte like \ce{KOH} for its high conductivity, good electrochemical reaction kinetics, and moderate \ce{Zn} solubility\cite{Mainar2018,Xu2015}. Unfortunately, the absorption of \ce{CO2} from air into the electrolyte leads to the parasitic formation of carbonates (\ce{CO3^{2-}}), which slowly poisons the electrolyte\cite{Stamm2017, Li2014}. For this reason, the lifetime of alkaline ZABs is limited to a few weeks of continuous exposure to air. Engineering solutions to the carbonation challenge have been proposed\cite{Pei2014a}. The use of \ce{CO2} filters to scrub the feed gas could delay the onset of carbonation\cite{Drillet2001}, but to reach competitive lifetimes, the \ce{CO2} concentration would need to be reduced by two orders of magnitude\cite{Stamm2017}. Mechanically rechargeable Zn-air fuel cells and \ce{Zn}-air flow batteries, in which the electrolyte is routinely replaced, have also been demonstrated\cite{Zhu2016a, Ma2015, Oh2018,Pei2014,Wang2014,Pichler2018}. These solutions are effective for some applications, but they add cost and complexity to the system. Ideally, the carbonation challenge should be addressed on the materials level. Aqueous electrolytes with near-neutral pH values are resilient towards carbonation and could improve ZAB lifetime~\cite{An2018Heterostructure-PromotedElectrolyte}. The most common near-neutral electrolyte (NNE) is \ce{ZnCl2-NH4Cl}, which has been used in zinc-based LeClanch\'{e} batteries for over 100 years\cite{Heise1952,Garche2009}. In this tradition, we refer to zinc-air batteries with aqueous \ce{ZnCl2-NH4Cl} as LeClanch\'{e} zinc-air batteries (L-ZABs). The L-ZAB concept was first proposed in the 1970s\cite{Jindra1973}, but it has only recently become a broadly pursued topic in industry and research. Start-up companies are beginning to commercialize L-ZAB technology for grid-scale stationary applications\cite{Amendola2012}, and recent experimental research\cite{ThomasGoh2014,Sumboja2016} has verified the favorable cycling stability and lifetime of these systems. Although the future of L-ZABs is promising, some factors still limit their further development. We recently performed a theoretical investigation of L-ZAB cell operation\cite{Clark2017}. Our continuum model confirms that the LeClanch\'{e} electrolyte is generally suitable for ZAB applications and highlights some potential obstacles.
The model predicts that the pH of the electrolyte can become acidic during charging and that mixed zinc salts, not \ce{ZnO}, generally dominate the discharge product. The instability of the pH exacerbates material degradation, and the precipitation of non-\ce{ZnO} products consumes the electrolyte and lowers the practical energy density of the cell. The study suggests that reducing the total chloride content and tuning the initial pH to be slightly alkaline (\emph{e.g.} pH 8) could improve the pH stability and support the precipitation of more favorable products. In this work, we combine experimental characterization with theory-based simulations to validate our understanding of L-ZABs. We build upon the previously reported thermodynamic analysis to show how LeClanch\'{e} electrolytes can be formulated and prepared to provide a stable pH value during operation and to favor more desirable discharge products. Based on this analysis, we identify four electrolyte compositions for experimental investigation. Through long-term cell cycling, operando electrolyte pH measurements, and ex-situ XRD, SEM, and EDS characterization of discharged and charged \ce{Zn} electrodes, we evaluate the effect of electrolyte composition on L-ZAB performance. \section{Theory of LeClanch\'{e} Zinc-Air Batteries} \begin{figure}[t!] \includegraphics[width=1.0\linewidth]{Figure1.pdf} \caption{Schematic of idealized L-ZAB discharge. (I) The ORR occurs at the three-phase boundary of the air electrode. (II) The buffer reaction proceeds to stabilize the pH. (III) The zinc electrode dissolves to create \ce{Zn^{2+}}, which (IV) forms complexes with other solutes. (V) When the saturation limit of \ce{Zn^{2+}} is reached, zinc products precipitate. Possible solid discharge products include \ce{ZnO}, \ce{Zn(OH)2}, \ce{ZnCl2*4Zn(OH)2*H2O}, and \ce{ZnCl2*2NH3}.} \label{fgr:Schematic} \end{figure} In this section, we review the operating principle of L-ZABs and discuss how both the equilibrium and dynamic behavior of the electrolyte can influence cell performance. \subsection{Operating Principle} Figure \ref{fgr:Schematic} shows the operational schematic of an idealized L-ZAB. During discharge, the oxygen reduction reaction (ORR) occurs at the so-called three-phase boundary of the porous bi-functional air electrode (BAE) with the help of a catalyst like \ce{MnO2}. The ORR drives a change in the concentration of \ce{H+} at the air electrode, and the pH is stabilized by the deprotonation of the weak acid \ce{NH4+}. The \ce{Zn} electrode is electrochemically oxidized to \ce{Zn^{2+}} ions, which form complexes with other solutes (\emph{e.g.} \ce{Cl-}, \ce{NH3}, or \ce{OH-}). The equations and standard redox potentials for the electrochemical reactions in the L-ZAB are \begin{align} &\ce{Zn} \rightleftharpoons \ce{Zn^{2+} + 2e-}, \ E^0 = -0.762 \: \textrm{V}, \\ &\ce{0.5O2 + 2H+ + 2e- <=> H2O}, \ E^0 = 1.229 \: \textrm{V}. \end{align} The stabilization of the electrolyte pH by the weak-acid buffer and the formation of zinc-amine complexes are described by the reactions \begin{align} \ce{NH4+} &\ce{<=> NH3 + H+}, \: \textrm{and} \\ \ce{Zn^{2+} + \textit{x}NH3} & \ce{<=> \ce{Zn(NH3)_{\textit{x}}^{2+}}}. \end{align} When the solubility of \ce{Zn^{2+}} in the electrolyte is exceeded, zinc products precipitate.
For the system to function as a true zinc-air battery\cite{Clark2017}, \ce{ZnO} should precipitate via \begin{equation} \ce{Zn^{2+} + H2O <=> ZnO(s) + 2H+}, \end{equation} giving an overall reaction of \begin{equation} \ce{Zn + 0.5O2 <=> ZnO(s)}. \end{equation} However, as we discuss in the following section, \ce{ZnO} is not always the dominant solid product. In some cases the discharge product can consist of a mix of \ce{ZnO}, \ce{Zn(OH)2}, \ce{ZnCl2*2NH3}, and \ce{ZnCl2*4Zn(OH)2*H2O}\cite{Zhang1996,Larcin1997,Passivation1976}. The overall cell reactions for the various products are shown in Table \ref{tbl:OverallReactions}. The precipitation of non-\ce{ZnO} products consumes the electrolyte as an active material and reduces the energy density of the cell. The performance of L-ZABs is governed by the delicate interplay between pH buffering, \ce{Zn^{2+}} chelation, and zinc salt precipitation. To better understand L-ZAB operation and identify optimum electrolyte formulations, we examine the thermodynamics of the system. \begin{table}[t!] \caption{Overall cell reactions for different discharge products.} \label{tbl:OverallReactions} \renewcommand{\arraystretch}{1.5} \begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l} \hline Overall Reaction \\ \hline \ce{Zn + 0.5O2 <=> ZnO(s)} \\ \ce{Zn + 0.5O2 + H2O <=> Zn(OH)2(s)} \\ \ce{Zn + 0.5O2 + 2NH4+ + 2Cl- <=> ZnCl2*2NH3(s) + H2O} \\ \ce{Zn + 0.5O2 + 0.8H2O + 0.4NH4+ + 0.4Cl- <=>} \\ \qquad \qquad \qquad \qquad\ce{0.2ZnCl2*4Zn(OH)2*H2O(s) + 0.4NH3} \\ \hline \end{tabular*} \end{table} \subsection{Equilibrium Thermodynamics}\label{sec:Thermodynamics} In this section, we apply 0D thermodynamic models to investigate the equilibrium composition of the aqueous \ce{ZnCl2-NH4Cl-NH4OH} electrolyte and discuss how this relates to L-ZAB operation. The models applied in this analysis are derived and validated in existing works\cite{Limpo1993,Limpo1995,Zhang2001,Vazquez-Arenas2012,Clark2017,Song2017} and described in the supplementary information$^\dag$. In the text, we use square brackets to denote concentration, \emph{e.g.} $[\ce{NH3}] = c_{\ce{NH3}}$ with units $\textrm{mol} \cdot \textrm{L}^{-1}$. In aqueous solutions, the \ce{Zn^{2+}} ion forms complexes with other solutes\cite{Zhang1996}. The dominant zinc complex\cite{Zhang1996} in strongly alkaline electrolytes is the zincate ion, \ce{Zn(OH)4^{2-}}; the dominant zinc complex in acidic chloride electrolytes is the tetrachlorozincate ion, \ce{ZnCl4^{2-}}. Between the strongly acidic and strongly alkaline pH regions, however, the state of the zinc complex is very sensitive to changes in electrolyte composition. \begin{figure}[t] \includegraphics[width=1.0\linewidth]{Figure2.pdf} \caption{Speciation and solubility in the \ce{ZnCl2}-\ce{NH4Cl}-\ce{NH4OH} system. (a) Speciation of the \ce{Zn^{2+}} ion versus pH and total zinc concentration (total chloride concentration, $[\ce{Cl}]_{\mathrm{T}}$, = 3 M). (b) \ce{NH3} distribution versus pH ($[\ce{Zn}]_{\mathrm{T}}$ = 1 M, $[\ce{Cl}]_{\mathrm{T}}$ = 3 M).} \label{fgr:Speciation} \end{figure} Figure \ref{fgr:Speciation}(a) shows the 2D zinc speciation and solubility landscapes for the \ce{ZnCl2-NH4Cl} system as a function of the pH and the total concentration of zinc in solution, $[\ce{Zn}]_\textrm{T}$, for a fixed total chloride concentration, $[\ce{Cl}]_\textrm{T}$. The pH is adjusted through the addition of \ce{NH4OH}. The colored regions identify the dominant \ce{Zn^{2+}} complex, and the colored solid lines represent the solubility of various zinc products.
The gray dashed lines represent fixed total \ce{NH3} concentrations (complexed \ce{NH3}, non-complexed \ce{NH3}, and \ce{NH4+}) and indicate the paths the electrolyte follows as the ZAB is operated, i.e. as the total zinc concentration increases or decreases. The concentrations of \ce{NH4+}, non-complexed \ce{NH3}, and total \ce{NH3} are shown in Figure \ref{fgr:Speciation}(b). In these diagrams, it is important to note the relationship between \ce{NH4+}, \ce{NH3}, and the zinc-amine complexes. We start by examining how the dominant zinc complex shifts as a function of pH, as shown in Figure \ref{fgr:Speciation}(a). For acidic pH values, the solution is dominated by zinc-chloride complexes because the concentration of \ce{NH3} is very low. The \ce{NH3} concentration rises with increasing pH, leading to the formation of ternary zinc-chloride-amine complexes. When the concentrations of \ce{NH3} and \ce{NH4+} approach the equivalence point at pH 9.8, the solution is already dominated by \ce{Zn(NH3)4^{2+}}. The formation of \ce{Zn(NH3)_{\textit{x}}^{2+}} complexes also has an important effect on the solubility of zinc products. \ce{ZnO} and \ce{Zn(OH)2} are normally insoluble in the near-neutral pH regime. However, \ce{NH3} is able to act as a chelator for \ce{Zn^{2+}} ions, thereby increasing the solubility of zinc products. Figure \ref{fgr:Speciation}(a) shows that in the acidic pH regime, \ce{Zn^{2+}} is very soluble. As the pH approaches the near-neutral regime, the solubility falls sharply until the increasing \ce{NH3} concentration becomes high enough to chelate the \ce{Zn^{2+}} ions. The solubility then levels off and subsequently increases as the solution becomes saturated with \ce{NH3}. Figure \ref{fgr:Speciation}(a) can also be used to predict how the electrolyte composition affects the stable working point of the battery (i.e. where the anodic, cathodic, and precipitation reactions form a complete cycle). During discharge, an electrolyte with an initial total zinc concentration of 0.5 M and a pH of 4 will follow the gray dashed line until the solubility limit of \ce{ZnCl2*4Zn(OH)2*H2O} is reached around pH 6, and the battery achieves a stable working point. On the other hand, an electrolyte with the same initial total zinc concentration but an initial pH of 9 will reach its stable working point around pH 8, and \ce{Zn(OH)2} is the first solid to precipitate. The solubility of zinc products is also strongly linked to the concentration of chloride in the electrolyte. Increasing the total chloride content of the electrolyte, as shown in Figure \ref{fgr:Thermo2D}, decreases the solubility of chloride-rich products like \ce{ZnCl2*2NH3}. This is important for L-ZAB operation because it shows how changes in the local electrolyte concentration and pH can alter the composition of the discharge product. \begin{figure}[b!] \includegraphics[width=1.0\linewidth]{Figure3.pdf} \caption{Speciation and solubility in the \ce{ZnCl2}-\ce{NH4Cl}-\ce{NH4OH} system, $[\ce{Cl}]_{\mathrm{T}}$ = 5 M.} \label{fgr:Thermo2D} \end{figure} During L-ZAB operation, the pH of the electrolyte is stabilized by the buffer reaction \ce{NH4+ <=> NH3 + H+} and can be calculated in terms of the concentrations of the buffering species as \begin{equation} \mathrm{pH} = \mathrm{pK_a}-\mathrm{log}_{10}\frac{[\ce{NH4+}]}{[\ce{NH3}]}. \end{equation}
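As a simple numerical illustration of this relation (our own sketch, not part of the quasi-particle model discussed below), we adopt an effective $\mathrm{pK_a}\approx 9.8$ for the \ce{NH4+}/\ce{NH3} couple, consistent with the equivalence point quoted above, and an illustrative initial free-\ce{NH3} concentration; both values are assumptions made for demonstration purposes only.
\begin{verbatim}
import numpy as np

pKa = 9.8   # effective value implied by the equivalence point quoted above

def buffer_pH(c_NH4, c_NH3):
    # pH = pKa - log10([NH4+] / [NH3])
    return pKa - np.log10(c_NH4 / c_NH3)

# the buffer reaction converts NH4+ into NH3 as it proceeds, so the pH
# drifts slowly alkaline while the ratio [NH4+]:[NH3] remains large
for x in (0.0, 0.1, 0.2):                 # mol/L of NH4+ consumed
    print(buffer_pH(2.34 - x, 0.01 + x))  # 2.34 M NH4Cl, 0.01 M free NH3
\end{verbatim}
The slow alkaline drift of the computed pH as \ce{NH4+} is consumed anticipates the buffer-capacity discussion that follows.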
The capacity and reversibility of the buffer are described by the ratio $[\ce{NH4+}]:[\ce{NH3}]$. We first consider the capacity of the pH buffer. To achieve a stable operational pH, the value of the ratio $[\ce{NH4+}]:[\ce{NH3}]$ should be kept as constant as possible, \begin{equation} \frac{\partial\frac{[\ce{NH4+}]}{[\ce{NH3}]}}{\partial t} \approx 0, \qquad \frac{\partial\frac{[\ce{NH4+}]}{[\ce{NH3}]}}{\partial x} \approx 0. \end{equation} The buffer reaction consumes \ce{NH4+} and produces \ce{NH3} as it proceeds, causing the value of $[\ce{NH4+}]:[\ce{NH3}]$ to fall and creating a slow and steady increase in pH. Fortunately, the formation of complexes between \ce{NH3} and \ce{Zn^{2+}} allows the buffer reaction to proceed while the concentration of free \ce{NH3} remains relatively constant. In this way, the time rate of change of $[\ce{NH4+}]:[\ce{NH3}]$ is reduced and the capacity of the buffer to stabilize the pH is enhanced. The practical reversibility of the buffer is described by the magnitude of $[\ce{NH4+}]:[\ce{NH3}]$. For most compositions in the near-neutral pH range, the concentration of \ce{NH4+} is much higher than that of \ce{NH3} (see Figure \ref{fgr:Speciation}(b)). This allows \ce{NH4+} to act as a proton donor and effectively buffer pH shifts in the alkaline direction. But if the pH becomes more acidic, there is only a small amount of \ce{NH3} available to act as a proton acceptor. Although some excess \ce{NH3} can be supplied from \ce{Zn(NH3)_{\textit{x}}^{2+}} complexes, the reaction is very susceptible to any concentration gradients that may develop. Therefore, the buffer reaction can manage pH shifts in the alkaline direction, but its practical ability to manage similar shifts in the acidic direction is limited. Because of this, there is a risk that the electrolyte could become acidic when the L-ZAB is charged. \begin{figure}[t!] \includegraphics[width=\linewidth]{Figure4.pdf} \caption{Calculated Pourbaix diagram for the aqueous \ce{ZnCl2-NH4Cl-NH4OH} system. $\alpha = \ce{ZnO}$, $\beta = \ce{Zn(OH)2}$, $\gamma = \ce{ZnCl2*4Zn(OH)2*H2O}$ ($[\ce{Cl}]_{\mathrm{T}}$ = 2.6M, $[\ce{Zn}]_{\mathrm{T}}$ = 0.5M).} \label{fgr:Pourbaix} \end{figure} Finally, we examine the equilibrium potentials of the electrochemical reactions. Figure \ref{fgr:Pourbaix} shows a Pourbaix diagram for the L-ZAB system. The equilibrium redox potential of the \ce{Zn} electrochemical reaction lies below the redox potential for \ce{H2} evolution, indicating that the \ce{Zn} electrode is thermodynamically unstable in water. The kinetics of the hydrogen evolution reaction (HER) are slow and can be further suppressed by the addition of dopants like \ce{Hg}, \ce{In}, or \ce{Bi} to the \ce{Zn} electrode\cite{Lysgaard2018a,SureshKannan1995,Vorkapic1974,Baugh1983}. The equilibrium redox potential for chlorine gas evolution is close to the oxygen redox potential at very acidic pH values, but the two potentials separate as the pH becomes more alkaline. This supports the experimental observation of Sumboja et al.\cite{Sumboja2016} that no \ce{Cl2} gas is evolved during charging. \ce{Zn} passivation behavior occurs between pH values of circa 6--9, where mixed zinc products become insoluble. This analysis demonstrates how the stable working point of the battery can be predicted from thermodynamic considerations. Furthermore, it shows that for most electrolyte compositions in the near-neutral pH regime, mixed zinc-chloride-hydroxide salts are the most likely to precipitate.
A three dimensional analysis of zinc speciation and solubility as a function of pH, \ce{NH4Cl}, and \ce{ZnCl2} concentrations is available in the supplementary information$^\dag$. The equilibrium properties of the system are predicted from thermodynamics, but the real performance of L-ZAB cells deviates from these predictions due to kinetic and mass-transport limitations. In the following, we apply a dynamic model to consider these effects. \subsection{Electrolyte Transport Dynamics} \begin{figure}[b!] \includegraphics[width=1.0\linewidth]{Figure5.pdf} \caption{Dynamic profiles of (a) electrolyte pH and (b) dominant \ce{Zn^{2+}} complex in an L-ZAB cell over one discharge-charge cycle. The electrolyte is 0.5 M \ce{ZnCl2} - 1.6 M \ce{NH4Cl} at pH 8. The cell is operated with current density $\mathrm{j_d} = \mathrm{j_c} = 1~\mathrm{mA}\cdot\mathrm{cm}^{-2}$.} \label{fgr:ZAB_Dyanmics} \end{figure} \begin{table*}[t!] \caption{Measured physicochemical properties of the four proposed electrolyte compositions compared with literature values for the standard alkaline ZAB electrolyte, 30 wt\% \ce{KOH}. Values for ionic conductivity (IC), mass density ($\rho$), dissolved oxygen concentration ([\ce{O2}]), and viscosity ($\mu$) are measured for each electrolyte.} \label{tbl:ElectrolyteProperties} \def\arraystretch{1.5} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}cccccccc} \hline Designation & [\ce{ZnCl2}] (M) & [\ce{NH4Cl}] (M) & pH & IC ($\mathrm{mS} \cdot \mathrm{cm}^{-1}$) & $\rho$ ($\mathrm{g} \cdot \mathrm{mL}^{-1}$) & [\ce{O2}] ($\mathrm{mg} \cdot \mathrm{L}^{-1}$) & $\mu$ (cP) \\ \hline E4 & 0.51 & 2.34 & 4 & 215 & 1.06 & 6.16 & 1.12 \\ E6 & 0.51 & 2.34 & 6 & 206 & 1.08 & 7.12 & 1.09 \\ E7 & 0.26 & 5.00 & 7 & 382 & 1.05 & 6.74 & 1.09 \\ E8 & 0.50 & 1.60 & 8 & 209 & 1.05 & 6.61 & 1.15 \\ \hline 30 wt\% \ce{KOH} & - & - & 14.8 & 638\cite{Mainar2018a} & 1.28\cite{Akerlof1941} & 2.52\cite{Davis1967} & 2.23\cite{Sipos2000} \\ \hline \end{tabular*} \end{table*} To simulate the dynamic performance of L-ZABs, we implement a 1D continuum model of the system and examine its behavior over a discharge-charge cycle. In the \ce{ZnCl2-NH4Cl-NH4OH} electrolyte, the large number of solute species, combined with the orders-of-magnitude concentration swings that occur, creates difficulties for numerical solvers of traditional continuum models. Similar systems like the electrochemical desalination of water~\cite{Dykstra2017TheoryDeionization} or ammonia recovery from liquid bio-waste~\cite{Dykstra2014TheorySystems} have been successfully modeled, but there is a dearth of continuum models for L-ZAB performance. In our first model-based investigation of L-ZAB performance\cite{Clark2017}, we derived a novel framework for modeling transport in LeClanch\'{e} electrolytes. The framework defines a set of so-called quasi-particles that describe the quantities of mass and charge that are conserved in the homogeneous electrolyte reactions. In that way, the computational effort required to obtain a solution is significantly reduced. In an upcoming publication, we expand the validity of the quasi-particle framework to cover a wider range of electrolyte compositions beyond \ce{ZnCl2-NH4Cl-NH4OH}. The quasi-particle model solves the equations for mass and charge continuity in the electrolyte. The concentration of each quasi-particle, $q$, is determined from the mass continuity equation, while the local electro-neutrality condition is set by the charge continuity equation.
\begin{align} \frac{\partial (c_q\varepsilon_{\mathrm{e}})}{\partial t} &= \underbrace{-\vec{\nabla}\cdot \vec{N}_q^{\mathrm{D,M}}-\vec{\nabla}\cdot \vec{N_q^{\mathrm{C}}}}_\text{transport} + \overbrace{\dot{s}_q}^\text{source}, \\ 0 &= \underbrace{-\vec{\nabla}\cdot \vec{j}}_\text{transport} + \overbrace{\sum_iz_i\dot{s}_i}^\text{source}. \end{align} Detailed derivation, parameterization, validation, and discussion of the continuum modeling method are available in existing works \cite{Neidhardt2012, Horstmann2013, Stamm2017, Clark2017, Clark2018} and in the supplementary information$^\dag$ of this article~\cite{Newman,Shock1988,Shock1989,Shock1997,Frank1996a,Atkins2006,Livermore1990,Sverjensky1997,Limpo1993,Limpo1995,Clever1992,Reichle1975}. Figure \ref{fgr:ZAB_Dyanmics} presents the performance of an L-ZAB cell with 0.5 M \ce{ZnCl2} - 1.6 M \ce{NH4Cl} at pH 8 over one discharge-charge cycle. The dynamic pH profile, shown in Figure \ref{fgr:ZAB_Dyanmics}(a), indicates that the pH in the BAE trends alkaline during discharging. On the other hand, when the cell is charged, the pH in the BAE trends acidic. In both cases, the buffer reaction is able to stabilize the pH in the near-neutral regime. At the \ce{Zn} electrode, the pH trends acidic during discharging. This is because the excess \ce{Zn^{2+}} takes up the small amount of \ce{NH3} that is present. When the cell is charged, \ce{Zn^{2+}} is redeposited and releases \ce{NH3} into the electrolyte, causing the pH to trend alkaline. Figure \ref{fgr:ZAB_Dyanmics}(b) shows the dominant zinc complex in the solution. Initially the dominant complex over the entire cell domain is \ce{ZnCl(NH3)3^{+}}, but as the pH begins to shift and concentration gradients build up in the cell, the dominant complex becomes \ce{Zn(NH3)4^{2+}} in the air electrode and \ce{ZnCl3(NH3)-} in the \ce{Zn} electrode. Comparing the dynamic \ce{Zn} speciation with the equilibrium values calculated in the previous section shows how cell operation can affect the inhomogeneous behavior of the electrolyte. The simulation results described in this section provide a foundation for understanding and interpreting experimental measurements of L-ZAB cells. In the following sections, we experimentally characterize L-ZABs with a variety of electrolyte compositions and compare the results with model-based predictions. \section{Experimental Methods} \begin{table*}[t] \caption{Electrochemical results obtained during the cell cycling tests and the corresponding electrolyte evaporation under open-circuit conditions.
Overpotential is defined as $\Delta V = V_{\mathrm{OER}} - V_{\mathrm{ORR}}$.} \label{tbl:CyclingEvaporation} \def\arraystretch{1.1} \begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}cccccc} \hline \textbf{Cycle Number} & & \textbf{E4} & \textbf{E6} & \textbf{E7} & \textbf{E8} \\ \hline \multirow{4}{*}{1} & ORR (V) & 0.887 & 0.929 & 0.860 & 0.888 \\ & OER (V) & 2.079 & 2.067 & 1.977 & 1.922 \\ & Overpotential (V) & 1.192 & 1.138 & 1.117 & 1.034 \\ & Evaporation (wt\%) & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline \multirow{4}{*}{25} & ORR (V) & 0.924 & 0.909 & 0.915 & 0.947 \\ & OER (V) & 2.070 & 2.095 & 2.098 & 2.014 \\ & Overpotential (V) & 1.146 & 1.186 & 1.183 & 1.067 \\ & Evaporation (wt\%) & 1.70 & 0.57 & 0.00 & 5.87 \\ \hline \multirow{4}{*}{50} & ORR (V) & 0.882 & 0.896 & 0.896 & 0.909 \\ & OER (V) & 2.069 & 2.088 & 2.117 & 2.047 \\ & Overpotential (V) & 1.187 & 1.192 & 1.221 & 1.138 \\ & Evaporation (wt\%) & 3.80 & 0.57 & 0.00 & 8.19 \\ \hline \multirow{4}{*}{100} & ORR (V) & 0.848 & 0.847 & 0.866 & 0.761 \\ & OER (V) & 2.077 & 2.194 & 2.117 & 2.254 \\ & Overpotential (V) & 1.229 & 1.347 & 1.251 & 1.493 \\ & Evaporation (wt\%) & 5.34 & 1.69 & 0.00 & 12.58 \\ \hline \multirow{4}{*}{150} & ORR (V) & 0.858 & 0.863 & 0.853 & 0.718 \\ & OER (V) & 2.079 & 2.172 & 2.142 & 2.272 \\ & Overpotential (V) & 1.221 & 1.309 & 1.289 & 1.554 \\ & Evaporation (wt\%) & 6.25 & 1.69 & 0.00 & 14.04 \\ \hline \end{tabular*} \end{table*} \subsection{Preparation of materials} Four different electrolyte systems (designated E4, E6, E7, and E8) were prepared from \ce{ZnCl2} (EMD Millipore 98 \%) and \ce{NH4Cl} (EMD Millipore 99.5 \%) dissolved in deionized water, and the pH value was adjusted with \ce{NH4OH} (Fluka analytical 5.0 N). Table \ref{tbl:ElectrolyteProperties} lists the formulation of each electrolyte system and the corresponding pH value. The electrolyte compositions were chosen to reflect existing studies in the literature~\cite{ThomasGoh2014,Sumboja2016} and to evaluate the recently proposed composition from Clark et al.~\cite{Clark2017}. Furthermore, the selected electrolyte compositions were varied so as to demonstrate changes in both pH stability and zinc precipitation product, as discussed in the section on Equilibrium Thermodynamics. The bifunctional air electrode was prepared by mixing 70 wt.\% carbon nanotube (CNT, Arkema Graphistrength\textsuperscript{TM} C100), 20 wt.\% electrolytic manganese dioxide (EMD, Tosoh Hellas A. I. C.) and 10 wt.\% PTFE (Dyneon TF 5032 PTFE). The mixture was pressed twice for 1 minute at 50 bar against a carbon gas diffusion layer (Freudenberg H23C9). Once the electrodes were pressed, they were heated at 340$^\circ$C for 30 minutes, achieving a catalyst loading of 2.2 mg cm$^{-2}$. \subsection{Physicochemical characterization of electrolytes} Physicochemical properties of the different electrolyte systems were analyzed with dedicated instruments: the ionic conductivity (IC), viscosity ($\mu$), dissolved oxygen concentration ([\ce{O2}]), and mass density ($\rho$) were measured for each formulation. The obtained values are listed in Table \ref{tbl:ElectrolyteProperties}. \subsection{Electrochemical characterization} In this work two customized electrochemical cell designs were used: ex-situ (C-EI) and operando pH (C-OpH). In the first cell (C-EI), the electrodes were separated by 0.9 cm and 1.1 mL of electrolyte was injected.
The C-OpH cell requires more space between both electrodes to place two pH microelectrodes (Mettler Toledo, InLab$^{\text{\textregistered}}$ Micro) near the positive and negative electrodes. Accordingly, the C-OpH design features a distance of 2.8 cm between the electrodes and 4.4 mL of electrolyte. Photos of the cells are available in the supplementary information$^\dag$. In both electrochemical cells the bifunctional air electrode and a zinc foil (Alfa Aesar, 99.98\%, 250 $\mu$m thickness) were used as working and counter electrode, respectively, with an active area of 1.327 cm$^{2}$. The electrochemical analyses were carried out in a BaSyTEC Battery Test System. Operando pH measurements were performed in the C-OpH cell design, applying a current density of 2 $\mathrm{mA} \cdot \mathrm{cm}^{-2}$ for 29 h of discharge and 29 h of charge (330 $\mathrm{mAh} \cdot \mathrm{g}^{-1}$; 40\% depth of discharge). Evaporation of the electrolyte was analyzed by measuring the weight loss over time at open-circuit conditions in the C-EI cell. The same cell design was used for the cycling tests of the different electrolyte systems. In this case, a current density of 1 $\mathrm{mA} \cdot \mathrm{cm}^{-2}$ was applied for 2 hours of discharge and 2 hours of charge. In order to evaluate the nature of the solid reaction products during cycling, two sets of experiments were performed. In the first, cells were discharged at a rate of 1 $\mathrm{mA} \cdot \mathrm{cm}^{-2}$ to a DoD of 40\%, and in the second, cells were discharged in the same manner before being charged at 1 $\mathrm{mA} \cdot \mathrm{cm}^{-2}$ back to the original fully charged state. \subsection{Physical characterization of cell reaction products} X-ray diffraction data were collected using a Bruker D8 Advance A25 powder diffractometer equipped with a Cu K-$\alpha$ radiation source and LynxEye XE\textsuperscript{TM} detector. All measurements were collected in Bragg-Brentano mode. Measurement of powder products after discharging to 40\% DoD was performed with the product still attached to the Zn foil anode, whilst powder products after discharging + charging were removed from the anode foil and dispersed on a ``zero background'' single crystal Si sample holder using silicon grease as adhesive. The phases present within the samples were identified using the ICDD PDF4+ 2017 crystal structure database~\cite{ICDDDatabase} and the Crystallographic Open Database (COD)~\cite{Grazulis2009,Grazulis2012,Grazulis2015,LeBail2005,Downs2003}. Phases were confirmed via Rietveld-type fitting, but it was not possible to adequately correct the complex preferred orientation exhibited by the product phases, and so final fitting was performed via Pawley-type whole powder pattern fitting in which only peak positions are constrained. All fitting was performed using the Bruker Topas v5 analysis software. Electron microscopy and EDS element mapping were performed using a Hitachi S3400N electron microscope equipped with an Oxford Instruments Aztec EDS system. Top surface images were collected from dry samples following electrochemical testing. For cross-sectional imaging, dried as-reacted anodes were embedded in epoxy resin (Struers EpoFix) and manually polished using grinding papers down to 4000 grit / 5 $\mu$m grit size. In order to avoid dissolution or reaction, the samples were dry-polished without water or lubricant. Prior to imaging, both top and cross-section samples were coated with a thin layer of carbon to aid conductivity.
\section{Results and discussion} \begin{figure}[t] \includegraphics[width=\linewidth]{Figure6.pdf} \caption{Cell voltage for L-ZABs cycled at current density $\mathrm{j_d} = \mathrm{j_c} = 1~\mathrm{mA}\cdot\mathrm{cm}^{-2}$ for $\mathrm{t_d} = \mathrm{t_c} = 2$ hours.} \label{fgr:ExpCycles} \end{figure} \subsection{Material Properties} An overview of the physicochemical characterization of each electrolyte formulation is given in Table \ref{tbl:ElectrolyteProperties}. To provide an adequate frame of reference for these values, we compare them with the properties of the standard electrolyte for alkaline ZABs (30 wt\% KOH) as reported in the literature~\cite{Mainar2018a,Akerlof1941,Davis1967,Sipos2000}. The ionic conductivity (IC) measurements indicate that electrolytes E4, E6, and E8 have comparable ionic conductivity values just above 200 $\mathrm{mS} \cdot \mathrm{cm}^{-1}$, while E7 shows a substantially higher conductivity of 382 $\mathrm{mS} \cdot \mathrm{cm}^{-1}$. This is likely due to the higher concentration of \ce{NH4Cl} in E7 as compared to the other electrolytes. Although the measured IC values are lower than that of \ce{KOH} (638 $\mathrm{mS} \cdot \mathrm{cm}^{-1}$), they are still in a suitable range for battery electrolyte applications. Analysis of the dissolved oxygen concentration ([\ce{O2}]) and viscosity ($\mu$) of the electrolytes also indicates their suitability for ZAB applications. The dissolved oxygen levels are over twice as high as those found in \ce{KOH}, and the viscosity is roughly half that of \ce{KOH}. A higher dissolved oxygen concentration is beneficial for the kinetics of the ORR, and the low viscosity helps achieve good transport and wetting behavior in the air electrode. On the other hand, lower electrolyte viscosity could increase the risk of flooding the air electrode (see supplementary information$^\dag$). Care should be taken to adjust the hydrophilic/hydrophobic properties of the BAE substrate accordingly~\cite{Danner2014,Danner2016}. \subsection{Full Cell Cycling} Figure \ref{fgr:ExpCycles} compares the voltages of ZAB cells with the various electrolytes over 150 discharge-charge cycles (600 hours) at 1 $\mathrm{mA}\cdot\mathrm{cm}^{-2}$ (2 h discharge, 2 h charge). Electrolyte E8 presents an overpotential ($\Delta V = V_{\mathrm{OER}} - V_{\mathrm{ORR}}$) lower than 1.15 V, the lowest value among the four electrolytes during the first 50 cycles (see Table \ref{tbl:CyclingEvaporation}). However, after 200 hours of battery cycling, electrolyte E8 shows a significant degradation in both the magnitude and the stability of the cell potential, while electrolytes E4, E6, and E7 show relatively stable performance during the cycling period. This may be due to the increased loss of electrolyte by evaporation observed for E8 (12.58 wt\% by the 100th cycle, Table \ref{tbl:CyclingEvaporation}). Evaporation values were measured at open-circuit conditions; they might underestimate the losses during cycling due to a possible competition between the OER and HER. The electrolyte evaporation in the C-EI cell design reduces the electrolyte level in contact with the active materials. This reduces the practical active area of the BAE and, as a consequence, increases the real applied current density, leading to a higher overpotential. \subsection{Operando pH Stability} \begin{figure}[b!] \includegraphics[width=1\linewidth]{Figure7.pdf} \caption{Comparison of measured and predicted pH profiles near the air electrode and the \ce{Zn} electrode from (a) experiment and (b) simulation,
for a single cycle at current density $\mathrm{j_d} = \mathrm{j_c} = 2~\mathrm{mA}\cdot\mathrm{cm}^{-2}$.} \label{fgr:Exp_pH} \end{figure} \begin{figure*}[t!] \includegraphics[width=1\textwidth]{Figure8.pdf} \caption{Comparison of measured and predicted pH profiles near the air electrode and the \ce{Zn} electrode from (a, c, e) experiment and (b, d, f) simulation, for a single cycle at current density $\mathrm{j_d} = \mathrm{j_c} = 2~\mathrm{mA}\cdot\mathrm{cm}^{-2}$.} \label{fgr:Exp_pH2} \end{figure*} According to our understanding of L-ZAB performance as discussed in the theory section of this paper and in our previous work\cite{Clark2017}, we predict that the electrolyte in the air electrode can become strongly acidic during charging due to the slow diffusion of \ce{NH3}. To validate this prediction, operando pH measurements are taken near the air and \ce{Zn} electrodes during a single discharge-charge cycle. Electrolyte E4 is the formulation most likely to become unstable because there is initially very little \ce{NH3} in the solution and the composition is far from the stable working point of the cell. Therefore, an L-ZAB featuring E4 offers the best opportunity to observe the predicted behavior. Figure \ref{fgr:Exp_pH}(a) shows the measured pH profiles in an L-ZAB with electrolyte E4. These curves contain two features of interest. First, although the electrolyte is initially at pH 4, there is a rapid increase at the start of discharge that eventually approaches a steady-state value near pH 6. The pH increase begins at the air electrode, followed by a delayed increase at the \ce{Zn} electrode. Second, when the cell is charged, the pH is initially stable but drops to strongly acidic values near the end of charging. This drop begins in the air electrode and continues in the \ce{Zn} electrode. The pH in the \ce{Zn} electrode rebounds upward at the very end of charging. The L-ZAB model predicts this behavior~\cite{Clark2017}. The simulation results can help elucidate the mechanism behind the observed pH swings. Figure \ref{fgr:Exp_pH}(b) shows the simulated pH values in an L-ZAB with electrolyte E4. As observed in the experiment, the model predicts that the pH rapidly increases at the air electrode from the start of discharge until the cell reaches a stable working point in the near-neutral pH regime. A comparable shift is expected at the \ce{Zn} electrode, but it is delayed due to slow mass transport across the electrolyte bath and the excess concentration of \ce{Zn^{2+}}. As discharge continues, the rate of pH change stabilizes for both the BAE and the \ce{Zn} electrode. When the cell is charged, the pH near the BAE begins to drop and is stabilized by the buffer reaction. On the other side of the cell, the pH near the \ce{Zn} electrode becomes slightly more alkaline as \ce{Zn^{2+}} is deposited, releasing more \ce{NH3} from zinc-amine complexes. Near the end of charging, a \ce{NH3} mass transport limitation becomes dominant in the air electrode. With \ce{NH3} locally depleted, the buffer reaction is no longer effective and the pH drops to acidic values. Measured and simulated pH curves for the remaining L-ZAB electrolyte systems are compared in Figure \ref{fgr:Exp_pH2}. In contrast to electrolyte E4, the pH of the other systems remains more stable. There is no increase at the start of discharge because electrolytes E6, E7, and E8 are formulated at their stable working points. The drop to acidic values at the end of charging is not observed at the measurement location in these systems.
There is generally good agreement between the predicted and observed pH behavior. The model tends to predict larger pH changes than are measured in the experiment. This may be because the measurement is taken at a single point in three-dimensional space, while the simulation is simplified to one dimension. Because the pH is measured near the electrodes, there is a delay between the onset of pH variations in the electrodes and when they can be observed in the measurement. Nonetheless, the major predictions of the model, including the pH increase to the stable working point during discharging and the rapid fall to acidic values during charging, are experimentally observed. The pH fluctuations that occur during cell cycling can have important consequences for L-ZAB lifetime and performance. The suitability of ORR/OER catalyst materials is strongly dependent on electrolyte pH. For example, although \ce{MnO2} is a good catalyst in alkaline and neutral solutions, it is known to dissolve in acidic media~\cite{Huynh2014, Takashima2012a,Pokhrel2015}. Therefore, future research should examine ways to stabilize the pH in the air electrode during charging. The electrolyte pH has a strong influence on the composition, precipitation, and dissolution of zinc salts. This, in turn, has a strong influence on the electrochemical performance of the \ce{Zn} electrode. In the following section, we examine the precipitation behavior of discharged and discharged-charged \ce{Zn} electrodes as a function of electrolyte composition. \subsection{\ce{Zn} Electrode Characterization} The dominant discharge product in a true \ce{Zn}-air cell should be \ce{ZnO}. The thermodynamic analysis shown in Figures \ref{fgr:Speciation} and \ref{fgr:Thermo2D} predicts that the precipitation product in \ce{ZnCl2-NH4Cl} electrolytes is a mix of \ce{ZnCl2*4Zn(OH)2*H2O}, \ce{ZnCl2*2NH3}, and \ce{Zn(OH)2}. To investigate this prediction, ex-situ XRD, SEM, and EDS measurements are performed on \ce{Zn} electrodes galvanostatically cycled between 0 and 40\% DoD in each electrolyte formulation. \begin{figure}[t] \includegraphics[width=1\linewidth]{Figure9.pdf} \caption{Powder X-ray diffractograms for anode product phases obtained at 40\% DoD in electrolytes E4, E6, E7, and E8. Data are scaled with the square-root of intensity to emphasize weak reflections. Samples were measured on the host Zn foil anode.} \label{fgr:XRD} \end{figure} \begin{figure}[t] \includegraphics[width=1\linewidth]{Figure10.pdf} \caption{Powder X-ray diffractograms for anode product phases obtained from electrodes discharged to 40\% DoD and charged in electrolytes E4, E6, E7, and E8. Data are scaled with the square-root of intensity to emphasize weak reflections.} \label{fgr:XRD_Charged} \end{figure} Figures \ref{fgr:XRD} and \ref{fgr:XRD_Charged} show the XRD spectra for \ce{Zn} electrodes which have been discharged to 40\% DoD and subsequently charged in each electrolyte. The XRD results show a clear trend in the precipitation products as a function of electrolyte pH and total chloride content. In Figure \ref{fgr:XRD}, the dominant phase in electrolytes E4 and E6 is the layered zinc hydroxide chloride simonkolleite~\cite{Hawthorne2002}, \ce{ZnCl2*4Zn(OH)2*H2O}, with small quantities of \ce{ZnCl2*2NH3} also observed. In electrolyte E7, where the total chloride concentration and pH are higher, \ce{ZnCl2*2NH3} becomes the majority phase observed. In electrolyte E8, a clear change in the nature of the discharge products in the slightly alkaline environment is observed.
Small quantities of both \ce{ZnCl2*2NH3} and \ce{ZnCl2*4Zn(OH)2*H2O} are present, but the most striking feature is a change in the primary phase. This is evidenced by the appearance of a sharp reflection at approximately 8.35$\degree$ 2$\theta$ (d = 10.68 \AA), accompanied by a broad reflection at around 9.7$\degree$ 2$\theta$. These features could not be identified with reference to the available databases. \begin{figure*}[t] \includegraphics[width=1\textwidth]{Figure11.pdf} \caption{SEM and EDS analysis showing (a) the cross-section of a \ce{Zn} electrode after discharge and (b) the surface of a \ce{Zn} electrode discharged and charged in electrolyte E6. There is a separation of chloride and oxide phases in the discharged electrode. The charged electrode is covered by significant quantities of zinc precipitates. The relative fractions of Zn, Cl, O and N (measured via EDS) for the positions labelled ``1'' and ``2'' on each figure are presented in a table in the supplementary information$^\dag$.} \label{fgr:SEM_Cycle} \end{figure*} It is well-established that layered hydroxide phases, including \ce{ZnCl2*4Zn(OH)2*H2O}~\cite{Arizaga2012}, can intercalate charged and neutral species, with a concomitant expansion of the unit cell in the c-direction corresponding to an expansion of the metal hydroxide layer spacing. Analysis of the E8 diffraction pattern (Figure \ref{fgr:XRD}) yields five sharp diffraction lines which can be fitted as the first five indices of \{00l\} reflections for a hexagonal unit cell of c = 10.68 \AA. We note that this accords well with the value of 10.8 \AA{} reported by Ar\'{i}zaga \cite{Arizaga2012} for the inter-layer spacing in an ammonia-intercalated sample of \ce{Zn5(OH)8Cl2*H2O}. This interpretation is supported by the observation of nitrogen in this phase by EDS (see supplementary information$^\dag$) and by the thermodynamic model, which predicts that the concentration of \ce{NH3} in the electrolyte increases at higher pH values (Figure \ref{fgr:Speciation}(b)). Assuming the formation of such a pillared hydroxide phase, the broad reflection at around 9.7$\degree$ 2$\theta$ can then be interpreted as arising from incompletely intercalated particles of the same phase. One important result of this analysis is that there is no identifiable \ce{ZnO} phase present in any of the discharged samples. Instead, the precipitated phases are dominated by a mixture of zinc hydroxide chlorides. As noted in Table \ref{tbl:OverallReactions}, this alters the overall reaction of the cell and consumes electrolyte as an active component. Figure \ref{fgr:XRD_Charged} shows the XRD spectra of samples that are first discharged to 40\% DoD and then charged back to their original state-of-charge in each electrolyte. The charged electrodes show the same progression of products, from \ce{ZnCl2*4Zn(OH)2*H2O} in E4 and E6, to \ce{ZnCl2*2NH3} in electrolyte E7, and a layered phase in E8. However, the signals from \ce{Zn} metal are strongly reduced. This indicates that the dissolution of the precipitated \ce{Zn} products is suppressed during the charging process, which would have significant consequences for the reversibility of the battery. SEM/EDS characterization of the electrodes gives further insight into this observation. Figure \ref{fgr:SEM_Cycle} shows (a) the cross-section of a \ce{Zn} electrode discharged in electrolyte E6 and (b) the top-down view of the discharged-charged electrode.
Both cross-section and top-down views of the SEM/EDS data are presented so as to give a full insight into the sample microstructure. The relative fractions of Zn, Cl, O and N (measured via EDS) for the positions labelled ``1'' and ``2'' on each figure are presented in a table in the supplementary information$^\dag$, along with SEM/EDS data for the other samples (E4, E7, and E8). Figure \ref{fgr:SEM_Cycle}(a) shows that there is a clear separation between layers of chlorine-rich and oxygen-rich phases during discharge. The point EDS composition measurements (supplementary information$^\dag$) further support the XRD observations that these phases are \ce{ZnCl2*2NH3} and \ce{ZnCl2*4Zn(OH)2*H2O}. The observed phase layering supports the hypothesis that local changes in electrolyte concentration affect the composition of the precipitation product. In this case, the precipitation of a chlorine-rich phase reduces the local concentration of chlorides in the electrolyte, thus favoring the shift towards an oxygen-rich phase. This hypothesis is supported by the thermodynamic analysis in Figures \ref{fgr:Speciation} and \ref{fgr:Thermo2D} and by our existing work~\cite{Clark2017}. However, we note that the SEM cross-section indicates the distribution of the phases in space but not in time. Therefore, additional research investigating precipitation at various states of discharge could give further insight into the time-dependent phase formation. Figure \ref{fgr:SEM_Cycle}(b) shows that after charging, the products that precipitated during the discharge process are not redissolved and redeposited as Zn metal, as would be expected for a reversible electrode reaction. Instead, additional material, whose chemical composition corresponds to the simonkolleite phase, deposits on the Zn electrode. This is further supported by the cross-sectional image for the recharged cell using electrolyte E4 in the supplementary information$^\dag$. The kinetics of \ce{ZnO} dissolution are known to be sluggish in neutral electrolytes~\cite{Zhang1996}, but this cannot be solely responsible for the limited reversibility of the electrode. The model-based analysis identifies a few mechanisms which can contribute to this observation. First, there is some inertia in electrolyte behavior between the discharging and charging processes. Because the volume of electrolyte in the cell is comparatively large, it takes time for the electrolyte to transition between quasi-steady-state compositions, and precipitation can continue even after charging begins. Second, the same buffering mechanism that works to stabilize the pH during discharge can actually act as a self-braking mechanism in the dissolution of zinc precipitates. When zinc metal is deposited from \ce{Zn(NH3)_{\textit{x}}^{2+}} complexes, ammonia is released into the solution. \ce{NH3} acts as a proton acceptor and stabilizes the pH in the near-neutral regime, as discussed in the section on Equilibrium Thermodynamics. As is evident in Figures \ref{fgr:Speciation} and \ref{fgr:Thermo2D}, maintaining a stable near-neutral pH keeps the electrolyte in a state of low zinc solubility, thereby limiting the dissolution of the precipitated products. Only when either the concentration of dissolved zinc drops to very low levels or the pH becomes more acidic will the precipitated products dissolve. This mechanism is observed in the cell-level continuum simulations.
Figure \ref{fgr:Dissolution} shows (a) the total concentration of zinc in the electrolyte and (b) a comparison of the \ce{Zn^{2+}} concentration and the saturation limit over a single simulated galvanostatic cycle in electrolyte E6. In Figure \ref{fgr:Dissolution}(a), the stages of discharge and charge are clearly distinguishable. At the beginning of discharge, the total concentration of zinc in the electrolyte rises as it becomes saturated, levels off as solid zinc phases nucleate, and falls as those phases precipitate. When the cell is charged, the total concentration of zinc falls as it is redeposited on the \ce{Zn} metal electrode and begins to stabilize when the precipitated zinc products start to dissolve. However, the zinc provided by the dissolution of the products is not enough to fully restore the initial concentration, resulting in a net loss of zinc from the electrolyte. Figure \ref{fgr:Dissolution}(b) shows the concentration and solubility limit of \ce{Zn^{2+}} at the surface of the Zn electrode. In this figure, the stages of discharge are also clearly distinguishable. At the beginning of discharge, the \ce{Zn^{2+}} concentration increases as the \ce{Zn} electrode dissolves. When the solution becomes supersaturated with zinc, solid phases nucleate and the \ce{Zn^{2+}} concentration falls as they precipitate. When the cell is charged, the concentration of zinc in the electrolyte drops further as the \ce{Zn} electrode is plated. Although the concentration of \ce{Zn^{2+}} decreases as it is deposited, the amount of free \ce{NH3} rises as it is released from \ce{Zn(NH3)_{\textit{x}}^{2+}} complexes. The combination of a low concentration of \ce{Zn^{2+}} and an excess of \ce{NH3} creates a self-braking effect and slows the dissolution of the zinc precipitates. Only when the acidic front from the air electrode reaches the \ce{Zn} electrode at the end of charging does the solubility increase. \begin{figure}[t] \includegraphics[width=1.0\linewidth]{Figure12.pdf} \caption{(a) Total zinc concentration in the electrolyte and (b) \ce{Zn^{2+}} concentration compared with the saturation limit at the front of the \ce{Zn} electrode over a single discharge-charge cycle. During charging, the simultaneous loss of \ce{Zn^{2+}} (due to deposition) and gain in \ce{NH3} (released from \ce{Zn(NH3)_x} complexes) creates a self-braking effect that slows the re-dissolution of the precipitated zinc products.} \label{fgr:Dissolution} \end{figure} In summary, the ex-situ characterization of the \ce{Zn} electrodes shows that \ce{ZnO} is not the dominant discharge product in the investigated L-ZAB cells. Rather, the zinc precipitate phases are dominated by a mix of zinc hydroxide chlorides. Furthermore, the reversibility of the precipitated products appears to be limited. When the cell is charged, the zinc products are slow to dissolve, and a significant quantity of precipitated material remains on the electrode at the end of charging. These observations are in accord with the theoretical description of the system, and the mechanisms driving these processes are elucidated by the cell models. \section{Conclusions} Near-neutral \ce{ZnCl2-NH4Cl} electrolytes could extend the lifetime of rechargeable ZABs by minimizing the effects of electrolyte carbonation, but these electrolytes bring new challenges that must be addressed in material development and cell design. Model-based analysis makes two important predictions about L-ZAB operation: the pH can become strongly acidic during charging, and the dominant precipitation product is not \ce{ZnO}.
Both of these predictions are experimentally observed. Operando pH measurements obtained during cell cycling confirm that the electrolyte can become strongly acidic. According to our model, this effect is driven by the slow diffusion and low concentration of \ce{NH3} in the electrolyte. Acidic electrolyte environments can accelerate catalyst degradation and material corrosion, thereby limiting the lifetime of the battery. The precipitation of zinc products is problematic for L-ZAB design and operation. Ex-situ XRD, SEM, and EDS measurements confirm the model-based prediction that the dominant solid discharge product is not \ce{ZnO} but a pH-dependent mix of zinc hydroxide chloride phases. Furthermore, although the pH buffer is needed to stabilize the performance of the air electrode, it slows the dissolution of zinc precipitates during charging and limits the reversibility of the battery. As a topic for future research, forced convection of the electrolyte could help address some of these challenges. A flow cell configuration would reduce the mass transport limitations in the buffer solution and limit the precipitation of problematic zinc salts, but the energy density of the system would be significantly reduced. Nonetheless, well-designed LeClanch\'{e} zinc-air flow batteries could potentially find use in stationary energy storage systems. \section*{Conflicts of Interest} The authors declare no conflicts of interest. \section*{Acknowledgement} This work has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No. 646186 (ZAS! project) and from the Basque Country Government (ELKARTEK 2017 program). The support of the bwHPC initiative through the use of the JUSTUS HPC facility at Ulm University is acknowledged. The electrochemical characterization in this work has been done within the framework of the Doctoral Degree Program in Chemistry of the Universitat Aut\`onoma de Barcelona.
\section{Introduction}\label{Sect:intro} Tracers of the large scale matter distribution in the Universe are one of the main probes of cosmology, with current galaxy surveys such as KiDS \citep{Hildebrandt2017} and the Dark Energy Survey (DES) already providing constraints competitive with Planck on some cosmological parameters \citep{DES2017}, and future surveys such as Euclid \citep{Laureijs:2011gra} and the Large Synoptic Survey Telescope \citep[LSST,][]{Abell:2009aa} scheduled to greatly improve our understanding of dark energy and of structure formation in the universe. Late time tracers of the large scale structure (LSS) have, however, undergone significant non-linear evolution, which makes their probability distribution function deviate from the Gaussianity of the initial density field. As such, two-point statistics no longer retain all the information, and have their covariance increased by the presence of a non-vanishing trispectrum. The focus of the present article is on this increased covariance, in the case of galaxy clustering analysed with the angular power spectrum $C_\ell$. Covariances are central to the statistical inference process, being the only element besides signal prediction when using the popular Gaussian likelihood, though see \cite{Sellentin2018} for indications that current data may need a non-Gaussian likelihood. In the past, covariances have been estimated through a variety of techniques. Jackknife or bootstrap methods allow estimates from the data itself; however, it is known that the results from these methods are very noisy, and \cite{Lacasa2017} showed that they give biased estimates of the effect of super-sample covariance, which I will introduce later. Numerical covariance estimation through simulations of the large scale structure remains costly, especially if one wants cosmology-dependent covariance matrices, although newer techniques allowing fast mock creation \citep{Klypin2017} or data compression \citep{Heavens2017} may help cut down that cost. Furthermore, the numerical noise induced in the covariance must be propagated in the likelihood \citep{Sellentin2016,Sellentin2017}, enlarging our uncertainties. Finally, \cite{Lacasa2017} showed that simulations also give biased estimates of super-sample covariance, unless the simulation is orders of magnitude larger than the survey, a difficult task for future surveys covering a large part of the sky up to high redshifts \citep{Schneider2016}. Analytical modelling of the covariances is an approach which has the logical advantage of yielding a self-consistent data analysis with complete analytical understanding of the physics. Conversely, not being able to predict the covariance analytically would cast doubt on the confidence with which analytical predictions of the signal itself can be trusted.\\ Analytical covariances have been developed recently for LSS tracers including some non-Gaussian effects \citep[e.g.][]{Lacasa2016,Krause2017}. They can provide both fast and noiseless covariance matrices, which could possibly be varied with model parameters. The current approaches are based on the halo model \citep{Cooray2002} coupled with perturbation theory, and are state of the art applied to recent galaxy surveys \citep{Hildebrandt2017,vanUitert2017,Krause2017b,DES2017}.
The Gaussian contribution to the covariance is normally simple enough to treat analytically: if $C_\ell$ is the total power spectrum (including shot-noise effects) of the signal considered, the full-sky Gaussian covariance is \ba \Cov_G(C_\ell,C_{\ell'})= \frac{2 \ C_\ell^2}{2\ell+1} \ \delta_{\ell,\ell'} \ea Super-sample covariance (SSC) \citep{Hu2003,Takada2013} is currently thought to be the dominant non-Gaussian contribution to the covariance. The effect comes from the non-linear modulation of local observables by long wavelength density fluctuations. In other words, the survey can be non-representative of the universe by probing a region denser (or less dense) than average, as all observables react to such a background density change. SSC has already had an impact on cosmological constraints from current surveys, with \cite{Hildebrandt2017} finding that failure to include it would lead to a 1-$\sigma$ shift in their constraint on $S_8$. Numerical investigations by the author in the case of photometric galaxy clustering have shown that indeed SSC is the dominant effect beyond the Gaussian covariance for specifications of ongoing surveys such as DES \citep[slide 14]{Lacasa2017-LAL}. However, when implementing specifications of future surveys, I found that other non-Gaussian terms (1-halo and 2-halo 1+3, which will be introduced in the article's main text) have an impact on the covariance which is as important as SSC, sometimes even more important depending on redshift and scale \citep[slide 8]{Lacasa2017-LAL}. As these terms become significant, the question arises of the importance of all the other non-Gaussian terms. It thus becomes timely to carry out an exhaustive derivation of all possible non-Gaussian covariance terms, within the current modelling framework that is the halo model. Here, I undertake the task of carrying out this exhaustive derivation in the case of the angular power spectrum of galaxies $C_\ell^\mr{gal}$. The choice of the harmonic basis is the one underlying current covariance implementations, even when data are in the other popular basis, real space \citep[e.g.][]{Joachimi2008,Krause2017b}; indeed, results can be converted straightforwardly through \citep[e.g.][]{Crocce2011} \ba w(\theta) = \sum_\ell \frac{2\ell+1}{4\pi} \ C_\ell \ P_\ell(\cos\theta) , \ea \ba \mathcal{C}_{\theta,\theta'} = \sum_{\ell,\ell'} \frac{(2\ell+1)(2\ell'+1)}{(4\pi)^2} \ \mathcal{C}_{\ell,\ell'} \ P_\ell(\cos\theta) \ P_{\ell'}(\cos\theta') . \ea The choice of signal is relevant, though most of the theoretical framework developed should adapt straightforwardly to another observable. I focus here on galaxy clustering using the halo model together with standard perturbation theory at tree level. This is the most complex signal in the sense that galaxy discreteness (shot-noise) yields more covariance terms. I have left the application to other observables (clusters, weak-lensing, secondary anisotropies of the CMB) to future works. The methods used in the article are presented in Sect.~\ref{Sect:methods}. This includes in particular a diagrammatic approach to galaxy polyspectra with the halo model, and the projection from 3D quantities to 2D observables or covariances. This should be of interest to theoreticians of large-scale structure tracers. Sections \ref{Sect:1halo} to \ref{Sect:shotnoise} contain the main calculation of the article: all non-Gaussian covariance terms are derived one by one, first in the most general case and then simplified with relevant approximations.
An explanation of the origin of these terms and their ordering will be given in Sect.~\ref{Sect:cov-terms}. A regrouping of terms, a comparison with derivations in the previous literature, and an analytical discussion of the potential importance of these non-Gaussian terms will be performed in Sect.~\ref{Sect:discu}. This should help give a more physical interpretation of the derivation, and intuition on when and why these terms should be considered important. Finally, a self-consistent summary of the results for the busy reader is given in Sect.~\ref{Sect:summary}, using simplifications of relevance to current data analysis. This is the section of reference for numerical implementations of the formulae, for inclusion in analyses of present and future surveys. \section{Methods}\label{Sect:methods} \subsection{Conventions} \subsubsection*{2.1.1. Cosmological notations} I use the following notations for cosmological quantities: $r(z)$ is the comoving distance to redshift z, $G(z)$ is the growth factor and $\dd V = r^2(z) \frac{\dd r}{\dd z} \dd z$ is the comoving volume element per unit solid angle. Unit vectors are denoted with an upper hat, for example, $\hn$ is a direction on the celestial sphere. In the Limber approximation, the peak of the spherical Bessel function $j_\ell(kr)$ is at $k_\ell=(\ell+1/2)/r(z)$ and depends on an implicit redshift.\\ I also make use of quantities in the halo model \citep{Cooray2002}, such as the halo mass function $\frac{\dd n_h}{\dd M}$, the halo spherical profile $u(k|M,z)$ and the halo bias $b_\beta(M,z)$, where $\beta=1,2,3$ for the local bias terms used here, and $\beta=s2$ for the quadratic tidal tensor bias \citep{Chan2012,Baldauf2012}. \subsubsection*{2.1.2. Shortenings} I use the following shortenings in order to keep long equations as readable as possible. Spherical harmonics indices are shortened through $i\equiv(\ell_i,m_i)$, including in the case of indices in sums. The sum of Fourier wavevectors is shortened through $\kk_{i+j}\equiv\kk_i+\kk_j$, implying in particular $k_{i+j}=\left|\kk_i+\kk_j\right|$. When unambiguous, arguments of multivariate functions are shortened through $f(z_{1234})\equiv f(z_1,z_2,z_3,z_4)$. For example for polyspectra, \ba \mathcal{P}^{(n)}(\kk_{1\cdots n},z_{1\cdots n}) \equiv \mathcal{P}^{(n)}(\kk_1,\cdots, \kk_n | z_1,\cdots, z_n) \ea Outside of function arguments, repetition of indices is used to denote multiplication: $X_{ij}\equiv X_i \, X_j$, for example, in integration elements $\dd M_{\alpha\beta}\equiv\dd M_\alpha \, \dd M_\beta$.\\ The number of galaxy n-tuples (pairs, triplets...) is shortened to \ba N_\mr{gal}^{(n)} \equiv N_\mr{gal} \ \left(N_\mr{gal}-1\right) \ \cdots \ \left(N_\mr{gal}-(n-1)\right) \ea \subsubsection*{2.1.3. Definitions} Inspired by the notations of \cite{Takada2013}, I define the following integrals for galaxies: \ba \nonumber I_\mu^\beta(k_1,\cdots,k_\mu|z) = \int \dd M \ & \frac{\dd n_h}{\dd M} \ \lbra N_\mr{gal}^{(\mu)}\rbra \ b_\beta(M,z) \\ & \times u(k_1|M,z) \cdots u(k_\mu|M,z) \ea where $\mu$ is an integer (the galaxy tuple power) and $\beta$ is the bias type. I note that $I_\mu^\beta \to \mathrm{const}$ when $k_1,\cdots,k_\mu \rightarrow 0$, as $u(k)\underset{k\to 0}{\longrightarrow}1$.
$I_\mu^\beta$ becomes scale-dependent only on small scales, of the order of the halo sizes.\\ I also introduce the following integrals, useful later for angular quantities projected in a redshift bin \ba \mathcal{I}_\ell^{\mu,\beta}(k;k_1,\cdots,k_\mu|i_z) = \int_{z\in i_z} \dd V \ j_\ell(kr) \ G(z) \ I_\mu^\beta(k_1,\cdots,k_\mu|z) \ea and their generalisation to multiple Bessel functions \ba \nonumber \mathcal{I}_{n;\ell_1,\cdots,\ell_n}^{\mu,\beta}(k_1,\cdots,k_n;k'_1,\cdots,k'_\mu|i_z) = \int_{z\in i_z} \dd V \ j_{\ell_1}(k_1 r)\cdots j_{\ell_n}(k_n r) \\ \times G(z)^n \ I_\mu^\beta(k'_1,\cdots,k'_\mu|z) \ea In the following, when unambiguous I will leave redshift integration bounds implicit for simplicity of notation. \subsection{Diagrammatic}\label{Sect:diagrammatic} I model the galaxy density field using the halo model \citep{Cooray2002} coupled with tree-level perturbation theory, allowing a first-principle description of all galaxy statistics. In this context, the galaxy number density is written as \citep{Lacasa2014}: \be n_\mr{gal}(\xx) = \sum_i \sum_{j=1}^{N_\mr{gal}(i)} \delta^{(3)}(\xx-\xx_j) \ee where the first sum runs over halos and the second over galaxies inside that halo.\\ In Fourier space, the (absolute) galaxy polyspectrum of order $n$ is defined by\footnote{With this absolute convention, all 3D polyspectra have dimension Mpc$^{-3}$. I note also that all galaxy angular polyspectra have dimension sr$^{-1}$, and angular covariances have dimension sr$^{-2}$.} \ba \nonumber\lbra n_\mr{gal}(\kk_1,z_1)\cdots n_\mr{gal}(\kk_n,z_n)\rbra_c = (2\pi)^3 \ &\delta^{(3)}(\kk_1+\cdots+\kk_n) \\ & \times \mathcal{P}^{(n)}_\mathrm{gal}(\kk_{1\cdots n},z_{1\cdots n}) \ea \cite{Lacasa2014} introduced a diagrammatic method to compute the different terms of the galaxy polyspectrum with the halo model. This method was illustrated in more detail, including a trispectrum example, in Sect. 3.4.4 of \cite{Lacasa2014b}. For the polyspectrum of order $n$, the first step is to draw in diagrams all the possibilities of putting $n$ galaxies in halo(s). Potentially two or more galaxies can lie at the same point (`contracted') for the shot-noise terms. Then for each diagram, the galaxies should be labelled from 1 to $n$, as well as the halos, for example with $\alpha_1$ to $\alpha_p$.\\ Each diagram produces a polyspectrum term which is an integral over the halo masses $\int \dd M_{\alpha_1 \cdots \alpha_p}$ of several factors: \begin{itemize} \item for each halo $\alpha_j$ there is a corresponding: \begin{itemize} \item halo mass function $\orange{\frac{\dd n_h}{\dd M}(M_{\alpha_j})}$ \item average of the number of galaxy tuples in that halo,\\ for example, $\darkgreen{\lbra N_\mr{gal}\rbra}$ for a single galaxy in that halo, $\darkgreen{\lbra N_\mr{gal} (N_\mr{gal} - 1)\rbra}$ for a pair, etc. \item as many halo profiles ${\color{red} u(k|M_{\alpha_j})}$ as there are distinct points, where $k = \left| \sum_{i\in \mr{point}} \kk_i \right|$,\\ for example, $k=k_i$ for a non-contracted galaxy, while $k=k_{i_1+i_2}=\left|\kk_{i_1}+\kk_{i_2}\right|$ for a galaxy contracted twice. \end{itemize} \item the halo polyspectrum of order p, conditioned to the masses of the corresponding haloes: $${\color{blue} \mathcal{P}^{(p)}_\mathrm{halo}\left(\sum_{i\in \alpha_1} \kk_i\, , \cdots , \sum_{i\in \alpha_p} \kk_i \,\Bigg| M_{\alpha_1} , \cdots , M_{\alpha_p} \right) }$$ where the sum $\sum_{i\in \alpha_j} \kk_i$ runs over the indices i of the galaxies inside the halo $\alpha_j$.
\end{itemize} Finally, one should account for all the possible permutations of the galaxy labels 1 to $n$ in the diagram. Additionally, if one is interested in the polyspectrum of the relative density fluctuations $\delta_\mr{gal} \equiv \frac{\delta n_\mr{gal}}{\nbargal}$, instead of the absolute fluctuations, one should add a $1/\nbargal^n$ prefactor to the preceding expression. This can prove useful for 3D observables; however, for 2D projected observables, such as the angular power spectrum studied in this article, it is the absolute fluctuations that naturally enter the equations. \subsection{Projection to 2D observables}\label{Sect:2Dproj} The projected galaxy density in the direction $\hn$ in a redshift bin $i_z$, $n_\mr{gal}(\hn,i_z)$, is the line-of-sight integral: \be n_\mr{gal}(\hn,i_z) = \int \dd V \ n_\mr{gal}(\xx=r\hn,z) \ee with $\dd V=r^2\dd r$ being the comoving volume per unit solid angle.\\ This projection neglects redshift-space distortions and other general relativistic effects \citep[for example,][]{Bonvin2011}, whose impacts are left for future studies. In full sky, after spherical harmonic decomposition the harmonic coefficients are \citep[for example,][]{Lacasa2016} \ba a_{\ell m}^\mr{gal}(i_z) &= \int \dd^2\hn \; n_\mr{gal}(\hn,i_z)\; Y^*_{\ell m}(\hn)\\ &= \int \dd V \; \dd^2\hn \; \frac{\dd^3\kk}{(2\pi)^3} \; n_\mr{gal}(\kk,z) \; \mre^{\ii \kk \cdot r\hn} \; Y^*_{\ell m}(\hn)\\ &= \ii^\ell \int \dd V \; \frac{\dd^3\kk}{2\pi^2} \; j_\ell(k r) \; n_\mr{gal}(\kk,z) \; Y^*_{\ell m}(\hk) \ea The galaxy power spectrum estimator is then \ba \hat{C}_\ell^\mr{gal}(i_z,j_z) &= \frac{1}{2\ell+1} \sum_m \ a_{\ell m}^\mr{gal}(i_z) \ \left(a_{\ell m}^\mr{gal}(j_z)\right)^* ,\\ \nonumber &= \int \dd V_{12} \frac{\dd^3\kk_{12}}{(2\pi^2)^2} \, j_\ell(k_1 r_1) \, j_\ell(k_2 r_2) \, n_\mr{gal}(\kk_1,z_1) \, n^*_\mr{gal}(\kk_2,z_2) \\ & \qquad \times \frac{1}{2\ell+1}\sum_m Y^*_{\ell m}(\hk_1) \, Y_{\ell m}(\hk_2) ,\\ \label{Eq:Clgal-estimator-Lagrange} \nonumber &= 4\pi \, (-1)^\ell \int \dd V_{12} \frac{\dd^3\kk_{12}}{(2\pi)^6} \; j_\ell(k_1 r_1) \, j_\ell(k_2 r_2) \ P_\ell(\hk_1 \cdot \hk_2)\\ & \qquad \times n_\mr{gal}(\kk_1,z_1) \; n_\mr{gal}(\kk_2,z_2) \ea with the $(-1)^\ell$ coming from the change $\kk_2\rightarrow -\kk_2$ and the parity of the Legendre polynomial $P_\ell$.\\ Its expectation value is \ba C_\ell^\mr{gal}(i_z,j_z) &= \frac{2}{\pi} \int \dd V_{12} \ k^2 \, \dd k \ j_\ell(k r_1) \, j_\ell(k r_2) \ P_\mr{gal}(k|z_{12}) \ea The non-Gaussian part of the galaxy spectrum covariance is thus: \ba \nonumber \mathcal{C}_{\ell,\ell'} = (4\pi)^2 \, (-1)^{\ell+\ell'} &\int \dd V_{1234} \frac{\dd^3\kk_{1234}}{(2\pi)^{12}} \; j_\ell(k_1 r_1) \, j_\ell(k_2 r_2) \, j_{\ell'}(k_3 r_3) \\ \nonumber & j_{\ell'}(k_4 r_4) \ P_\ell(\hk_1 \cdot \hk_2) \ P_{\ell'}(\hk_3 \cdot \hk_4) \\ & \times (2\pi)^3 \, \delta^{(3)}\left(\kk_1+\cdots+\kk_4\right) \, T_\mr{gal}(\kk_{1234},z_{1234}) \label{Eq:CovClgal-Lagrange} \ea where I used the abbreviation $$\mathcal{C}_{\ell,\ell'}\equiv \Cov\left(\hat{C}_\ell^\mr{gal}(i_z,j_z),\hat{C}_{\ell'}^\mr{gal}(k_z,l_z)\right)$$ leaving redshift bins implicit hereafter. As a power spectrum estimator, most of the contribution to $\hat{C}_\ell$ (Eq. \ref{Eq:Clgal-estimator-Lagrange}) will come from $\kk_1\!\approx\!-\kk_2$, that is, $k_{1+2} \ll k_1 \!\approx\! k_2$. Thus, similarly to the case of the 3D power spectrum estimator \citep{Takada2013}, the covariance Eq.
\ref{Eq:CovClgal-Lagrange} probes the trispectrum in the squeezed diagonal configuration represented in Fig.~\ref{Fig:squeezed-trispectrum}: $k_{1+2}=k_{3+4} \ll k_1 \!\approx\! k_2,\, k_3\!\approx\! k_4$. Contrary to \cite{Takada2013}, however, the present derivation does not rely on any approximation or Taylor expansion in terms of $k_{1+2}$. \begin{figure}[htbp] \begin{center} \begin{tikzpicture} \draw [->, very thick,teal] (0,0) -- node[below] {$\vec{k}_1$} (-4,2); \draw [->, very thick,teal] (-4,2.1) -- node[above] {$\vec{k}_2$} (0,1); \draw [->, very thick,brown] (0,1) -- node[above] {$\vec{k}_3$} (4,3); \draw [->, very thick,brown] (4,2.9) -- node[below] {$\vec{k}_4$} (0,0); \draw [->, very thick, dashed,red] (0,0) -- node[left] {$\vec{k}_{1+2}$} (0,1); \end{tikzpicture} \caption{3D trispectrum in the squeezed diagonal limit.} \label{Fig:squeezed-trispectrum} \end{center} \end{figure} The 3D trispectrum generally depends on six degrees of freedom (d.o.f.) that fix the shape of the quadrilateral $\kk_1+\kk_2+\kk_3+\kk_4=0$. There is, however, no unique natural choice for these six d.o.f.\footnote{Contrary to the power spectrum, which has one d.o.f.: $k$, and the bispectrum, which has three d.o.f. that it is natural to choose as the sides $k_1,k_2,k_3$ of the triangle, although some studies prefer to take one or two length(s) and one or two angle(s), or use length ratios.}: the choice will depend on the trispectrum term considered. For all the terms considered here (see Appendix \ref{App:3Dhalopolysp}), four d.o.f. will be the quadrilateral sides $k_1,k_2,k_3,k_4$; the trispectrum may then also depend on the length of one of the diagonals $k_{1+2},k_{1+3},k_{1+4}$ and/or on angles, either between base wavevectors $\hk_i\cdot\hk_j$ or between a diagonal and a base wavevector $\hk_{i+j}\cdot\hk_{l}$. Deriving the projection Eq. \ref{Eq:CovClgal-Lagrange} analytically in all the necessary trispectrum cases proves a complex task, and is the subject of Appendices \ref{App:2Dproj-trisp-angindep} \& \ref{App:2Dproj-trisp-angdep}. I list below the three simplest cases, where the trispectrum does not depend on any angle, solely on lengths of base wavevectors or diagonals. Firstly, the easiest case is that of a diagonal-independent trispectrum, that is, $T_\mr{gal}(\kk_{1234},z_{1234})=T_\mr{gal}(k_{1234},z_{1234})$. This case was treated by \cite{Lacasa2014,Lacasa2014b} for a general diagonal-independent polyspectrum. In the present case, one finds (see Appendix \ref{App:2Dproj-trisp-diagindep} for a derivation): \ba \nonumber \mathcal{C}_{\ell,\ell'} &= \frac{\left(\frac{2}{\pi}\right)^4}{4\pi} \int x^2 \, \dd x \; \dd V_{1234} \; k^2_{1234} \, \dd k_{1234} \ j_\ell(k_1 r_1) \, j_\ell(k_1 x) \\ \nonumber & \qquad j_\ell(k_2 r_2) \, j_\ell(k_2 x) \ j_{\ell'}(k_3 r_3) \, j_{\ell'}(k_3 x) \ j_{\ell'}(k_4 r_4) \, j_{\ell'}(k_4 x) \\ & \qquad \times T_\mr{gal}(k_{1234},z_{1234}) \label{Eq:Cll'-diagindep-nolimber} \ea Using Limber's approximation (see Appendix \ref{App:2Dproj-trisp-diagindep}), this simplifies to \ba\label{Eq:Cll'-diagindep-limber} \mathcal{C}_{\ell,\ell'} = \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ T_\mr{gal}(k_{\ell_{1234}},z). \ea This case will be relevant for a large part of the covariance terms later on. The second case of interest is that of a trispectrum depending on the length of the squeezed diagonal, $K=k_{1+2}$, additionally to the lengths of the four sides $k_{1234}$.
This case is treated in Appendix \ref{App:2Dproj-trisp-sqzdiag}, giving: \ba\label{Eq:Cll'-sqzdiag-nolimber} \nonumber \mathcal{C}_{\ell,\ell'} &= \frac{\left(\frac{2}{\pi}\right)^5}{4\pi} \int \dd V_{1234} \ k^2_{1234} \,\dd k_{1234} \ K^2\,\dd K \ \dd V_{ab} \\ \nonumber & \qquad j_\ell(k_1 r_1) \, j_\ell(k_1 x_a) \ j_\ell(k_2 r_2) \, j_\ell(k_2 x_a) \ j_0(K x_a) \\ \nonumber & \qquad j_0(K x_b) \ j_{\ell'}(k_3 r_3) \, j_{\ell'}(k_3 x_b) \ j_{\ell'}(k_4 r_4) \, j_{\ell'}(k_4 x_b) \\ & \qquad \times T_\mr{gal}(k_{1234},K,z_{1234}). \ea Using Limber's approximation on $k_{1234}$ (but not on $K$, since this would be a poor approximation for $j_0$, which has a large support and peaks at $K=0$), this simplifies to \ba\label{Eq:Cll'-sqzdiag-limber} \nonumber \mathcal{C}_{\ell,\ell'} &= \frac{\frac{2}{\pi} \ \delta_{i_z,j_z} \, \delta_{k_z,l_z} }{4\pi} \int K^2\dd K \; \dd V_{ab} \; j_0(K x_a) \, j_0(K x_b) \\ & \qquad \times T_\mr{gal}(k_{\ell},k_{\ell},k_{\ell'},k_{\ell'},K,z_{aabb}). \ea This case will be relevant for super-sample covariance terms later on (Sect.~\ref{Sect:discu-SSC}). The third case of interest is that of a trispectrum depending on the length of one of the other diagonals, $K=k_{1+3}$ (with $K=k_{1+4}$ giving a symmetric result), additionally to the lengths of the four sides $k_{1234}$. This case is treated in Appendix \ref{App:2Dproj-trisp-altdiag}, giving: \ba\label{Eq:Cll'-altdiag-nolimber} \nonumber \mathcal{C}_{\ell,\ell'} &= \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \left(\frac{2}{\pi}\right)^5 \int \dd V_{1234} \ k^2_{1234} \, \dd k_{1234} \\ \nonumber & \qquad K^2\dd K \; \dd V_{ab} \ j_\ell(k_1 r_1) \, j_\ell(k_1 x_a) \ j_\ell(k_2 r_2) \, j_\ell(k_2 x_b) \\ \nonumber & \qquad j_{\ell_a}(K x_a) \, j_{\ell_a}(K x_b) \ j_{\ell'}(k_3 r_3) \, j_{\ell'}(k_3 x_a) \ j_{\ell'}(k_4 r_4) \, j_{\ell'}(k_4 x_b) \\ & \qquad \times T_\mr{gal}(k_{1234},K,z_{1234}). \ea Using Limber's approximation on $k_{1234}$ but not on the diagonal, the covariance simplifies to \ba \nonumber \mathcal{C}_{\ell,\ell'} = \delta_{i_z,k_z} \ \delta_{j_z,l_z} \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \frac{2}{\pi} \int K^2\dd K \; \dd V_{ab} \\ \label{Eq:Cll'-altdiag-partial-limber} j_{\ell_a}(K x_a) \, j_{\ell_a}(K x_b) \times T_\mr{gal}(k_{\ell_{1234}},K,z_{abab}). \ea This case will be relevant for braiding terms later on (Sect.~\ref{Sect:discu-braiding}). There are furthermore eight cases of trispectra depending on angles between wavevectors: four cases where the trispectrum depends on one angle, tackled in Appendix~\ref{App:2Dproj-trisp-angdep-1angle}, and four cases where the trispectrum depends on two angles, tackled in Appendix~\ref{App:2Dproj-trisp-angdep-2angles}. Due to the complexity of these expressions, they are left to their respective appendices for the clarity of the main text. These formulae involve geometric coefficients which are shown in Appendix~\ref{App:3nJ-symbols} to be related to Wigner 3n-J symbols: 6J, 9J and even the case of a reduced 12J symbol of the second kind. Reduction checks are performed in Appendix~\ref{App:reductions} to ensure the robustness of the results. These 2D projection formulae, although not the main aim of this article, can be viewed as standalone results that should be of interest for other analyses, for example those interested in the covariance of other signals or those using a different modelling framework, such as a different flavour of perturbation theory.
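To illustrate how the simplest of these projections can be implemented numerically, the sketch below evaluates the Limber-approximated, diagonal-independent case of Eq.~\ref{Eq:Cll'-diagindep-limber} within a single redshift bin, i.e. $\mathcal{C}_{\ell,\ell'} = \frac{1}{4\pi}\int \dd V \ T_\mr{gal}(k_\ell,k_\ell,k_{\ell'},k_{\ell'},z)$ with $k_\ell=(\ell+1/2)/r(z)$. The comoving distance, volume element and trispectrum used here are crude toy functions standing in for the real cosmological ingredients; they are assumptions for illustration only.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def limber_cov(ell1, ell2, T_gal, r_of_z, dV_dz, z_min, z_max):
    # C_{l,l'} = 1/(4 pi) * int dV T_gal(k_l1, k_l1, k_l2, k_l2, z),
    # with the Limber wavenumbers k_l = (l + 1/2) / r(z)
    def integrand(z):
        k1 = (ell1 + 0.5) / r_of_z(z)
        k2 = (ell2 + 0.5) / r_of_z(z)
        return dV_dz(z) * T_gal(k1, k1, k2, k2, z)
    val, _ = quad(integrand, z_min, z_max)
    return val / (4.0 * np.pi)

# Toy ingredients (illustrative stand-ins, not a real cosmology):
r_of_z = lambda z: 3000.0 * z              # crude low-z comoving distance
dV_dz  = lambda z: r_of_z(z)**2 * 3000.0   # dV = r^2 (dr/dz) dz
T_toy  = lambda k1, k2, k3, k4, z: 1e4 * np.exp(-10.0 * (k1 + k2 + k3 + k4))

print(limber_cov(100, 200, T_toy, r_of_z, dV_dz, 0.1, 0.2))
\end{verbatim}

Up to the choice of trispectrum model, the same structure, a one-dimensional redshift integral evaluated at the Limber wavenumbers, underlies all the diagonal-independent covariance terms derived in the following sections.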
\subsection{Example of the power spectrum}\label{Sect:power-spectrum}
In this article, I am interested in the covariance of the galaxy power spectrum. Before coming to the covariance, the diagrammatic formalism and the 2D projection explained above can already be illustrated at the power spectrum level. This will uncover technical details of later interest. Figure~\ref{Fig:diagrams-spectrum} shows the power spectrum diagrams, recovering the well known fact that the spectrum decomposes into a two-halo term, a one-halo term and shot-noise. One immediate advantage, already underlined in \cite{Lacasa2014,Lacasa2014b}, is that shot-noise is described consistently and does not need a separate formalism like counts in cells \citep[e.g.][]{Peebles1980}.
\begin{figure}[!th]
\begin{center}
\includegraphics[width=\linewidth]{diagrams-spectrum-allin1.jpg}
\caption{Diagrams for the galaxy power spectrum. From left to right: two-halo (2h), one-halo (1h) and shot-noise (shot).}
\label{Fig:diagrams-spectrum}
\end{center}
\end{figure}
Applying the diagrammatic rules yields for example the following expression for the two-halo term of the power spectrum between two possibly different redshifts:
\ba
\nonumber P^\mr{2h}_\mr{gal}(k|z_{12}) =\int &\dd M_{\alpha\beta} \ \orange{\frac{\dd n_h}{\dd M}(M_\alpha,z_1) \, \frac{\dd n_h}{\dd M}(M_\beta,z_2)} \\
\nonumber & \darkgreen{\lbra N_\mr{gal}\rbra(M_\alpha,z_1) \, \lbra N_\mr{gal}\rbra(M_\beta,z_2)} \\
& {\color{red} u(k|M_\alpha,z_1) \, u(k|M_\beta,z_2)} \ {\color{blue} P_\mr{hh}(k|M_{\alpha\beta},z_{12})}
\ea
In the following, I will shorten the mass and redshift arguments to their indices:
$$\left.\frac{\dd n_h}{\dd M}\right|_{\alpha,1}\equiv \frac{\dd n_h}{\dd M}(M_\alpha,z_1)$$
$$\lbra N_\mr{gal}\rbra_{\alpha,1}\equiv\lbra N_\mr{gal}\rbra(M_\alpha,z_1)$$
$$u(k|\alpha,1)\equiv u(k|M_\alpha,z_1)$$
At tree level the halo power spectrum takes the form
\be
P_\mr{hh}(k|M_{\alpha\beta},z_{12}) = b_1(M_\alpha,z_1) \; b_1(M_\beta,z_2) \ P_\mr{lin}(k|z_{12})
\ee
so that one recovers the familiar equation
\ba\label{Eq:P2h-treelevel}
P^\mr{2h}_\mr{gal}(k|z_{12}) &= \nbargal(z_1) \, \nbargal(z_2) \ b_1^\mr{gal}(k,z_1) \, b_1^\mr{gal}(k,z_2) \ P_\mr{lin}(k|z_{12})\\
&= I_1^1(k|z_1) \ I_1^1(k|z_2) \ P_\mr{lin}(k|z_{12})
\ea
with the (scale-dependent) galaxy first-order bias
\be
b_1^\mr{gal}(k,z) = \left. \int \dd M \; \frac{\dd n_h}{\dd M} \, \lbra N_\mr{gal}\rbra \, u(k|M,z) \, b_1(M,z) \ \middle/ \ \nbargal(z) \right.
\ee
I note however that things become more involved at 1-loop order, where one would get additional contributions from higher-order perturbation theory and halo biases, with a form more complex than Eq.~\ref{Eq:P2h-treelevel}.
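Since every halo-model ingredient above enters through a mass quadrature of this type, the galaxy bias is straightforward to evaluate numerically. A minimal sketch, in which the mass function \texttt{dndM}, the mean occupation \texttt{N\_mean}, the profile \texttt{u\_profile} and the halo bias \texttt{b1\_halo} are assumed, user-supplied callables (hypothetical names, not prescriptions of this article):
\begin{verbatim}
import numpy as np

def b1_gal(k, z, M, dndM, N_mean, u_profile, b1_halo):
    # b1_gal(k,z) = int dM dn/dM <Ngal> u(k|M,z) b1(M,z) / nbar_gal(z),
    # evaluated on a mass grid M by trapezoidal quadrature.
    w = dndM(M, z) * N_mean(M, z)      # dn/dM <Ngal>
    nbar = np.trapz(w, M)              # nbar_gal(z)
    return np.trapz(w * u_profile(k, M, z) * b1_halo(M, z), M) / nbar
\end{verbatim}
The $I_\mu^\nu$ integrals used throughout this article have the same structure, with $\lbra N_\mr{gal}^{(\mu)}\rbra$, $\mu$ profile factors and the $\nu$-th order halo bias in the integrand.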
Since I will be working only at tree level, in the following I note $P(k,z)$ instead of $P_\mr{lin}(k,z)$ for simplicity.\\
The 2-halo part of the angular power spectrum is then given by
\ba
\Cl^\mr{2h} = \frac{2}{\pi} \int k^2 \dd k \ P(k,z=0) \ \mathcal{I}_\ell^{1,1}(k;k|i_z) \ \mathcal{I}_\ell^{1,1}(k;k|j_z)
\ea
In the following I note $P(k)\equiv P(k,z=0)$ for brevity.\\
Limber's approximation simplifies $\Cl^\mr{2h}$ to
\ba
\Cl^\mr{2h} = \int \dd V \ \left(I_1^1(k_\ell|z)\right)^2 \ P(k_\ell|z)
\ea
For the shot-noise power spectrum term, the diagrammatic rules give
\ba
\nonumber P^\mr{shot}_\mr{gal}(k|z_{12}) &=\int \dd M \ \orange{\frac{\dd n_h}{\dd M}} \ \darkgreen{\lbra N_\mr{gal}\rbra(M,z)} \ {\color{red} u(0|M,z)} \ {\color{blue} \times 1}\\
&= \nbargal(z) \label{Eq:Pshot}
\ea
At this point I seem to face a slight incoherence: which redshift am I talking about, $z_1$ or $z_2$? This would be a real issue if one were computing the power spectra between slices of the universe at the same location but different times, in which case our correlation function could hit the same galaxy at two different redshifts. However, real observables are located on the past light cone, so the two redshifts have to coincide. Whatever value is given to $P^\mr{shot}_\mr{gal}(k|z_{12})$ when $z_1\neq z_2$ should get canceled when the 2D projection of Sect. \ref{Sect:2Dproj} is carried out. So I can take
\ba\label{Eq:Pshot-possib}
P^\mr{shot}_\mr{gal}(k|z_{12}) &= \nbargal(z_1) \quad \mr{or} \quad \nbargal(z_2) \quad \mr{or} \quad \delta_{z_1,z_2} \ \nbargal(z_1)
\ea
and it should give the same angular power spectrum.\\
Indeed one can check for instance with the first possibility:
\ba
\nonumber C_\ell^\mr{shot} &= \frac{2}{\pi} \int \dd V_{12} \ k^2 \, \dd k \ j_\ell(k r_1) \, j_\ell(k r_2) \ P^\mr{shot}_\mr{gal}(k|z_{12}) \\
\nonumber &= \frac{2}{\pi} \int \dd V_{12} \ \nbargal(z_1) \underbrace{\int k^2 \, \dd k \ j_\ell(k r_1) \, j_\ell(k r_2)}_{=\frac{\pi}{2 r_1^2} \delta(r_1-r_2)} \\
&= \delta_{i_z,j_z} \int \dd V \ \nbargal(z) = N_\mr{gal}(i_z) \ \delta_{i_z,j_z} \label{Eq:Clshot}
\ea
and one would get the same results with the two other possibilities given in Eq.~\ref{Eq:Pshot-possib}. I note the appearance of a Kronecker delta between redshift bins, assuming they do not overlap (see Appendix \ref{App:shot-overlapping} for the case of overlapping bins).\\
In the following, I adopt notation from the third possibility (i.e. $P^\mr{shot}_\mr{gal}=\delta_{z_1,z_2} \ \nbargal(z_1)$, with $\delta_{z_1,z_2}$ being a Kronecker symbol) as it makes explicit that the redshifts have to coincide. I also note that the Limber approximation is exact for this power spectrum term, as $P^\mr{shot}_\mr{gal}$ is constant with $k$.\\
Finally, the issue of Poissonian or non-Poissonian shot-noise is discussed in Appendix \ref{App:shotvsPoisson}.
For the one-halo power spectrum term, the diagrammatic rules give
\ba
\nonumber P^\mr{1h}_\mr{gal}(k|z_{12}) &=\int \dd M \ \orange{\frac{\dd n_h}{\dd M}} \ \darkgreen{\lbra N_\mr{gal}^{(2)}\rbra(M,z)} \ {\color{red} u(k|M,z)^2} \ {\color{blue} \times 1}\\
&= I_2^0(k,k|z)
\ea
Again I am faced with an apparent redshift incoherence. But since the two galaxies hit by the correlation function are in the same halo, and since observations are located on the past light cone, the two redshifts must be close, limited by $\delta r < 2 R(M,z)$ where $R(M,z)$ is the virial radius of the halo.
In this limited redshift interval there will be no appreciable evolution of the mass function, halo profiles, etc., so all redshifts can be taken to be equal, yielding
\ba
P^\mr{1h}_\mr{gal}(k|z_{12}) &= \delta_{z_1,z_2} \ I_2^0(k,k|z_1)
\ea
One can note that Limber's approximation is particularly well adapted to this power spectrum term. Indeed, at low $\ell$/low $k$, where Limber's approximation often fails, $P^\mr{1h}_\mr{gal}$ goes to a constant so that Limber becomes exact, and $P^\mr{1h}_\mr{gal}$ starts to have a scale dependence only on halo-size scales, i.e. small scales where Limber's approximation works well. Thus one gets the angular power spectrum
\ba
C_\ell^\mr{1h} &= \delta_{i_z,j_z} \int_{z\in i_z} \dd V \ I_2^0(k_\ell,k_\ell|z)
\ea
Again I note the presence of a Kronecker over redshift bins, and that other forms of $P^\mr{1h}_\mr{gal}$ coinciding for $z_1=z_2$ would have given the same answer for the observable.
In general for higher order polyspectra, Limber's approximation will be exact for wavevectors on which the polyspectrum does not depend, well justified for wavevectors for which the dependence is only through halo profiles $u(k)$, justified only at high $\ell$ when there is a power spectrum dependence $P(k)$, and unjustified when there is a dependence on the angles between wavevectors (e.g. through perturbation theory kernels as will be seen later).\\
In the following, I thus apply Limber's approximation on wavevectors for which there is no dependence or only $u(k)$ dependence, and will provide both the no-Limber and Limber equations in the other cases.
\subsection{Power spectrum covariance terms}\label{Sect:cov-terms}
Section \ref{Sect:2Dproj} gave the projection equations from 3D to 2D. I now need the 3D trispectrum equations. Using the diagrammatic approach of Sect. \ref{Sect:diagrammatic}, the involved diagrams are shown in Fig.~\ref{Fig:diagrams-trispectrum}.
\begin{figure}[!th]
\begin{center}
\includegraphics[width=\linewidth]{diagrams-trispectrum-allin1.jpg}
\caption{Diagrams for the galaxy trispectrum. From left to right, \textit{top row}: four-halo (4h), three-halo (3h), three-halo shot-noise (3h-shot3g), two-halo 1+3 (2h1+3). \textit{Middle row}: two-halo 2+2 (2h2+2), two-halo three-galaxy shot-noise a (2ha-shot3g), two-halo three-galaxy shot-noise b (2hb-shot3g), two-halo two-galaxy shot-noise a (2ha-shot2g), two-halo two-galaxy shot-noise b (2hb-shot2g). \textit{Bottom row}: one-halo (1h), one-halo three-galaxy shot-noise (1h-shot3g), one-halo two-galaxy shot-noise a (1ha-shot2g), one-halo two-galaxy shot-noise b (1hb-shot2g), one-galaxy shot-noise (shot1g).}
\label{Fig:diagrams-trispectrum}
\end{center}
\end{figure}
This justifies the organisation of the next few sections of this article, as I derive the covariance terms in order of increasing complexity. I will start with the clustering terms: one-halo in Sect.~\ref{Sect:1halo}, two-halo (both 2+2 and 1+3) in Sect.~\ref{Sect:2halo}, three-halo in Sect.~\ref{Sect:3halo} and finally four-halo in Sect.~\ref{Sect:4halo}. I then move to shot-noise (all terms) in Sect.~\ref{Sect:shotnoise}.
\section{One-halo term}\label{Sect:1halo}
The one-halo term is the tenth diagram of Fig.~\ref{Fig:diagrams-trispectrum} (bottom row, first from the left), while the other diagrams with a single halo (remainder of the bottom row in Fig.~\ref{Fig:diagrams-trispectrum}) are shot-noise terms which will be treated in Sect.~\ref{Sect:shotnoise}.
Applying the diagrammatic rules from Sect.~\ref{Sect:diagrammatic} and the notes in Sect.~\ref{Sect:power-spectrum} about coincident redshifts, the corresponding trispectrum part is:
\ba
\nonumber T^\mr{1h}_\mr{gal}(\kk_{1234},z_{1234}) &= \delta_{z_1,z_2,z_3,z_4} \int \dd M \ \orange{\frac{\dd n_h}{\dd M}} \ \darkgreen{\lbra N_\mr{gal}^{(4)}\rbra} \ {\color{red} u(k_1|M,z) }\\
& \qquad {\color{red} u(k_2|M,z) \, u(k_3|M,z) \, u(k_4|M,z)} \ {\color{blue} \times 1}\\
& = \delta_{z_1,z_2,z_3,z_4} \ I_4^0(k_{1234}|z)
\ea
where $\delta_{z_1,z_2,z_3,z_4}=\delta_{z_1,z_2}\,\delta_{z_2,z_3}\,\delta_{z_3,z_4}$, i.e. it is equal to 1 when all redshifts are equal, and 0 otherwise.\\
This trispectrum term is diagonal-independent, according to the nomenclature of Sect. \ref{Sect:2Dproj}. When projecting onto the angular covariance, as argued in Sect.~\ref{Sect:power-spectrum}, Limber's approximation is justified on all wavevectors, since no $P(k)$ factors are present. One thus obtains:
\ba\label{Eq:Cll'-1h}
\mathcal{C}_{\ell,\ell'}^\mr{1h} &= \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd M \, \dd V \ \frac{\dd n_h}{\dd M} \ \lbra N_\mr{gal}^{(4)}\rbra \ u(k_{\ell}|M,z)^2 \, u(k_{\ell'}|M,z)^2 \\
&= \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ I_4^0(k_{\ell},k_{\ell},k_{\ell'},k_{\ell'}|z).
\ea
\section{Two-halo terms}\label{Sect:2halo}
\subsection{Two-halo 1+3 term}\label{Sect:2halo1+3}
This term is the fourth diagram of Fig.~\ref{Fig:diagrams-trispectrum} (upper right corner). Applying the diagrammatic rules from Sect.~\ref{Sect:diagrammatic}, the corresponding trispectrum part is:
\ba
\nonumber T^\mr{2h1+3}_\mr{gal}(\kk_{1234},z_{1234}) &= \delta_{z_2,z_3,z_4} \int \dd M_{\alpha\beta} \ \orange{\left.\frac{\dd n_h}{\dd M}\right|_{\alpha,1} \left.\frac{\dd n_h}{\dd M}\right|_{\beta,2}} \darkgreen{\lbra N_\mr{gal}\rbra_{\alpha,1}} \\
\nonumber & \qquad \darkgreen{\lbra N_\mr{gal}^{(3)}\rbra_{\beta,2}} {\color{red} u(k_1|\alpha,1) \, u(k_2|\beta,2) \, u(k_3|\beta,2)} \\
& \qquad \times {\color{red} u(k_4|\beta,2)} {\color{blue} \ P_\mr{hh}(k_1|M_{\alpha\beta},z_{12})} + 3 \ \mr{perm.} \\
\nonumber & = \delta_{z_2,z_3,z_4} \ I_1^1(k_1|z_1) \ I_3^1(k_{234}|z_2) \ P(k_1|z_{12})\\
& \qquad + 3 \ \mr{perm.}
\ea
This trispectrum term is diagonal-independent following the nomenclature of Sect. \ref{Sect:2Dproj}. For the permutation presented above, Limber's approximation is justified on $k_2,k_3,k_4$, but may not be justified on $k_1$ for low $\ell$.
One finds the covariance term
\ba\label{Eq:Cll'-2h1+3-nolimber}
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{2h1+3} = \frac{\frac{2}{\pi} \ \delta_{j_z,k_z,l_z}}{4\pi} \int k_1^2 \ \dd k_1 \ P(k_1,z=0) \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \\
\times \mathcal{I}_\ell^{3,1}(k_1;k_\ell,k_{\ell'},k_{\ell'}|j_z) + 3 \ \mr{perm.}
\ea
Using Limber's approximation also on $k_1$, one finds:
\ba\label{Eq:Cll'-2h1+3-limber}
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{2h1+3} = \frac{\delta_{i_z,k_z,l_z}+\delta_{j_z,k_z,l_z}}{4\pi} & \int \dd M_{\alpha\beta} \, \dd V \ \frac {\dd n_h}{\dd M}(M_\alpha) \ \frac{\dd n_h}{\dd M}(M_\beta) \ \lbra N_\mr{gal}\rbra_{\alpha} \\
\nonumber & \quad \lbra N_\mr{gal}^{(3)}\rbra_{\beta} \ u(k_{\ell}|M_\alpha) \, u(k_{\ell}|M_\beta) \, u(k_{\ell'}|M_\beta)^2 \\
\nonumber & \quad \times P_\mr{hh}(k_{\ell}|M_{\alpha\beta},z) \\
& + (\ell \leftrightarrow \ell')\\
\nonumber = \frac{\delta_{i_z,k_z,l_z}+\delta_{j_z,k_z,l_z}}{4\pi} & \int \dd V \ I_1^1(k_\ell|z) \ I_3^1(k_\ell,k_{\ell'},k_{\ell'}|z) \ P(k_\ell|z) \\
& + (\ell \leftrightarrow \ell')
\ea
\subsection{Two-halo 2+2 term}\label{Sect:2halo2+2}
This term is the fifth diagram of Fig.~\ref{Fig:diagrams-trispectrum}: middle row, first from the left. The other diagrams in the middle row with two halos are shot-noise terms which will be treated in Sect.~\ref{Sect:shotnoise}. Applying the diagrammatic rules from Sect.~\ref{Sect:diagrammatic}, the corresponding trispectrum part is:
\ba
\nonumber T^\mr{2h2+2}_\mr{gal}(\kk_{1234},z_{1234}) = \delta_{z_1,z_2} \ \delta_{z_3,z_4} \int \dd M_{\alpha\beta} \ \orange{\left.\frac{\dd n_h}{\dd M}\right|_{\alpha,1} \left.\frac{\dd n_h}{\dd M}\right|_{\beta,3}} \darkgreen{\lbra N_\mr{gal}^{(2)}\rbra_{\alpha,1}} \\
\nonumber \darkgreen{\lbra N_\mr{gal}^{(2)}\rbra_{\beta,3}} \ {\color{red} u(k_1|\alpha,1) \, u(k_2|\alpha,1) \, u(k_3|\beta,3) \, u(k_4|\beta,3)} \\
{\color{blue} \times P_\mr{hh}(k_{1+2}|M_{\alpha\beta},z_{13})} + \mr{2 \ perm.} \\
\label{Eq:T2h2+2-sqzdiag} = \delta_{z_1,z_2} \ \delta_{z_3,z_4} \ I_2^1(k_1,k_2|z_1) \ I_2^1(k_3,k_4|z_3) \ P(k_{1+2}|z_{13})\\
\label{Eq:T2h2+2-altdiag13} + \delta_{z_1,z_3} \ \delta_{z_2,z_4} \ I_2^1(k_1,k_3|z_1) \ I_2^1(k_2,k_4|z_2) \ P(k_{1+3}|z_{12}) \\
\label{Eq:T2h2+2-altdiag14} + \delta_{z_1,z_4} \ \delta_{z_2,z_3} \ I_2^1(k_1,k_4|z_1) \ I_2^1(k_2,k_3|z_2) \ P(k_{1+4}|z_{12})
\ea
The three permutations can be viewed respectively as flat rhyme (aabb), alternate rhyme (abab) and enclosed rhyme (abba).
For the first permutation (flat rhyme, Eq. \ref{Eq:T2h2+2-sqzdiag}), Limber's approximation is justified on $k_1,k_2,k_3,k_4$ but not on the squeezed diagonal $k_{1+2}$, especially since the latter aliases into the monopole. Eq. \ref{Eq:Cll'-sqzdiag-limber} must then be used for the projection. One finds
\ba\label{Eq:Cll'-2h2+2-sqz}
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{2h2+2-sqz} = \frac{\frac{2}{\pi} \ \delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \int K^2 \dd K \ \dd V_{ab} \ j_0(K x_a) \, j_0(K x_b) \\
\times I_2^1(k_\ell,k_\ell|z_a) \ I_2^1(k_{\ell'},k_{\ell'}|z_b) \ P(K|z_{ab})
\ea
where explicitly $z_a\in i_z$ and $z_b\in k_z$. There are two possible ways to compute this equation numerically, depending on whether one first performs the wavevector integral or the redshift integrals.
Respectively
\ba
\mathcal{C}_{\ell,\ell'}^\mr{2h2+2-sqz} =& \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \int \dd V_{ab} \ I_2^1(k_\ell,k_\ell|z_a) \ I_2^1(k_{\ell'},k_{\ell'}|z_b) \ C_0^m(z_a,z_b) \label{Eq:Cov-2h2+2-sqz} \\
\nonumber =& \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \ \frac{2}{\pi} \int K^2 \, \dd K \ P(K) \ \mathcal{I}_{0}^{2,1}(K;k_\ell,k_{\ell}|i_z) \\
& \qquad \qquad \times \mathcal{I}_{0}^{2,1}(K;k_{\ell'},k_{\ell'}|k_z)
\ea
where
\ba
C_\ell^m(z_1,z_2) = \frac{2}{\pi} \int k^2 \ \dd k \ j_\ell(k r_1) \, j_\ell(k r_2) P(k|z_{12})
\ea
is the matter angular power spectrum (that would be measured if one could directly see total matter instead of galaxies), which is involved in several other equations below.
For the two other permutations (alternate and enclosed rhymes, Eqs. \ref{Eq:T2h2+2-altdiag13} \& \ref{Eq:T2h2+2-altdiag14}), Limber's approximation is justified on $k_1,k_2,k_3,k_4$ but may not be justified on the alternate diagonal $k_{1+3}$ (resp. $k_{1+4}$). Eq. \ref{Eq:Cll'-altdiag-partial-limber} must then be used for the projection.\\
Then one finds
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{2h2+2-alt} = \delta_{i_z,k_z} \ \delta_{j_z,l_z} \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \frac{2}{\pi} \int K^2\dd K \; \dd V_{ab} \\
\nonumber j_{\ell_a}(K x_a) \ j_{\ell_a}(K x_b) \ I_2^1(k_{\ell},k_{\ell'}|z_a) \ I_2^1(k_{\ell},k_{\ell'}|z_b) \ P(K|z_{ab})\\
+ \mr{term}(k_{1+4}).
\ea
Again, there are two possible ways to compute this equation numerically, depending on whether one first performs the wavevector integral or the redshift integrals. Respectively
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{2h2+2-alt} =& \ \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z} + \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \\
& \quad \times \int \dd V_{ab} \ I_2^1(k_{\ell},k_{\ell'}|z_a) \ I_2^1(k_{\ell},k_{\ell'}|z_b) \ C_{\ell_a}^m(z_a,z_b) \label{Eq:Cov-2h2+2-alt} \\
\nonumber =& \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z} + \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \\
& \frac{2}{\pi} \int K^2 \, \dd K \ P(K) \ \mathcal{I}_{\ell_a}^{2,1}(K;k_\ell,k_{\ell'}|i_z) \ \mathcal{I}_{\ell_a}^{2,1}(K;k_\ell,k_{\ell'}|j_z)
\ea
where $z_a\in i_z$ and $z_b\in j_z$.\\
\section{Three-halo term}\label{Sect:3halo}
This section is longer than the previous ones due to the increased complexity of the term, and is thus split into smaller subsections for clarity.
\subsection{Trispectrum}\label{Sect:3halo-trispectrum}
This term is the second diagram of Fig.~\ref{Fig:diagrams-trispectrum}. The other diagram containing three different halos (third diagram of Fig.~\ref{Fig:diagrams-trispectrum}) is a shot-noise term which will be treated in Sect.~\ref{Sect:shotnoise}.
Applying the diagrammatic rules from Sect.~\ref{Sect:diagrammatic}, the corresponding trispectrum part is:
\ba
\nonumber T^\mr{3h}_\mr{gal}(\kk_{1234},z_{1234}) = \delta_{z_2,z_3} \int & \dd M_{\alpha\beta\gamma} \ \orange{\left.\frac{\dd n_h}{\dd M}\right|_{\alpha,1} \left.\frac{\dd n_h}{\dd M}\right|_{\beta,2} \left.\frac{\dd n_h}{\dd M}\right|_{\gamma,4}} \\
\nonumber & \darkgreen{\lbra N_\mr{gal}\rbra_{\alpha,1} \lbra N_\mr{gal}^{(2)}\rbra_{\beta,2} \lbra N_\mr{gal}\rbra_{\gamma,4}} \\
\nonumber & {\color{red} u(k_1|\alpha,1) \ u(k_2|\beta,2) \, u(k_3|\beta,2) \ u(k_4|\gamma,4)}\\
\nonumber & {\color{blue} B_\mr{hhh}(k_1,k_{2+3},k_4|M_{\alpha\beta\gamma},z_{124})} \\
& + \mr{5 \ perm.}
\ea
The halo bispectrum splits into three terms (b2, s2 and 2PT, see Appendix \ref{App:3Dhalopolysp}), and thus the 3h galaxy trispectrum too. Using notations from Appendix \ref{App:3Dhalobispec}, one finds:
\ba
\nonumber T^\mr{3h}_\mr{gal}(\kk_{1234},z_{1234}) &= \delta_{z_2,z_3} \sum_{X\in\{\mr{b2,s2,2PT}\}} \\
\nonumber & \Big[2! \ I_1^1(k_1|z_1) \ I_2^1(k_2,k_3|z_2) \ I_1^X(k_4|z_4) \\
\nonumber & \times K_X(\kk_1,\kk_{2+3}) \ P(k_1|z_{14}) \ P(k_{2+3}|z_{24}) \\
\nonumber & +2! \ I_1^1(k_1|z_1) \ I_2^X(k_2,k_3|z_2) \ I_1^1(k_4|z_4) \\
\nonumber & \times K_X(\kk_1,\kk_{4}) \ P(k_1|z_{12}) \ P(k_{4}|z_{42}) \\
\nonumber & +2! \ I_1^X(k_1|z_1) \ I_2^1(k_2,k_3|z_2) \ I_1^1(k_4|z_4) \\
\nonumber & \times K_X(\kk_{2+3},\kk_{4}) \ P(k_{2+3}|z_{21}) \ P(k_{4}|z_{41}) \Big] \\
& \quad + \mr{5 \ perm.}
\ea
This can be rewritten to regroup terms and make the 18 permutations explicit, grouping first the six terms involving angles between base wavevectors ($\hk_\alpha\cdot\hk_\beta$), and then the twelve terms involving angles with a diagonal $\kk_{\alpha+\beta}$:
\ba
\nonumber T^\mr{3h-X}_\mr{gal}(\kk_{1234},z_{1234}) =& \sum_{\{\alpha,\beta\}\in\{1,2,3,4\}}\!\!\!\!\!\! 2 \,\delta_{z_\gamma,z_\delta} \ I_1^1(k_\alpha|z_\alpha) \, I_2^X(k_\gamma,k_\delta|z_\gamma) \, I_1^1(k_\beta|z_\beta) \\
\nonumber & \times K_X(\kk_\alpha,\kk_{\beta}) \, P(k_\alpha|z_{\alpha\gamma}) \, P(k_{\beta}|z_{\beta\gamma}) \\
\nonumber & +\!\!\!\!\!\!\sum_{\{\alpha,\beta\}\in\{1,2,3,4\},\gamma}\!\!\!\!\!\!\!\! 2 \,\delta_{z_\alpha,z_\beta} \ I_1^X(k_\delta|z_\delta) \ I_2^1(k_\alpha,k_\beta|z_\alpha) \ I_1^1(k_\gamma|z_\gamma) \\
& \times K_X(\kk_{\alpha+\beta},\kk_{\gamma}) \ P(k_{\alpha+\beta}|z_{\alpha\delta}) \ P(k_{\gamma}|z_{\gamma\delta}) \\
\nonumber =& T^\mr{3h-Xbase}_\mr{gal} + T^\mr{3h-Xdiag}_\mr{gal}.
\ea
After Legendre decomposition of the angle dependence, the 2PT term yields three subterms ($n=0,1,2$), while the b2 and s2 terms yield one subterm each ($n=0$ and $n=2$ respectively). Accounting for all 18 permutations, each with these five Legendre subterms, I thus have a total of $18\times 5=90$ subterms.
\subsection{Covariance}
Let us first compute the contribution coming from the six terms with an angle between base wavevectors $\kk_\alpha\cdot\kk_{\beta}$.
Using results from Appendix \ref{App:2Dproj-trisp-angdep-1angle-base}, for $X\in\{\mr{b2,s2,2PT}\}$, one finds
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-Xbase} =& \sum_{n=0}^2 (-1)^n \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell_a}{n}^2 \\
\nonumber & \delta_{k_z,l_z} \left(\frac{2}{\pi}\right)^2 \int k^2_{12} \,\dd k_{12} \ 2 \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \\
\nonumber & \times \mathcal{I}_{2;\ell_a,\ell_a}^{2,X}(k_1,k_2;k_{\ell'},k_{\ell'}|k_z) \ K_X^n(k_1,k_2) \ P(k_1)\,P(k_2) \\
\label{Eq:Cov-3h-X-base-12} & + (\ell,i_z,j_z\leftrightarrow\ell',k_z,l_z) \\
\nonumber & +(-1)^{\ell+\ell'} \sum_{n=0}^2 (2n+1) \Bigg[\sum_{\lu,\lt} \ii^{\lu+\ell+\lt+\ell'} \ \frac{(2\ell+1)_{13}}{4\pi} \\
\nonumber & \threeJz{\ell}{\lu}{n}^2 \threeJz{\ell'}{\lt}{n}^2 \ \delta_{j_z,l_z} \left(\frac{2}{\pi}\right)^2 \int k^2_{13}\,\dd k_{13} \\
\nonumber & 2 \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{2;\lu,\lt}^{2,X}(k_1,k_3;k_{\ell},k_{\ell'}|j_z) \\
\nonumber & K_X^n(k_1,k_3) \ P(k_1)\,P(k_3) + ({}_3,k_z,l_z\leftrightarrow {}_4,l_z,k_z)\Bigg] \\
\label{Eq:Cov-3h-X-base-13} & \qquad + ({}_1,i_z,j_z\leftrightarrow {}_2,j_z,i_z)
\ea
where the first term (Eq.~\ref{Eq:Cov-3h-X-base-12}) comes from the pairs (1,2) and (3,4), while the second term (Eq.~\ref{Eq:Cov-3h-X-base-13}) comes from the pairs (1,3), (1,4), (2,3) and (2,4).\\
Limber's approximation can be applied to the first term only if $\ell_a=\ell$, and this is the sole contributing multipole only for $n=0$. Similarly, Limber's approximation can be applied to the second term only for $n=0$. I return to the $n=0$ case later.
I tackle here the contribution coming from the twelve terms with an angle involving a diagonal, $\kk_{\alpha+\beta}\cdot\kk_{\gamma}$.
Using results from Appendix \ref{App:2Dproj-trisp-angdep-1angle-diag}, for $X\in\{\mr{b2,s2,2PT}\}$, one finds
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-Xdiag} =& \sum_{n=0}^2 \Bigg[(-1)^{\ell'} \sum_{\lt} \ii^{\ell'+\lt+n} \frac{2\lt+1}{4\pi} \threeJz{\ell'}{\lt}{n}^2 \\
\nonumber & \delta_{i_z,j_z} \left(\frac{2}{\pi}\right)^2 \int k_3^2\,\dd k_3 \ K^2\,\dd K \ 2 K_X^n(K,k_3) \ \mathcal{I}_{0}^{2,1}(K;k_\ell,k_\ell|i_z) \\
\nonumber & \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{2;\lt,n}^{1,X}(k_3,K;k_{\ell'}|l_z) \ P(K)\,P(k_3) \\
\label{Eq:Cov-3h-X-diag-sqz} & + ({}_3,k_z,l_z\leftrightarrow {}_4,l_z,k_z) \Bigg] + \left(\substack{{}_1,{}_2,i_z,j_z\leftrightarrow {}_3,{}_4,k_z,l_z \\ {}_3,{}_4,k_z,l_z\leftrightarrow {}_1,{}_2,i_z,j_z}\right) \\
\nonumber &+\sum_{n=0}^2 \Bigg[(-1)^{\ell+\ell'+n} \sum_{\ell_{ab2}} \ii^{\ell_a-\ell+\ld-\ell_b} \frac{(2\ell+1)_{ab2}}{4\pi} \\
\nonumber & \sixJ{\ell_a}{\ell_b}{n}{\ell_2}{\ell}{\ell'} \ \delta_{i_z,k_z} \left(\frac{2}{\pi}\right)^2 \int k_2^2\,\dd k_2 \ K^2\,\dd K \ 2 \ K_X^n(K,k_2) \\
\nonumber & \mathcal{I}_{\ell_a}^{2,1}(K;k_{\ell},k_{\ell'}|i_z) \ \mathcal{I}_{\ell}^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{2;\ld,\ell_b}^{1,X}(k_2,K;k_{\ell'}|l_z) \\
& P(K)\,P(k_2) + ({}_2,j_z,l_z,\ell,\ell'\leftrightarrow {}_4,l_z,j_z,\ell',\ell)\Bigg] + 3 \ \mr{perm.} \label{Eq:Cov-3h-X-diag-alt}
\ea
where the first term (\ref{Eq:Cov-3h-X-diag-sqz}) comes from the permutations involving the squeezed diagonal (12-3,12-4,34-1,34-2), while the second term (\ref{Eq:Cov-3h-X-diag-alt}) comes from the permutations involving an alternate diagonal (13-2,13-4,14-2,14-3,23-1,23-4,24-1,24-3)\footnote{the first two permutations (13-2,13-4) are written explicitly, while the other six are in the `+3 perm.' term.}.
\subsection{Simplifications}
In the $n=0$ case, the covariance gets simpler, as it corresponds to an angle-independent trispectrum.
The covariance becomes (the $n=0$ Wigner symbols fix the internal multipoles, $\ell_a=\lu=\ell$ and $\lt=\ell'$):
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-Xbase0} =& \frac{\delta_{k_z,l_z}}{4\pi} \left(\frac{2}{\pi}\right)^2 \int k^2_{12} \,\dd k_{12} \ 2 \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \\
\nonumber & \times \mathcal{I}_{2;\ell,\ell}^{2,X}(k_1,k_2;k_{\ell'},k_{\ell'}|k_z) \ K_X^0 \ P(k_1)\,P(k_2) \\
& + (\ell,i_z,j_z\leftrightarrow\ell',k_z,l_z) \\
\nonumber & + \Bigg[\frac{\delta_{j_z,l_z}}{4\pi} \left(\frac{2}{\pi}\right)^2 \int k^2_{13}\,\dd k_{13} \ 2 \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \\
\nonumber & \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{2;\ell,\ell'}^{2,X}(k_1,k_3;k_{\ell},k_{\ell'}|j_z) \ K_X^0 \ P(k_1)\,P(k_3) \\
& + ({}_3,k_z,l_z\leftrightarrow {}_4,l_z,k_z)\Bigg] + ({}_1,i_z,j_z\leftrightarrow {}_2,j_z,i_z)
\ea
and
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-Xdiag0} =& \frac{\delta_{i_z,j_z}}{4\pi} \Bigg[\left(\frac{2}{\pi}\right)^2 \int k_3^2\,\dd k_3 \ K^2\,\dd K \ 2 \ K_X^0 \ \mathcal{I}_{0}^{2,1}(K;k_\ell,k_\ell|i_z) \\
\nonumber & \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{2;\ell',0}^{1,X}(k_3,K;k_{\ell'}|l_z) \ P(K)\,P(k_3) \\
& + ({}_3,k_z,l_z\leftrightarrow {}_4,l_z,k_z) \Bigg] + \left(\substack{{}_1,{}_2,i_z,j_z\leftrightarrow {}_3,{}_4,k_z,l_z \\ {}_3,{}_4,k_z,l_z\leftrightarrow {}_1,{}_2,i_z,j_z}\right) \\
\nonumber &+ \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \Bigg\{ \delta_{i_z,k_z} \Bigg[\left(\frac{2}{\pi}\right)^2 \\
\nonumber & \int k_2^2\,\dd k_2 \ K^2\,\dd K \ 2 \ K_X^0 \ \mathcal{I}_{\ell_a}^{2,1}(K;k_{\ell},k_{\ell'}|i_z) \ \\
\nonumber & \times \mathcal{I}_{\ell}^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{2;\ell,\ell_a}^{1,X}(k_2,K;k_{\ell'}|l_z) \ P(K)\,P(k_2) \\
& + ({}_2,j_z,l_z,\ell,\ell'\leftrightarrow {}_4,l_z,j_z,\ell',\ell)\Bigg] + 3 \ \mr{perm.} \Bigg\}.
\ea
Limber's approximation can also be used (except on the squeezed diagonal), if one first performs the wavenumber integrals before the redshift integrals (instead of the opposite order, which was used in the previous no-Limber equations). The resulting covariance is
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-Xbase0} = \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \int & \dd V \ 2 \ K_X^0 \ \left(I_1^{1}(k_\ell|z) \ P(k_\ell|z)\right)^2 I_2^{X}(k_{\ell'},k_{\ell'}|z) \\
\nonumber & + \ (\ell\leftrightarrow\ell') \\
\nonumber +\frac{4 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int & \dd V \ 2 \ K_X^0 \ I_1^{1}(k_\ell|z) \ I_1^{1}(k_{\ell'}|z) \ I_2^{X}(k_{\ell},k_{\ell'}|z) \\
& \times P(k_\ell|z) \ P(k_{\ell'}|z)
\ea
and
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-Xdiag0} =& \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \int \dd V_{ab} \ 4 \ K_X^0 \ I_1^X(k_{\ell'}|z_b) \ I_1^1(k_{\ell'}|z_b) \\
& \times I_2^1(k_{\ell},k_{\ell}|z_a) \ P(k_{\ell'}|z_b) \ C_0^m(z_a,z_b) + (\ell,{}_a,{}_b\leftrightarrow\ell',{}_b,{}_a) \label{Eq:Cov-3h-Xdiag0-sqz-Limber} \\
\nonumber & + \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z} + \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \\
\nonumber & \int \dd V_{ab} \ I_2^1(k_{\ell},k_{\ell'}|z_a) \ \Big[ 2 \ K_X^0 \ I_1^X(k_{\ell'}|z_b) \ I_1^1(k_{\ell}|z_b) \ P(k_{\ell}|z_b) \\
& + (\ell\leftrightarrow \ell') \Big]\times C_{\ell_a}^m(z_a,z_b) + (z_a\leftrightarrow z_b) \label{Eq:Cov-3h-Xdiag0-alt-Limber}
\ea
where in Eq.~\ref{Eq:Cov-3h-Xdiag0-sqz-Limber} $z_a\in i_z$ and $z_b\in k_z$, while in Eq.~\ref{Eq:Cov-3h-Xdiag0-alt-Limber} $z_a\in i_z$ and $z_b\in j_z$.
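The matter angular power spectrum $C_{\ell_a}^m(z_a,z_b)$ appearing in these expressions can be obtained by direct quadrature of its definition. A minimal sketch, assuming an interpolated linear power spectrum \texttt{P\_of\_k} and a distance-redshift relation \texttt{r\_of\_z} are supplied (both hypothetical, user-provided callables); the $k$ grid must be fine enough to resolve the Bessel oscillations, especially for $z_a \neq z_b$:
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

def Cl_matter(l, z1, z2, P_of_k, r_of_z, kmax=2.0, nk=5000):
    # C_l^m(z1,z2) = 2/pi int k^2 dk j_l(k r1) j_l(k r2) P(k|z12),
    # by brute-force trapezoidal quadrature.
    k = np.linspace(1e-4, kmax, nk)
    r1, r2 = r_of_z(z1), r_of_z(z2)
    integrand = (k**2 * spherical_jn(l, k*r1) * spherical_jn(l, k*r2)
                 * P_of_k(k, z1, z2))
    return 2.0/np.pi * np.trapz(integrand, k)
\end{verbatim}
In production one would rather use dedicated methods for such oscillatory integrals, but this brute-force version suffices to check the equations above at low multipoles.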
One can perform the summation over $X \in \{\mr{b2,s2,2PT}\}$ by introducing the notation
\ba
I_i^{\Sigma_2} \equiv \sum_{X \in \{\mr{b2,s2,2PT}\}} K_X^0 \ I_i^X \label{Eq:def-I^Sigma} = \frac{17}{21} \ I_i^1 + \frac{1}{2} \ I_i^2
\ea
This yields
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-base0} = \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \int & \dd V \ 2 \; \left(I_1^{1}(k_\ell|z) \ P(k_\ell|z)\right)^2 I_2^{\Sigma_2}(k_{\ell'},k_{\ell'}|z) \\
\nonumber & + \quad (\ell\leftrightarrow\ell') \\
\nonumber +\frac{4 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int & \dd V \ 2 \ I_1^{1}(k_\ell|z) \ I_1^{1}(k_{\ell'}|z) \ I_2^{\Sigma_2}(k_{\ell},k_{\ell'}|z) \\
& \times P(k_\ell|z) \ P(k_{\ell'}|z)
\ea
and
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-diag0} =& \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \int \dd V_{ab} \ 4 \ I_1^{\Sigma_2}(k_{\ell'}|z_b) \ I_1^1(k_{\ell'}|z_b) \, P(k_{\ell'}|z_b) \\
& \times I_2^1(k_{\ell},k_{\ell}|z_a) \ C_0^m(z_a,z_b) + (\ell,{}_a,{}_b\leftrightarrow\ell',{}_b,{}_a) \label{Eq:Cov-3h-diag0-sqz-Limber} \\
\nonumber & + \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z} + \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \\
\nonumber & \int \dd V_{ab} \ I_2^1(k_{\ell},k_{\ell'}|z_a) \ \Big[ 2 \ I_1^{\Sigma_2}(k_{\ell'}|z_b) \ I_1^1(k_{\ell}|z_b) \, P(k_{\ell}|z_b) \\
& + (\ell\leftrightarrow \ell') \Big]\times C_{\ell_a}^m(z_a,z_b) + (z_a\leftrightarrow z_b) \label{Eq:Cov-3h-diag0-alt-Limber}
\ea
\section{Four-halo terms}\label{Sect:4halo}
\subsection{Trispectrum}
This term is the first diagram of Fig.~\ref{Fig:diagrams-trispectrum}. Applying the diagrammatic rules from Sect.~\ref{Sect:diagrammatic}, the corresponding trispectrum part is:
\ba
\nonumber T^\mr{4h}_\mr{gal}(\kk_{1234},z_{1234}) = \int & \dd M_{\alpha\beta\gamma\delta} \ \orange{\left.\frac{\dd n_h}{\dd M}\right|_{\alpha,1} \left.\frac{\dd n_h}{\dd M}\right|_{\beta,2} \left.\frac{\dd n_h}{\dd M}\right|_{\gamma,3} \left.\frac{\dd n_h}{\dd M}\right|_{\delta,4}} \\
\nonumber & \darkgreen{\lbra N_\mr{gal}\rbra_{\alpha,1} \lbra N_\mr{gal}\rbra_{\beta,2} \lbra N_\mr{gal}\rbra_{\gamma,3} \lbra N_\mr{gal}\rbra_{\delta,4}} \\
\nonumber & {\color{red} u(k_1|\alpha,1) u(k_2|\beta,2) \, u(k_3|\gamma,3) \, u(k_4|\delta,4)} \\
\nonumber & {\color{blue} \times \ T_\mr{hhhh}(\kk_{1234}|M_{\alpha\beta\gamma\delta},z_{1234}).}
\ea
Following Appendix \ref{App:3Dhalopolysp}, the halo trispectrum splits into three terms, and thus the 4h galaxy trispectrum too:
\ba
\nonumber T^\mr{4h}_\mr{gal}(\kk_{1234},z_{1234}) = T^\mr{4h-b3} + T^\mr{4h-3PT} + T^\mr{4h-2\times 2}
\ea
where
\ba\label{Eq:T4h-b3}
\nonumber T^\mr{4h-b3}(\kk_{1234},z_{1234}) =& I_1^1(k_1,z_1) \ I_1^1(k_2,z_2) \ I_1^1(k_3,z_3) \ I_1^3(k_4,z_4)\\
& \times P(k_1|z_{14}) \ P(k_2|z_{24}) \ P(k_3|z_{34}) + 3 \ \mr{perm.}
\ea
\ba\label{Eq:T4h-3PT}
\nonumber T^\mr{4h-3PT}(\kk_{1234},z_{1234}) = I_1^1(k_1,z_1) \ I_1^1(k_2,z_2) \ I_1^1(k_3,z_3) \ I_1^1(k_4,z_4) \\ \times \Big[3!
\ F_3(\kk_1,\kk_2,\kk_3) \ P(k_1|z_{14}) \ P(k_2|z_{24}) \ P(k_3|z_{34}) + 3 \ \mr{perm.}\Big]
\ea
and
\ba
\nonumber T^\mr{4h-2\times 2}(\kk_{1234},z_{1234}) = \sum_{X,Y\in \{\mr{b2,s2,2PT}\} } I_1^1(k_1,z_1) \ I_1^1(k_2,z_2) \ I_1^X(k_3,z_3) \\
\nonumber I_1^Y(k_4,z_4) \ 4\ K_X(\kk_{1+3},-\kk_1) \ K_Y(\kk_{1+3},\kk_2) \\
P(k_{1+3}|z_{34}) \, P(k_1|z_{13}) \, P(k_2|z_{24}) + 11 \ \mr{perm.}
\ea
where the permutations are written explicitly for example in Appendix \ref{App:3Dhalotrispec}.\\
After Legendre decomposition of the angles and accounting for all permutations, the b3 term splits into four subterms, the 3PT term into $9\times3\times4=108$ subterms, and the $2\times 2$ term into $25\times 12= 300$ subterms. I thus have a total of 412 subterms to compute.
\subsection{Covariance}
First, the term from third order halo bias is
\ba\label{Eq:Cov-4h-b3}
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-b3} =& \frac{1}{4\pi} \left(\frac{2}{\pi}\right)^3\int k^2_{123} \, \dd k_{123} \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \\
\nonumber & \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{3;\ell,\ell,\ell'}^{1,3}(k_1,k_2,k_3;k_{\ell'}|l_z) \ P(k_1) \, P(k_2) \, P(k_3)\\
& +3 \ \mr{perm.}
\ea
This term is the simplest as the trispectrum does not have an angle dependence. As such, Limber's approximation, when valid, may easily be applied, and gives
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-b3} =& \frac{2 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ \left(I_1^1(k_{\ell},z)\right)^2 \ I_1^1(k_{\ell'},z) \ I_1^3(k_{\ell'},z) \\
& \times P(k_{\ell}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell').
\ea
Next, the term from third order perturbation theory splits into two parts: one coming from trispectrum permutations involving the squeezed diagonal $\kk_{1+2}$, and one coming from permutations involving an alternate diagonal $\kk_{1+3}$ or $\kk_{1+4}$.\\
The first part is
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-sqz} = \sum_{n,n'} (-1)^{\ell+\ell'} \sum_{\lu,\lt} \ii^{2\lu+\ell'+\lt+n'} \ \frac{(2\ell+1)_{13}}{4\pi} \threeJz{\ell}{\lu}{n}^2 \\
\nonumber \threeJz{\ell'}{\lt}{n'}^2 \left(\frac{2}{\pi}\right)^4 \int k^2_{123} \,\dd k_{123} \ 3! \ F_{3;n,n'}^{1,2;3,1+2}(k_1,k_2,k_3) \\
\nonumber \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ P(k_1) \, P(k_2) \, P(k_3) \\
\nonumber \int \dd V_4 \ G(z_4)^3 \ j_{\lt}(k_3 r_4) \ I_1^1(k_{\ell'},z_4) \int K^2\,\dd K \ \dd V_a \ j_{\lu}(k_1 x_a) \\
\times \ j_{\lu}(k_2 x_a) \ j_{0}(K x_a) \ j_{n'}(K r_4) \quad + \quad 3 \ \mr{perm.}
\ea
I note that the last integral (over $K$ and $x_a$) is purely analytic; however, I did not find a closed-form expression for it, except in the case $n'=0$, which will be tackled later.\\
The second part of the 3PT term is
\ba
\mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-alt} = \sum_{n,n'} (-1)^{\ell+\ell'} \sum_{\ell_{123ab}} \ii^{\lu+\lt-\ell_a+\ell_b+\ld+\ell'} \ \frac{(2\ell+1)_{123ab}}{4\pi} \\
\nonumber K_{n',\ell_b,\ld}^{\lu,\lt,n;\ell,\ell_a,\ell'} \left(\frac{2}{\pi}\right)^4 \int k^2_{123} \,\dd k_{123} \ 3!
\ F_{3;n,n'}^{1,3;2,1+3}(k_1,k_2,k_3) \\
\nonumber \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ P(k_1) \, P(k_2) \, P(k_3) \\
\nonumber \int \dd V_4 \ G(z_4)^3 \ j_{\ld}(k_2 r_4) \ I_1^1(k_{\ell'},z_4) \int K^2\,\dd K \ \dd V_a \ j_{\lu}(k_1 x_a) \\
\times \ j_{\ell_a}(K x_a) \ j_{\ell_b}(K r_4) \ j_{\lt}(k_3 x_a) \quad + \quad 7 \ \mr{perm.}
\ea
where the $K_{n',\ell_b,\ld}^{\lu,\lt,n;\ell,\ell_a,\ell'}$ symbol is related to a contraction of a 12J symbol of the second kind in Appendix \ref{App:K-and-12J}. Again the last integral (over $K$ and $x_a$) is purely analytic with no known closed-form expression, except in the case $n'=0$, which will be tackled later.\\
Finally, the $2\times2$ trispectrum term also splits into two parts: one coming from trispectrum permutations involving the squeezed diagonal $\kk_{1+2}$, and one coming from permutations involving an alternate diagonal $\kk_{1+3}$ or $\kk_{1+4}$.\\
The first part is
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-2\times 2-sqz} = \sum_{\substack{X,Y\in \{\mr{b2,s2,2PT}\} \\ n,n'}} (-1)^{\ell+\ell'} \sum_{\ell_{13}} \ii^{\lu+\ell+n+n'+\lt+\ell'} \ \frac{(2\ell+1)_{13}}{4\pi} \\
\nonumber \threeJz{\ell}{\lu}{n}^2 \ \threeJz{\ell'}{\lt}{n'}^2 \ \left(\frac{2}{\pi}\right)^3 \int k^2_{13}\,\dd k_{13} \ K^2\,\dd K \\
\nonumber 4 \, K_{X,n}(K,k_1) \, K_{Y,n'}(K,k_3) \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_{2;\lu,n}^{1,X}(k_1,K;k_\ell|j_z) \\
\nonumber \times \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{2;n',\lt}^{1,Y}(K,k_3;k_{\ell'}|l_z) \ P(K) \, P(k_1) \, P(k_3) \\
+ \ 3 \ \mr{perm.}
\ea
The second part of the $2\times2$ term is
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-2\times 2-alt} = \sum_{\substack{X,Y\in \{\mr{b2,s2,2PT}\} \\ n,n'}} (-1)^{\ell+n} \sum_{\ell_{12abc}} \ii^{\lu-\ell_a+\ell_b+\ld} \frac{(2\ell+1)_{12abc}}{4\pi} \\
\nonumber J_{\ell',\lu,\ell_a}^{\ld,\ell,n';\ell_b,n,\ell_c} \ \left(\frac{2}{\pi}\right)^3 \int k^2_{12}\,\dd k_{12} \ K^2\,\dd K \ 4 \, K_{X,n}(K,k_1) \\
\nonumber K_{Y,n'}(K,k_2) \ P(K) \, P(k_1) \, P(k_2) \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \\
\nonumber \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{2;\lu,\ell_a}^{1,X}(k_1,K;k_{\ell'}|k_z) \ \mathcal{I}_{2;\ld,\ell_b}^{1,Y}(k_2,K;k_{\ell'}|l_z) \\
+ \ 7 \ \mr{perm.}
\ea
\subsection{Simplifications}
For some cases of the Legendre decomposition, the covariance equations get simpler; I tackle these cases here.\\
First, beginning with 3PT terms, when $n'=0$, the 3PT-squeezed term simplifies to
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-sqz0} = \sum_{n} (-1)^{\ell} \sum_{\lu} \ii^{2\lu} \ \frac{(2\lu+1)}{4\pi} \threeJz{\ell}{\lu}{n}^2 \qquad \\
\nonumber \left(\frac{2}{\pi}\right)^3 \int k^2_{123} \,\dd k_{123} \ 3! \ F_{3;n,0}^{1,2;3,1+2}(k_1,k_2,k_3) \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \\
\nonumber \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{3;\lu,\lu,\ell'}^{1,1}(k_1,k_2,k_3;k_{\ell'}|l_z) \\
\times \ P(k_1) \, P(k_2) \, P(k_3) \quad + \quad 3 \ \mr{perm.}
\ea
If I further set $n=0$, I have
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-sqz00} = \left(\frac{2}{\pi}\right)^3 \int k^2_{123} \,\dd k_{123} \ 3!
\ F_{3;0,0}^{1,2;3,1+2} \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \qquad \\
\nonumber \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{3;\ell,\ell,\ell'}^{1,1}(k_1,k_2,k_3;k_{\ell'}|l_z) \\
\times \ P(k_1) \, P(k_2) \, P(k_3) \quad + \quad 3 \ \mr{perm.}
\ea
and Limber's approximation can be used on all wavevectors to yield:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-sqz00} = \frac{2 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ 3! \ F_{3;0,0} \left(I_1^1(k_{\ell},z)\right)^2 \left(I_1^1(k_{\ell'},z)\right)^2 \\
\times \ P(k_{\ell}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell')
\ea
where I used the fact that $F_{3;0,0}$ is independent of its arguments and superscripts (see Appendix \ref{App:3Dhalotrispec}).\\
If $n'=0$, the 3PT-alternate term also simplifies:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-alt0} = \sum_{n} (-1)^{\ell+\ell'} \sum_{\ell_{13a}} \ii^{\lu+\lt+\ell+\ell'} \ \frac{(2\ell+1)_{13a}}{4\pi} \sixJ{\ell}{\ell'}{\ell_a}{\lt}{\lu}{n} \\
\nonumber \left(\frac{2}{\pi}\right)^3 \int k^2_{123}\,\dd k_{123} \ 3! \ F_{3;n,0}^{1,3;2,1+3}(k_1,k_2,k_3) \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \\
\nonumber \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{3;\lu,\ell,\lt}^{1,1}(k_1,k_2,k_3;k_{\ell'}|l_z) \\
\times \ P(k_1) \, P(k_2) \, P(k_3) \quad + \quad 7 \ \mr{perm.}
\ea
When I further set $n=0$, I obtain
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-alt00} = \frac{1}{4\pi} \left(\frac{2}{\pi}\right)^3 \int k^2_{123}\,\dd k_{123} \ 3! \ F_{3;0,0}^{1,3;2,1+3} \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \\
\nonumber \ \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \ \mathcal{I}_{3;\ell,\ell,\ell'}^{1,1}(k_1,k_2,k_3;k_{\ell'}|l_z) \\
\times \ P(k_1) \, P(k_2) \, P(k_3) \quad + \quad 7 \ \mr{perm.}
\ea
and Limber's approximation can be used on all wavevectors to yield:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-alt00} = \frac{4 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ 3! \ F_{3;0,0} \left(I_1^1(k_{\ell},z)\right)^2 \left(I_1^1(k_{\ell'},z)\right)^2 \\
\times \ P(k_{\ell}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell')
\ea
This result is exactly twice that of the squeezed term\footnote{this can already be seen to hold before the Limber approximation}: $\mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-alt00}=2\times\mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-sqz00}$. As such, the two terms can be grouped together:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3PT-00} = \frac{6 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ 3! \ F_{3;0,0} \left(I_1^1(k_{\ell},z)\right)^2 \left(I_1^1(k_{\ell'},z)\right)^2 \\
\times \ P(k_{\ell}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell')
\ea
A more direct way to get to this equation is to note that taking $(n,n')=(0,0)$ is equivalent to inputting $F_3(\kk_1,\kk_2,\kk_3)= 3 \, F_{3;0,0}$ in Eq.~\ref{Eq:T4h-3PT} and realising the analogy with the 4h-b3 case. Thus Eq.~\ref{Eq:Cov-4h-b3} can be used with the replacement $I_1^3 \rightarrow 3! \ 3 \, F_{3;0,0} \times I_1^1 $. This remark allows one to unify the b3 and 3PT terms into a single equation:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3} = \frac{2 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ 3!
\ \left(I_1^1(k_{\ell},z)\right)^2 \ I_1^1(k_{\ell'},z) \ I_1^{\Sigma_3}(k_{\ell'},z) \\
\times \ P(k_{\ell}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell')
\ea
where
\ba
I_1^{\Sigma_3}(k,z) \equiv 3 \, F_{3;0,0} \ I_1^{1}(k,z) + \frac{1}{3!} \ I_1^{3}(k,z)
\ea
with $3 \, F_{3;0,0}=\frac{1023}{1701}$.\\
Second, let us now look at the $2\times 2$ terms, which give symmetric roles to $n$ and $n'$ and thus simplify significantly only when both are zero. When $n=n'=0$, the $2\times 2$-squeezed term simplifies to
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-2\times 2-sqz00} =& \frac{1}{4\pi} \!\sum_{X,Y\in \{\mr{b2,s2,2PT}\} }\! \left(\frac{2}{\pi}\right)^3 \int k^2_{13}\,\dd k_{13} \ K^2\,\dd K \ 4 \ K_{X,0} \ K_{Y,0} \\
\nonumber & \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_{2;\ell,0}^{1,X}(k_1,K;k_\ell|j_z) \ \mathcal{I}_{\ell'}^{1,1}(k_3;k_3|k_z) \\
& \mathcal{I}_{2;0,\ell'}^{1,Y}(K,k_3;k_{\ell'}|l_z) \ P(K) \ P(k_1) \ P(k_3) \ + \ 3 \ \mr{perm.}
\ea
Using Limber's approximation on $k_1,k_3$, and definition \ref{Eq:def-I^Sigma} of $I_i^{\Sigma_2}$, one finds:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-2\times 2-sqz00} = \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \int \dd V_{ab} \ 4 \ I_1^{\Sigma_2}(k_{\ell},z_a) \ I_1^1(k_{\ell},z_a) \, P(k_{\ell},z_a) \\
\times \ 4 \ I_1^{\Sigma_2}(k_{\ell'},z_b) \ I_1^1(k_{\ell'},z_b) \, P(k_{\ell'},z_b) \ C_{0}^m(z_a,z_b) \label{Eq:Cov-4h-2X2-sqz00-Limber}
\ea
where $z_a\in i_z$, $z_b\in k_z$.\\
The $2\times 2$-alternate term also simplifies when $n=n'=0$:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-2\times 2-alt00} = \!\!\!\!\!\!\!\!\sum_{\substack{\ell_a \\ X,Y\in \{\mr{b2,s2,2PT}\} }}\!\!\!\!\!\!\!\! \frac{(2\ell_a+1)}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \left(\frac{2}{\pi}\right)^3 \int k^2_{12}\ \dd k_{12} \\
\nonumber K^2\ \dd K \ 4 \ K_{X,0} \ K_{Y,0} \ \mathcal{I}_\ell^{1,1}(k_1;k_1|i_z) \ \mathcal{I}_\ell^{1,1}(k_2;k_2|j_z) \\
\nonumber \mathcal{I}_{2;\ell,\ell_a}^{1,X}(k_1,K;k_{\ell'}|k_z) \, \mathcal{I}_{2;\ell,\ell_a}^{1,Y}(k_2,K;k_{\ell'}|l_z) \ P(K) \, P(k_1) \, P(k_2) \\
+ \quad 7 \ \mr{perm.}
\ea
Using Limber's approximation on $k_1,k_2$, and definition \ref{Eq:def-I^Sigma} of $I_i^{\Sigma_2}$, one finds:
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-2\times 2-alt00} =& \left(\delta_{i_z,k_z}\ \delta_{j_z,l_z}+\delta_{i_z,l_z}\ \delta_{j_z,k_z}\right) \ \sum_{\ell_a} \frac{(2\ell_a+1)}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \\
\nonumber & \int \dd V_{ab} \ \left[2 \ I_1^{\Sigma_2}(k_{\ell'},z_a) \ I_1^1(k_{\ell},z_a) \ P(k_{\ell},z_a) + (\ell\leftrightarrow\ell')\right] \\
\nonumber & \times \left[2 \ I_1^{\Sigma_2}(k_{\ell},z_b) \ I_1^1(k_{\ell'},z_b) \ P(k_{\ell'},z_b) + (\ell\leftrightarrow\ell')\right] \\
& \times \ C_{\ell_a}^m(z_a,z_b) \label{Eq:Cov-4h-2X2-alt00-Limber}
\ea
where $z_a\in i_z$, $z_b\in j_z$.
\section{Shot-noise}\label{Sect:shotnoise}
\subsection{One-galaxy shot-noise}\label{Sect:shot1g}
This term is the last diagram of Fig. \ref{Fig:diagrams-trispectrum}. Applying the diagrammatic rules from Sect. \ref{Sect:diagrammatic}, the corresponding trispectrum part is:
\ba
\nonumber T^\mr{shot1g}_\mr{gal}(\kk_{1234},z_{1234}) &= \int \dd M \ \orange{\frac{\dd n_h}{\dd M}} \ \darkgreen{\lbra N_\mr{gal}\rbra} \ {\color{red} u(k_{1+\cdots+4}|M,z)} \ {\color{blue} \times 1}\\
\nonumber &= \delta_{z_1,z_2,z_3,z_4} \ I_1^0(k_{1+\cdots+4}|z) \\
&= \delta_{z_1,z_2,z_3,z_4} \ \nbargal(z)
\ea
since $\kk_1+\cdots+\kk_4=0$ and the halo profile is normalised to $u(0|M,z)=1$.
The corresponding covariance term is
\ba
\mathcal{C}_{\ell,\ell'}^\mr{shot1g} = \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \ N_\mr{gal}(i_z)
\ea
\subsection{Two-galaxy shot-noise}\label{Sect:shot2g}
These terms are the 8th, 9th, 12th, and 13th diagrams of Fig. \ref{Fig:diagrams-trispectrum}. One could apply the diagrammatic rules from Sect. \ref{Sect:diagrammatic} to write down the corresponding trispectrum part; however it was realised by \cite{Lacasa2014} that these diagrams are identical to ones of lower order polyspectra. For instance for the 8th diagram (2ha-shot2g)
\be
T^\mr{2ha-shot2g}_\mr{gal}(\kk_{1234},z_{1234}) = P^\mr{2h}_\mr{gal}(k_{1+2},z_{13}) + 2 \ \mr{perm.}
\ee
These diagrams can then be resummed to reveal the clustering part of the lower order polyspectrum (here the power spectrum). Noting $P^\mr{clust}_\mr{gal}=P^\mr{2h}_\mr{gal}+P^\mr{1h}_\mr{gal}$ and writing down explicitly all involved permutations of (1234), one finds:
\ba
\nonumber T^\mr{shot2g}_\mr{gal}(\kk_{1234},z_{1234}) =& \ T^\mr{2hb-shot2g} + T^\mr{1hb-shot2g} \\
\nonumber & + T^\mr{2ha-shot2g} + T^\mr{1ha-shot2g} \\
\nonumber =& \ \delta_{z_2,z_3,z_4} \, P^\mr{clust}_\mr{gal}(k_1,z_{12}) + \delta_{z_1,z_3,z_4} \, P^\mr{clust}_\mr{gal}(k_2,z_{12}) \\
\nonumber & + \ \delta_{z_1,z_2,z_4} \, P^\mr{clust}_\mr{gal}(k_3,z_{13}) + \delta_{z_1,z_2,z_3} \, P^\mr{clust}_\mr{gal}(k_4,z_{14})\\
\nonumber & + \delta_{z_1,z_3} \ \delta_{z_2,z_4} \ P^\mr{clust}_\mr{gal}(k_{1+3},z_{12}) \\
\nonumber & + \delta_{z_1,z_4} \ \delta_{z_2,z_3} \ P^\mr{clust}_\mr{gal}(k_{1+4},z_{12}) \\
&+ \ \delta_{z_1,z_2} \ \delta_{z_3,z_4} \ P^\mr{clust}_\mr{gal}(k_{1+2},z_{13}) \label{Eq:Tshot2g-resummed}
\ea
where the first two lines in Eq. \ref{Eq:Tshot2g-resummed} come from "1+3" diagrams (2hb-shot2g and 1hb-shot2g, respectively 9th and 13th diagrams in Fig. \ref{Fig:diagrams-trispectrum}), and the last three lines come from "2+2" diagrams (2ha-shot2g and 1ha-shot2g, respectively 8th and 12th diagrams in Fig. \ref{Fig:diagrams-trispectrum}). The corresponding covariance is
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot2g} =& \frac{\delta_{j_z,k_z,l_z}+\delta_{i_z,k_z,l_z}}{4\pi} \ C_\ell^\mr{gal,clust}(i_z,j_z) \\
\nonumber & + \frac{\delta_{i_z,j_z,l_z}+\delta_{i_z,j_z,k_z}}{4\pi} \ C_{\ell'}^\mr{gal,clust}(k_z,l_z) \\
\nonumber & + \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z}+ \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \\
& \times \ C_{\ell_a}^\mr{gal,clust}(i_z,j_z) \label{Eq:Cov-shot2g-alt} \\
& + \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \ C_{0}^\mr{gal,clust}(i_z,k_z) \label{Eq:Cov-shot2g-sqz}
\ea
where $C_{\ell}^\mr{gal,clust}$ is the clustering part (i.e. without shot-noise) of the galaxy angular power spectrum.
\subsection{Three-galaxy shot-noise}\label{Sect:shot3g}
These terms are the 3rd, 6th, 7th and 11th diagrams of Fig. \ref{Fig:diagrams-trispectrum} (3h-shot3g, 2ha-shot3g, 2hb-shot3g and 1h-shot3g). As in the previous subsection, I remark that these diagrams are identical to ones of the galaxy bispectrum and can be resummed.
Noting $B^\mr{clust}_\mr{gal}=B^\mr{3h}_\mr{gal}+B^\mr{2h}_\mr{gal}+B^\mr{1h}_\mr{gal}$ and writing explicitly all involved permutations of (1234), one finds:
\ba
\nonumber T^\mr{shot3g}_\mr{gal}(\kk_{1234},z_{1234}) = \delta_{z_1,z_2} \ B^\mr{clust}_\mr{gal}(k_{1+2},k_3,k_4,z_{134})\\
\nonumber + \delta_{z_1,z_3} \ B^\mr{clust}_\mr{gal}(k_{1+3},k_2,k_4,z_{124})\\
\nonumber + \delta_{z_1,z_4} \ B^\mr{clust}_\mr{gal}(k_{1+4},k_2,k_3,z_{123})\\
\nonumber + \delta_{z_2,z_3} \ B^\mr{clust}_\mr{gal}(k_1,k_{2+3},k_4,z_{124})\\
\nonumber + \delta_{z_2,z_4} \ B^\mr{clust}_\mr{gal}(k_1,k_{2+4},k_3,z_{123})\\
+ \delta_{z_3,z_4} \ B^\mr{clust}_\mr{gal}(k_1,k_2,k_{3+4},z_{123})
\ea
The corresponding covariance is (first with terms in the same order)
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot3g} &= \frac{\delta_{i_z,j_z}}{4\pi} \ b_{0,\ell',\ell'}^\mr{gal,clust}(i_z,k_z,l_z) \\
\nonumber &+ \delta_{i_z,k_z} \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,l_z) \\
\nonumber &+ \delta_{i_z,l_z} \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,k_z) \\
\nonumber &+ \delta_{j_z,k_z} \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ b_{\ell,\ell_a,\ell'}^\mr{gal,clust}(i_z,j_z,l_z) \\
\nonumber &+ \delta_{j_z,l_z} \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ b_{\ell,\ell_a,\ell'}^\mr{gal,clust}(i_z,j_z,k_z) \\
\nonumber &+ \frac{\delta_{k_z,l_z}}{4\pi} \ b_{0,\ell,\ell}^\mr{gal,clust}(i_z,j_z,k_z) \\
&= \frac{\delta_{i_z,j_z}}{4\pi} \ b_{0,\ell',\ell'}^\mr{gal,clust}(i_z,k_z,l_z) + \frac{\delta_{k_z,l_z}}{4\pi} \ b_{0,\ell,\ell}^\mr{gal,clust}(i_z,j_z,k_z) \label{Eq:Cov-shot3g-sqz} \\
\nonumber &+ \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \Big[ \delta_{i_z,k_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,l_z) \\
\nonumber & + \delta_{i_z,l_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,k_z) + \delta_{j_z,k_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(j_z,i_z,l_z) \\
& + \delta_{j_z,l_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(j_z,i_z,k_z) \Big] \label{Eq:Cov-shot3g-alt}
\ea
where $b_{\ell_1,\ell_2,\ell_3}^\mr{gal,clust}$ is the clustering part (i.e. without shot noise) of the galaxy angular bispectrum.
\subsection{Shot-noise subtraction}\label{Sect:shot-subs}
The shot-noise contribution to the power spectrum is $C_\ell^\mr{shot}(i_z,j_z) = \delta_{i_z,j_z} \ n_\mr{gal}^\mr{obs}(i_z)$ where $n_\mr{gal}^\mr{obs}$ is the actual number of galaxies in the survey (in a redshift bin, and per steradian). This number is perfectly known, and thus it can be subtracted from the measurement in order to reveal power up to smaller scales. This subtraction is indeed routinely applied in power spectrum measurements by past and current surveys working with the relative fluctuation $\delta_\mr{gal}$: the corrected spectrum is $\tilde{C}_\ell = C_\ell^\mr{obs}-1/\nbargal$.
Naively this subtraction should not affect the covariance, since covariances are invariant under addition of a constant. However it does affect it, because one is subtracting the actual number of galaxies, not the model-predicted number. The actual number of galaxies is itself a random variable, so subtracting it will add covariance terms. This random number is, in fact, positively correlated with the galaxy power spectrum measurement, thus the spectrum covariance will be reduced by the shot-noise subtraction.
Explicitly, the shot-noise subtraction removes several of the power spectrum covariance terms. There are two equivalent ways to see which terms are going to be canceled.
The first way uses the fact that the shot-noise contribution to the power spectrum corresponds to the diagram with coinciding galaxies (right most diagram in Fig.~\ref{Fig:diagrams-spectrum}), thus subtracting shot-noise corresponds to only taking the two other spectrum diagrams, that is, forbidding galaxies 1 and 2 to coincide. At the covariance level, galaxies 1 and 2 correspond to $C_\ell$ and galaxies 3 and 4 correspond to $C_{\ell'}$; shot-noise subtraction (for both $C_\ell$ and $C_{\ell'}$) thus corresponds to forbidding diagrams with a coincidence 1=2 and/or a coincidence 3=4. This removes many of the terms presented in the previous sections \ref{Sect:shot1g}, \ref{Sect:shot2g} and \ref{Sect:shot3g}; however, there are still terms remaining, corresponding for instance to the coincidence 1=3.
The second equivalent way goes through real-space: in $w(\theta)$, shot-noise corresponds to a Dirac delta at $\theta=0$. So shot-noise subtraction will kill all terms that yield a Dirac delta at $\theta=0$ and/or $\theta'=0$ in the real-space covariance $C_{\theta,\theta'}$. Harmonic transforming this back to $\mathcal{C}_{\ell,\ell'}$, this corresponds to all terms which have no dependence on at least one of the multipoles. For example, in the shot3g term (Sect.~\ref{Sect:shot3g}) $\frac{\delta_{i_z,j_z}}{4\pi} b^\mr{gal,clust}_{0,\ell',\ell'}$ has no dependence on $\ell$, and thus will be canceled by the shot-noise subtraction.
Either way, one finds that the following shot-noise covariance terms are canceled by shot-noise subtraction:
\begin{itemize}
\item $\mathcal{C}_{\ell,\ell'}^\mr{shot1g}$
\item the "1+3" part of $\mathcal{C}_{\ell,\ell'}^\mr{shot2g}$: $\frac{\delta_{j_z,k_z,l_z}+\delta_{i_z,k_z,l_z}}{4\pi} \ C_\ell^\mr{gal,clust}(i_z,j_z)$ and the symmetric term in $\ell'$
\item the squeezed part of $\mathcal{C}_{\ell,\ell'}^\mr{shot2g}$: $\frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \ C_{0}^\mr{gal,clust}(i_z,k_z)$
\item the squeezed part of $\mathcal{C}_{\ell,\ell'}^\mr{shot3g}$: $\frac{\delta_{k_z,l_z}}{4\pi} \ b_{0,\ell,\ell}^\mr{gal,clust}(i_z,j_z,k_z)$ and the symmetric term in $\ell'$
\end{itemize}
The following covariance terms are however still present:
\begin{itemize}
\item the alternate part of $\mathcal{C}_{\ell,\ell'}^\mr{shot2g}$:
$$\left(\delta_{i_z,k_z} \ \delta_{j_z,l_z}+ \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ C_{\ell_a}^\mr{gal,clust}(i_z,j_z)$$
\item the alternate part of $\mathcal{C}_{\ell,\ell'}^\mr{shot3g}$:
$$\delta_{i_z,k_z} \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,l_z) \ + \ 3 \ \mr{perm.}$$
\end{itemize}
In summary, most shot-noise effects are canceled out. Most, but not all: a small group of terms, later participating in the braiding covariance (see Sect.~\ref{Sect:discu-braiding}), resists because one cannot erase the discreteness of the galaxy density field. Inference from a discretely sampled field cannot be the same as from the underlying continuous field.
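All the surviving terms above share the same geometric structure: a $3j$-weighted sum over an internal multipole. A minimal sketch of its evaluation, using SymPy's \texttt{wigner\_3j} (assumed available; any Wigner-symbol library would do, and \texttt{X\_of\_la} stands for the summand, e.g. $C_{\ell_a}^\mr{gal,clust}$ or $b_{\ell_a,\ell,\ell'}^\mr{gal,clust}$):
\begin{verbatim}
import numpy as np
from sympy.physics.wigner import wigner_3j

def alt_sum(l, lp, X_of_la):
    # sum_la (2 la + 1)/(4 pi) * (l lp la; 0 0 0)^2 * X_la.
    # Only la with |l-lp| <= la <= l+lp and l+lp+la even contribute.
    total = 0.0
    for la in range(abs(l - lp), l + lp + 1):
        if (l + lp + la) % 2 == 0:
            w = float(wigner_3j(l, lp, la, 0, 0, 0))
            total += (2*la + 1)/(4.0*np.pi) * w**2 * X_of_la(la)
    return total
\end{verbatim}
The parity and triangle selection rules make these sums short in practice, which helps keep the braiding-like terms numerically tractable.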
\section{Discussion of the results}\label{Sect:discu}
\subsection{Super-sample covariance}\label{Sect:discu-SSC}
Super-sample covariance (SSC) has been studied in the past literature, mostly for 3D surveys \citep[e.g.][]{Takada2013} but then also in spherical harmonics: \cite{Lacasa2016} derived its impact on the cross-covariance between cluster counts and the galaxy angular power spectrum. However its impact on the auto-covariance of an angular power spectrum has never been derived rigorously, but was surmised using the 3D results or the cross-covariance result. Here I show that SSC does emerge naturally from the halo model derivation, and recovers the expression postulated in \cite{Lacasa2016} based on the cross-covariance result. I use only equations after Limber's approximation, leaving the issue of no-Limber SSC to be tackled in future works. Also, I first tackle in Sect. \ref{Sect:discu-SSC-angindep} the simpler case of SSC terms coming from angle-independent trispectra ($n=0$), which will be the terms going into the summary Sect.~\ref{Sect:summary}. Then in Sect. \ref{Sect:discu-SSC-angdep} I will discuss the case of angle-dependent terms ($n=1,2$) and how to generalise SSC to other modelling choices and to partial sky coverage.
\subsubsection{Angle-independent terms}\label{Sect:discu-SSC-angindep}
One sees SSC emerging when grouping all covariance terms where the trispectrum has a dependence on the squeezed diagonal through a $P(k_{1+2})$, making a $C_{0}^m(z_a,z_b)$ appear in the covariance. Specifically, there is such a dependence in the 2h2+2-sqz term, Eq.~\ref{Eq:Cov-2h2+2-sqz}, in the 3h-diag-sqz0 term, Eq.~\ref{Eq:Cov-3h-Xdiag0-sqz-Limber}, and in the 4h-2X2-sqz00 term, Eq.~\ref{Eq:Cov-4h-2X2-sqz00-Limber}. There are also shot-noise contributions which need a bit more work to yield a form unifiable with the clustering terms.\\
The first shot-noise contribution comes from the two-halo part of Eq.~\ref{Eq:Cov-shot2g-sqz}
\ba
\nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot2g-sqz-2h} = & \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \ C_{0}^\mr{gal,2h}(i_z,k_z) \\
\simeq & \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \ \int \dd V_{ab} \ I_1^1(0|z_a) \ I_1^1(0|z_b) \ C_{0}^m(z_a,z_b)
\ea
where I assumed that $k_{1+2}$ is sufficiently small to neglect the scale dependence of $I_1^1$, meaning that galaxy bias can be considered constant for super-survey modes, a reasonable approximation.
The second shot-noise contribution comes from the two- and three-halo parts of Eq.~\ref{Eq:Cov-shot3g-sqz}.
\ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot3g-sqz-23h} =& \frac{\delta_{i_z,j_z}}{4\pi} \left( b_{0,\ell',\ell'}^\mr{gal,2h}(i_z,k_z,l_z) + b_{0,\ell',\ell'}^\mr{gal,3h}(i_z,k_z,l_z) \right) + \mr{perm.} \\ \simeq & \frac{\delta_{i_z,j_z}}{4\pi} \int \dd V_{ab} \ I_1^1(0|z_a) \ I_2^1(k_{\ell'},k_{\ell'}|z_b) \ C_{0}^m(z_a,z_b) \\ \nonumber & + \frac{\delta_{i_z,j_z}}{4\pi} \sum_{X\in \{\mr{b2,s2,2PT}\} } \int \dd V_{ab} \ I_1^1(0|z_a) \ 4 \ K_X^0 \ I_1^X(k_{\ell'}|z_b) \\ \nonumber & \qquad \times \ I_1^1(k_{\ell'}|z_b) \ C_{0}^m(z_a,z_b) \\ & + \mr{perm.} \ea Combining all these terms (2h2+2-sqz, 3h-diag-sqz, 4h-2X2-sqz, shot2g-sqz-2h, shot3g-sqz-23h), one finds: \ba\label{Eq:SSC-unified-wshot} \mathcal{C}_{\ell,\ell'}^\mr{SSC} = \frac{\delta_{i_z,j_z} \ \delta_{k_z,l_z}}{4\pi} \int \dd V_{ab} \ \Psi_\ell^\mr{sqz}(z_a) \ \Psi_{\ell'}^\mr{sqz}(z_b) \ C_0^m(z_a,z_b) \ea where \ba \nonumber \Psi_\ell^\mr{sqz}(z) =& \sum_{X\in \{\mr{b2,s2,2PT}\} } 4 \ K_X^0 \ I_1^X(k_{\ell}|z) \ I_1^1(k_{\ell}|z) \ P(k_{\ell}|z) \\ \nonumber & \qquad + I_2^1(k_{\ell},k_{\ell}|z) + I_1^1(0|z) \\ =& 4 \ I_1^{\Sigma_2}(k_{\ell}|z) \ I_1^1(k_{\ell}|z) \ P(k_{\ell}|z) + I_2^1(k_{\ell},k_{\ell}|z) + I_1^1(0|z) \label{Eq:def-Psi^sqz} \ea More familiar equations can be obtained by writing out explicitly the sum over $X\in \{\mr{b2,s2,2PT}\}$, using that \ba K_\mr{2PT}^0 = \frac{17}{21} \qquad K_\mr{s2}^0 = 0 \qquad K_\mr{b2}^0 = \frac{1}{2} \ , \ea introducing the effective galaxy bias \ba I_1^X(k|z) = b_X^\mr{gal}(k,z) \ \nbargal(z) \ea and the variance of the background matter density \ba \sigma^2(z_a,z_b) = \frac{C_0^m(z_a,z_b)}{4\pi} \ea With all these notations, one finds \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{SSC} = \delta_{i_z,j_z} \ \delta_{k_z,l_z} \int \dd V_{ab} \ \nbargal(z_a)^2 \ \nbargal(z_b)^2 \ \frac{\partial P_\mr{gal}(k_{\ell},z_a)}{\partial \delta_b} \\ \times \ \frac{\partial P_\mr{gal}(k_{\ell'},z_b)}{\partial \delta_b} \ \sigma^2(z_a,z_b) \ea with \ba \nonumber \frac{\partial P_\mr{gal}(k,z)}{\partial \delta_b} = \left[\frac{68}{21}\left(b_1^\mr{gal}(k,z)\right)^2 + 2 \ b_1^\mr{gal}(k,z) \ b_2^\mr{gal}(k,z) \right] P(k|z) \\ + \ \frac{I_2^1(k,k|z)}{\nbargal(z)^2} \ + \ \frac{b_1^\mr{gal}(k=0,z)}{\nbargal(z)} \label{Eq:dPgalddeltab} \ea I thus recover exactly the same SSC equation as the one derived in \cite{Lacasa2016} in the cross-covariance case.
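To make Eq.~\ref{Eq:dPgalddeltab} concrete for implementation, here is a minimal numerical sketch of the power spectrum response; all inputs ($P$, $b_1^\mr{gal}$, $b_2^\mr{gal}$, $I_2^1$, $\nbargal$) are hypothetical placeholder callables standing in for the reader's halo-model pipeline, and only the combination of terms reproduces the equation above.
\begin{verbatim}
import numpy as np

def dPgal_ddeltab(k, z, P, b1, b2, I21, nbar):
    """Power spectrum response dP_gal/ddelta_b: beat-coupling and b2 terms,
    one-halo (HSV-like) term, and shot-noise term. P, b1, b2, I21 and nbar
    are placeholder callables for the reader's halo model."""
    bc_b2 = (68.0/21.0 * b1(k, z)**2 + 2.0 * b1(k, z) * b2(k, z)) * P(k, z)
    one_halo = I21(k, z) / nbar(z)**2
    shot = b1(0.0, z) / nbar(z)
    return bc_b2 + one_halo + shot
\end{verbatim}
The shot-noise-corrected response of Eq.~\ref{Eq:dPgalcorrddeltab} below is obtained by simply dropping the last term.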
In the literature, for example \cite{Takada2013}, the first term in Eq.~\ref{Eq:dPgalddeltab} is called beat-coupling (BC) and the third term is called halo sample variance (HSV); the second and fourth terms were discovered by \cite{Lacasa2016}, and come respectively from the non-linear response of halos to the matter density and from shot-noise.\\ However, the SSC shot-noise terms will be canceled by the shot-noise subtraction described in Sect.~\ref{Sect:shot-subs}; the shot-noise corrected SSC then becomes: \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{SSC,corr} = \delta_{i_z,j_z} \ \delta_{k_z,l_z} \int \dd V_{ab} \ \nbargal(z_a)^2 \ \nbargal(z_b)^2 \ \frac{\partial P_\mr{gal}^\mr{corr}(k_{\ell},z_a)}{\partial \delta_b} \\ \times \ \frac{\partial P_\mr{gal}^\mr{corr}(k_{\ell'},z_b)}{\partial \delta_b} \ \sigma^2(z_a,z_b) \label{Eq:SSC-unified-shotcorr} \ea with \ba \nonumber \frac{\partial P_\mr{gal}^\mr{corr}(k,z)}{\partial \delta_b} =& \left[\frac{68}{21}\left(b_1^\mr{gal}(k,z)\right)^2 + 2 \ b_1^\mr{gal}(k,z) \ b_2^\mr{gal}(k,z) \right] P(k|z) \\ & \ + \ \frac{I_2^1(k,k|z)}{\nbargal(z)^2} \label{Eq:dPgalcorrddeltab} \ea \subsubsection{Angle-dependent terms}\label{Sect:discu-SSC-angdep} Two subleading SSC effects found in the recent literature are not present in the previous subsection. Here I show that this is because they come from the $n=1$ and $n=2$ terms. The first effect is the so-called dilation effect found by \cite{Li2014} in the 3D $P(k)$ case. From \cite{Li2014}, one sees that this term comes from the \ba \frac{1}{2}\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right) \cos\theta_{12} \ea part of the 2PT kernel $F_2(\kk_1,\kk_2)$.\\ In my derivation, this part of $F_2$ yields the 2PT $n=1$ Legendre term, cf.\ Appendix \ref{App:3Dhalopolysp}. Hence the dilation effect is present here if one considers $n=1$. The second effect comes from super-survey tidal fields. This was first uncovered by \cite{Li2017,Akitsu2017} for the redshift-space power spectrum of galaxies. Then \cite{Barreira2017b}, which appeared the same day as this article's v1 on arXiv, showed that it also affects the isotropic power spectrum of weak-lensing. To discuss this issue, I consider matter only, not galaxies, and adopt the same notations as \cite{Barreira2017b}. The central notion is that of the large scale structure response to long wavelength (=soft) modes. The first order response $\mathcal{R}_1$, for one such soft mode, is defined through the squeezed limit of the bispectrum \ba \nonumber \lim_{\mathbf{p}\rightarrow 0} \lbra \delta(\kk) \ \delta(\kk') \ \delta(\mathbf{p}) \rbra = (2\pi)^3 \ \delta_D(\kk+\kk'+\mathbf{p}) \ \mathcal{R}_1(\kk,\mu=\hk\cdot\hat{p}) \\ \times \ P(k) \ P(p) \ea and \cite{Barreira2017b} decompose this response into its isotropic and tidal field parts \ba \mathcal{R}_1(\kk,\mu) = R_1(k) + \frac{2}{3} R_K(k) \ P_2(\mu) \ea where $P_2$ is the second order Legendre polynomial. In this article, I use standard perturbation theory at tree-level. Then, from Appendix \ref{App:3Dhalobispec} giving the 2PT bispectrum, one sees that the resulting power spectrum response is \ba \mathcal{R}_1(\kk,\mu) = 2 \ F_2(\kk,\mathbf{p}) + 2 \ F_2(-\kk+\mathbf{p},\mathbf{p}) \frac{P(|\kk-\mathbf{p}|)}{P(k)}. \ea From this, one sees easily that the $n=2$ Legendre term in $F_2$ will source the tidal response. Taking appropriate limits when $\mathbf{p}\rightarrow 0$, one can further see that the $n=1$ Legendre term in $F_2$ will source both the isotropic and the tidal responses.
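For reference, the Legendre decomposition of the tree-level 2PT kernel underlying this discussion can be written out explicitly (a standard result, reconstructed here for convenience; its $n=0$ coefficient is the $K_\mr{2PT}^0=17/21$ used in the previous subsection): \ba F_2(\kk_1,\kk_2) = \frac{17}{21} \ + \ \frac{1}{2}\left(\frac{k_1}{k_2}+\frac{k_2}{k_1}\right) P_1(\cos\theta_{12}) \ + \ \frac{4}{21} \ P_2(\cos\theta_{12}). \ea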
The $n=0$ term considered in the previous subsection sources the so-called growth-only part of the isotropic response. Hence the decomposition of $F_2$ in $n=0,1,2$ Legendre terms is equivalent to including the growth-only, dilation and tidal effects in the response approach to SSC. Although the derivation presented here uses standard perturbation theory at tree-level, it is possible to generalise the SSC equations to another modelling of matter, simply by adapting the power spectrum response. For instance, this response can be fitted to numerical simulations and then fed into the equations \citep[see e.g.][]{Barreira2017,Barreira2017b}. This appears to be a solution for observables directly sensitive to the matter power spectrum (e.g. weak-lensing), but it may not be feasible for galaxies. Indeed, for the galaxy spectrum, the one-halo response $I_2^1(k,k|z)$ is a critical element at intermediate to small scales, and it is fully non-perturbative and heavily dependent on the galaxy selection function. As such, it necessitates a halo modelling to be predicted correctly. The advantage of the present derivation is that it does not rely on any particular soft mode limit $\mathbf{p}\rightarrow 0$ or any Taylor expansion as in \cite{Takada2013}. Instead, the present derivation is exact within the modelling assumptions. Hence it remains valid even on large scales comparable to the survey size, while the response approach is limited to $k\gg p$, in other words, to small scales. A side note is that I developed here all the covariance equations in the full-sky limit. However, this is not a practical limitation, as \cite{Lacasa2016b} has recently developed the formalism to predict SSC analytically in the more realistic case of partial sky coverage with an arbitrary survey mask. One basically needs to change $\sigma^2$ in Eq.~\ref{Eq:SSC-unified-shotcorr} to account for the effect of the mask power spectrum. Finally, there has been a lot of emphasis on SSC in the literature; however, the systematic derivation presented in this article finds a wealth of other non-Gaussian covariance terms which have never been considered before. The following subsections are devoted to these new terms and their potential importance. \subsection{Braiding terms}\label{Sect:discu-braiding} Braiding terms are those that arise when the trispectrum has a dependence on one of the alternate diagonals through a $P(k_{1+3})$ (or $P(k_{1+4})$), making a $C_{\ell_a}^m(z_a,z_b)$ appear in the covariance, as well as a 3J symbol mixing the multipoles $\ell$ and $\ell'$. For any super-sample covariance term (studied in Sect.~\ref{Sect:discu-SSC}) there is a corresponding braiding term. Namely, the clustering contributions are the 2h2+2-alt term, Eq.~\ref{Eq:Cov-2h2+2-alt}, the 3h-diag-alt term, Eq.~\ref{Eq:Cov-3h-diag0-alt-Limber}, and the 4h-2X2-alt term, Eq.~\ref{Eq:Cov-4h-2X2-alt00-Limber}.
Defining the braiding kernel \ba \mathcal{B}_{\ell,\ell'}(z_a,z_b) \equiv \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 C_{\ell_a}^m(z_a,z_b) \ea these three clustering terms can be unified into: \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{braid-clust} = \left(\delta_{i_z,k_z}\ \delta_{j_z,l_z}+\delta_{i_z,l_z}\ \delta_{j_z,k_z}\right) \int \dd V_{ab} \ \Psi^\mr{alt,clust}_{\ell,\ell'}(z_a) \\ \times \ \Psi^\mr{alt,clust}_{\ell,\ell'}(z_b) \ \mathcal{B}_{\ell,\ell'}(z_a,z_b) \label{Eq:braiding-unified-clust} \ea where \ba \Psi^\mr{alt,clust}_{\ell,\ell'}(z) = \Big[ 2 \ I_1^{\Sigma_2}(k_{\ell'}|z) \ I_1^1(k_{\ell}|z) \, P(k_{\ell}|z) + (\ell \leftrightarrow\ell') \Big] + I_2^1(k_{\ell},k_{\ell'}|z) \ea which is relatively similar to $\Psi^\mr{sqz}_{\ell}$ defined in Eq.~\ref{Eq:def-Psi^sqz}, except for the shot-noise part and the multipole-coupling structure. In fact, one has the identity \ba\label{Eq:identity-Psi-alt-sqz} \Psi^\mr{alt,clust}_{\ell,\ell} = \Psi^\mr{sqz,clust}_{\ell} \ea Shot-noise contributions to the braiding covariance are given by Eq.~\ref{Eq:Cov-shot2g-alt} \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot2g-alt} =& \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z}+ \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \\ & \times \ C_{\ell_a}^\mr{gal,clust}(i_z,j_z) \ea and Eq.~\ref{Eq:Cov-shot3g-alt} \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot3g-alt} =& \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \Big[ \delta_{i_z,k_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,l_z) \\ \nonumber & + \delta_{i_z,l_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,k_z) + \delta_{j_z,k_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(j_z,i_z,l_z) \\ & + \delta_{j_z,l_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(j_z,i_z,k_z) \Big] \ea In order to include these terms in a unified formula similar to Eq.~\ref{Eq:braiding-unified-clust}, one has to make some approximations: \ba C_{\ell_a}^\mr{gal,clust}(i_z,j_z) \simeq \int \dd V_{ab} \ I_1^1(k_{\ell_a}|z_a) \ I_1^1(k_{\ell_a}|z_b) \ C_{\ell_a}^m(z_a,z_b) \ea which neglects $C_{\ell_a}^\mr{1h}$, and \ba b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,k_z) \simeq \delta_{j_z,k_z} \int \dd V_{ab} \ I_1^1(k_{\ell_a}|z_a) \ \Psi^\mr{alt,clust}_{\ell,\ell'}(z_b) \ C_{\ell_a}^m(z_a,z_b) \ea which neglects several bispectrum terms\footnote{All the one-halo bispectrum, two permutations of the two-halo, one permutation of the three-halo, and the $n\geq 1$ terms in the Legendre decomposition of the kernels $K_X(\kk_\alpha,\kk_\beta)$.}, and uses Limber's approximation on $\ell$ and $\ell'$.\\ With these, one gets the unified formula \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{braid} =& \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z}+ \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \\ & \times \int \dd V_{ab} \ \Psi^{\ell_a,\mr{alt}}_{\ell,\ell'}(z_a) \ \Psi^{\ell_a,\mr{alt}}_{\ell,\ell'}(z_b) \ C_{\ell_a}^m(z_a,z_b) \ea where \ba \Psi^{\ell_a,\mr{alt}}_{\ell,\ell'}(z) = \Psi^{\mr{alt,clust}}_{\ell,\ell'}(z) + I_1^1(k_{\ell_a}|z) \ea \subsection{Importance of terms}\label{Sect:discu-importance} Among the non-Gaussian covariance terms, super-sample covariance is the main reference against which to compare the new terms discovered in this article.
Indeed, it has already been well studied in the literature, including for the galaxy angular power spectrum, for example in combination with cluster number counts \citep{Lacasa2016} or with weak lensing \citep{Krause2017}. Its importance is already well recognised, and it is indeed included in the analysis of current galaxy surveys \citep[e.g.][]{vanUitert2017,Krause2017b,DES2017}, having an impact both on the cosmological error bars and on the central values \citep{Hildebrandt2017}. As already mentioned in the introduction, numerical investigations I performed \citep{Lacasa2017-LAL} show that the 1h and 2h1+3 terms have an impact comparable to SSC on the signal-to-noise ratio of $C_\ell^\mr{gal}$, when using survey specifications representative of future missions like Euclid. As these terms become important, there is a priori no reason for the others not to be, so I now turn to analytical arguments comparing all the other terms to SSC in adequate regimes where they can be compared. The braiding terms (Sect.~\ref{Sect:discu-braiding}) are the easiest ones to compare with SSC, as it has already been noted that they have some similarity to it\footnote{This similarity is not a coincidence, but a straightforward consequence of the fact that a trispectrum term with a $P(k_{1+2})$ is a particular permutation of a contribution which also yields a term with a $P(k_{1+3})$ and a term with a $P(k_{1+4})$.}. Indeed, using Eq.~\ref{Eq:identity-Psi-alt-sqz}, in the case $\ell=\ell'$ and $i_z=j_z=k_z=l_z$ one has the identity \ba\label{Eq:braid-on-diag} \nonumber \mathcal{C}_{\ell,\ell}^\mr{braid-clust} = 2 \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell}{\ell_a}^2 \int \dd V_{ab} \ \Psi^\mr{sqz,clust}_{\ell}(z_a) \\ \times \ \Psi^\mr{sqz,clust}_{\ell}(z_b) \ C_{\ell_a}^m(z_a,z_b) \ea which can be compared with the corresponding SSC case \ba \mathcal{C}_{\ell,\ell}^\mr{SSC,corr} = \frac{1}{4\pi} \int \dd V_{ab} \ \Psi^\mr{sqz,clust}_{\ell}(z_a) \ \Psi^\mr{sqz,clust}_{\ell}(z_b) \ C_{0}^m(z_a,z_b) \ea In particular, keeping only the $\ell_a=0$ term in the sum of Eq.~\ref{Eq:braid-on-diag}, for which $\threeJz{\ell}{\ell}{0}^2=\frac{1}{2\ell+1}$, implies \ba \mathcal{C}_{\ell,\ell}^\mr{braid-clust} > \frac{2}{2\ell+1} \mathcal{C}_{\ell,\ell}^\mr{SSC,corr} \ea so at low multipoles, braiding must be non-negligible. To go further, assumptions need to be made. If one can assume that the angular matter power spectrum $C_{\ell_a}^m$ is slowly varying over the range of multipoles involved, then, using the 3J normalisation $\sum_{\ell_a}(2\ell_a+1)\threeJz{\ell}{\ell}{\ell_a}^2=1$, \ba \nonumber \mathcal{C}_{\ell,\ell}^\mr{braid-clust} \simeq & 2 \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell}{\ell_a}^2 \int \dd V_{ab} \ \Psi^\mr{sqz,clust}_{\ell}(z_a) \\ \nonumber & \qquad \qquad \qquad \qquad \times \ \Psi^\mr{sqz,clust}_{\ell}(z_b) \ C_{0}^m(z_a,z_b) \\ =& \ 2 \ \mathcal{C}_{\ell,\ell}^\mr{SSC,corr} \ea More explicitly, if $C_{\ell}^m$ is an increasing function of $\ell$ over $[0,2\ell]$, then one finds \ba \frac{\mathcal{C}_{\ell,\ell}^\mr{braid-clust}}{\mathcal{C}_{\ell,\ell}^\mr{SSC,corr}} > 2 \ea whereas if $C_{\ell}^m$ is decreasing \ba \frac{2}{2\ell+1} < \frac{\mathcal{C}_{\ell,\ell}^\mr{braid-clust}}{\mathcal{C}_{\ell,\ell}^\mr{SSC,corr}} < 2 \ea The first situation occurs when the survey probes scales larger than the matter-radiation equality scale, where $P(k)$ has its maximum, i.e. $\ell \lesssim \ell_\mr{eq} = k_\mr{eq} r(z)$, which will be the case for future surveys covering large portions of the sky. The second situation occurs at smaller scales.
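These 3J manipulations are easily checked numerically. The following minimal sketch (my own illustration, using SymPy's exact Wigner symbols; the flat toy $C^m_{\ell_a}$ stands in for the slowly-varying case, and the common $\Psi$ factors are stripped from both terms) verifies the normalisation used above and the factor-2 ratio on the diagonal:
\begin{verbatim}
import numpy as np
from sympy.physics.wigner import wigner_3j

l = lp = 10
w = np.array([(2*la + 1) * float(wigner_3j(l, lp, la, 0, 0, 0))**2
              for la in range(l + lp + 1)])
print(w.sum())            # 3J normalisation: sum_la (2la+1) (3J)^2 = 1

C = np.ones(l + lp + 1)   # toy, slowly-varying C^m_{l_a}
braid = 2 * np.sum(w * C) / (4 * np.pi)
ssc = C[0] / (4 * np.pi)  # same Psi factors stripped from both terms
print(braid / ssc)        # -> 2, as argued above
\end{verbatim}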
From current constraints, $C_{\ell}^m \propto 1/\ell$ in the cosmologically interesting domain; a power-counting argument then gives that the covariance ratio is $\mathcal{O}\left(\frac{\ln \ell}{\ell}\right)$. So in summary \ba \frac{\mathcal{C}_{\ell,\ell}^\mr{braid-clust}}{\mathcal{C}_{\ell,\ell}^\mr{SSC,corr}} &> 2 \quad \mr{if} \quad \ell \lesssim \ell_\mr{eq} \\ \frac{\mathcal{C}_{\ell,\ell}^\mr{braid-clust}}{\mathcal{C}_{\ell,\ell}^\mr{SSC,corr}} &= \mathcal{O}\left(1\right) \quad \mr{if} \quad C_{\ell}^m \sim C_{0}^m \\ \frac{\mathcal{C}_{\ell,\ell}^\mr{braid-clust}}{\mathcal{C}_{\ell,\ell}^\mr{SSC,corr}} &= \mathcal{O}\left(\frac{\ln \ell}{\ell}\right) \quad \mr{at \ high \ \ell} \ea Another regime where braiding is important is that of cross-redshifts: from Eq.~\ref{Eq:SSC-unified-wshot}, one sees that SSC vanishes for cross-spectra $i_z\neq j_z$ as a consequence of Limber's approximation; however, from Eq.~\ref{Eq:braiding-unified-clust}, one sees that the braiding covariance does not vanish in this regime. Hence braiding will be of importance for effects producing non-vanishing cross-spectra, for example when dropping Limber's approximation or when accounting for general relativistic effects \citep[e.g.][]{Cardona2016}. The next terms which may be important are the third order ones involved in the four-halo term, either the third order bias term (4h-b3) or the term from third order perturbation theory (4h-3PT). For these terms, it is simpler to argue at the level of the 3D trispectrum: both terms give a trispectrum $$ T \propto P(k_{\ell}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell') $$ whereas the SSC from 4h-2X2-sqz (beat-coupling BC-BC in the literature) comes from a trispectrum $$ T \propto P(k_{1+2}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) $$ where the prefactors of both trispectra are of the same order.\\ Thus, these terms will be more important than BC-BC in the range of multipoles where $P(k_{\ell}|z) \gtrsim P(k_{1+2}|z)$. In this full-sky derivation, $k_{1+2}$ becomes aliased into the monopole $\ell=0$, but in general $k_{1+2}$ is a super-survey mode, so for a general survey covering a fraction $f_\mr{SKY}$ of the sky, one gets the rule of thumb that these terms are going to be important for the multipoles where \ba\label{Eq:thumbrule-gtSSC} P(k_{\ell}) \gtrsim P(1 / f_\mr{SKY} \, r(z)) \ea This certainly happens for $\ell \lesssim \ell_\mr{eq}$ if the survey is large enough to see the matter-radiation equality scale. Interestingly, Eq.~\ref{Eq:thumbrule-gtSSC} is basically equivalent to the condition for braiding to be important with respect to SSC, although in the former case this was argued only on the diagonal $\ell=\ell'$, whereas here it suffices that one of the multipoles, either $\ell$ or $\ell'$, fulfills the condition. The penultimate terms come from the three-halo term, explicitly 3h-base, which has two contributions whose trispectra follow $$ T \propto P(k_\ell|z)^2 \ I_2^{\Sigma_2}(k_{\ell'},k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell')$$ and $$ T \propto P(k_\ell|z) \ P(k_{\ell'}|z) \ I_2^{\Sigma_2}(k_{\ell},k_{\ell'}|z)$$ whereas the SSC from 3h-diag-sqz (BC-HSV in the literature) comes from a trispectrum $$ T \propto P(k_{1+2}|z) \ P(k_{\ell'}|z) \ I_2^{1}(k_{\ell},k_{\ell}|z) \quad + \quad (\ell\leftrightarrow\ell')$$ where the prefactors of both trispectra are of the same order.\\ Thus, these terms will be more important than BC-HSV in the range of multipoles where $P(k_{\ell}|z) \gtrsim P(k_{1+2}|z)$.
Hence the condition Eq.~\ref{Eq:thumbrule-gtSSC} is again the one that rules the importance of these terms. Finally, shot-noise terms impact the measurements in a manner inversely proportional to the number of observed galaxies. More precisely, the impact on the signal-to-noise of $C_\ell^\mr{gal}$ is of order $\mathcal{O}\left(N_\ell/N_\mr{gal}\right)$, where $N_\ell$ is the number of multipoles considered. For future surveys, this effect can thus be expected to be well below the percent level, unless one targets high multipoles with thin redshift bins at the lowest and highest redshifts, where galaxy numbers decrease. In summary, most terms have a chance to be of importance if the survey considered probes scales comparable to or larger than the matter-radiation equality scale $k_\mr{eq}$, and shot-noise can be of importance for a spectroscopic survey targeting information on small scales. \section{Summary}\label{Sect:summary} This section summarises the covariance terms that should be considered in the simplest case where one uses Limber's approximation and shot-noise subtraction (Sect.~\ref{Sect:shot-subs}), as is usually done in current galaxy surveys, further considering only $n=0$ for angle-dependent kernels. The more general equations can be found in the main body of the text. This section can thus be considered by the busy reader as the reference summary containing the first order equations to be implemented numerically. \subsection{Notations and remarks} In order for this section to be self-contained, I recapitulate here the particular notations which are used in the covariance terms.\\ To begin with, as discussed in Sect.~\ref{Sect:methods}, I consider the power spectrum of the absolute fluctuations $\delta n_\mr{gal}(\xx)$ and not the relative fluctuations $\delta_\mr{gal}=\delta n_\mr{gal}/\nbargal$. One can convert my absolute power spectrum into a relative one by dividing by the factor $N_\mr{gal}(i_z) \ N_\mr{gal}(j_z)$, where $i_z$ and $j_z$ are the indices of the two redshift bins considered.\\ The power spectrum covariance is denoted for simplicity $$\mathcal{C}_{\ell,\ell'} \equiv \Cov\left(C_\ell^\mr{gal}(i_z,j_z),C_{\ell'}^\mr{gal}(k_z,l_z)\right)$$ and needs to be divided by a factor $$N_\mr{gal}(i_z) \ N_\mr{gal}(j_z) \ N_\mr{gal}(k_z) \ N_\mr{gal}(l_z)$$ if one wants relative fluctuations instead of absolute ones. Most importantly, I use the following definitions:\\ $k_\ell=(\ell+1/2)/r(z)$ is the comoving wavenumber given by Limber's approximation at multipole $\ell$ and redshift $z$, \ba \nonumber I_\mu^\beta(k_1,\cdots,k_\mu|z) \equiv \int \dd M \ & \frac{\dd n_h}{\dd M} \ \lbra N_\mr{gal}^{(\mu)}\rbra \ b_\beta(M,z) \\ & \times u(k_1|M,z) \cdots u(k_\mu|M,z) \ea is an integral that will appear frequently, \ba I_\mu^{\Sigma_2} & \equiv \sum_{X \in \{\mr{b2,s2,2PT}\}} K_X^0 \ I_\mu^X = \frac{17}{21} \ I_\mu^{1} + \frac{1}{2} \ I_\mu^{2} \ea is the sum of second order contributions, and \ba I_\mu^{\Sigma_3}(k|z) \equiv \frac{1023}{1701} \ I_\mu^{1}(k|z) + \frac{1}{3!} \ I_\mu^{3}(k|z) \ea is the sum of third order contributions.\\ The angular power spectrum of matter is \ba C_\ell^m(z_a,z_b)= \frac{2}{\pi} \int k^2\,\dd k \ P(k|z_{ab}) \ j_\ell(k r_a) \ j_\ell(k r_b) \ea and in full-sky the SSC and braiding kernels are respectively \ba \sigma^2(z_a,z_b) = \frac{C_0^m(z_a,z_b)}{4\pi} \ea \ba \mathcal{B}_{\ell,\ell'}(z_a,z_b) = \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \ C_{\ell_a}^m(z_a,z_b). \ea
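As an implementation aid, here is a minimal quadrature sketch of the recurring halo-model integral $I_\mu^\beta$. The mass function, HOD factorial moments, halo bias and profile are hypothetical placeholder callables to be supplied by the reader's own halo model; only the quadrature structure is prescribed by the definition above.
\begin{verbatim}
import numpy as np

def I_mu_beta(ks, z, M_grid, dndM, Ngal_mu, b_beta, u):
    """Trapezoidal evaluation of I_mu^beta(k_1,...,k_mu | z) over a halo
    mass grid. dndM, Ngal_mu, b_beta, u are placeholder callables for the
    mass function dn_h/dM, the mu-th factorial moment of the HOD, the
    beta-th order halo bias and the Fourier-space halo profile."""
    integrand = dndM(M_grid, z) * Ngal_mu(M_grid, z) * b_beta(M_grid, z)
    for k in ks:
        integrand = integrand * u(k, M_grid, z)
    return np.trapz(integrand, M_grid)
\end{verbatim}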
\subsection{Covariance terms} This subsection simply lists the different covariance contributions, ordered in terms of simplicity.\\ The first contribution is the one-halo term (Sect.~\ref{Sect:1halo}) \ba \mathcal{C}_{\ell,\ell'}^\mr{1h} = \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ I_4^0(k_{\ell},k_{\ell},k_{\ell'},k_{\ell'}|z) \ea then there is the two-halo 1+3 term (Sect.~\ref{Sect:2halo1+3}) \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{2h1+3} = \frac{\delta_{i_z,k_z,l_z}+\delta_{j_z,k_z,l_z}}{4\pi} & \int \dd V \ I_1^1(k_\ell|z) \ I_3^1(k_\ell,k_{\ell'},k_{\ell'}|z) \ P(k_\ell|z) \\ & + (\ell \leftrightarrow \ell') \ea the three-halo base term (Sect.~\ref{Sect:3halo}) \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{3h-base0} = \frac{\delta_{i_z,j_z,k_z,l_z}}{4\pi} \int & \dd V \ 2 \; \left(I_1^{1}(k_\ell|z) \ P(k_\ell|z)\right)^2 I_2^{\Sigma_2}(k_{\ell'},k_{\ell'}|z) \\ \nonumber & + \quad (\ell\leftrightarrow\ell') \\ \nonumber +\frac{4 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int & \dd V \ 2 \ I_1^{1}(k_\ell|z) \ I_1^{1}(k_{\ell'}|z) \ I_2^{\Sigma_2}(k_{\ell},k_{\ell'}|z) \\ & \times P(k_\ell|z) \ P(k_{\ell'}|z) \ea and the four-halo term from third order contributions (Sect.~\ref{Sect:4halo}) \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{4h-3} = \frac{2 \ \delta_{i_z,j_z,k_z,l_z}}{4\pi} \int \dd V \ 3! \ \left(I_1^1(k_{\ell}|z)\right)^2 \ I_1^1(k_{\ell'}|z) \ I_1^{\Sigma_3}(k_{\ell'}|z) \\ \times \ P(k_{\ell}|z) \ P(k_{\ell}|z) \ P(k_{\ell'}|z) \quad + \quad (\ell\leftrightarrow\ell') \ea Then there are groups of terms unifying contributions with different numbers of halos.\\ First is the super-sample covariance (Sect.~\ref{Sect:discu-SSC}) \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{SSC} = \delta_{i_z,j_z} \ \delta_{k_z,l_z} \int \dd V_{ab} \ \Psi_\ell^\mr{sqz,clust}(z_a) \ \Psi_{\ell'}^\mr{sqz,clust}(z_b) \ \sigma^2(z_a,z_b) \ea where $z_a \in i_z$, $z_b \in k_z$, and \ba \Psi_\ell^\mr{sqz,clust}(z) = 4 \ I_1^{\Sigma_2}(k_{\ell}|z) \ I_1^1(k_{\ell}|z) \ P(k_{\ell}|z) + I_2^1(k_{\ell},k_{\ell}|z) \ea Second is the braiding covariance (Sect.~\ref{Sect:discu-braiding}) \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{braid-clust} = \left(\delta_{i_z,k_z}\ \delta_{j_z,l_z}+\delta_{i_z,l_z}\ \delta_{j_z,k_z}\right) \int \dd V_{ab} \ \Psi^\mr{alt,clust}_{\ell,\ell'}(z_a) \\ \times \ \Psi^\mr{alt,clust}_{\ell,\ell'}(z_b) \ \mathcal{B}_{\ell,\ell'}(z_a,z_b) \ea where $z_a \in i_z$, $z_b \in j_z$, and \ba \Psi^\mr{alt,clust}_{\ell,\ell'}(z) = \Big[ 2 \ I_1^{\Sigma_2}(k_{\ell'}|z) \ I_1^1(k_{\ell}|z) \, P(k_{\ell}|z) + (\ell \leftrightarrow\ell') \Big] + I_2^1(k_{\ell},k_{\ell'}|z) \ea Finally, there are the shot-noise terms; the only ones surviving shot-noise subtraction (Sect.~\ref{Sect:shot-subs}) are the braiding ones (Sect.~\ref{Sect:discu-braiding}) \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot2g-alt} =& \left(\delta_{i_z,k_z} \ \delta_{j_z,l_z}+ \delta_{i_z,l_z} \ \delta_{j_z,k_z}\right) \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \\ & \times \ C_{\ell_a}^\mr{gal,clust}(i_z,j_z) \ea and \ba \nonumber \mathcal{C}_{\ell,\ell'}^\mr{shot3g-alt} =& \sum_{\ell_a} \frac{2\ell_a+1}{4\pi} \threeJz{\ell}{\ell'}{\ell_a}^2 \Big[ \delta_{i_z,k_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,l_z) \\ \nonumber & + \delta_{i_z,l_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(i_z,j_z,k_z) + \delta_{j_z,k_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(j_z,i_z,l_z) \\ & + \delta_{j_z,l_z} \ b_{\ell_a,\ell,\ell'}^\mr{gal,clust}(j_z,i_z,k_z) \Big] \ea
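As a usage illustration of these summary formulae, here is a hedged numerical sketch of the SSC term as a double redshift integral. $\Psi_\ell^\mr{sqz,clust}$, $\sigma^2$ and the comoving volume element are placeholder callables (vectorized over the redshift grids), and the redshift-bin Kronecker deltas are left implicit:
\begin{verbatim}
import numpy as np

def cov_SSC(l, lp, z_a, z_b, Psi_sqz, sigma2, dV):
    """C^SSC_{l,l'} = int dV_a dV_b Psi_l(z_a) Psi_l'(z_b) sigma^2(z_a,z_b),
    on redshift grids z_a (bin i_z = j_z) and z_b (bin k_z = l_z)."""
    Pa = Psi_sqz(l, z_a) * dV(z_a)            # integrand factor on z_a grid
    Pb = Psi_sqz(lp, z_b) * dV(z_b)           # integrand factor on z_b grid
    S = sigma2(z_a[:, None], z_b[None, :])    # sigma^2 on the product grid
    inner = np.trapz(S * Pb[None, :], z_b, axis=1)
    return np.trapz(inner * Pa, z_a)
\end{verbatim}
The braiding term is assembled in the same way, with $\sigma^2$ replaced by the kernel $\mathcal{B}_{\ell,\ell'}$ and $\Psi^\mr{sqz,clust}$ by $\Psi^\mr{alt,clust}$.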
The importance of all these covariance terms is partially discussed in Sect.~\ref{Sect:discu-importance} with analytical arguments. The actual importance for a galaxy survey will strongly depend on the survey specifications, galaxy selection, choice of data vector (e.g. redshift and scale cuts) and so on, and as such cannot easily be forecast analytically. Instead, numerical analyses must be carried out, which will be the subject of future works. I expect that at least several of these terms, if not most of them, will be of importance for cosmological constraints from future surveys such as Euclid and LSST. \section{Conclusion}\label{Sect:conclusion} I have carried out an exhaustive analytic derivation of all non-Gaussian covariance terms of the galaxy angular power spectrum $C_\ell^\mr{gal}$ when using the halo model at tree level. The calculation of the involved trispectrum is developed up to third order in both halo bias and standard perturbation theory, including non-local halo bias and all shot-noise terms. The projection of the trispectrum onto the angular covariance has been derived in all trispectrum cases, including complex cases with several dependences on the angles between wavenumbers, in two appendices (Appendices~\ref{App:2Dproj-trisp-angindep} \& \ref{App:2Dproj-trisp-angdep}), together with robustness checks of the formulae (Appendix~\ref{App:reductions}). These derivations, though not the original aim of the article, are standalone results that can be used to model the angular covariance of other signals or to alter the modelling framework, for example using a different flavour of perturbation theory. A wealth of non-Gaussian covariance terms has been found, providing a rigorous derivation of the already known super-sample covariance (SSC) in the angular case, and more importantly discovering several new terms. A whole new class of terms, which I dub braiding covariance, stems from the same physical effects that lead to SSC, but leads to different couplings between multipoles and redshift bins. Other terms (3h-base, third order contributions) exist furthermore, and I provide a unified treatment of shot-noise terms, including how they are affected by the popular habit of subtracting $1/\nbargal$ from the observed power spectrum. A clean executive summary is provided in Sect.~\ref{Sect:summary} in the simplest case where one uses Limber's approximation, shot-noise subtraction (Sect.~\ref{Sect:shot-subs}), and retains only $n=0$ terms for angle-dependent kernels. This section can serve as a reference for the minimal number of non-Gaussian terms to quantify numerically for galaxy surveys. The potential importance of the new non-Gaussian terms has been discussed with analytical arguments in Sect.~\ref{Sect:discu-importance}, in particular in comparison with super-sample covariance, as the latter has already been shown to have an impact on constraints from current surveys. It was found that some terms (braiding, 3h-base, 4h-3) should become comparable to, if not bigger than, SSC on scales comparable to the matter-radiation equality scale. Other terms (1h, 2h1+3) can become important for deep surveys with a high proportion of satellite galaxies; only a numerical calculation can decide precisely on their actual impact. Finally, shot-noise terms can become relevant when analysing small scales with spectroscopic (sparser) surveys, for example when constraining neutrinos or non-cold dark matter. Numerical codes computing SSC and the one-halo term already exist and have been used.
The one by the author, for instance, can compute the covariance on a standard laptop in a matter of seconds to tens of seconds, depending on the number of multipoles. Including the new non-Gaussian terms presented here should prove very feasible and will be studied in future works. It is expected that this inclusion will not alter the order of magnitude of the speed of the calculation. Hence the analytical approach to covariances will remain feasible, and in fact the most competitive, for current and future surveys. \section*{Acknowledgements} \vspace{0.2cm} I thank Ruth Durrer, Vittorio Tansella and Alexandre Barreira for helpful discussions, Pierre Fleury for help with 9J symbols, and Elena Sellentin and Martin Kunz for proofreading and suggestions that improved this article.\\ I acknowledge support by the Swiss National Science Foundation. \bibliographystyle{aa}
\section{Introduction and main results} Schramm-Loewner Evolution (SLE) and random matrix theory (RMT) are two active and well-studied fields of research within modern probability theory \cite{lawler2008conformally, akemann2011oxford}. The SLE was introduced by Oded Schramm in 2000 in his study of scaling limits of various discrete processes \cite{schramm2000scaling}. RMT appeared earlier, in the statistical work of Wishart \cite{wishart1928generalised} and the pioneering physics of Wigner \cite{wigner1955}. Both SLE and RMT have been thriving areas of mathematical research since their advent. When studying SLE theory, one introduces the notion of compact hulls, which are compact sets with simply connected complements in the upper half-plane. If $K_t$ is a growing family of hulls parameterized by $t \in [0, T]$ and the growth is local in a suitable sense, then it is known that $g_t:= g_{K_t}$ obeys the Loewner differential equation \[ \partial_t g_t(z) = \frac{2}{g_t(z) - W_t} \] where $W_t$ is referred to as the driving function and captures the local growth of $K_t$. SLE curves are the random curves corresponding to the maps $g_t$ when the driving function is a constant multiple of Brownian motion, which we denote by $\sqrt{\kappa}B_t$, for $\kappa \geq 0$. With probability one, $g_t$ is continuous up to the boundary and the limit $$ \gamma(t)=\lim_{y\to 0} g_t^{-1}(\sqrt{\kappa}B_t+iy) $$ exists and is continuous in time, by the Rohde-Schramm Theorem \cite{RohdeSchramm}. The curve $\gamma(t)$ is called the SLE trace. Also, it can be shown that with probability one, $g_t$ is a continuous family of conformal maps from $H_t$ to $\mathbb{H}$, where $H_t$ is the unbounded component of the complement in $\mathbb{H}$ of $\gamma(t),$ for $t \in [0, T]$ \cite{RohdeSchramm}. Moreover, the nature of the curve changes as $\kappa$ increases: it is a.s.\ simple for $\kappa \in [0,4]$, a.s.\ has double points for $\kappa \in (4,8)$, and is a.s.\ space-filling for $\kappa \geq 8$. For different parameters $\kappa$, SLE models the scaling limits of an astoundingly diverse set of discrete models. For instance, it was proved in \cite{Lawlerschrammwerner} that the loop-erased random walk (with the loops erased in chronological order) converges in the scaling limit to $\text{SLE}_{\kappa}$ with $\kappa = 2$. Moreover, other two-dimensional discrete models from statistical mechanics, including Ising model cluster boundaries, Gaussian free field interfaces, percolation on the triangular lattice at critical probability, and uniform spanning trees, were proved to converge in the scaling limit to SLE for the values $\kappa=3$, $\kappa=4$, $\kappa=6$ and $\kappa=8$ respectively, in the series of works \cite{Stasising}, \cite{SLEGFF}, \cite{Smirnovpercolation} and \cite{Lawlerschrammwerner}. One can consider more generally the Loewner equation driven by a time-dependent real-valued measure $\mu_{t}$ $$ \frac{\partial}{\partial t} g_{t}(z)=\int_{\mathbb{R}} \frac{\mu_{t}(d x)}{g_{t}(z)-x}, \quad g_{0}(z)=z. $$ When the driving measure $\mu_{t}$ is twice a Dirac mass at location $\sqrt{\kappa}B_t$, we recover the previous SLE maps. In the case $\mu_{t}=\sum_{i=1}^{N} \omega_{i}(t) \delta_{U_{i}(t)}$, for some non-intersecting continuous functions $U_{i}(t) \in \mathbb{R}$ (called driving functions) and weights $\omega_{i}(t) \in \mathbb{R}^{+}$, we obtain the multi-slit Loewner equation with driving functions $U_i(t)$, $i=1,\ldots,N$.
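As an illustration (our own sketch, not part of the results), the following minimal code Euler-steps the multi-slit Loewner equation with equal weights $1/N$ and the mass-2 normalisation used below, for one point $z$ of the upper half-plane; the driver array is an assumed input.
\begin{verbatim}
import numpy as np

def loewner_map(z, drivers, dt):
    """Euler scheme for dg/dt = (1/N) sum_i 2/(g - U_i(t)), g_0 = z.
    drivers: array of shape (n_steps, N); row m holds U_1(t_m),...,U_N(t_m)."""
    g = complex(z)
    for U in drivers:
        g += dt * np.mean(2.0 / (g - U))
        if g.imag <= 1e-9:   # the point has been swallowed by the hull
            return None
    return g
\end{verbatim}
Feeding it driver paths such as the Dyson Brownian motion simulated in the sketch after the next paragraph gives a crude picture of the growth of the multiple SLE hull.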
In this work, we consider the case $\omega_{i}(t)=1/N$ for all $t \in [0, T].$ For a real parameter $\beta >0$, Dyson Brownian motion (DBM) is defined by the following system of $N$ equations \begin{equation}\label{dbm} d\lambda^{(i)}_t=\frac{2}{\sqrt{N\beta}}dB_t^{(i)}+\frac{2}{N}\sum_{j \neq i}\frac{dt}{\lambda_t^{(i)}-\lambda_t^{(j)}}, \end{equation} for $i=1,2,...,N$. Due to its connections with other fields, an important Loewner equation is the multiple SLE with DBM as a driver. The multiple SLE maps that are obtained when the driving measure is an empirical measure on $N$ DBM particles are denoted in this paper by $g_t^N(z).$ This model was introduced by Cardy in \cite{cardy2003stochastic}, and studied further by Lawler and Healey in \cite{lawlernew}, in connection with the quantum Calogero-Sutherland model and Conformal Field Theory. More works on the connection between multiple SLE and CFT can be found in \cite{LenVik} and \cite{MultipleSLEcft}. In the case of $N=2$ curves, perturbations of this model in the parameter $\beta$ have been studied in \cite{JVperturbation}. We note that the parameters $\beta$ in the DBM model and $\kappa$ in SLE theory are related via $\beta=8/\kappa$. We refer to the multiple SLE model with Dyson Brownian motion as a driver as the simultaneously growing multiple SLE model. There is also a version of the multiple SLE with non-simultaneous growth that has received a lot of attention in previous years. There have been several results on the multiple SLE model in both the upper half-plane and the unit disk versions \cite{Katoriwelding, LenVik, PeltolaWu, BefPeltolaWu, HotaS, hydrodinamiclimit, delMonacoetc, delMonaco2, dubedat, peltolakyto, oliversc, zh1, zh2, multipleSLE0}. In \cite{delMS2016}, the authors consider the $N\rightarrow\infty$ limit of multiple SLE driven by DBM. In particular, they show that if the empirical measure of the initial positions converges to a probability measure $\mu_0$, then $g_t^N$ converges in distribution, with respect to locally uniform convergence, to $g_t^\infty$ solving \begin{equation}\label{eq:A:g infty def} \frac{\partial}{\partial t}g_t^\infty(z)=M_t^\infty\left(g_t^\infty(z)\right),\qquad g_0^\infty(z)=z, \end{equation} where $M_t^\infty$ is a solution to the complex Burgers equation \begin{equation}\label{eq:A:Complex Burgers Equation} \begin{cases} \frac{\partial M_t^\infty(z)}{\partial t}=-2M_t^\infty(z)\frac{\partial M_t^\infty(z)}{\partial z},\ t>0,\\ M_0^\infty(z)=\int_\mathbb{R}\frac{2}{z-x}d\mu_0(x). \end{cases} \end{equation} Their result serves as the multiple SLE analog of Wigner's famous semicircle law in random matrix theory. We consider this model and obtain more refined information by providing an order of convergence, in a weaker version of Carath\'eodory-type convergence. We aim in future works to study the full Carath\'eodory convergence by strengthening the estimates as we approach the multiple SLE hull. In this work, we combine elements of the proofs of local laws in random matrix theory, such as resolvent techniques, with elements of SLE theory. In other words, we apply modern techniques from random matrix theory to the analysis of SLE. Local laws have been a very important research direction in random matrix theory in recent years (see \cite{Antti}, \cite{13}, \cite{14}, \cite{15}, \cite{16}, \cite{17}, \cite{18}, \cite{19}, \cite{20}, \cite{21}, \cite{22}, \cite{23}, \cite{VladRMT}, \cite{SeanVu}, \cite{32}, \cite{33}, \cite{34}, \cite{35}, \cite{36} for a non-exhaustive list).
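For intuition, Eq.~\eqref{dbm} can be discretised by a plain Euler scheme. This is only a hedged illustration, not a scheme used in the proofs: Euler steps can misbehave near particle collisions, and the slight initial spread off the origin is a numerical convenience.
\begin{verbatim}
import numpy as np

def dyson_bm(N, beta, T, n_steps, seed=0):
    """Euler discretisation of the DBM system: d lambda_i =
    2/sqrt(N beta) dB_i + (2/N) sum_{j != i} dt/(lambda_i - lambda_j)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    lam = 1e-8 * np.arange(N, dtype=float)  # spread slightly off the origin
    path = np.empty((n_steps, N))
    for m in range(n_steps):
        diff = lam[:, None] - lam[None, :]
        np.fill_diagonal(diff, np.inf)      # drop the j = i term
        drift = (2.0 / N) * (1.0 / diff).sum(axis=1)
        noise = (2.0 / np.sqrt(N * beta)) * np.sqrt(dt) * rng.standard_normal(N)
        lam = lam + drift * dt + noise
        path[m] = np.sort(lam)
    return path
\end{verbatim}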
They are one of the fundamental ingredients in proving the universality of Wigner ensembles in random matrix theory (see \cite{erdos2017dynamical}). Given the outstanding developments in the proofs of universality in RMT using the analysis of the DBM, the interaction between multiple SLE and random matrices will provide many avenues to explore. The approach in the current work represents one of the possible directions of exploration between these two major fields of probability theory. In a different direction, which we aim to explore in the future, one can study the geometry of the multiple SLE curves using the analysis of the Dyson Brownian motion drivers, as well as good approximation schemes for the model (see, for example, \cite{Jamespaper}, \cite{tranconvergence}, \cite{NVscheme} in the one-curve case). Yet another possibility is to study the continuity of the multiple SLE model in the parameter $\beta$, motivated by the great interest and progress throughout the years on this yet unresolved conjecture in the one-curve case (see \cite{BLM}, \cite{AtulVladpaper}, \cite{FrizYuan}, \cite{SLEcont}). In addition, the fact that the multiple SLE curves grow from the positions of the drivers, along with some knowledge about the structure of the drivers, gives the possibility of defining new observables with which to study the convergence of discrete models to the multiple SLE. Examples of such observables include the statistics of the $k$-th smallest distance between drivers, for $k \geq 1$ (see \cite{BAB}), or the probability of having no drivers in a symmetric region about the origin (see \cite{Noeigen} for $\beta=2$). Although our result can be obtained for general bounded initial conditions, we state it in the case in which all the Dyson Brownian motion particles start from the origin. We prefer this choice for the simplicity of the notation and exposition. \begin{theorem}\label{thm: Main Result} Let $\beta=1$ or $\beta=2$, and let us consider Dyson Brownian motion beginning at the origin. Let $K_T$ be the multiple SLE hull at time $T>0.$ Then, for any $\varepsilon >0$, for the multiple SLE maps for $N$ curves, we have that $$\sup_{t \in [0,T],\ z \in G}|g^N_t(z)-g^{\infty}_t(z)|=O\left(\frac{1}{N^{1/3-\varepsilon}}\right),$$ with overwhelming probability\footnote{An event $E$ holds with overwhelming probability if, for every $p > 0$, $\mathbb{P}(E) \geq 1 - O_p(n^{-p})$; see Definition \ref{def:events} for details.}, for a suitable compact set $G \subset \mathbb{H}\setminus K_T$ (made explicit in Section 2). \end{theorem} \begin{remark} It is well-known that for the special values of the parameters $\beta=1$, $\beta=2$ and $\beta=4$, the statistics of the Dyson Brownian motion particles can be understood using matrices, as these values correspond to the well-studied Gaussian Orthogonal Ensemble (GOE), Gaussian Unitary Ensemble (GUE), and Gaussian Symplectic Ensemble (GSE) respectively. An $n \times n$ real symmetric matrix $A$ is drawn from the Gaussian Orthogonal Ensemble if the upper-triangular entries $A_{ij}$, $1 \leq i \leq j \leq n$, are independent Gaussian random variables, where $A_{ij}$ has mean zero and variance $\frac{1 + \delta_{ij}}{n}$ and $\delta_{ij}$ is the Kronecker delta. The GUE and GSE ensembles are defined similarly, with complex and quaternionic Gaussian off-diagonal entries respectively. We study the cases $\beta=1$ and $\beta=2$ as they correspond to the critical parameters $\kappa=8$ and $\kappa=4$ in SLE theory, respectively.
We expect that a similar analysis will hold for the case $\beta=4$, which corresponds to the value $\kappa=2$. We note that the $N^{-(1/3-\varepsilon)}$ order of convergence to the hydrodynamic limit of multiple SLE is obtained via an estimate from \cite{SeanVu}, which is, to the best of our knowledge, the best stability estimate in this setting available in the literature. \end{remark} \vspace{2mm} Theorem \ref{thm: Main Result} relies on the following technical result. \begin{theorem}\label{thm:Local law theorem} Let $\beta=1$ or $\beta=2$, and let us consider Dyson Brownian motion started from the origin, $\left(\lambda_t^{(1)},\dots,\lambda_t^{(N)}\right)$, and $M_t^N:\mathbb{C}_+\rightarrow \mathbb{C}_-$ defined by \begin{equation*} M_t^N(z)=\frac{1}{N}\sum_{j=1}^N\frac{2}{z-\lambda_t^{(j)}}. \end{equation*} Let $M_t^\infty:\mathbb{C}_+\rightarrow \mathbb{C}_-$ be the solution to the complex Burgers equation \begin{equation}\label{eq:Complex Burgers Equation} \begin{cases} \frac{\partial M_t^\infty(z)}{\partial t}=-2M_t^\infty(z)\frac{\partial M_t^\infty(z)}{\partial z},\ t>0,\\ M_0^\infty(z)=\frac{2}{z}. \end{cases} \end{equation} Then for any compact set $G\subset \mathbb{C}_+$, $\varepsilon > 0$, and fixed $t\in [0,T]$, \begin{equation} \sup_{z\in G}\left|M_t^N(z)-M_t^\infty(z) \right|=O_{G,\varepsilon}\left( \frac{t}{N^{\frac{1}{3}-\varepsilon}} \right), \end{equation} with overwhelming probability. \end{theorem} \vspace{2mm} The remainder of the paper is organized into several sections. In the second section, we present probabilistic estimates involving the multiple SLE hull and subsets of its complement. The third section focuses on the random matrix techniques we use, as well as on the proof of Theorem 1.3. In Subsection 3.4, we utilize a net argument that extends the previously obtained results from a fixed time $t \in [0, T]$ to all times simultaneously. In Section 4, we prove Theorem 1.1, and in the Appendix we provide the stability part of the argument. \vspace{5mm} \section{Subset of the complement of the multiple SLE hull} \vspace{5mm} In this section, we provide probabilistic estimates, valid for general $\beta \geq 1$, that are useful in deducing the choice of the set $G \subset \mathbb{H}\setminus K_T$ on which we establish the order of convergence of the family of maps. We specialize to the $\beta=1$ and $\beta=2$ cases in our application.
Let $\partial_t g_t(z)=\frac{1}{N} \sum_{i=1}^N \frac{2}{g_t(z)-\lambda^i_t},$ where $(\lambda_t^{(1)}, \cdots, \lambda_t^{(N)})$ is a Dyson Brownian motion (DBM) with parameter $\beta \geq 1.$ We first consider $\lambda^i_t \equiv 0$ for all $t \in[0, T]$ and all $i \in \{ 1, 2, \cdots, N\}.$ Then we have that $\partial_t g_t(z)=\frac{2N}{N g_t(z)}=\frac{2}{g_t(z)}.$ Since $g_t(z)=\text{Re}(g_t(z))+i\text{Im}(g_t(z)),$ we have that $$\partial_t \text{Im}(g_t(z))=\frac{-2 \text{Im}( g_t(z))}{\left|g_{t}(z)\right|^2} \geq \frac{-2}{\text{Im}(g_t(z))}.$$ This allows us to conclude that $$\text{Im}\left(g_t(z)\right)^2 \geqslant\left(\text{Im}(z)\right)^2-4 t>0,$$ whenever $\text{Im}(z)>2\sqrt{T}.$ In order to control the real part, for a Dyson Brownian motion $(\lambda_t^{(1)}, \cdots, \lambda_t^{(N)})$ with parameter $\beta \geq 1$, we observe that $$\partial_t \text{Re}(g_t(z))=\frac{1}{N}\sum_{i=1}^N \frac{2\left(\text{Re}\left(g_t(z)\right)-\lambda_t^i\right)}{\left|g_t(z)-\lambda^i_t\right|^2}>0,$$ whenever $\text{Re}(g_t(z))>M=\sup _{t \in[0, T]}\sup_{i \in \{1, 2, \ldots, N\}}\left|\lambda_t^i\right|.$ Then, combining the two estimates, we have that $$\{z \in \mathbb{H} : |\text{Re}(z)|>M \hspace{1mm} \text{or} \hspace{1mm} \text{Im}(z) > 2\sqrt{T} \} \subset \mathbb{H}\setminus K_T.$$ We also note that for all $t \in [0, T]$, we have $$K_t \subset\{z \in \overline{\mathbb{H}}:|\operatorname{Re} z| \leq M \hspace{1mm} \text{and} \hspace{1mm} \operatorname{Im} z \leq 2 \sqrt{T}\}.$$ Next, we use the following probabilistic result on the behaviour of the extreme eigenvalues. \begin{lemma}[Lemma 4.3.17 in \cite{Zeit}]\label{Zeitlemma} Let $\lambda_N^*(t):=\max _{1 \leq i \leq N}\left|\lambda^{(i)}_t\right|=\max \left(\lambda_t^{(N)},-\lambda_t^{(1)}\right).$ Let $\beta \geq 1$. Then there exist finite constants $\alpha=\alpha(\beta)>0, C=C(\beta)$, and for all $t \geq 0$ a random variable $\eta_N^*(t)$ with law independent of $t$, such that $$ P\left(\eta_N^*(t) \geq x+C\right) \leq e^{-\alpha N x} $$ and, for all $t \geq 0$, $$ \lambda_N^*(t) \leq \lambda_N^*(0)+\sqrt{t} \eta_N^*(t). $$ \end{lemma} In the case of the DBM drivers, using Lemma \ref{Zeitlemma} together with the fact that the particles start at the origin, we have, for $\beta \geq 1$ and some finite constants $C=C(\beta)$ and $\alpha=\alpha(\beta)$, that $$\mathbb{P}\left( \sup _{t \in[0, T]}\sup_{i \in \{1, 2, \ldots, N\}}\left|\lambda_t^i\right| \leq (C+x)\sqrt{T}\right) \geq 1-e^{-\alpha N x}.$$ For conformal maps, we have the following result. \begin{lemma}[Lemma $4.5$ in \cite{Kemp}] Let $K$ be a hull and $H=\mathbb{H} \backslash K$. If $K \subset B\left(x_0, r\right)$, then $g_K$ maps $H \cap B\left(x_0, 2 r\right)$ into $B\left(x_0, 3 r\right)$ and $$\sup _{z \in H}\left|g_K(z)-z\right| \leq 5 r.$$ \end{lemma} For a box $G \subset H_T=\mathbb{H}\setminus K_T$, we have with overwhelming probability that \begin{equation}\label{eq:A:set containment} g^N_t(G) \subset \{z: \sqrt{(\text{Im}(z_0))^2-4 t} \leq \text{Im}(z) \leq \text{Im}(z_0); \hspace{2mm} |\text{Re}(z)| \leq f(N, T)\}, \end{equation} where $f(N,T)$ can be deduced from the following: \begin{equation} |\text{Re}\, g_K(z)| \leq |g_K(z)| \leq |z|+5r. \end{equation} In the case of the multiple SLE hull $K_T$, we have $r=\sqrt{M^2+(2\sqrt{T})^2}$. \section{Random Matrix Techniques} In this section we prove some random matrix results leading to the proof of Theorem \ref{thm:Local law theorem}.
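Before turning to the proofs, we record a hedged numerical illustration of Theorem \ref{thm:Local law theorem} (our own sketch; it plays no role in the arguments below). Solving \eqref{eq:Complex Burgers Equation} along characteristics gives the implicit relation $M_t^\infty(z)=M_0^\infty\left(z-2tM_t^\infty(z)\right)$, hence $2t\left(M_t^\infty\right)^2-zM_t^\infty+2=0$ and $M_t^\infty(z)=\frac{z-\sqrt{z^2-16t}}{4t}$, with the square root branch chosen so that $M_t^\infty(z)\to 0$ as $z\to\infty$. Combined with the matrix representation recalled at the beginning of the next subsection (for $\beta=1$, DBM started at the origin has, at time $t$, the law of the spectrum of $2\sqrt{t}A$ with $A$ drawn from the GOE), this yields the following check:
\begin{verbatim}
import numpy as np

def goe_spectrum(N, rng):
    # GOE normalisation: off-diagonal variance 1/N, diagonal variance 2/N
    X = rng.standard_normal((N, N)) / np.sqrt(2 * N)
    return np.linalg.eigvalsh(X + X.T)

def M_N(z, lam):   # empirical transform (1/N) sum_j 2/(z - lambda_j)
    return np.mean(2.0 / (z - lam))

def M_inf(z, t):   # closed-form Burgers solution; branch with M -> 0 at infinity
    s = np.sqrt(z - 4 * np.sqrt(t)) * np.sqrt(z + 4 * np.sqrt(t))
    return (z - s) / (4 * t)

rng = np.random.default_rng(1)
t, z = 0.25, 0.5 + 0.8j
for N in (100, 1000, 4000):
    lam = 2 * np.sqrt(t) * goe_spectrum(N, rng)  # beta=1 DBM from 0 at time t
    print(N, abs(M_N(z, lam) - M_inf(z, t)))     # error decays as N grows
\end{verbatim}
The printed errors decay in $N$, consistent with the $O_{G,\varepsilon}\left(t/N^{1/3-\varepsilon}\right)$ bound; for a single fixed $z$ away from the support, the observed decay is typically even faster.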
It is worth noting that for $\beta=1$ and $\beta=2$, DBM $\left(\lambda_t^{(1)},\dots,\lambda_t^{(N)}\right)$, defined as the solution to \eqref{dbm} starting from initial positions $\left( \lambda_0^{(1)},\dots, \lambda_0^{(N)} \right)$, is equal in distribution to the eigenvalue process of $D-2\sqrt{t}A$, where $D$ is the $N\times N$ diagonal matrix of the initial positions and $A$ is a matrix drawn from the Gaussian Orthogonal Ensemble for $\beta=1$ or the Gaussian Unitary Ensemble (GUE) for $\beta=2$. We establish the results in this section for the case when $A$ is drawn from the GOE, since the adjustments to the GUE model are straightforward. \subsection{Tools} This section introduces the tools we will use throughout. We begin with a definition describing high probability events. \begin{definition}[High probability events] \label{def:events} Let $E$ be an event that depends on $n$. \begin{itemize} \item $E$ holds \emph{asymptotically almost surely} if $\mathbb{P}(E) = 1 - o(1)$. \item $E$ holds \emph{with high probability} if $\mathbb{P}(E) = 1 - O(n^{-c})$ for some constant $c > 0$. \item $E$ holds \emph{with overwhelming probability} if, for every $p > 0$, $\mathbb{P}(E) \geq 1 - O_p(n^{-p})$. \end{itemize} \end{definition} For $z = E + i \eta \in \mathbb{C}_+$, an $n\times n$ Hermitian matrix $H$, and $G(z):=\left(H-zI\right)^{-1}$, the \emph{Ward identity} states that \begin{equation} \label{eq:ward} \sum_{j = 1}^n \left| G_{ij}(z) \right|^2 = \frac{1}{ \eta} \Im G_{ii}(z). \end{equation} If $A$ and $B$ are invertible matrices, the \emph{resolvent identity} states that \begin{equation} \label{eq:resolvent} A^{-1} - B^{-1} = A^{-1} (B - A) B^{-1} = B^{-1} (B - A) A^{-1}. \end{equation} If $\xi$ is a Gaussian random variable with mean zero and variance $\sigma^2$ and $f: \mathbb{R} \to \mathbb{C}$ is continuously differentiable, the \emph{Gaussian integration by parts formula} states that \begin{equation} \label{eq:ibp} \mathbb{E}[ \xi f(\xi)] = \sigma^2 \mathbb{E}[ f'(\xi) ], \end{equation} provided the expectations are finite. The next lemma is a convenient moment bound for a martingale difference sequence. \begin{lemma} [Lemma 2.12 from \cite{kyle8}] \label{klem:burkholder2} Let $\{X_k\}$ be a complex martingale difference sequence and $\mathcal{F}_k = \sigma(X_1, \dots, X_k)$ be the $\sigma$-algebra generated by $X_1, \dots, X_k$. Then, for any $p \geq 2$, \[ \mathbb{E} \left|\sum_{k=1}^n X_k \right|^p \leq C_p \left(\mathbb{E}\left( \sum_{k=1}^n \mathbb{E}_{k-1} |X_k|^2 \right)^{p/2} + \mathbb{E} \sum_{k=1}^n |X_k|^p \right), \] where $C_{p}$ is a constant that only depends on $p$ and $\mathbb{E}_{k-1}[\cdot] := \mathbb{E}[\cdot | \mathcal{F}_{k-1}]$. \end{lemma} The next concentration lemma is helpful in controlling the deviation of a quadratic form from its expectation. \begin{lemma} [Equation (3) from \cite{kyle2}] \label{klem:quadraticform} Let $X$ be an $n$-vector containing iid standard Gaussian random variables, $A$ a deterministic $n \times n$ matrix, and $\ell \geq 1$ an integer. Then \[ \mathbb{E}\left|X^* A X - \tr A\right|^{2 \ell} \leq C_\ell (\tr A A^* )^\ell \] where $C_\ell$ is a constant that only depends on $\ell$. \end{lemma} Finally, we will require the following algebraic identity in Section \ref{Concentration of Stieltjes transform}. \begin{lemma} [Theorem A.5 from \cite{kyle8}] \label{klem:tracedifference} Let $A$ be an $n \times n$ symmetric matrix and $A_k$ be the $k$-th major submatrix of size $(n-1) \times (n-1)$.
If $A$ and $A_k$ are both invertible, then \[ \tr( A^{-1}) - \tr(A_k^{-1}) = \frac{1+ \alpha_k^* A_k^{-2} \alpha_k}{A_{kk} - \alpha_k^* A_k^{-1} \alpha_k} \] where $\alpha_k$ is obtained from the $k$-th column of $A$ by deleting the $k$-th entry. \end{lemma} \subsection{Concentration of the Gaussian Orthogonal Ensemble}\label{Concentration of Stieltjes transform} In this section we show that $|M_{t}^N(z) - \mathbb{E} M_{t}^N(z)|$ is small for a fixed $z \in \mathbb{C}_+$. To match the random matrix literature, we will consider, for fixed $t>0$, $m_N(z):=-\frac{1}{2}M_t^N(z)$. We let $A_t$ be $\sqrt{t} A$, where $A$ is drawn from the Gaussian Orthogonal Ensemble. We note that $m_N(z) - \mathbb{E} m_N(z)$ can be written as the following telescopic sum \[ m_N(z) - \mathbb{E} m_N(z) = \sum_{k=1}^N \left( \mathbb{E}_{k} m_N(z) - \mathbb{E}_{k-1} m_N(z) \right) := \sum_{k=1}^N \gamma_k. \] Observe that \[ m_N(z) = \frac{1}{N} \tr (A_t - z)^{-1} = \frac{1}{N} \tr \frac{1}{\sqrt{t} A - z} =\frac{1}{N \sqrt{t}} \tr \frac{1}{ A - z/\sqrt{t}} = \frac{1}{N \sqrt{t}} \tr \frac{1}{ A - z'}. \] We define $E' = E/\sqrt{t}$ and $\eta' = \eta/\sqrt{t}$, so that $z' = E' + i \eta'$. Let $\mathbb{E}_k$ denote the conditional expectation with respect to the $\sigma$-field generated by $A_{ij}$ with $i,j \leq k$, so that $\mathbb{E}_N m_N(z) = m_N(z)$ and $\mathbb{E}_0 m_N(z) = \mathbb{E} m_N(z)$. Let $a_k$ denote the $k$-th row of $A$ with the $k$-th entry removed, let $A_k$ be the $k$-th major submatrix of $A$, and set $G_k := (A_k - z')^{-1}$; we write $\mathbb{E}_{a_k}$ for the expectation over the entries of $a_k$ only, so that $\mathbb{E}_{a_k} a_k^* G_k a_k = \frac{1}{N} \tr G_k$ and $\mathbb{E}_{a_k} a_k^* G_k^{2} a_k = \frac{1}{N} \tr G_k^{2}$. Using that $A_k$ is independent of the $k$-th row and column of $A$ for the second equality, and Lemma \ref{klem:tracedifference} for the third, \begin{align*} \gamma_k &= \frac{1}{N \sqrt{t}}\left(\mathbb{E}_{k} \tr ( A -z')^{-1} - \mathbb{E}_{k-1} \tr (A -z')^{-1} \right) \\ &= \frac{1}{N \sqrt{t} } \Big(\mathbb{E}_{k} \big[ \tr ( A -z')^{-1} - \tr( A_k - z')^{-1} \big] - \mathbb{E}_{k-1} \big[ \tr ( A-z')^{-1} - \tr ( A_k - z')^{-1} \big] \Big) \\ &= \frac{1}{N \sqrt{t}} (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \Bigg(\frac{a_k^* G_k^{2} a_k - \mathbb{E}_{a_k} a_k^* G_k^{2} a_k}{A_{kk} - z' - a_k^* G_k a_k} + \frac{1 + \mathbb{E}_{a_k} a_k^* G_k^{2} a_k }{A_{kk} - z' - a_k^* G_k a_k} \\ &\hspace{7cm}- \frac{1 + \mathbb{E}_{a_k} a_k^* G_k^{2} a_k}{A_{kk} - z' - \mathbb{E}_{a_k} a_k^* G_k a_k} \Bigg) \\ &= \frac{1}{N \sqrt{t}} (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \Bigg(\frac{a_k^* G_k^{2} a_k - \mathbb{E}_{a_k} a_k^* G_k^{2} a_k}{A_{kk} - z' - a_k^* G_k a_k} \\ &\hspace{4cm} - \frac{(1 + \mathbb{E}_{a_k}a_k^* G_k^{2} a_k) (a_k^* G_k a_k - \mathbb{E}_{a_k} a_k^* G_k a_k)}{(A_{kk} - z' - a_k^* G_k a_k)(A_{kk} - z' - \mathbb{E}_{a_k} a_k^* G_k a_k)} \Bigg) \\ &= \frac{1}{N \sqrt{t}} (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \Bigg(\frac{a_k^* G_k^{2} a_k - \frac{1}{N} \tr G_k^{2} }{A_{kk} - z' - a_k^* G_k a_k} \\ &\hspace{4cm} - \frac{(1 + \frac{1}{N} \tr G_k^{2} ) (a_k^* G_k a_k - \frac{1}{N}\tr G_k )}{(A_{kk} - z' - a_k^* G_k a_k)(A_{kk} - z' - \frac{1}{N} \tr G_k )}\Bigg). \end{align*} We define the following quantities, \[ \alpha_k = a_k^* G_k^{2} a_k - \frac{1}{N} \tr G_k^{2}, \] \[ \beta_k = \frac{1}{ A_{kk} - z' - a_k^* G_k a_k}, \quad \bar{\beta}_k = \frac{1}{A_{kk} - z' - \frac{1}{N} \tr G_k}, \] \[ \delta_k = a_k^* G_k a_k - \frac{1}{N}\tr G_k, \quad \epsilon_k = 1 + \frac{1}{N} \tr G_k^{2} , \] so that \begin{align} \label{eq:concentration} m_N(z) - \mathbb{E} m_N(z) &= \frac{1}{N \sqrt{t} } \sum_{k=1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \alpha_k \beta_k - \frac{1}{N \sqrt{t}} \sum_{k=1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \epsilon_k \delta_k \beta_k \bar{\beta}_k \nonumber \\ &:= \frac{1}{\sqrt{t}} S_1 - \frac{1}{\sqrt{t}} S_2.
\end{align} For a fixed $\varepsilon > 0$, we will show that $N^{1-\varepsilon} (\eta')^3 |S_1| = o(1)$ and $N^{1 - \varepsilon} (\eta')^3 |S_2| = o(1)$ with overwhelming probability. This will be done via the method of moments. We begin with $S_1$. By Markov's inequality, it suffices to bound $\mathbb{E}| N^{1- \varepsilon} (\eta')^3 S_1|^{2 \ell} = \mathbb{E} | N^{-\varepsilon} (\eta')^3\sum_{k =1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \alpha_k \beta_k|^{2 \ell}$ for $\ell \in \mathbb{N}$. By Lemma \ref{klem:burkholder2}, for any $\ell \geq 1$, \begin{align*} \mathbb{E} |N^{-\varepsilon} (\eta')^3 \sum_{k =1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \alpha_k \beta_k|^{2 \ell} &\leq C_\ell \Bigg( \mathbb{E} \left(\sum_{k=1}^N \mathbb{E}_{k-1} |N^{-\varepsilon} (\eta')^3 \alpha_k \beta_k|^2 \right)^{\ell} \\ &\quad \quad + \sum_{k=1}^N \mathbb{E} |N^{-\varepsilon} (\eta')^3 \alpha_k \beta_k|^{2 \ell} \Bigg). \end{align*} We use $C_\ell$ to indicate a constant that only depends on $\ell$, but may change from line to line. Since $\Im a^*_k G_k a_k > 0$, \[ |\beta_k| \leq (\eta')^{-1}. \] Therefore, \begin{align} \label{keq:S1} \mathbb{E} \left|N^{- \varepsilon}(\eta')^3 \sum_{k=1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \alpha_k \beta_k \right|^{2 \ell} &\leq C_\ell N^{-2 \varepsilon \ell} \Bigg( \mathbb{E} \left(\sum_{k=1}^N \mathbb{E}_{k-1} |(\eta')^2 \alpha_k |^2 \right)^{\ell} \nonumber \\ &\qquad \qquad + \sum_{k=1}^N \mathbb{E} | (\eta')^2 \alpha_k|^{2 \ell} \Bigg). \end{align} By Lemma \ref{klem:quadraticform}, \[ \mathbb{E}|(\eta')^2 \alpha_k|^{2 \ell} \leq C_\ell (\eta')^{4 \ell} N^{-2 \ell} \mathbb{E}|\tr G_k^{2} G_k^{*2}|^\ell. \] We use the simple bound \begin{align} \label{keq:indicator} \tr G_k^2 G_k^{*2} &= \sum_{i} \frac{1}{((\lambda_i - E')^2 + (\eta')^2)^2} \nonumber \\ &\leq N (\eta')^{-4}, \end{align} where the $\lambda_i$ are the eigenvalues of $A_k$. We now have that \begin{align*} \mathbb{E}|(\eta')^2 \alpha_k|^{2 \ell} &\leq C_\ell (\eta')^{4 \ell} N^{-2 \ell} \mathbb{E} | N (\eta')^{-4}|^\ell \\ &\leq C_\ell N^{- \ell}. \end{align*} Therefore, by equation \eqref{keq:S1}, \begin{equation*} \mathbb{E} \left|N^{-\varepsilon} (\eta')^3 \sum_{k=1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \alpha_k \beta_k \right|^{2 \ell} \leq C_\ell N^{-2 \varepsilon \ell} \left( \mathbb{E} \left(\sum_{k=1}^N \mathbb{E}_{k-1} |(\eta')^2 \alpha_k|^2 \right)^{\ell} + N^{-\ell+1} \right). \end{equation*} By Lemma \ref{klem:quadraticform} with $\ell=1$ and the bound \eqref{keq:indicator}, we also have that $\mathbb{E}_{k-1} |\alpha_k|^2 \leq C N^{-2} \tr G_k^2 G_k^{*2} \leq C N^{-1} (\eta')^{-4}$. Thus, \[ \mathbb{E}_{k-1} |(\eta')^2 \alpha_k|^2 \leq C N^{-1}, \] so \[ \mathbb{E} \left(\sum_{k=1}^N \mathbb{E}_{k-1} |(\eta')^2 \alpha_k|^2 \right)^{\ell} \leq C_\ell. \] Finally, we can conclude that \[ \mathbb{E} \left|N^{-\varepsilon} (\eta')^3 \sum_{k=1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \alpha_k \beta_k \right|^{2 \ell} \leq C_\ell N^{-2 \varepsilon \ell}. \] As $\ell$ is arbitrary, we have shown that $|S_1| = o_{\eta} (t^{3/2}/N^{1 - \varepsilon})$ with overwhelming probability. Now we address $S_2$.
We first observe that \begin{align*} \left|1 + \frac{1}{N} \tr G_k^2\right| &\leq 1 + \frac{1}{N} \tr G_k G_k^* \\ &= (\eta')^{-1} \Im\left(-A_{kk} + z' + \frac{1}{N} \tr G_k \right). \end{align*} Therefore, \begin{align*} |\epsilon_k \bar{\beta}_k| = \frac{|1 + \frac{1}{N} \tr G_k^2|}{|A_{kk} - z' - \frac{1}{N} \tr G_k|} \leq (\eta')^{-1}. \end{align*} Recalling that $|\beta_k| \leq (\eta')^{-1}$, we have that \[ \mathbb{E} |N^{1 - \varepsilon} (\eta')^3 S_2|^{2 \ell} \leq N^{-2 \varepsilon \ell} (\eta')^{2 \ell} \, \mathbb{E}\left| \sum_{k=1}^N (\mathbb{E}_{k} - \mathbb{E}_{k-1}) \delta_k \right|^{2 \ell}. \] Again, by Lemma \ref{klem:burkholder2}, \begin{align*} \mathbb{E} |N^{1 - \varepsilon} (\eta')^3 S_2|^{2 \ell} &\leq C_\ell N^{-2 \varepsilon \ell} (\eta')^{2\ell} \left( \mathbb{E} \left( \sum_{k=1}^N \mathbb{E}_{k-1} |\delta_k|^2\right)^{\ell} + \sum_{k=1}^N \mathbb{E}|\delta_k|^{2 \ell}\right). \end{align*} Note that by Lemma \ref{klem:quadraticform}, \[ \mathbb{E} |\delta_k|^{2 \ell} \leq C_\ell N^{-2 \ell} \mathbb{E}|\tr G_k G_k^*|^{\ell}. \] We have that \[ \tr G_k G_k^* \leq N (\eta')^{-2}, \] so \[ \mathbb{E} |\delta_k|^{2 \ell} \leq C_\ell N^{-\ell} (\eta')^{-2 \ell}. \] Additionally, \[ \mathbb{E}_{k-1} |\delta_k|^2 \leq C N^{-1} (\eta')^{-2}. \] Thus, \[ \mathbb{E} |N^{1 - \varepsilon} (\eta')^3 S_2|^{2 \ell} \leq C_\ell N^{-2 \varepsilon \ell}. \] We can then conclude that $S_2$ is $o_\eta (t^{3/2}/N^{1- \varepsilon})$ with overwhelming probability. Returning to \eqref{eq:concentration}, we have shown that \begin{equation}\label{eq:A:Concentration Conclusion} |m_N(z) - \mathbb{E} m_N(z)| = o\left( \frac{t}{N^{1 - \varepsilon}}\right) \end{equation} with overwhelming probability. \subsection{Proof of Theorem \ref{thm:Local law theorem}} In this section we provide the proof of Theorem \ref{thm:Local law theorem}. We begin the proof for generic initial positions of the Dyson Brownian motion, before specializing to starting positions at the origin. Define the matrix \begin{equation}\label{eq:A:Matrix model} L_t=D-2\sqrt{t}A \end{equation} where $A$ is drawn from the Gaussian Orthogonal/Unitary Ensemble and $D$ is an $N\times N$ deterministic diagonal matrix. Define the resolvent matrices \begin{equation*} G_t(z):=\left( L_t-zI \right)^{-1}, \quad \text{and} \quad Q(z):=\left( D-zI \right)^{-1}. \end{equation*} Next, we define the functions \begin{equation*} M_t^N(z)=-\frac{2}{N}\tr G_t(z), \quad \text{and} \quad S^N(z)=-\frac{2}{N}\tr Q(z). \end{equation*} Fix $t,\eta>0$ and $z$ such that $\Im(z)\geq \eta$. Additionally, define the matrices \begin{equation*} G:= G_t(z), \end{equation*} and \begin{equation*} Q:=Q\left(z-2t\mathbb{E} M_t^N(z)\right). \end{equation*} In particular, $S^N\left(z-2t\mathbb{E} M_t^N(z)\right)=-\frac{2}{N}\tr Q$. By the resolvent identity \eqref{eq:resolvent}, \begin{align}\label{eq:A:Resolvent identity step} \mathbb{E} M_t^N(z) - S^N\left(z-2t\mathbb{E} M_t^N(z)\right) &= -\frac{2}{N}\,\mathbb{E}\left(\tr G -\tr Q \right) \\ &= -2\mathbb{E}\frac{1}{N} \tr \left(G \Tilde{A} Q \right)+4t\mathbb{E} M_t^N(z) \mathbb{E} \frac{1}{N} \tr \left(GQ \right) \nonumber \end{align} where $\Tilde{A}=2\sqrt{t}A$. We now consider the term \begin{equation}\label{eq:A:term for GIBP} -2\mathbb{E}\frac{1}{N} \tr \left(G \Tilde{A} Q \right)=-\frac{2}{N}\sum_{i,j} Q_{ii}\mathbb{E} \left[G_{ij}\Tilde{A}_{ji} \right].
\end{equation} A computation involving the resolvent identity \eqref{eq:resolvent} shows that \begin{equation*} \frac{\partial G_{kl}}{\partial A_{ij}}=\begin{cases} G_{ki}G_{jl}+G_{kj}G_{il},& \text{if } i\neq j,\\ G_{ki}G_{jl}, & \text{if } i=j. \end{cases} \end{equation*} Applying Gaussian integration by parts (i.e., $\mathbb{E}[\xi F(\xi)]=\mathbb{E}[\xi^2]\,\mathbb{E}[F'(\xi)]$ for a centred Gaussian $\xi$ and differentiable $F$ of moderate growth) to \eqref{eq:A:term for GIBP} yields \begin{equation*} -2\mathbb{E}\frac{1}{N} \tr \left(G \Tilde{A} Q \right)=\frac{-8}{N^2}\mathbb{E}\sum_{i,j}Q_{ii}G_{ij}^2-\frac{4t}{N}\mathbb{E} M_t^{N}(z)\tr (QG), \end{equation*} which when combined with \eqref{eq:A:Resolvent identity step} gives \begin{align}\label{eq:A:Step after GIBP} \mathbb{E} M_t^N(z) - S^N\left(z-2t\mathbb{E} M_t^N(z)\right) &=\frac{-8}{N^2}\mathbb{E}\sum_{i,j}Q_{ii}G_{ij}^2-\frac{4t}{N}\mathbb{E} M_t^{N}(z)\tr (QG)\\ &\quad+4t\mathbb{E} M_t^N(z) \mathbb{E} \frac{1}{N} \tr \left(GQ \right). \nonumber \end{align} We now fix $z=E+i\eta\in\mathbb{C}_+$. By the Ward identity \eqref{eq:ward}, \begin{align*} \left|\frac{8}{N^2}\mathbb{E}\sum_{i,j}Q_{ii}G_{ij}^2 \right|&\leq \mathbb{E} \frac{8}{N^2}\sum_{i}|Q_{ii}|\sum_{j}|G_{ij}|^2\\ &\leq \mathbb{E} \frac{8}{N^2\eta}\sum_{i}|Q_{ii}|\Im G_{ii}\\ &\leq \frac{8}{N\eta^3}. \end{align*} For the difference $4t\mathbb{E} M_t^N(z) \mathbb{E} \frac{1}{N} \tr \left(GQ \right)-\frac{4t}{N}\mathbb{E} M_t^{N}(z)\tr (QG)$, note that \begin{align*} \left|\frac{4t}{N}\tr \left(GQ \right) \right|&=\left| \frac{4t}{N}\sum_{i}Q_{ii} G_{ii} \right|\\ &\leq \frac{4t}{\eta^2}. \end{align*} It then follows from \eqref{eq:A:Concentration Conclusion} with $D$ equal to the zero matrix that \begin{align*} \mathbb{E} & \left[\left| 4t\left(\mathbb{E} M_t^N(z)\right) \frac{1}{N} \tr \left(GQ \right) -\frac{4t}{N} M_t^{N}(z)\tr (QG) \right| \right] \\ &\qquad \leq \mathbb{E}\left[\left| M_t^N(z) -\mathbb{E} M_t^{N}(z) \right| \left|\frac{4t}{N} \tr \left(GQ \right) \right| \right]\\ &\qquad =o\left(\frac{4\max(t,t^2)}{N^{1-\varepsilon}\eta^2} \right). \end{align*} Thus, we conclude that \begin{equation} \mathbb{E} M_t^N(z) - S^N\left(z-2t\mathbb{E} M_t^N(z)\right)=O\left(\frac{4\max(t,t^2)}{N^{1-\varepsilon}\eta^3} \right), \end{equation} where $S^N(z)=\frac{2}{z}$ for all $N$. Let $M^\infty_t$ be defined as in Theorem \ref{thm:Local law theorem}; then \begin{equation*} M_t^\infty(z)-S^N\left(z-2t M_t^\infty(z)\right)=0. \end{equation*} Note that for each $z\in\mathbb{C}_+$, $s_t=-\frac{1}{2}M_t^\infty(z)$, $\Tilde{s}_t=-\frac{1}{2}\mathbb{E} M_t^N(z)$, and $s_0(z)=-\frac{1}{2}S^N(z)$ satisfy the conditions of Proposition \ref{Prop:A:stability} (see Appendix), and hence it follows from Proposition \ref{Prop:A:stability} and \eqref{eq:Small t stability} (see Appendix) that \begin{equation}\label{eq:A:Expected value is close} \mathbb{E} M^N_t(z)-M^\infty_t(z)=O\left(\frac{4^{1/3}\max(t,t^2)^{1/3}}{N^{1/3-\varepsilon}\eta} \right). \end{equation} Applying \eqref{eq:A:Concentration Conclusion} to \eqref{eq:A:Expected value is close} completes the proof of Theorem \ref{thm:Local law theorem}. \subsection{Extension to uniform bound over $[0,T]$}\label{section: net to all t} In this section we outline how to extend Theorem \ref{thm:Local law theorem} uniformly in $t \in [0,T]$. This relies on the continuity of DBM. Without loss of generality, we work with the interval $[0, 1]$ instead of the interval $[0,T]$.
Consider a uniform partition of the time interval $[0,1]$ given by $t_k=\frac{k}{n}$, $k =0, 1, \ldots, n$, so that each subinterval has length $\frac{1}{n}$. Consider an intermediate time $t \in (t_1, t_2)$. We have that \begin{align}\label{eq:A:M triangle} &\sup_{z \in G}|M_t^{\infty}(z)-M^N_t(z)|\nonumber\\ &\leq \sup_{z \in G}|M_{t}^{\infty}(z)-M^{\infty}_{t_1}(z)| + \sup_{z \in G}|M_{t_1}^{\infty}(z)-M^{N}_{t_1}(z)|+ \sup_{z \in G}|M_{t_1}^{N}(z)-M^{N}_{t}(z)|, \end{align} with $G$ being a particular subset of the complement of the hull as in the previous section. The first term can be controlled via the Burgers equation, as the solution is locally Lipschitz in time. For the second term of the right hand side of \eqref{eq:A:M triangle}, we have from Theorem \ref{thm:Local law theorem} that, for any $\varepsilon >0$, \begin{equation} \sup_{z \in G}|M_{t_1}^{\infty}(z)-M^N_{t_1}(z)|=O_\varepsilon\left(\frac{1}{N^{1/3-\varepsilon}}\right) \end{equation} with overwhelming probability, that is, with probability at least $1- e^{-cN}$ for some constant $c$. By a union bound over the points $t_j$, $j=1,\ldots, n$, of the net, we have that \begin{align} &\mathbb{P}\left( \bigcup_{i=1}^n \left\{ |M^{\infty}_{t_i}(z)-M_{t_i}^N(z)|=\Omega\left(\frac{C}{N^{1/3-\varepsilon}}\right)\right\}\right)\nonumber\\ &\leq \sum_{i=1}^n \mathbb{P}\left(| M^{\infty}_{t_i}(z)-M_{t_i}^N(z)|=\Omega\left(\frac{C}{N^{1/3-\varepsilon}}\right)\right)\nonumber\\ &\leq n e^{-cN}, \end{align} where $g=\Omega(f)$ means that $\frac{g(x)}{f(x)}$ is bounded away from zero as $x \to \infty$. For the third term of the right hand side of \eqref{eq:A:M triangle}, using the notation $\tilde{\eta}^i_t=z-\lambda^i_t$, for $i=1, 2,\ldots, N$, we have that \begin{equation} |M^{N}_{t_1}(z)-M_t^N(z)| \leq \frac{2}{N}\sum_{i=1}^N\frac{|\lambda^i_t-\lambda^i_{t_1}|}{|\tilde{\eta}^i_{t_1}\tilde{\eta}^i_t|} \leq \frac{\tilde{C}|t-t_1|^{1/2-\varepsilon}}{\text{Im}(z_0)^2}, \end{equation} where we have used the regularity of the Dyson Brownian motion driver (\cite{NualartPerez}) and the bound $|\Tilde{\eta}^i_t |\geq |\Im(z)| \geq |\Im(z_0)|$, where $z_0 \in \mathbb{H}$ is such that $\Im(z_0)\leq \min_{z\in G}(\Im(z))$. Using the notation $\hat{C}=\frac{\tilde{C}}{\text{Im}(z_0)^2}$, if we want the error not to accumulate over the net we need $$\hat{C}\frac{1}{n^{1/2-\varepsilon}} \leq \frac{C}{N^{1/3-\varepsilon}}.$$ Thus, for our partition of the time interval we need $$n > \frac{\hat{C}^2 N^{2(1/3-\varepsilon)}}{C^2},$$ for some constants $\hat{C}$ and $C$. It then follows from \eqref{eq:A:M triangle} that \begin{equation}\label{eq:A:Uniform local} \sup_{t\in [0,1],\ z\in G}\left| M_t^N(z)-M_t^\infty(z) \right|=O\left( \frac{1}{N^{\frac{1}{3}-\varepsilon}} \right). \end{equation} \section{Proof of Theorem \ref{thm: Main Result}} In this section we will complete the proof of Theorem \ref{thm: Main Result}. Fix $\varepsilon > 0$. Let $G$ be a suitable compact subset of $\mathbb{C}_+$ and let $\Tilde{G}$ be a compact subset of $\mathbb{C}_+$ such that $g_t^N(G)\subseteq \Tilde{G}$ with overwhelming probability (see \eqref{eq:A:set containment} for the existence of such a $\Tilde{G}$). Begin by defining $\eta:=\min_{z\in\Tilde{G}}(\Im z) > 0$.
Note that \begin{align}\label{eq:A:g difference} |g_t^N(z)-g_t^\infty(z)|&=\left|\int_0^t M_s^N(g_s^N(z))-M_s^\infty(g_s^\infty(z))\,ds \right|\\ &\leq\left|\int_0^t M_s^N(g_s^N(z))-M_s^\infty(g_s^N(z))\,ds \right| \\ &\quad \quad +\left|\int_0^t M_s^\infty(g_s^N(z))-M_s^\infty(g_s^\infty(z))\,ds \right|.\nonumber \end{align} For the term $ M_s^N(g_s^N(z))-M_s^\infty(g_s^N(z))$, observe that from Theorem \ref{thm:Local law theorem} \begin{equation*} \sup_{z\in \Tilde{G}}\left| M_s^N(z)-M_s^\infty(z) \right|=O\left( \frac{4T^2}{N^{\frac{1}{3}-\varepsilon}} \right), \end{equation*} for fixed $s\in [0,T]$ with overwhelming probability. From the argument in Section \ref{section: net to all t} this can be extended to \begin{equation}\label{eq:A: Local law uniformly bounded} \sup_{s\in [0,T],\ z\in \Tilde{G}}\left| M_s^N(z)-M_s^\infty(z) \right|=O\left( \frac{4T^2}{N^{\frac{1}{3}-\varepsilon}} \right). \end{equation} For the term $M_s^\infty(g_s^N(z))-M_s^\infty(g_s^\infty(z))$, note that $M_s^\infty$ is at most $\frac{2}{\eta^2}$-Lipschitz on $\Tilde{G}$, and hence \begin{equation}\label{eq:A:Lipschitz bound for Gronwall's} \left|M_s^\infty(g_s^N(z))-M_s^\infty(g_s^\infty(z))\right|\leq \frac{2}{\eta^2}|g_s^N(z)-g_s^\infty(z)|. \end{equation} From \eqref{eq:A:g difference}, \eqref{eq:A: Local law uniformly bounded}, and \eqref{eq:A:Lipschitz bound for Gronwall's}, we conclude that \begin{equation*} |g_t^N(z)-g_t^\infty(z)|\leq O\left( \frac{4T^2}{N^{\frac{1}{3}-\varepsilon}} \right)+\int_0^t\frac{2}{\eta^2}|g_s^N(z)-g_s^\infty(z)|\,ds. \end{equation*} Theorem \ref{thm: Main Result} then follows from Gr\"onwall's inequality.
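For the reader's convenience, we record the standard form of Gr\"onwall's inequality used in this last step: if $u$ is non-negative and satisfies $u(t)\leq a+b\int_0^t u(s)\,ds$ for all $t\in[0,T]$, with constants $a,b\geq 0$, then \[ u(t)\leq a e^{bt}, \qquad t\in[0,T]. \] Applied with $u(t)=|g_t^N(z)-g_t^\infty(z)|$, $a=O\left( \frac{4T^2}{N^{\frac{1}{3}-\varepsilon}} \right)$, and $b=\frac{2}{\eta^2}$, it yields a bound of the same order, up to the $N$-independent factor $e^{2T/\eta^2}$, uniformly over $t\in[0,T]$ and $z\in G$. \newpage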
\section{Introduction} The existence of black hole singularities is one of the most fundamental questions in physics. The Penrose cosmic censorship hypothesis asserts that spacetime singularities need to be hidden from an observer at infinity by an event horizon, which blocks all of the information within it \citep{Hawking:1969sw, Hawking:1973uf}. Generally, all the electrovacuum solutions of classical general relativity are consistent with this conjecture. However, the conjecture does not restrain us from considering black hole spacetimes which are free from singularities, within classical general relativity. In this context, the recently proposed theory of four dimensional Gauss-Bonnet gravity is a quite interesting one \citep{Glavan:2019inb}. It was demonstrated that for a positive Gauss-Bonnet coupling parameter $\alpha$, the static and spherically symmetric solution of the theory is free from the much debated singularity problem. The theory is captivating for other reasons too; for example, the obtained black hole solution appears in the setting of gravity with a conformal anomaly \citep{Cai:2009ua, Cai:2014jea}, and also in the context of quantum corrections \citep{Tomozawa:2011gp, Cognola:2013fva}. Moreover, the black hole solution in four dimensional Gauss-Bonnet theory is attractive because it arises in a modified theory of classical gravity, and hence is on an equal footing with general relativity. These captivating features resulted in a surge of investigations around this novel theory, including its theoretical aspects, the viability of the solution, and its physical properties \cite{Konoplya:2020bxa, Guo:2020zmf, Casalino:2020kbt, Konoplya:2020qqh, Fernandes:2020rpa, Lu:2020iav, Konoplya:2020ibi, Ghosh:2020syx, Konoplya:2020juj, Kobayashi:2020wqy, Zhang:2020qam, HosseiniMansoori:2020yfj, Kumar:2020uyz, Wei:2020poh, Churilova:2020aca, Islam:2020xmy, Liu:2020vkh, Konoplya:2020cbv, Jin:2020emq, Ai:2020peo, Heydari-Fard:2020sib, Li:2020tlo, Wei:2020ght, Kumar:2020owy, Hennigar:2020lsl, Mahapatra:2020rds, Shu:2020cjw, Gurses:2020ofy, NaveenaKumara:2020rmi}. It is well known that black holes are not merely strong gravity systems, but also thermal systems. Particularly, the establishment of the laws of black hole thermodynamics has made the phase transition of these compact objects appealing in every sense \citep{Bekenstein1973, Bardeen:1973gs}. In recent times, anti-de Sitter black hole thermodynamics has gained more interest, as the identification of the cosmological constant as the thermodynamic variable pressure leads to a modification of the first law, which acquires a conventional $VdP$ term \citep{Kastor:2009wy, Dolan:2011xt}. In this extended phase space, AdS black holes exhibit a variety of phase transition features, of which the van der Waals like transition is of great interest \citep{Kubiznak2012, Gunasekaran2012, Kubiznak:2016qmn}. As in the case of a conventional van der Waals fluid, the black hole shows a first order phase transition between two phases, namely, the large black hole phase and the small black hole phase. The authors have studied the thermodynamics of the four dimensional Gauss-Bonnet AdS black hole for both the charged and uncharged cases \citep{Hegde:2020xlv}, and it was observed that a vdW like phase transition exists. That said, since the black hole is both a gravitational and a thermal system, it is quite natural to seek a connection between the effects of strong gravity and the phase transition.
It is customary to seek the details of a gravitating object, especially a compact object with strong gravity, by observing the characteristic features of a test particle moving along the geodesics around it. For a particle moving in the vicinity of a black hole, the black hole features are expected to be encoded in the behaviour of the particle motion. These notions are exploited in connecting the unseen attributes of a black hole to observational aspects, for example, the black hole shadow and quasinormal modes \citep{Cardoso:2008bp, Stefanov:2010xz}. Along with this, the phase transition signature of a charged AdS black hole can be obtained using quasinormal mode (QNM) studies \citep{Liu:2014gvf}. It was reported that during the van der Waals like phase transition of the black hole, the slope of the quasinormal mode frequencies changes drastically, which is an observable phenomenon. These initial findings motivated the investigation of a more concrete relationship between the gravitational and phase transition features of AdS black holes using null geodesics \citep{Wei:2017mwc, Wei:2018aqm}. By studying the photon orbits in the background of a charged AdS black hole, the phase transition properties can be observed from the behaviour of the radius $r_{ps}$ and the minimum impact parameter $u_{ps}$ of the circular orbit. The behaviour of $r_{ps}$ and $u_{ps}$ with the Hawking temperature $T$ and pressure $P$ mimics the isobars and isotherms found in the thermodynamic counterpart. Below the critical values, the first order phase transition is reflected by these orbital parameters. During the phase transition, these two parameters change by a finite amount, and hence they serve as order parameters to characterise the black hole phase transition, with a critical exponent $1/2$. Originally observed in charged AdS black holes \citep{Wei:2017mwc}, this correlation between gravity and thermodynamics, via photon orbits, has since been seen in different black hole spacetimes, namely, Kerr-AdS \citep{Wei:2018aqm}, Born-Infeld AdS background \citep{Xu:2019yub}, regular AdS black holes \citep{A.:2019mqv}, massive gravity \citep{Chabab:2019kfs}, Born-Infeld-dilaton black hole \citep{Li:2019dai}, five-dimensional Gauss-Bonnet black holes \citep{Han:2018ooi}, etc. Related studies in other contexts have also appeared in subsequent works \citep{ Zhang:2019tzi, Bhamidipati:2018yqy, Wei:2019jve}. In this article we seek a similar correlation for the novel four-dimensional Gauss-Bonnet AdS black hole. The article is organised as follows. In the next section (\ref{secTD}) we briefly present the 4D Gauss-Bonnet AdS black hole solution and its thermodynamics. In section \ref{secPT} we investigate the phase transition features of the black hole, wherein the phase structure is probed using the coexistence and metastable curves. This is followed by section \ref{secgeo}, where we consider the null geodesics on the equatorial plane, and hence obtain the photon orbit radius $r_{ps}$ and the minimum impact parameter $u_{ps}$. In section \ref{secphotoncritical}, we study the critical behaviour of $r_{ps}$ and $u_{ps}$, where the order parameters are presented. Finally, we conclude the paper in section \ref{conclusion}. \section{4D Gauss-Bonnet AdS Black Hole: Metric and Thermodynamics} \label{secTD} In this section we briefly present the black hole metric and its thermodynamics.
The $D$-dimensional Einstein-Maxwell-Gauss-Bonnet theory with a negative cosmological constant $\Lambda$ is described by the action \citep{Fernandes:2020rpa}, \begin{equation} \label{action} \mathcal{I}=\frac{1}{16\pi} \int d^Dx\sqrt{-g}\left[ R+2\Lambda +\alpha \mathcal{G} -F^{ab}F_{ab}\right], \end{equation} where $g$ is the determinant of the metric $g_{ab}$, $F_{ab}=\partial _a A_b -\partial _b A_a$ is the Maxwell field tensor, and $\alpha$ is the Gauss-Bonnet coupling coefficient. The Gauss-Bonnet term is given by, \begin{equation} \mathcal{G}=R^2-4R_{ab}R^{ab}+R_{abcd}R^{abcd}, \end{equation} where $R$ is the Ricci scalar, $R_{ab}$ is the Ricci tensor, and $R_{abcd}$ is the Riemann tensor. The cosmological constant is related to the AdS radius $l$ as, \begin{equation} \Lambda = -\frac{(D-1)(D-2)}{2l^2}. \end{equation} In four dimensions the Gauss-Bonnet term does not contribute to the dynamics of the system, as the integral over that term is a topological invariant. However, recently a genuine four dimensional Einstein-Gauss-Bonnet gravity was obtained by scaling $\alpha$ as \citep{Glavan:2019inb}, \begin{equation} \alpha \rightarrow \frac{\alpha }{D-4}, \end{equation} and then taking the limit $D\rightarrow 4$. The spherically symmetric solution for the action (\ref{action}) is, \begin{equation} \label{gbsolution} ds^2=-f(r)dt^2+\frac{1}{f(r)}dr^2+r^2d\Omega ^2_{D-2}. \end{equation} In the limit $D\rightarrow 4$ the metric function has the form, \begin{equation} \label{metricfun} f(r)=1+\frac{r^2}{2\alpha} \left(1-\sqrt{1+4 \alpha \left(-\frac{1}{l^2}+\frac{2 M}{r^3}-\frac{Q^2}{r^4}\right)}\right), \end{equation} where $M$ is the ADM mass and $Q$ is the total charge of the black hole. The validity of the theory from which we obtained the above static spherically symmetric solution has been scrutinised in detail in several works \citep{Ai:2020peo, Gurses:2020ofy, Shu:2020cjw, Mahapatra:2020rds, Tian:2020nzb, Bonifacio:2020vbk, Arrechea:2020evj}. However, these do not rule out the possibility of having a spherically symmetric solution, as it can be obtained from consistent formulations \citep{Lu:2020iav, Kobayashi:2020wqy, Fernandes:2020nbq, Hennigar:2020lsl, Aoki:2020lig}. Therefore we treat the solution (\ref{gbsolution}) as valid in its own right. Interestingly, this solution also appears in the context of gravity with a conformal anomaly \citep{Cai:2009ua, Cai:2014jea}. The horizon of the black hole ($r_+$) is defined by the condition $f(r_+)=0$. Using this condition, i.e., solving $\sqrt{1+4\alpha\left(-1/l^2+2M/r_+^3-Q^2/r_+^4\right)}=1+2\alpha /r_+^2$ for $M$, we obtain the mass of the black hole to be, \begin{equation} M=\frac{r_+^3}{2 l^2}+\frac{Q^2}{2 r_+}+\frac{\alpha }{2 r_+}+\frac{r_+}{2}. \end{equation} We present the thermodynamics of the black hole in an extended phase space, where the cosmological constant $(\Lambda)$ is treated as the thermodynamic pressure $(P)$, and they are related as $P=-\frac{\Lambda}{8\pi}$. The Hawking temperature of the black hole is associated with the surface gravity $\kappa$ as, \begin{equation} T=\frac{\kappa}{2\pi}=\left. \frac{f'(r)}{4\pi} \right|_{r=r_+}=-\frac{\alpha -8 \pi P r_+^4+Q^2-r_+^2}{4 \pi r_+^3+8 \pi \alpha r_+}. \label{Hawking} \end{equation} The first law of black hole thermodynamics can be written, considering the GB coupling parameter $\alpha$ to be a thermodynamic variable \citep{Cai:2013qga, Wei:2014hba}, as, \begin{equation} \label{firstlaw} dM=TdS+VdP+\Phi dQ+ \mathcal{A}d\alpha \end{equation} where the potentials $\Phi$ and $\mathcal{A}$ are conjugate to $Q$ and $\alpha$, respectively.
Likewise, the thermodynamic volume $V$ is the conjugate of the pressure $P$, \begin{equation} V=\left( \frac{\partial M}{\partial P}\right) _{S,Q,\alpha}=\frac{4}{3} \pi r_+^3. \end{equation} The entropy of the black hole can be obtained as follows, \begin{equation} S=\int _0^{r_+} \frac{1}{T}dM=\frac{A}{4}+2\pi \alpha \ln \left( \frac{A}{A_0} \right), \end{equation} where $A=4\pi r_+^2$ is the horizon area and $A_0$ is an integration constant with the dimension of $[\text{length}]^2$. It is clear that the Gauss-Bonnet coupling parameter $\alpha$ modifies the Bekenstein-Hawking entropy-area law. In general, the black hole entropy is independent of the charge $Q$ and the cosmological constant $\Lambda$; therefore, the integration constant can be set as $A_0=4\pi | \alpha |$ \citep{Wei:2020poh}. With this identification, the entropy reads, \begin{equation} S=\pi r_+^2+4\pi \alpha \ln \left( \frac{r_+}{\sqrt{|\alpha|}}\right). \end{equation} We emphasise that the black hole entropy has a logarithmic correction, whereas the thermodynamic volume remains the same as the geometric volume. Before concluding the thermodynamics of the black hole, we also mention that the variables presented above satisfy the Smarr relation in addition to the first law, \begin{equation} M=2TS+\Phi Q-2PV+2\alpha \mathcal{A}. \end{equation} \section{Phase Transition of 4D Gauss-Bonnet AdS Black Hole} \label{secPT} The phase transition of the 4D Gauss-Bonnet black hole has been well studied by the authors \citep{Hegde:2020xlv}. Here we recall those results to analyse the phase structure using the coexistence and spinodal curves. The equation of state of the system is obtained by inverting the expression for the Hawking temperature, \begin{equation} P=\frac{Q^2}{8 \pi r_+^4}+\frac{\alpha }{8 \pi r_+^4}+\frac{\alpha T}{r_+^3}-\frac{1}{8 \pi r_+^2}+\frac{T}{2 r_+}. \end{equation} In terms of the volume we have, \begin{equation} P=\frac{(6 \pi )^{2/3} (\alpha + Q^2)}{18 \pi ^{1/3} V^{4/3}}+\frac{4 \pi \alpha T}{3 V}+\frac{\pi ^{1/3} T}{6V^{1/3}}-\frac{1}{2\times 6^{2/3} \pi ^{1/3} V^{2/3}}. \end{equation} The critical behaviour of the black hole can be easily seen in the $P-V$ isotherms, where a first order phase transition exists between a small black hole phase (SBH) and a large black hole phase (LBH). This phase transition property is exhibited by both the charged and neutral black holes. The critical point of the phase transition is determined by using the condition, \begin{equation} \left( \frac{\partial P}{\partial V}\right)_{T,Q,\alpha} =\left( \frac{\partial ^2P}{\partial V^2}\right) _{T,Q,\alpha}=0. \end{equation} The critical values of the thermodynamic variables are \citep{Hegde:2020xlv}, \begin{equation} T_c=\frac{\left(8 \alpha +3 Q^2-\rho \right) \sqrt{6 \alpha +3 Q^2+\rho }}{48 \pi \alpha ^2}; \end{equation} \begin{equation} P_c=\frac{9 \alpha +6 Q^2+\rho }{24 \pi \left(6 \alpha +3 Q^2+\rho \right)^2}; \end{equation} \begin{equation} V_c=\frac{4}{3} \pi \left(6 \alpha +3 Q^2+\rho \right)^{3/2}; \end{equation} where $\rho =\sqrt{48 \alpha ^2+9 Q^4+48 \alpha Q^2}$. Making use of these quantities we define the reduced thermodynamic variables, \begin{equation} \tilde{T}=\frac{T}{T_c}, \qquad \tilde{P}=\frac{P}{P_c}, \qquad \tilde{V}=\frac{V}{V_c}. \end{equation} By observing the phase structure, we can have a better understanding of the phase transition. In the extended phase space, the black hole mass plays the role of enthalpy, which is evident from the first law (\ref{firstlaw}).
With this understanding, the Gibbs free energy of the black hole is calculated as $G=M-TS$, which reads, \begin{eqnarray} G=\frac{4}{3} \pi P r_+^3+\frac{Q^2}{2 r_+}-T \left[\pi r_+^2+4 \pi \alpha \ln \left(\frac{r_+}{\sqrt{|\alpha| }}\right)\right]+\frac{\alpha }{2 r_+}+\frac{r_+}{2}. \end{eqnarray} Here $r_+$ is regarded as a function of $(P,T)$ through the equation of state. We obtain the coexistence curve in the $\tilde{P}-\tilde{T}$ plane by using the swallow tail behaviour of the Gibbs free energy. The coexistence expression is also translated into the $\tilde{T}-\tilde{V}$ plane. The results are shown in fig. \ref{GBPTTV}. In the $\tilde{P}-\tilde{T}$ plane, the coexistence line (red solid line) partitions the SBH and LBH phases below the critical point. It terminates at the second order phase transition point, above which the phase is supercritical. The figures also display the metastable (spinodal) curves (blue dashed lines), which satisfy, \begin{equation} ( \partial _V P)_T=0, \qquad (\partial _V T)_P=0. \end{equation} The regions between the coexistence curve and the spinodal curves are the metastable phases, namely, the superheated SBH and the supercooled LBH phases. In the $\tilde{T}-\tilde{V}$ plane the region under the spinodal curve corresponds to the coexistence phase of the SBH and LBH. \begin{figure}[t] \centering \subfigure[][]{\includegraphics[scale=0.85]{GBPT.eps}\label{GBPT}} \qquad \subfigure[][]{\includegraphics[scale=0.85]{GBTV.eps}\label{GBTV}} \caption{The coexistence curve (red solid line) and the spinodal curve (blue dashed line) in the $\tilde{P}-\tilde{T}$ and $\tilde{T}-\tilde{V}$ planes. The black dot at $(1,1)$ denotes the critical point. } \label{GBPTTV} \end{figure} \section{Geodesic equations of motion} \label{secgeo} In this section we establish the relationship between the thermodynamics and the null geodesics. Consider a photon orbiting the black hole freely on the equatorial plane described by $\theta =\pi /2$. The Lagrangian which characterises this motion can be directly written from the metric (\ref{gbsolution}), \begin{equation} \label{lagrangian} 2 \mathcal{L}=-f(r)\dot{t}^2+\frac{\dot{r}^2}{f(r)}+r^2\dot{\phi}^2. \end{equation} Here, the dots represent differentiation with respect to an affine parameter. The 4D Gauss-Bonnet AdS black hole spacetime has two Killing fields, $\partial _t$ and $\partial _\phi$, which lead to two constants of motion, $E$ and $L$: the conserved energy and orbital angular momentum of the photon, respectively. The generalised momenta corresponding to the Lagrangian (\ref{lagrangian}) can be obtained by using $p_a =g_{ab} \dot{x}^b$ as, \begin{eqnarray} p_t=-f(r)\dot{t}\equiv -E\\ p_\phi= r^2\dot{\phi} \equiv L\\ p_r=\dot{r}/f(r). \end{eqnarray} The $t$ and $\phi$ motion of the photon can now be described as, \begin{equation} \dot{t}=\frac{E}{f(r)}, \end{equation} \begin{equation} \dot{\phi}=\frac{L}{r^2}. \end{equation} The Hamiltonian for the system is obtained from the standard definition, and it vanishes for a null geodesic, \begin{equation} 2\mathcal{H}=-E\dot{t}+L\dot{\phi}+\dot{r}^2/f(r)=0. \end{equation} Employing the $t$ and $\phi$ equations of motion, we can rewrite the expression for the radial motion as, \begin{equation} \dot{r}^2+V_{eff}=0, \end{equation} with the effective potential given by, \begin{equation} V_{eff}=\frac{L^2}{r^2}f(r)-E^2. \end{equation} The photon can only move in the region where $V_{eff}<0$, since $\dot{r}^2>0$.
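Explicitly, as a short worked step using only the relations above, substituting $\dot{t}=E/f(r)$ and $\dot{\phi}=L/r^2$ into the vanishing Hamiltonian gives \[ -\frac{E^2}{f(r)}+\frac{L^2}{r^2}+\frac{\dot{r}^2}{f(r)}=0, \qquad \text{i.e.,} \qquad \dot{r}^2=E^2-\frac{L^2}{r^2}f(r)=-V_{eff}, \] which is precisely the radial equation stated above.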
A photon approaching the black hole will be absorbed if it has a sufficiently small angular momentum $L$, and gets scattered if the angular momentum is large enough. The absorption and scattering regimes are separated by a critical angular momentum, which defines an unstable circular photon orbit. The expressions governing this orbit are, \begin{equation} \label{orbit} V_{eff}=0\quad , \quad V'_{eff}=0 \quad , \quad V''_{eff}<0, \end{equation} where the prime denotes differentiation with respect to $r$. The radial velocity $\dot{r}$ of the photon is zero in this unstable circular orbit. The corresponding value of $r$ is the radius of the photon orbit. Expanding the second equation in (\ref{orbit}), i.e., noting that $V'_{eff}=\frac{L^2}{r^3}\left(r\partial _r f(r)-2f(r)\right)$, we have, \begin{equation} 2f(r_{ps})-r_{ps}\partial _r f(r_{ps})=0. \label{aneqn} \end{equation} Substituting the metric function $(\ref{metricfun})$ into this equation and solving, we obtain the expression for the radius of the photon sphere $r_{ps}$, which is a complicated expression and a function of the black hole parameters $(M,Q,P,\alpha)$. Solving the first equation in (\ref{orbit}), $(V_{eff}=0)$, we obtain the minimum impact parameter of the photon as, \begin{equation} u_{ps}=\frac{L_c}{E}=\left. \frac{r}{\sqrt{f(r)}} \right| _{r_{ps}}. \label{upsequation} \end{equation} To investigate the correlation between the photon sphere and the black hole phase transition, we observe the behaviour of the radius $r_{ps}$ and the minimum impact parameter $u_{ps}$ with respect to the Hawking temperature and the pressure, in the reduced parameter space. This investigation is partly motivated by observations in the phenomenon of black hole lensing, where the impact parameter $u$ has a close connection with the deflection angle. The deflection angle is small for a large impact parameter, yet in the limit $u\rightarrow u_{ps}$ the deflection angle is unbounded \cite{Bozza:2002zj}. In fig. \ref{TruGB}, the Hawking temperature $T$ is shown as a function of the photon orbit radius $r_{ps}$ and of the minimum impact parameter $u_{ps}$, separately, at fixed pressures. The isobars in this figure imply the typical van der Waals like phase transition. For pressures below the critical value, the isobars first increase, then decrease, and finally increase with respect to the photon sphere radius $r_{ps}$ and the minimum impact parameter $u_{ps}$. In fig. \ref{PruGB} the pressure $P$ is shown as a function of $r_{ps}$, first, and then of $u_{ps}$, keeping the temperature constant. The behaviour of the pressure here (fig. \ref{PruGB}) is opposite to that of the temperature (fig. \ref{TruGB}). For example, where the temperature $\tilde{T}$ increases with $r_{ps}$ or $u_{ps}$, the pressure $\tilde{P}$ decreases. In summary, from the behaviour of the photon orbit radius and the minimum impact parameter along the isothermal and isobaric curves of the 4D Gauss-Bonnet AdS black hole, the van der Waals like phase transition can be clearly identified. This affirms that there exists a correlation between the null geodesics and the phase transition of the black hole. \begin{figure}[t] \centering \subfigure[][]{\includegraphics[scale=0.85]{TrGB.eps}\label{TrGB}} \qquad \subfigure[][]{\includegraphics[scale=0.85]{TuGB.eps}\label{TuGB}} \caption{The behaviour of the photon sphere radius and the minimum impact parameter of the unstable null geodesic with the Hawking temperature, in the reduced parameter space.
We take $Q=1$ and $\alpha=0.5$. } \label{TruGB} \end{figure} \begin{figure}[t] \centering \subfigure[][]{\includegraphics[scale=0.85]{PrGB.eps}\label{PrGB}} \qquad \subfigure[][]{\includegraphics[scale=0.85]{PuGB.eps}\label{PuGB}} \caption{The behaviour of the photon sphere radius and the minimum impact parameter of the unstable null geodesic with the pressure, in the reduced parameter space. We take $Q=1$ and $\alpha=0.5$. } \label{PruGB} \end{figure} \section{Critical behaviour of the photon sphere} \label{secphotoncritical} \begin{figure}[t] \centering \subfigure[][]{\includegraphics[scale=0.8]{rGB.eps}\label{rGB}} \qquad \subfigure[][]{\includegraphics[scale=0.8]{uGB.eps}\label{uGB}} \caption{ The variation of the radius of the photon sphere and the minimum impact parameter, for the unstable null geodesic, with respect to the Hawking temperature (in the reduced parameter space). The SBH (blue dashed line) and LBH (red solid line) branches meet at the critical point $(\tilde{T}=1)$.} \label{coex} \end{figure} \begin{figure}[t] \centering \subfigure[][]{\includegraphics[scale=0.8]{drGB.eps}\label{drGB}} \qquad \subfigure[][]{\includegraphics[scale=0.8]{duGB.eps}\label{duGB}} \caption{The change in the photon sphere radius and the minimum impact parameter of the unstable null geodesic during the phase transition of the black hole. The concavity of the curve changes near the critical point, which is shown in enlarged form in the insets.} \label{diffcoex} \end{figure} The black hole exhibits a first order vdW like phase transition which terminates at the critical point, corresponding to a second order phase transition. Since, as we have seen, there is a connection between the photon sphere and the phase transition, it is worth examining the behaviour of the change in the photon orbit radius and the minimum impact parameter during the phase transition. We construct the equal area law for the $\tilde{T}-\tilde{r}_{ps}$ and $\tilde{T}-\tilde{u}_{ps}$ isobars, similar to the isobars in the $\tilde{T}-\tilde{S}$ plane of the black hole. From the result, we study the behaviour of the photon orbit radius $r_{ps}$ along the coexistence curve (Fig. \ref{coex}). As the temperature increases, the radius $r_{ps}$ of the coexistence LBH phase decreases, whereas that of the coexistence SBH phase increases. The $r_{ps}$ of both coexistence phases attain the same value at the critical point $\tilde{T}=1$. The same behaviour is observed for the minimum impact parameter $u_{ps}$. In fig. \ref{diffcoex} we display the differences of the quantities $r_{ps}$ and $u_{ps}$ as functions of the phase transition temperature. Both $\Delta r_{ps}$ and $\Delta u_{ps}$ behave exactly like order parameters: they have a non-zero value along the first-order phase transition line and vanish at the second-order phase transition point. The behaviour in the neighbourhood of the critical point is shown in the insets, where a change in concavity is observed. We numerically obtain the critical behaviour of these differences near the critical point to be, \begin{equation} \Delta \tilde{r}_{ps} =3.57249(1-\tilde{T})^{0.510839} \end{equation} and \begin{equation} \Delta \tilde{u}_{ps} = 2.54786 (1-\tilde{T})^{0.506096}. \end{equation} This behaviour, i.e. $\Delta \tilde{r}_{ps} \sim (1-\tilde{T})^{1/2}$ and $\Delta \tilde{u}_{ps} \sim (1-\tilde{T})^{1/2}$, shows that $\Delta \tilde{r}_{ps}$ and $\Delta \tilde{u}_{ps}$ can serve as order parameters to characterise the black hole phase transition.
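For illustration, the numerical steps described above, i.e., solving equation (\ref{aneqn}) for $r_{ps}$, evaluating $u_{ps}$ from equation (\ref{upsequation}), and extracting the critical exponent by a log-log fit, can be sketched as follows. This is a minimal sketch: the parameter values and root bracketing are illustrative assumptions, and it is not the exact code used to produce the figures. \begin{verbatim}
# Photon sphere of the 4D Gauss-Bonnet AdS black hole, and a log-log
# fit for the critical exponent.  Parameter values are illustrative.
import numpy as np
from scipy.optimize import brentq

M, Q, alpha, l = 2.0, 1.0, 0.5, 10.0

def f(r):
    # metric function of the 4D Gauss-Bonnet AdS black hole
    inner = 1.0 + 4.0 * alpha * (-1.0 / l**2 + 2.0 * M / r**3 - Q**2 / r**4)
    return 1.0 + r**2 / (2.0 * alpha) * (1.0 - np.sqrt(inner))

def photon_sphere_eq(r, h=1e-6):
    # photon sphere condition 2 f(r) - r f'(r) = 0, with f' approximated
    # by a central finite difference
    return 2.0 * f(r) - r * (f(r + h) - f(r - h)) / (2.0 * h)

r_ps = brentq(photon_sphere_eq, 3.5, 10.0)  # bracket chosen by inspection
u_ps = r_ps / np.sqrt(f(r_ps))              # minimum impact parameter
print(f"r_ps = {r_ps:.6f}, u_ps = {u_ps:.6f}")

def critical_exponent(T_red, delta):
    # fit  delta = a * (1 - T_red)**b  on coexistence data with T_red < 1;
    # b is the critical exponent
    b, log_a = np.polyfit(np.log(1.0 - T_red), np.log(delta), 1)
    return np.exp(log_a), b
\end{verbatim}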
These results strongly confirm our previous assertion that photon orbits and thermodynamic phase transitions are related to each other. \section{Concluding Remarks} \label{conclusion} In this article we have shown that the unstable circular photon orbit around the four dimensional Gauss-Bonnet AdS black hole reflects the phase transition information of the black hole. The radius of the photon orbit $r_{ps}$ and the minimum impact parameter $u_{ps}$ are studied in detail. The study establishes a link between gravity and thermodynamics in the background of Gauss-Bonnet AdS strong gravity. In the first part of the article we presented the thermodynamics and the phase transition of the black hole. The phase structure of the black hole is analysed using the coexistence curve and the metastable curves. These curves are the boundaries that separate the different stable and metastable phases of the black hole, from which a clear understanding of the phase transition features is obtained. The first-order and second-order phase transition details are examined in this study, and they are influenced by the Gauss-Bonnet coupling parameter $\alpha$. Throughout our study we keep in mind that the extended phase space thermodynamic features are the same for both the charged and neutral Gauss-Bonnet AdS black holes, as was reported in our previous work \citep{Hegde:2020xlv}. In the second part of the article, using the Lagrangian of a photon moving freely in the equatorial plane of the black hole, we investigated the null geodesics. Using the effective potential, we solve for the photon orbit radius $r_{ps}$ and the minimum impact parameter $u_{ps}$ of the 4D Gauss-Bonnet AdS black hole. These two key quantities depend on the black hole parameters, especially the charge $Q$ and the Gauss-Bonnet coupling parameter $\alpha$. To establish the relationship between the photon sphere and the black hole phase transition, we study the behaviour of $r_{ps}$ and $u_{ps}$ along the isobars and the isotherms of the system. The first order phase transition is revealed in these plots. When the pressure or temperature is below the critical value, there exist two extreme values of $r_{ps}$ and $u_{ps}$, which coincide into a single extreme point at the critical value of the pressure or temperature. Above the critical value of the pressure or temperature, $r_{ps}$ and $u_{ps}$ do not exhibit any extremum; they increase monotonically. This behaviour of the photon orbit isobars and isotherms is consistent with that of the black hole thermodynamics. Finally, we probe the behaviour of $r_{ps}$ and $u_{ps}$ along the coexistence curve. The two coexistence branches, namely, the small black hole and the large black hole, have different $r_{ps}$ and $u_{ps}$ values. Their differences $\Delta r_{ps}$ and $\Delta u_{ps}$ serve as order parameters for the black hole phase transition. They vanish at the critical point, which corresponds to the second order phase transition. In the neighbourhood of this critical point, $\Delta r_{ps}$ and $\Delta u_{ps}$ have a critical exponent of $1/2$, which is obtained numerically. Our results show that in the background of Einstein-Maxwell-Gauss-Bonnet AdS spacetime, the black hole thermodynamics can be probed via strong gravity effects and vice versa. \acknowledgments K.H., N.K.A. and A.R.C.L. would like to thank U.G.C. Govt. of India for financial assistance under the UGC-NET-SRF scheme.
\section{Introduction and related works} \label{sec:introduction} Biological organisms are intrinsically modular at different scales \citep{lorenz2011emergence}. The collective self-organization at the cellular level results in the emergence of complex bodies and brains without any form of centralized control. Such modularity allows for mechanisms of local interaction which, in turn, result in collective learning and adaptation. In the artificial domain, modular robotics \citep{alattas2019evolutionary} provides a framework for the investigation of biologically-inspired principles of collective control through distributed coordination of the agents composing the robot \citep{cheney2014evolved}. In addition, modular robots allow for a high degree of reconfiguration and self-assembly \citep{pathak2019learning}, as well as fault tolerance and module reusability. However, in order to exploit such opportunities, there is the need for modular distributed controllers, possibly embedded in each module. Therefore, the overall behavior of the robot is the result of the collective interplay of distributed sensing, local communication, and actuation of interacting body modules. In addition, identical modules would facilitate the reusability of the parts and robustness in case of damage \citep{huang2020one}. In this work, we focus on a specific type of modular robots, namely Voxel-based Soft Robots (VSRs) \citep{hiller2012automatic}. Since VSRs are robots made of interconnected soft blocks (voxels), each module may be considered an agent in a collective. As such, mechanisms of collective intelligence are desired. While such mechanisms of collective intelligence are rather popular in the context of swarm robotics \citep{hamann2018swarm}, e.g., via self-assembly of thousands of robots through local interactions \citep{rubenstein2014programmable}, they are less explored in the context of modular robotics. One paradigm of distributed neural control through local interactions of identical cells is that of Neural Cellular Automata (NCA) \citep{li2002neural,nichele2017neat,mordvintsev2020growing}. In the case of robots composed of identical modules, each NCA cell can be embodied in a robot module. Such an approach would in theory facilitate a physical realization, as no global wiring nor centralized control would be needed. NCA have been successfully used to grow and replicate CA shapes and structures with neuroevolution \citep{nichele2017neat} and with differentiable learning \citep{mordvintsev2020growing}, to produce self-organising textures \citep{niklasson2021self}, to grow 3D artifacts \citep{sudhakaran2021growing}, for regenerating soft robots \citep{horibe2021regenerating}, and for controlling reinforcement learning agents \citep{variengien2021towards}. In this work, we introduce a novel embodied NCA model based on more biologically plausible Spiking Neural Network (SNN) controllers. We name it embodied Spiking Neural Cellular Automata (SNCA). SNNs incorporate neuronal and synaptic states in their neuron models, as well as the concept of time. SNCA open up several opportunities in the domain of modular robotics, such as mechanisms of homeostatic adaptation and local learning rules, e.g., spike-timing-dependent plasticity.
In addition, nearby modules communicate natively through spikes, which are not generated at every clock-cycle but only when the internal neural membrane potential reaches a specific threshold, which in turn changes the membrane potential of the post-synaptic neuron, either within the same robotic module or in a nearby module in the case of modular robots. The advent of neuromorphic hardware, which natively supports SNN execution and learning, may provide orders of magnitude improvement in energy consumption as compared to traditional neural networks \citep{blouw2019benchmarking}. Low energy consumption is considered to be an enabling factor for the physical realization of self-organizing VSRs. This work is organized as follows: in \Cref{sec:vsrs} we introduce VSRs, their morphology, and the proposed NCA-based controllers. In \Cref{sec:snn} we describe SNNs and SNCA. In \Cref{sec:experiments} we present the experimental results and discuss the insights they provide. Finally, in \Cref{sec:conclusions} we draw the conclusions. \section{Collective control of Voxel-Based Soft Robots} \label{sec:vsrs} Voxel-Based Soft Robots \citep{hiller2012automatic} are a kind of modular soft robots, composed of several elastically deformable cubes (\emph{voxels}). In this work, we experiment with a 2D version of VSRs, simulated in discrete time and continuous space \citep{medvet20202d}. The way in which VSRs achieve movement is a direct consequence of the unique combination of softness and modularity. The global behavior derives from the collective and synergistic contraction and expansion of individual voxels, similarly to what happens in biological muscles. Because of modularity, a VSR can be considered as an ensemble of simple sub-agents, the voxels, which are physically joined to obtain a greater structure, and whose individual behaviors influence each other and concur to the emergence of coordination. Therefore, to characterize a VSR we need to specify how the voxels are assembled and what their properties are (\emph{morphology}), and how the voxels compute their control signals and communicate with one another (\emph{controller}). \subsection{VSR morphology: assembling individual voxels} \label{sec:vsr-morphology} The morphology of a VSR specifies how individual voxels are assembled, their sensory equipment, and their physical properties. A VSR can be represented as a 2D grid of voxels, describing their spatial organization and assembly. Adjacent voxels in the grid are rigidly linked: not only does this allow to assemble a robot out of primitive modules, but it also forces mutual physical interactions resulting in an overall complex dynamics. In addition, each voxel can be equipped with sensors to enable proprioception and awareness of the surroundings. For each voxel, we use three types of sensors: \begin{enumerate*}[label=(\alph*)] \item \emph{area} sensors, perceiving the ratio between the current area of the voxel and its rest area, \item \emph{touch} sensors, sensing if the voxel is in contact with the ground or with another body, and \item \emph{velocity} sensors, which perceive the velocity of the center of mass of the voxel along the $x$- and $y$-axes. \end{enumerate*} We normalize sensor readings to be defined in $[0,1]^4$. Concerning the physical properties, we model voxels as compounds of spring-damper systems, masses, and distance constraints \citep{medvet2020design}, whose parameters can be changed to alter features like elasticity or actuation power.
Each voxel changes volume (actually area, in the 2D case) over time, due to passive interactions with other bodies and the active application of a control signal. The latter is computed at each simulation time step and is defined in $[-1,1]$, where $-1$ corresponds to the maximum requested expansion and $1$ corresponds to the maximum requested contraction. In the employed model \citep{medvet20202d}, contraction and expansion correspond to linear variations of the rest-length of the spring-damper system, proportional to the control signal received. \subsection{VSR controller: the embodied Neural Cellular Automata paradigm} \label{sec:vsr-controller} A VSR controller derives from the ensemble of individual voxel controllers. However, achieving coordination while keeping the intelligent control of each voxel fully independent of the others is a difficult task. In fact, most studies involving VSRs either rely on independent, yet not intelligent, controllers based on trivial sinusoidal functions \citep{hiller2012automatic,corucci2018evolving,kriegman2018morphological}, or sacrifice modularity and deploy a central neural controller that has access to all voxels \citep{talamini2019evolutionary,ferigo2021evolving,nadizar2021effects,nadizar2022merging}. To solve this issue, \citet{medvet2020evolution} introduced the concept of distributed neural controllers, which exploit message passing between neighbors to allow the emergence of coordination thanks to collective intelligence. Here, we follow along the same direction, combining the key ideas of modularity and collective intelligence with an approach based on Neural Cellular Automata (NCA) techniques \citep{li2002neural,nichele2017neat,mordvintsev2020growing}, in which the lookup table of each Cellular Automaton (CA) cell is replaced by an Artificial Neural Network (ANN). More in detail, we consider each voxel as a single \emph{embodied} NCA cell (from now on, simply referred to as NCA), which has access to the local sensor readings and to some information coming from the neighbors, to compute the local actuation value and the messages directed towards adjacent voxels. Our approach, however, differs substantially from standard NCA architectures: it is strongly bound to the VSR morphology employed, as we only instantiate NCA cells in correspondence to the voxels.
Formally, at every time step $k$, each NCA takes as input a vector $ \vec{x}^{(k)} = \left[ \vec{r}^{(k)} \ \vec{i}_{\uparrow}^{(k)} \ \vec{i}_{\leftarrow}^{(k)} \ \vec{i}_{\downarrow}^{(k)} \ \vec{i}_{\rightarrow}^{(k)} \right]$ and produces as output a vector $\vec{y}^{(k)} = \text{ANN}_{\vec{\theta}}\left(\vec{x}^{(k)}\right) =\left[ a^{(k)} \ \vec{o}_{\uparrow}^{(k)} \ \vec{o}_{\leftarrow}^{(k)} \ \vec{o}_{\downarrow}^{(k)} \ \vec{o}_{\rightarrow}^{(k)}\right]$, where $\vec{r}^{(k)} \in \mathbb{R}^4$ is the local sensor reading, $\vec{i}_{\uparrow}^{(k)}, \vec{i}_{\leftarrow}^{(k)}, \vec{i}_{\downarrow}^{(k)}, \vec{i}_{\rightarrow}^{(k)}$, each defined in $\mathbb{R}^{n_c}$, are values coming from adjacent voxels (from above, left, below, right, respectively, and set to $\vec{0} \in \mathbb{R}^{n_c}$ if no voxel is present in the corresponding direction), $a^{(k)}$ is the actuation value, $\vec{o}_{\uparrow}^{(k)}, \vec{o}_{\leftarrow}^{(k)}, \vec{o}_{\downarrow}^{(k)}, \vec{o}_{\rightarrow}^{(k)}$, each defined in $\mathbb{R}^{n_c}$, are values directed to adjacent voxels (to above, left, below, right, respectively), and $\vec{\theta}$ are the parameters of the ANN constituting the NCA. Values output by an NCA at time $k$ are used by an adjacent NCA at time $k+1$, e.g., given an NCA $a$ that outputs $\vec{o}_{a,\rightarrow}^{(k)}$, the NCA $b$ at its right will have $\vec{i}_{b,\leftarrow}^{(k+1)}=\vec{o}_{a,\rightarrow}^{(k)}$. We experiment with three ways of instantiating the general scheme described above, \emph{non-uniform directional} (\cancel{U}{}D{}), \emph{uniform directional} (U{}D{}), and \emph{uniform non-directional} (U{}\cancel{D}{}), which differ in the homogeneity of the individual cells (uniform vs.\ non-uniform) and in the information passed between voxels (directional vs.\ non-directional). The first two schemes are inspired by already existing forms of distributed controllers \citep{medvet2020evolution} and we consider them as baselines, whereas the U{}\cancel{D}{}-NCA is novel. The most evident, yet conceptually simple, difference lies in the \emph{uniformity}: in \cancel{U}{}-NCA, cells have a different ANN in each voxel (each with parameters $\vec{\theta}_i$), whereas in U{}-NCA all cell ANNs share the same parameters $\vec{\theta}$. Therefore, it follows that, for a given ANN architecture, the number of parameters of \cancel{U}{}-NCA is $n_{\text{voxels}}$ times the number of parameters of U{}-NCA. The second distinguishing element is \emph{directionality}. In \cancel{D}{}-NCA, cells send the same output to all the adjacent cells, i.e., $\vec{o}_{\uparrow}^{(k)}=\vec{o}_{\leftarrow}^{(k)}=\vec{o}_{\downarrow}^{(k)}=\vec{o}_{\rightarrow}^{(k)}=\vec{o}^{(k)}$ and $\vec{y}^{(k)}=\left[ a^{(k)} \ \vec{o}^{(k)}\right]$, whereas in D{}-NCA, cells send, in general, different outputs. The D{}-NCA hence corresponds to the one originally proposed by \citet{medvet2020evolution}. By contrast, \cancel{D}{}-NCA adhere more closely to the original concept of NCA, as $\vec{o}^{(k)}$ can be interpreted as the current state of the cell. The proposed types of NCA controllers can be employed with any type of ANN, either fully-connected feed-forward NNs, i.e., multi-layer perceptrons (MLPs), or more biologically plausible NNs, such as the SNNs described in the following section.
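To make the update concrete, one control step of the U{}\cancel{D}{} variant can be sketched as follows. This is a minimal sketch with an assumed data layout and an abstract network function; it is not the implementation used in our experiments. \begin{verbatim}
# One step of a uniform non-directional embodied NCA: every voxel applies
# the SAME network to its own sensor reading and to the states received
# from its (up to) four neighbors, producing an actuation value and a
# single state vector that all neighbors will read at the next step.
import numpy as np

n_c = 4  # size of the state vector exchanged between adjacent voxels

def nca_step(voxels, states, readings, ann, theta):
    # voxels:   set of (i, j) grid coordinates occupied by a voxel
    # states:   dict (i, j) -> np.ndarray of shape (n_c,), at time k
    # readings: dict (i, j) -> np.ndarray of shape (4,), sensors at time k
    # ann:      callable (theta, x) -> y with len(y) == 1 + n_c, outputs
    #           assumed to lie in [-1, 1] (e.g., tanh output layer)
    actuations, new_states = {}, {}
    for (i, j) in voxels:
        neigh = []
        # neighbors above, left, below, right (row index grows downwards)
        for di, dj in [(-1, 0), (0, -1), (1, 0), (0, 1)]:
            neigh.append(states.get((i + di, j + dj), np.zeros(n_c)))
        x = np.concatenate([readings[(i, j)]] + neigh)  # size 4 + 4 * n_c
        y = ann(theta, x)                               # size 1 + n_c
        actuations[(i, j)] = float(y[0])                # contraction/expansion
        new_states[(i, j)] = np.asarray(y[1:])          # same message to all sides
    return actuations, new_states
\end{verbatim}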
\section{Spiking Neural Networks as robotic controllers} \label{sec:snn} Spiking Neural Networks (SNNs) are a type of ANNs in which biological resemblance plays a fundamental role \citep{gerstner2002spiking}. Often referred to as the third generation of ANNs \citep{maass1997networks}, SNNs are characterized by a more biologically and neuroscientifically faithful neural model than classical ANNs. The key element of SNNs is the modeling of the evolution over time of the membrane potential of neurons. Modifications of such potential are caused by incoming neural stimuli, which can either be excitatory (increasing the potential) or inhibitory (decreasing it). Neural stimuli occur in the form of spikes over time, which can propagate along synapses in order to reach different neurons of the SNN, enabling information passing within the network. The generation of said spikes is called \emph{firing}, and arises whenever the membrane potential of a neuron exceeds a given threshold. Despite the binary nature of spikes, the intensity of any stimulus received by a neuron is modulated by the strength of the synapse connecting the firing neuron (pre-synaptic neuron) and the neuron receiving the spike (post-synaptic neuron). As in classical MLPs, synapses are modeled as weighted connections, where the weights play the main role in determining the behavior of the ANN, and can be subject to task-oriented optimization. What is indeed essentially different between MLPs and SNNs is the way information is encoded, which is a direct consequence of the peculiarities of the two models. In particular, in MLPs there is no notion of time, and information is encoded in the form of real values traveling along the synapses. Conversely, SNNs are bound to the concept of time to compute the evolution of the neural membranes and for the propagation of spikes in the network. Within this framework, information is embedded in the time distribution of spikes. Hence, additional tools are required to interpret spike trains as real values and vice versa. Given their high biological resemblance, SNNs are extremely promising robotic controllers. In fact, faithfully mimicking the functioning of the nervous systems of living organisms could be an enabling factor for bringing the desirable traits of biological organisms to artificial agents, e.g., autonomy or adaptability. Moreover, the possibility of deploying SNNs on highly energy efficient neuromorphic hardware \citep{li2014activity} is an additional profitable feature, which could be of paramount importance with respect to energy constraints. \subsection{Discrete time Leaky Integrate and Fire model} \label{sec:lif} Several spiking neuron models have been proposed \citep{izhikevich2004model}, which, despite differing in terms of biological plausibility and computational costs, all share the main concepts derived from neuroscience. Among them, we employ the computationally efficient Leaky Integrate and Fire (LIF) model, simulated in discrete time. The LIF model represents the neural membrane as a capacitor, whose potential can be increased or decreased by inputs (excitatory or inhibitory), and exponentially decays with time. At each neural simulation time step $h$, the membrane potential $v^{(h)}_j$ of a LIF neuron $j$ is updated as: \begin{equation} \label{eq:membrane-potential} v^{(h)}_j = v^{(h-1)}_j + \sum_{i=1}^{n} w_{i,j} s_i^{(h)} - \Delta t_h \lambda_v v^{(h-1)}_j, \end{equation} with $w_{i,j} \in \mathbb{R}$ being the synaptic weight of the $i$-to-$j$ synapse, $n$ being the number of incoming synapses, $s_i^{(h)}\in\{0,1\}$ carrying the pre-synaptic neuron spike, and $\Delta t_h=1/f_h$ being the neural simulation time resolution.
After the update, if the membrane potential $v^{(h)}_j$ exceeds the threshold $\vartheta^{(h)}_j$, the neuron $j$ outputs a spike, i.e., $s^{(h)}_j=1$, and the membrane potential is reset to its resting value $v_\text{rest}$; otherwise $s^{(h)}_j=0$. We enhance the LIF model by introducing the biological concept of homeostatic plasticity. Homeostasis is a self regulatory mechanism present at various sites of living organisms, which aims at re-establishing equilibrium in contrast to strong stimuli that could unbalance a system \citep{turrigiano2004homeostatic}. In our case, homeostasis operates as a firing rate regulator, acting on the threshold $\vartheta_j^{(h)}$ of neurons, to prevent excessive or too scarce activity: \begin{equation} \label{eq:homeostasis} \vartheta^{(h)}_j = \min\left(\vartheta^{(h-1)}_j,\sum_{i=1}^{n} w_{i,j}\right) + \psi_j^{(h)}, \end{equation} with $\psi_j^{(h)}$ being a parameter updated as: \begin{equation} \psi_j^{(h)}= \begin{cases} \psi_j^{(h-1)} + \psi_\text{inc} & \text{if $s_j^{(h-1)}=1$,}\\ \psi_j^{(h-1)} - \psi_j^{(h-1)}\lambda_\psi\Delta t_h &\text{otherwise.} \end{cases} \end{equation} \subsection{The LIF model inside NCA} \label{sec:rate-coding} We employ the LIF model described above within the ANN for the NCA in our robots. We simulate both the robot mechanical models and the LIF SNNs in discrete time: however, we update the simulation of the LIF SNNs with a greater frequency. Namely, we update the mechanical model with a frequency $f_k=\SI{60}{\hertz}$ (the default value of the 2D-VSR-Sim by \citet{medvet20202d}) and the SNNs with a frequency $f_h=16 f_k\approx\SI{1}{\kilo\hertz}$ (as suggested by \citet{izhikevich2004model}). In practice, at each $k$, we build a spike train $\left(s^{(16k)},\dots,s^{(16k+15)}\right) \in \{0,1\}^{16}$ to be fed to the SNNs from each element of the sensor reading $\vec{r}^{(k)}$ and we compute the actuation value $a^{(k)}$ considering the spikes emitted by the corresponding SNN output neuron up to $h=16k$. Concerning the information traveling between pairs of SNNs, we simply copy the spike trains with a delay of $16$ time steps in the SNN simulation, i.e., one time step in the robot simulation, consistently with the description given in \Cref{sec:vsr-controller}. For performing the sensor reading and actuation value conversions, we take inspiration from rate coding \citep{bouvier2019spiking}, where real values are mapped to frequencies, which are then used to generate spike trains \citep{wang2008behavior}. For spike trains to be fed to input neurons corresponding to sensor readings, we set: \begin{equation} s^{(h)} = \begin{cases} 1 &\text{if } \exists n \in \mathbb{N} \text{ s.t. } h=h_\text{last} + n \left\lfloor \frac{f_h}{ f^{(k)}} \right\rfloor \\ 0 &\text{otherwise}, \end{cases} \end{equation} where $h_\text{last}$ is the time step of the last spike to the neuron (initially set to $0$) and $f^{(k)}=r^{(k)} (f_\text{max}-f_\text{min})+f_\text{min}$, $r^{(k)} \in [0,1]$ being the element of the sensor reading. That is, we first linearly scale the scalar input to a frequency $f^{(k)} \in \left[f_\text{min},f_\text{max}\right]$ and then we emit spikes at frequency $f^{(k)}$, i.e., one spike each $\left\lfloor \frac{f_h}{ f^{(k)}} \right\rfloor$ time steps of the SNN simulation.
We set $f_\text{min}=\SI{5}{\hertz}$ and $f_\text{max}=\SI{50}{\hertz}$ for biological plausibility: hence, with the maximum scalar input $r^{(k)}=1$, we emit one spike each $\frac{\SI{1}{\kilo\hertz}}{\SI{50}{\hertz}}=20$ time steps, whereas with the minimum input $r^{(k)}=0$, we emit one spike each $\frac{\SI{1}{\kilo\hertz}}{\SI{5}{\hertz}}=200$ time steps. For the actuation value, we set: \begin{equation}\label{eq:output-conv} a^{(k)} = 2\left( \frac{1}{f_\text{max}} \frac{f_h}{16\, n_w} \sum_{k' = k - n_w + 1} ^ k \sum_{h=16k'}^{16k'+15} s^{(h)} \right) - 1. \end{equation} That is, we count the spikes in the last $n_w$ robot simulation time steps, we convert this count to a frequency and linearly scale it to $[0,1]$ considering the maximum possible frequency $f_\text{max}$, and then we linearly scale to $[-1,1]$. The reason why we consider $n_w$ robot simulation time steps, instead of just the current one, is to have a better resolution of the actuation value and, hence, a smoother control. After preliminary experimentation, we set $n_w=5$. \section{Experiments and results} \label{sec:experiments} We performed an extensive experimental campaign to investigate how coordination can emerge from different forms of collective control. We aimed at evaluating whether we could improve on the baselines of distributed control for VSRs \citep{medvet2020evolution} with our novel contribution, the U{}\cancel{D}{}-SNCA. Therefore, we addressed the following research question: \emph{``are U{}\cancel{D}{}-SNCA superior with respect to the baselines of distributed control?''}. To determine the effectiveness of a collective controller, we deployed it onto a VSR, and optimized its parameters using as quality measure the velocity achieved by the robot performing locomotion on a flat terrain. In addition, we also assessed the controllers' adaptability, by measuring the VSR velocity, after the optimization, on a set of unseen terrains, i.e., terrains not used to optimize the controller parameters. With the aim of obtaining more general results, we experimented with three different morphologies. \subsection{U{}\cancel{D}{}-SNCA vs.\ baseline embodied NCA}\label{sec:snca-vs-baseline} In order to provide an answer to the posed research question, we started by optimizing the parameters of three variants of NCA for each of the three considered morphologies, for a total of nine VSR optimizations. Concerning the NCA, we took into consideration the U{}D{}- and \cancel{U}{}D{}-NCA as baselines, and we compared them against the U{}\cancel{D}{}-SNCA. We used an MLP with $\tanh$ as activation function for both baseline NCA, while we equipped the SNCA with a fully-connected feed-forward SNN based on the LIF neural model augmented with homeostasis, with the following parameters: $v_\text{rest}=\SI{0}{\milli\volt}$, $\lambda_v=\SI{0.01}{\per\second}$, $\vartheta^{(0)}=\SI{1}{\milli\volt}$, $\psi^{(0)}=\SI{0}{\milli\volt}$, $\psi_\text{inc}=\SI{0.2}{\milli\volt}$, $\lambda_\psi=\SI{0.01}{\per\second}$. For both ANNs, we used \num{1} hidden layer, setting its size equal to the size of the input layer. We set $n_c=1$ for both U{}D{}- and \cancel{U}{}D{}-NCA, and $n_c=4$ for the U{}\cancel{D}{}-SNCA, in order to make the sizes of the ANNs output layers equal. Our choice of NCA hyper-parameters was driven by some exploratory experiments and by previous work involving SNNs \citep{pontes2019conceptual} and NCA \citep{nichele2017neat,mordvintsev2020growing}.
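As an illustration of how the model of \Cref{sec:lif} translates into an update rule, one simulation step of a LIF layer with homeostasis, using the parameter values listed above, can be sketched as follows. This is a simplified sketch with assumed array shapes, not the actual implementation (which is available at the repository referenced below). \begin{verbatim}
# One discrete-time step of a LIF layer with homeostasis: v, theta, psi,
# s_prev are arrays of shape (n_out,); w has shape (n_out, n_in); s_in is
# the binary input spike vector of shape (n_in,).
import numpy as np

f_h = 960.0            # SNN update frequency (16 x 60 Hz), ~1 kHz
dt = 1.0 / f_h         # neural simulation time resolution
v_rest, lam_v = 0.0, 0.01
psi_inc, lam_psi = 0.2, 0.01

def lif_step(v, theta, psi, s_prev, w, s_in):
    # membrane update: incoming weighted spikes plus exponential leak
    v = v + w @ s_in - dt * lam_v * v
    # homeostatic regulation of the firing threshold
    psi = np.where(s_prev == 1.0, psi + psi_inc, psi - psi * lam_psi * dt)
    theta = np.minimum(theta, w.sum(axis=1)) + psi
    # fire where the membrane potential exceeds the threshold, then reset
    s_out = (v > theta).astype(float)
    v = np.where(s_out == 1.0, v_rest, v)
    return v, theta, psi, s_out
\end{verbatim}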
Regarding the morphologies, we experimented with \num{3} VSRs, a biped \vsr[1mm]{4}{3}{1111-1111-1001}, a comb \vsr[1mm]{7}{2}{1111111-1010101}, and a worm \vsr[1mm]{5}{1}{11111}. We chose these morphologies to test the versatility of the NCA controllers, because they resemble different forms of living organisms, which take advantage of their diverse body shapes to achieve diversified gaits. We relied on 2D-VSR-Sim \citep{medvet2020design} for the VSRs simulation, leaving all parameters at their default values. We made the code for the experiments publicly available at \url{https://github.com/giorgia-nadizar/VSRCollectiveControlViaSNCA}. To optimize the NCA parameters, we resorted to evolutionary algorithms (EAs), as they can easily overcome the difficulties posed by the non-differentiability of SNNs. In addition, EAs are well suited for ill-posed problems with many local optima, which makes them particularly appropriate for optimizing the parameters of robotic controllers. In this study, we used a simple form of evolutionary strategy (ES), sketched in code below. First, the population is initialized with $n_\text{pop}$ individuals, i.e., numerical vectors $\vec{\theta}$, all generated by assigning to each element of the vector a value randomly sampled from a uniform distribution over the interval $[-1,1]$. Subsequently, $n_\text{gen}$ evolutionary iterations are performed, until reaching a total of $n_\text{evals}$ fitness evaluations. At every iteration, the fittest quarter of the population is chosen to generate $n_\text{pop}-1$ children, each obtained by adding values sampled from a normal distribution $N(0,\sigma)$ to each element of the element-wise mean $\vec{\mu}$ of all parents. The generated offspring, together with the fittest individual of the previous generation, form the population of the next generation, which maintains the fixed size $n_\text{pop}$. We used the following ES parameters: $n_\text{pop}=36$, $n_\text{evals}=\num{30000}$, and $\sigma=0.35$. We verified that evolution was in general capable of converging to a solution with the chosen values, despite the different sizes of the search spaces corresponding to each NCA configuration. We optimized VSRs for the task of \emph{locomotion} on a flat terrain, the goal being to travel as fast as possible along the positive $x$-axis. We assessed the performance of a VSR by measuring its average velocity $v_x$ along the $x$-axis during a simulation of $\SI{30}{\second}$. We discarded the first $\SI{5}{\second}$ of each simulation to exclude the initial transient from the velocity measurements. We used $v_x$ as the fitness measure for selecting the best individuals in the ES. For each of the \num{9} VSRs resulting from the combination of \num{3} NCA and \num{3} morphologies, we performed \num{10} independent evolutionary optimizations, i.e., with different random seeds, for a total of \num{90} runs. Besides testing the VSR effectiveness upon parameter optimization, i.e., at the end of evolution, we also appraised their adaptability. We define a VSR controller as \emph{adaptable} if it is able to achieve good performance in locomotion in spite of environmental changes. To evaluate this in practice, we took each optimized VSR and re-assessed it on a set of unseen terrains, i.e., terrains on which none of its ancestors ever experienced locomotion.
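The ES described above corresponds, in essence, to the following sketch (our naming; \texttt{fitness} stands for a full locomotion simulation returning $v_x$, and whether the elite is re-evaluated at each generation is our assumption):
\begin{verbatim}
import numpy as np

def es_optimize(fitness, n_params, n_pop=36, n_evals=30_000,
                sigma=0.35, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(n_pop, n_params))
    fits = np.array([fitness(t) for t in pop])
    evals = n_pop
    while evals < n_evals:
        order = np.argsort(fits)[::-1]        # best individuals first
        parents = pop[order[: n_pop // 4]]    # fittest quarter
        mu = parents.mean(axis=0)             # element-wise mean of parents
        children = mu + rng.normal(0.0, sigma, size=(n_pop - 1, n_params))
        child_fits = np.array([fitness(t) for t in children])
        # Offspring plus the previous elite form the next population.
        pop = np.vstack([pop[order[0]], children])
        fits = np.concatenate([[fits[order[0]]], child_fits])
        evals += n_pop - 1
    best = int(np.argmax(fits))
    return pop[best], fits[best]
\end{verbatim}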
In particular, for the re-assessment we experimented with the following terrains: \begin{enumerate*}[label=(\alph*)] \item hilly with $6$ combinations of heights and distances between hills, \item steppy with $6$ combinations of step heights and widths, \item downhill with $2$ different inclinations, and \item uphill with $2$ different inclinations. \end{enumerate*} As a result, we re-assessed each individual on a total of $16$ different terrains; we define its adaptability as the average of the $v_x$ on those terrains (each computed in a \SI{30}{\second} simulation, discarding the initial \SI{5}{\second}). The results of our experimental evaluation are summarized in \Cref{fig:snca-nca}. In more detail, for each of the considered VSR morphologies and NCA variants, we display the distribution of velocities achieved at the end of evolution by the best individuals, and their performance in terms of adaptability, i.e., the distribution of their average velocity on unseen terrains. In addition, above pairs of box plots, we report the $p$-values resulting from a two-sided Mann-Whitney U statistical test; we consider, unless otherwise specified, $\alpha=0.05$ as the significance level. From \Cref{fig:snca-nca}, we observe that for the biped and the comb morphologies, U{}\cancel{D}{}-SNCA are always significantly better than the baselines in terms of adaptability, and they are significantly better at the end of evolution in all but one case. However, the outcomes seem to be the exact opposite for the worm morphology, making it difficult to provide a general answer to the posed research question. \begin{figure} \centering \begin{tikzpicture} \def21{20.7} \def19{18.7} \def-1.5{-1.5} \def-3.5{-3.5} \begin{groupplot}[ boxplot, boxplot/draw direction=y, width=0.32\linewidth, height=0.4\linewidth, group style={ group size=3 by 1, horizontal sep=1mm, vertical sep=1mm, xticklabels at=edge bottom, yticklabels at=edge left, }, legend cell align={left}, ymin=-5.5,ymax=23.0, xticklabels=\empty, xmajorticks=true, xminorticks=false, xtick style={draw=none}, ymajorgrids=true, yminorgrids=true, minor y tick num=4, grid style={line width=.1pt, draw=gray!10}, major grid style={line width=.15pt, draw=gray!50}, title style={anchor=center, yshift=1ex}, legend style={draw=none} ] \nextgroupplot[ align=center, title={Biped \vsr[1mm]{4}{3}{1111-1111-1001}}, ylabel={$v_x$} ] \boxplotcouple{data/boxplot/b_biped_CA4.txt}{data/boxplot/v_biped_CA4.txt}{LIF-H}{colorbrewer1}; \boxplotcouple{data/boxplot/b_biped_Homo.txt}{data/boxplot/v_biped_Homo.txt}{MLP}{colorbrewer2}; \boxplotcouple{data/boxplot/b_biped_Hetero.txt}{data/boxplot/v_biped_Hetero.txt}{MLP}{colorbrewer3}; \pvalue{1}{3}{19}{$0.05$} \pvalue{1}{5}{21}{$0.02$} \pvaluebelow{2}{4}{-1.5}{$<0.01$} \pvaluebelow{2}{6}{-3.5}{$<0.01$} \nextgroupplot[ align=center, title={Comb \vsr[1mm]{7}{2}{1111111-1010101}} ] \boxplotcouple{data/boxplot/b_comb_CA4.txt}{data/boxplot/v_comb_CA4.txt}{LIF-H}{colorbrewer1}; \boxplotcouple{data/boxplot/b_comb_Homo.txt}{data/boxplot/v_comb_Homo.txt}{MLP}{colorbrewer2}; \boxplotcouple{data/boxplot/b_comb_Hetero.txt}{data/boxplot/v_comb_Hetero.txt}{MLP}{colorbrewer3}; \pvalue{1}{3}{19}{$<0.01$} \pvalue{1}{5}{21}{$<0.01$} \pvaluebelow{2}{4}{-1.5}{$<0.01$} \pvaluebelow{2}{6}{-3.5}{$<0.01$} \nextgroupplot[align=center, title={Worm \vsr[1mm]{5}{1}{11111}}, legend style={at={(1.2,.8)},anchor=north west}, legend columns=1, legend entries={{U{}\cancel{D}{}-SNCA}, {U{}D{}-NCA}, {\cancel{U}{}D{}-NCA}, {End of evolution}, {Re-assessment}},] \addlegendimage{mark=*,color=colorbrewer1,fill}
\addlegendimage{mark=*,color=colorbrewer2,fill} \addlegendimage{mark=*,color=colorbrewer3,fill} \addlegendimage{area legend,color=black,fill} \addlegendimage{area legend,color=black,fill,fill opacity=0.4} \boxplotcouple{data/boxplot/b_worm_CA4.txt}{data/boxplot/v_worm_CA4.txt}{LIF-H}{colorbrewer1}; \boxplotcouple{data/boxplot/b_worm_Homo.txt}{data/boxplot/v_worm_Homo.txt}{MLP}{colorbrewer2}; \boxplotcouple{data/boxplot/b_worm_Hetero.txt}{data/boxplot/v_worm_Hetero.txt}{MLP}{colorbrewer3}; \pvalue{1}{3}{19}{$<0.01$} \pvalue{1}{5}{21}{$<0.01$} \pvaluebelow{2}{4}{-1.5}{$<0.01$} \pvaluebelow{2}{6}{-3.5}{$0.05$} \end{groupplot} \end{tikzpicture} \caption{ Box plots of the velocities $v_x$ achieved by the best individuals at the end of evolution and upon re-assessment on unseen terrains for different VSR morphologies (plot columns) and embodied NCA controllers (color). Above pairs of boxes we report the $p$-values resulting from Mann-Whitney U tests with the null hypothesis of equality of the means. } \label{fig:snca-nca} \end{figure} To further investigate this apparent contradiction, we examined the behavior of a few evolved VSRs (videos are available at \url{https://giorgia-nadizar.github.io/VSRCollectiveControlViaSNCA/}) and found the reason behind the failure of the SNCA on the worm morphology to be glaring. In particular, we noticed that the NCA based on MLPs trigger high-frequency dynamics, resulting in a vibrating behavior, which for the SNCA is prevented by homeostasis and by $n_w=5$ when converting spikes to actuation values. However, for the worm morphology, vibration appears to be the only effective gait, as these VSRs are not able to properly bend, having only one row of voxels at their disposal. Conversely, the biped and comb morphologies have more complex structures, which allow the discovery of a wider range of efficacious gaits. In fact, when we inspected the behaviors of these two families of VSRs, we could notice a broader variety of gaits, with some tendencies to vibration among those controlled by MLP-based NCA. Avoiding vibrating behaviors, which have been shown to be a strong attractor in evolution \citep{medvet2021biodiversity}, is of paramount importance, as this type of movement severely hinders adaptability, and constitutes an insurmountable barrier for the physical practicability of VSRs, i.e., a form of reality gap \citep{van2021influence,salvato2021crossing}. Even though it is possible to explicitly discourage vibrating behaviors, e.g., by decreasing the actuation frequency, having a controller which avoids them by design is an undeniably significant accomplishment. \subsection{Strengths of the uniform non-directional SNCA}\label{sec:success-snca} From the experimental outcomes, however, another question arises: \emph{``what are the reasons behind the success of the U{}\cancel{D}{}-SNCA?''}. Namely, does the improvement lie in the non-directionality of the NCA or in the SNN employed? To address the newly raised points, we deepened our analysis with a supplementary experimental campaign, encompassing new combinations of NCA architectures, neural models, and morphologies, for a total of \num{12} additional VSRs to be optimized. Regarding the morphologies, we experimented with the biped and the comb, discarding the worm for the reasons highlighted in \Cref{sec:snca-vs-baseline}. As for the controllers, we extended the previous experiments by evaluating all missing combinations of NCA architectures and neural models.
Among the latter, we also included an SNN composed of LIF neurons for which we disabled homeostasis, keeping the value of the threshold fixed throughout the simulation at its initial value $\vartheta_i^{(h)}=\vartheta_i^{(0)}=\SI{1}{\milli\volt}$. For each of the \num{12} new VSRs, we repeated the experimental pipeline of \Cref{sec:snca-vs-baseline}. We display the results, together with the outcomes of the previous experiments, in \Cref{tab:summary}. Each cell of the table reports the median of the velocities achieved by VSRs at the end of evolution and upon re-assessment, grouped by morphology; each row corresponds to an NCA architecture, whereas we put neural models on the columns. We color cells proportionally to the median of velocities in order to better convey the information. \begin{table}[ht] \centering \pgfplotstableread[col sep=comma]{data/heatmap/heatmap.txt}\mytable \pgfplotstabletypeset[ color cells={min=0,max=15}, col sep=comma, columns/\space/.style={reset styles,string type,column type = {l},column type/.add={}{@{\hspace{.4em}}}}, every head row/.style={ before row={ \toprule & \multicolumn{6}{c}{End of evolution} & \multicolumn{6}{c}{Re-assessment} \\ \cmidrule(l{-0.5em}r{1em}){2-7} \cmidrule(l{-0.5em}r){8-13} & \multicolumn{3}{c}{Biped \vsr[1mm]{4}{3}{1111-1111-1001}} & \multicolumn{3}{c}{Comb \vsr[1mm]{7}{2}{1111111-1010101}} & \multicolumn{3}{c}{Biped \vsr[1mm]{4}{3}{1111-1111-1001}} & \multicolumn{3}{c}{Comb \vsr[1mm]{7}{2}{1111111-1010101}} \\ \cmidrule(l{-0.5em}r{1em}){2-4} \cmidrule(l{-0.5em}r{1em}){5-7} \cmidrule(l{-0.5em}r{1em}){8-10} \cmidrule(l{-0.5em}r){11-13} }, after row=\midrule, }, every last row/.style={after row=\bottomrule}, columns/1MLP/.style={column name={M},column type/.add={}{@{\hspace{.7em}}}}, columns/2MLP/.style={column name={M},column type/.add={}{@{\hspace{.7em}}}}, columns/3MLP/.style={column name={M},column type/.add={}{@{\hspace{.7em}}}}, columns/4MLP/.style={column name={M},column type/.add={}{@{\hspace{.7em}}}}, columns/1LIF/.style={column name={S},}, columns/2LIF/.style={column name={S},}, columns/3LIF/.style={column name={S},}, columns/4LIF/.style={column name={S},}, columns/1LIF-H/.style={column name={S-H},column type/.add={@{\hspace{.7em}}}{}}, columns/2LIF-H/.style={column name={S-H},column type/.add={@{\hspace{.7em}}}{}}, columns/3LIF-H/.style={column name={S-H},column type/.add={@{\hspace{.7em}}}{}}, columns/4LIF-H/.style={column name={S-H},column type/.add={@{\hspace{.7em}}}{}}, /pgfplots/colormap={greenish}{ rgb255=(240,249,232) rgb255=(168,221,181) rgb255=(123,204,196) rgb255=(78,179,211) rgb255=(43,140,190) rgb255=(8,88,158) }, /pgf/number format/fixed zerofill,precision=1 ]{\mytable} \caption{ Medians of velocities $v_x$ achieved by the best individuals at the end of evolution and upon re-assessment on unseen terrains for different morphologies. We put different NCA architectures on each row, and ANN models on the columns (M stands for MLP, S for SNN without homeostasis, S-H for SNN with homeostasis). Cells are colored proportionally to $v_x$. } \label{tab:summary} \end{table} From examining \Cref{tab:summary}, we can investigate the importance of the two aforementioned factors. First, to weigh the impact of the NCA architecture, we compare the medians of different rows for each column of the table. We observe that U{}\cancel{D}{}-NCA are not worse than either D{}-NCA variant in all but one case, and they always equal or outperform D{}-NCA if combined with SNNs.
We speculate this stems from the fact that, especially in the absence of agent specialization, i.e., in the case of U{}-NCA, it is easier for the prototype individual to learn to pass a single message to all its clone neighbors and correctly interpret the information received. In addition, we highlight that U{}\cancel{D}{}-NCA are less prone to triggering vibrating dynamics by design, and are thus more successful in combination with SNNs, which display and take advantage of the same trait. Concerning the importance of the neural model, we note that SNNs, either with or without homeostasis, surpass MLPs in \num{10} out of \num{12} cases. To better appraise the influence of homeostasis in SNNs, we need to narrow our focus to the re-assessment results, where this neural model leads to clearly superior outcomes in all but one case, confirming its fundamental role in self-regulation and adaptation. Moreover, we can re-state that SNNs seem to be more naturally suited to being combined with U{}\cancel{D}{}-NCA, as both tend to move away from high-frequency non-adaptable behaviors. Therefore, we can conclude that the superiority of our contribution lies in the successful combination of the novel U{}\cancel{D}{}-NCA architecture with SNNs with homeostasis. \section{Concluding remarks} \label{sec:conclusions} We explored the paradigm of collective control of Voxel-based Soft Robots (VSRs), a form of simulated modular soft robots, appraising the emergence of coordination from the synergistic actuation of individual agents, i.e., the voxels constituting the VSR. Taking inspiration from NCA, a form of distributed neural control, and from state-of-the-art forms of embodied control of VSRs, we introduced the novel concept of embodied Spiking Neural Cellular Automata (SNCA), in which we used Spiking Neural Networks (SNNs) as elementary units. To evaluate the performance of the proposed SNCA as a robotic controller, we compared it against the state-of-the-art embodied controllers, optimizing the controller parameters of three different VSRs for the task of locomotion. Our experimental results show that the SNCA is not only competitive with the pre-existing controllers, but also leads to significantly more adaptable agents, outperforming their rivals when faced with unforeseen circumstances. Moreover, we highlight a trend towards less reality-gap-prone behaviors in VSRs controlled by SNCA, which paves the way for the physical practicability of such robots. We believe our contribution can be considered a starting point for several additional analyses, spanning diverse research directions. Concerning SNNs, we plan to experiment with neuroplasticity in the form of unsupervised learning, with the aim of achieving greater generality and increased robustness of controllers \citep{qiu2020towards}. In addition, we will address the problem of collective control with a cooperative coevolution strategy aimed at optimizing an ensemble of heterogeneous SNCA controllers \citep{potter2000cooperative}.
\section*{Supplementary material} \begin{figure}[ht] \centering \begin{tikzpicture} \begin{groupplot}[ boxplot, boxplot/draw direction=y, width=0.3\linewidth, height=0.35\linewidth, group style={ group size=4 by 3, horizontal sep=1mm, vertical sep=1mm, xticklabels at=edge bottom, yticklabels at=edge left }, ymin=-1,ymax=21,xticklabels=\empty, legend cell align={left} ] \nextgroupplot[ align=center, legend columns=3, legend entries={MLP,LIF-SNN,LIFH-SNN}, title={CA-1}, ylabel={$v_x$}, legend to name=best ] \addlegendimage{mark=*,color=colorbrewer1,fill} \addlegendimage{mark=*,color=colorbrewer5,fill} \addlegendimage{mark=*,color=colorbrewer2,fill} \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_biped_CA1.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_biped_CA1.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_biped_CA1.txt}; \nextgroupplot[align=center,title={CA-4}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_biped_CA4.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_biped_CA4.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_biped_CA4.txt}; \nextgroupplot[align=center,title={Homo}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_biped_homo.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_biped_homo.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_biped_homo.txt}; \nextgroupplot[align=center,title={Hetero},ylabel={Biped}, yticklabel pos=right] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_biped_hetero.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_biped_hetero.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_biped_hetero.txt}; \nextgroupplot[ylabel={$v_x$}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_comb_CA1.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_comb_CA1.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_comb_CA1.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_comb_CA4.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_comb_CA4.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_comb_CA4.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_comb_homo.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_comb_homo.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_comb_homo.txt}; \nextgroupplot[ylabel={Comb}, yticklabel pos=right] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_comb_hetero.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_comb_hetero.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_comb_hetero.txt}; \nextgroupplot[ylabel={$v_x$}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_worm_CA1.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_worm_CA1.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_worm_CA1.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_worm_CA4.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_worm_CA4.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_worm_CA4.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_worm_homo.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] 
{data/boxplot/b_worm_homo.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_worm_homo.txt}; \nextgroupplot[ylabel={Worm}, yticklabel pos=right] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/b_worm_hetero.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/b_worm_hetero.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/b_worm_hetero.txt}; \end{groupplot} \end{tikzpicture} \\ \tikzexternaldisable\pgfplotslegendfromname{best}\tikzexternalenable \caption{Best. P-values missing.} \label{fig:best} \end{figure} \begin{figure}[ht] \centering \begin{tikzpicture} \begin{groupplot}[ boxplot, boxplot/draw direction=y, width=0.3\linewidth, height=0.35\linewidth, group style={ group size=4 by 3, horizontal sep=1mm, vertical sep=1mm, xticklabels at=edge bottom, yticklabels at=edge left }, ymin=-1,ymax=20,xticklabels=\empty, legend cell align={left} ] \nextgroupplot[ align=center, legend columns=3, legend entries={MLP,LIF-SNN,LIFH-SNN}, title={CA-1}, ylabel={$v_x$}, legend to name=validation ] \addlegendimage{mark=*,color=colorbrewer1,fill} \addlegendimage{mark=*,color=colorbrewer5,fill} \addlegendimage{mark=*,color=colorbrewer2,fill} \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_biped_CA1.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_biped_CA1.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_biped_CA1.txt}; \nextgroupplot[align=center,title={CA-4}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_biped_CA4.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_biped_CA4.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_biped_CA4.txt}; \nextgroupplot[align=center,title={Homo}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_biped_homo.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_biped_homo.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_biped_homo.txt}; \nextgroupplot[align=center,title={Hetero},ylabel={Biped}, yticklabel pos=right] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_biped_hetero.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_biped_hetero.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_biped_hetero.txt}; \nextgroupplot[ylabel={$v_x$}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_comb_CA1.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_comb_CA1.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_comb_CA1.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_comb_CA4.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_comb_CA4.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_comb_CA4.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_comb_homo.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_comb_homo.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_comb_homo.txt}; \nextgroupplot[ylabel={Comb}, yticklabel pos=right] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_comb_hetero.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_comb_hetero.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_comb_hetero.txt}; \nextgroupplot[ylabel={$v_x$}] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_worm_CA1.txt}; 
\addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_worm_CA1.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_worm_CA1.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_worm_CA4.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_worm_CA4.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_worm_CA4.txt}; \nextgroupplot \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_worm_homo.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_worm_homo.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_worm_homo.txt}; \nextgroupplot[ylabel={Worm}, yticklabel pos=right] \addplot[black, fill=colorbrewer1] table[y=MLP] {data/boxplot/v_worm_hetero.txt}; \addplot[black, fill=colorbrewer5] table[y=LIF] {data/boxplot/v_worm_hetero.txt}; \addplot[black, fill=colorbrewer2] table[y=LIF-H] {data/boxplot/v_worm_hetero.txt}; \end{groupplot} \end{tikzpicture} \\ \tikzexternaldisable\pgfplotslegendfromname{validation}\tikzexternalenable \caption{Validation. P-values missing.} \label{fig:validation} \end{figure} \subsection{Importance of the neural model} \begin{figure}[ht] \centering \begin{tikzpicture} \def21{21} \def19{19} \def-1.5{-1.5} \def-3.5{-3.5} \begin{groupplot}[ boxplot, boxplot/draw direction=y, width=0.4\linewidth, height=0.5\linewidth, group style={ group size=2 by 1, horizontal sep=1mm, vertical sep=1mm, xticklabels at=edge bottom, yticklabels at=edge left }, ymin=-6,ymax=23,xticklabels=\empty, legend cell align={left} ] \nextgroupplot[ align=center, legend columns=3, legend entries={{S-NCA (w/ hom.)}, {S-NCA (w/o hom.)}, {NCA}, {Evolution}, {Adaptation}}, title={Biped}, ylabel={$v_x$}, legend to name=snca ] \addlegendimage{mark=*,color=colorbrewer1,fill} \addlegendimage{mark=*,color=colorbrewer2,fill} \addlegendimage{mark=*,color=colorbrewer3,fill} \addlegendimage{area legend,color=black,fill} \addlegendimage{area legend,color=black,fill,fill opacity=0.4} \boxplotcouple{data/boxplot/b_biped_CA4.txt}{data/boxplot/v_biped_CA4.txt}{LIF-H}{colorbrewer1}; \boxplotcouple{data/boxplot/b_biped_CA4.txt}{data/boxplot/v_biped_CA4.txt}{LIF}{colorbrewer2}; \boxplotcouple{data/boxplot/b_biped_CA4.txt}{data/boxplot/v_biped_CA4.txt}{MLP}{colorbrewer3}; \pvalue{1}{5}{21}{$0.089$} \pvalue{1}{3}{19}{$1$} \pvalue{3}{5}{19}{$<0.01$} \pvaluebelow{2}{6}{-3.5}{$<0.01$} \pvaluebelow{2}{4}{-1.5}{$0.912$} \pvaluebelow{4}{6}{-1.5}{$<0.01$} \nextgroupplot[align=center,title={Comb}] \boxplotcouple{data/boxplot/b_comb_CA4.txt}{data/boxplot/v_comb_CA4.txt}{LIF-H}{colorbrewer1}; \boxplotcouple{data/boxplot/b_comb_CA4.txt}{data/boxplot/v_comb_CA4.txt}{LIF}{colorbrewer2}; \boxplotcouple{data/boxplot/b_comb_CA4.txt}{data/boxplot/v_comb_CA4.txt}{MLP}{colorbrewer3}; \pvalue{1}{5}{21}{$0.173$} \pvalue{1}{3}{19}{$0.122$} \pvalue{3}{5}{19}{$0.023$} \pvaluebelow{2}{6}{-3.5}{$0.121$} \pvaluebelow{2}{4}{-1.5}{$0.011$} \pvaluebelow{4}{6}{-1.5}{$0.280$} \end{groupplot} \end{tikzpicture} \\ \tikzexternaldisable\pgfplotslegendfromname{snca}\tikzexternalenable \caption{Further insight on the uniform non-directional case. 
Importance of homeostasis, and comparison of SNN with MLP.} \label{fig:unif-nondir} \end{figure} \subsection{Importance of the state size for non-directional NCA}\label{sec:state-size} \begin{figure}[ht] \centering \begin{tikzpicture} \def21{21} \def19{19} \def-1.5{-1.5} \def-3.5{-3.5} \begin{groupplot}[ boxplot, boxplot/draw direction=y, width=0.4\linewidth, height=0.5\linewidth, group style={ group size=2 by 1, horizontal sep=1mm, vertical sep=1mm, xticklabels at=edge bottom, yticklabels at=edge left }, ymin=-3,ymax=21,xticklabels=\empty, legend cell align={left} ] \nextgroupplot[ align=center, legend columns=2, legend entries={$n_c=4$, $n_c=1$, {Evolution}, {Adaptation}}, title={Biped}, ylabel={$v_x$}, legend to name=ca14 ] \addlegendimage{mark=*,color=colorbrewer1,fill} \addlegendimage{mark=*,color=colorbrewer2,fill} \addlegendimage{area legend,color=black,fill} \addlegendimage{area legend,color=black,fill,fill opacity=0.4} \boxplotcouple{data/boxplot/b_biped_CA4.txt}{data/boxplot/v_biped_CA4.txt}{LIF-H}{colorbrewer1}; \boxplotcouple{data/boxplot/b_biped_CA1.txt}{data/boxplot/v_biped_CA1.txt}{LIF}{colorbrewer2}; \pvalue{1}{3}{19}{$0.035$} \pvaluebelow{2}{4}{-1.5}{$0.123$} \nextgroupplot[align=center,title={Comb}] \boxplotcouple{data/boxplot/b_comb_CA4.txt}{data/boxplot/v_comb_CA4.txt}{LIF-H}{colorbrewer1}; \boxplotcouple{data/boxplot/b_comb_CA1.txt}{data/boxplot/v_comb_CA1.txt}{LIF}{colorbrewer2}; \pvalue{1}{3}{19}{$0.083$} \pvaluebelow{2}{4}{-1.5}{$0.026$} \end{groupplot} \end{tikzpicture} \\ \tikzexternaldisable\pgfplotslegendfromname{ca14}\tikzexternalenable \caption{Importance of state size (always S-NCA with homeostasis).} \label{fig:ca4-ca1} \end{figure}
\subsection*{Semileptonic and Leptonic Decays of B Mesons} According to the SM, purely leptonic and semileptonic decays of $B$ mesons are mediated by the $W^-$ boson, as shown schematically in Figure~\ref{fig:feynman}. $B$ mesons are composed of a b-quark and an anti-quark, either $\ensuremath{B^-}\xspace(b,\ensuremath{\overline u}\xspace)$ or $\ensuremath{\Bbar^0}\xspace(b,\ensuremath{\overline d}\xspace)$, whereas charm mesons (the spin-0 $D$ and spin-1 $D^*$ states) contain a c-quark and an anti-quark, $D^{0(*)}(c,\ensuremath{\overline u}\xspace)$ or $D^{+(*)}(c,\ensuremath{\overline d}\xspace)$. \begin{figure}[btp!] \centering \includegraphics[width=0.34\textwidth]{figures/figure1.pdf} \caption{{\bf Diagrams for SM decay processes:} (a) \ensuremath{B^- \rightarrow \ell^- \nulb}\xspace with a purely leptonic final state and (b) \ensuremath{\Bbar \rightarrow D^{(*)} \ell^- \nulb}\xspace involving a charm meson and a lepton pair, both mediated by a vector boson ($W^-$). } \label{fig:feynman} \end{figure} For purely leptonic $\kern 0.18em\overline{\kern -0.18em B}{}\xspace$ decays, the SM prediction of the total decay rate $\Gamma$, which depends critically on the lepton mass squared $m^2_{\ensuremath{\ell}\xspace}$, is \begin{equation} \Gamma^{SM}(\ensuremath{B^- \rightarrow \ell^- \nulb}\xspace) = \frac{G_F^2\; m_B \; m^2_{\ensuremath{\ell}\xspace}}{8\pi}|V_{ub}|^2 \left(1-\frac{m_\ensuremath{\ell}\xspace^2}{m_B^2} \right)^2 \times f^2_B . \label{eq:Gamma_pl} \end{equation} \noindent The first factor contains the Fermi constant $G_F=1.1663787 \times 10^{-5} \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace^{-2}$ and the $\ensuremath{B}\xspace$ meson mass, $m_B=5.279\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$. All hadronic effects, due to the binding of quarks inside the meson, are encapsulated in the decay constant $f_B$. Recent lattice QCD calculations~\cite{Aoki:2013ldr} predict $f_B=(0.191 \pm 0.009)\ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace$. Taking into account the current world averages for the $B^-$ lifetime, $\tau_B=(1.638\pm 0.004)$\ensuremath{\rm \,ps}\xspace~\cite{Agashe:2014kda}, and the quark mixing parameter~\cite{Kobayashi:1973fv} for $b \ensuremath{\rightarrow}\xspace u$ transitions, $|V_{ub}|$~\cite{Amhis:2014hma}, the expected branching fraction, i.e., the frequency of this decay relative to all decay modes, is~\cite{Charles:2004jd} \begin{equation} {\cal B}^{SM} (\ensuremath{\Bub}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{\tau^-}\xspace \ensuremath{\overline{\nu}_\tau}\xspace)=(0.75~^{+0.10}_{-0.05}) \times 10^{-4}. \end{equation} \noindent Decays to the lower-mass charged leptons, $e^-$ and $\mu^-$, are strongly suppressed by spin effects and have not yet been observed. The differential decay rate, ${\rm d}\Gamma$, for semileptonic decays involving $D^{(*)}$ mesons depends on both $m^2_{\ensuremath{\ell}\xspace}$ and $q^2$, the invariant mass squared of the lepton pair~\cite{Korner:1989qb}, \begin{align} &\frac{{\rm d}\Gamma^{SM}(\ensuremath{\Bbar \rightarrow D^{(*)} \ell^- \nulb}\xspace)}{{\rm d}q^2}\, = \underbrace{\frac{G_F^2\; |V_{cb}|^2\; |\boldsymbol{p}^*_{D^{(*)}}| \; q^2}{96\pi^3 m_B^2} \left(1-\frac{m_\ensuremath{\ell}\xspace^2}{q^2} \right)^2}_{\text{universal and phase space factors}} \\ \nonumber & \times \underbrace{\left[(|H_{+}|^2+|H_{-}|^2+|H_{0}|^2) \left(1+\frac{m_\ensuremath{\ell}\xspace^2}{2q^2} \right) + \frac{3 m_\ensuremath{\ell}\xspace^2}{2q^2}|H_{s}|^2 \right]}_{\text{hadronic effects}}~.
\label{eq:Gamma_sl} \end{align} \noindent The first factor is universal for all semileptonic $B$ decays, containing a quark flavor mixing parameter~\cite{Kobayashi:1973fv}, in this case $|V_{cb}|$~\cite{Amhis:2014hma} for $b \ensuremath{\rightarrow}\xspace c$ quark transitions, and $|\boldsymbol{p}^*_{D^{(*)}}|$, the 3-momentum of the hadron in the $B$ rest frame, in this case a $D^{(*)}$ meson. The four helicity~\cite{helicity} amplitudes $H_+, H_-, H_0$ and $H_s$ capture the impact of hadronic effects. They depend on the spin of the charm meson and on $q^2$. The kinematic range, $m^2_{\ensuremath{\ell}\xspace} \le q^2 \le (m_B - m_{D^{(*)}})^2$, is sensitive to the lepton mass $m_{\ensuremath{\ell}\xspace}$ and the charm meson mass $m_{D^{(*)}}$. The much larger mass of the $\tau$ not only impacts the rate, but also the kinematics of the decays via the $H_s$ amplitude. All four amplitudes contribute to $\ensuremath{\Bbar \rightarrow D^* \ell^- \nulb}\xspace$, while only $H_0$ and $H_s$ contribute to $\ensuremath{\Bbar \rightarrow D \ell^- \nulb}\xspace$, which leads to a higher sensitivity of this decay mode to the scalar contribution $H_s$. Measurements of the ratios of semileptonic branching fractions remove the dependence on $|V_{cb}|$, lead to a partial cancellation of theoretical uncertainties related to hadronic effects, and reduce the impact of experimental uncertainties. Current SM predictions~\cite{Na:2015kha, Fajfer:2012vx, Lattice:2015rga} are \begin{eqnarray} \label{eq:RD} {\cal R}^{SM}_D &=& \frac {{\cal B}(\kern 0.18em\overline{\kern -0.18em B}{}\xspace \ensuremath{\rightarrow}\xspace D \tau^- \ensuremath{\overline{\nu}_\tau}\xspace)} {{\cal B}(\kern 0.18em\overline{\kern -0.18em B}{}\xspace \ensuremath{\rightarrow}\xspace D e^- \ensuremath{\nub_e}\xspace)} = 0.300 \pm 0.008 \\ \label{eq:RDs} {\cal R}^{SM}_{D^*} &=&\frac {{\cal B}(\kern 0.18em\overline{\kern -0.18em B}{}\xspace \ensuremath{\rightarrow}\xspace D^* \tau^- \ensuremath{\overline{\nu}_\tau}\xspace)} {{\cal B}(\kern 0.18em\overline{\kern -0.18em B}{}\xspace \ensuremath{\rightarrow}\xspace D^* e^- \ensuremath{\nub_e}\xspace)} = 0.252 \pm 0.003 . \ \end{eqnarray} \noindent The predicted ratios relative to ${\cal B}(\kern 0.18em\overline{\kern -0.18em B}{}\xspace \ensuremath{\rightarrow}\xspace D^* \mu^- \ensuremath{\nub_\mu}\xspace)$ are identical within the quoted precision. \section*{Conclusions and Outlook} While the observed enhancements of the leptonic and semileptonic $B$ meson decay rates involving a $\tau$ lepton relative to the expectations of the SM of electroweak interactions are intriguing, their significance is not sufficient to unambiguously establish a violation of lepton universality at this time. However, the fact that these unexpected enhancements have been observed by three experiments operating in very different environments deserves further attention. At present, the measurements are limited by the size of the available data samples and by uncertainties in the reconstruction efficiencies and background estimates. It is not inconceivable that the experiments have underestimated these uncertainties, or missed a more conventional explanation. Furthermore, while it is unlikely, it cannot be totally excluded that the theoretical SM predictions are not as firm as presently assumed.
Currently, the experimenters are continuing their analysis efforts, refining their methods, enhancing the signal samples by adding additional decay modes, improving the efficiency and selectivity of the tagging algorithms, as well as the Monte Carlo simulations, and scrutinizing all other aspects of the signal extraction. In the near future, LHCb will make several important contributions, among them their first measurement of the $\ensuremath{\Bbar \rightarrow D \tau^-\nutb}\xspace$ decay, which will also improve results for $\ensuremath{\Bbar\rightarrow D^* \tau^-\nutb}\xspace$. Furthermore, the $\tau^{-} \ensuremath{\rightarrow}\xspace \pi^{-} \pi^{+} \pi^{-} \nu_{\tau}$ decay mode will be included. In addition, searches for lepton universality violation in semileptonic decays of other $B$ mesons and baryons are being planned. Beyond that, LHCb will continue to record data at the highest $pp$ collision energy available. By the end of 2017, the accumulated data sample is expected to increase by a factor of three. In the longer-term future, LHCb is planning to further enhance the data rate capability and record much larger event samples. At KEK in Japan, the $\ensuremath{e^+e^-}\xspace$ collider is undergoing a major upgrade and is expected to enlarge the data sample by almost two orders of magnitude over a period of about ten years. In parallel, the capabilities of the Belle detector are also being upgraded. The operation of this new and more powerful detector is expected to start in 2018. The much larger event samples and the constrained $\BB$ kinematics will allow more precise measurements of kinematic distributions and detailed studies, for instance, of the $\tau$ polarization in $\ensuremath{B}\xspace \rightarrow D^{*} \tau \nu_{\tau}$ decays. The feasibility of such a measurement was recently demonstrated \cite{Hirose:2016wfn}. For $\ensuremath{\Bub}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{\tau^-}\xspace \ensuremath{\overline{\nu}_\tau}\xspace$ decays, which currently have statistical and systematic uncertainties of 30\% or more for individual measurements, the substantially larger data samples are expected to lead to major reductions in these uncertainties, allowing more accurate assessments of the compatibility with the SM predictions. Detailed studies of the overall physics goals and precision measurements that can be achieved by Belle II and LHCb are ongoing. In recent years, several experiments have examined decay rates and angular distributions for $\ensuremath{\Bu}\xspace$ decays involving a $K^{(*)+}$ meson and a lepton pair, $\ensuremath{\Bu}\xspace \ensuremath{\rightarrow}\xspace K^{(*)+}\mu^+ \mu^-$ and $\ensuremath{\Bu}\xspace \ensuremath{\rightarrow}\xspace K^{(*)+} e^+ e^-$. In the framework of the SM these decays are very rare, since they involve $b \ensuremath{\rightarrow}\xspace s$ quark transitions. LHCb~\cite{Aaij:2014ora} recently published a measurement of the ratio, \begin{equation} \label{eq:RK} {\cal R}_K = \frac {{\cal B}(\ensuremath{\Bu}\xspace \ensuremath{\rightarrow}\xspace K^+ \mu^+ \mu^-)} {{\cal B}(\ensuremath{\Bu}\xspace \ensuremath{\rightarrow}\xspace K^+ e^+ e^- )} = 0.745 ^{+0.090}_{-0.074} \pm 0.036 , \ \end{equation} a value that is 2.6 standard deviations below the SM expectation of about 1.0. Earlier measurements by Belle~\cite{Wei:2009zv}, CDF~\cite{Aaltonen:2011ja}, and BABAR~\cite{Lees:2012tva} had significantly larger uncertainties and were fully consistent with lepton universality.
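The quoted significance can be reproduced with a short estimate (a sketch: we take the upward statistical uncertainty, the one relevant for the deviation towards the SM value, and combine it in quadrature with the systematic one):
\begin{verbatim}
import math

r_k, stat_up, syst = 0.745, 0.090, 0.036
sigma = math.hypot(stat_up, syst)   # combined uncertainty, ~0.097
print((1.0 - r_k) / sigma)          # ~2.6 standard deviations
\end{verbatim}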
Some theoretical models include new types of interactions that can explain this result; for instance, leptoquarks, which can mediate this decay and result in higher rates for electrons than for muons~\cite{Hiller:2014yaa,Becirevic:2016yqi}. BABAR~\cite{Lees:2012tva}, LHCb~\cite{Aaij:2015oid} and Belle~\cite{Wehle:2016yoi} have analyzed angular distributions for the four decay modes and observed general agreement with SM predictions, except for local deviations, the most significant by LHCb at the level of 3.4 standard deviations. Here too, more data are needed to enhance the significance of these measurements and find possible links to $\ensuremath{B}\xspace$ decays involving $\tau$ leptons. If the currently observed excess in the ratios \ensuremath{{\cal R}_{D}}\xspace\ and \ensuremath{{\cal R}_{D^*}}\xspace\ is confirmed, experimenters will use their large data samples to measure properties of signal events and learn about the nature of the new particles and interactions that contribute to these decays~\cite{Sakaki:2014sea,Alonso:2016gym}. In conclusion, we can expect much larger event samples from the upgraded LHCb and Belle experiments in the not too distant future. These data will be critical to the effort to understand whether the tantalizing results obtained to date are an early indication of beyond-the-SM physics processes or the result of larger-than-expected statistical or systematic deviations. A confirmation of new physics contributions in these decays would shake the foundations of our understanding of matter and trigger an intense program of experimental and theoretical research. \section*{Introduction} \lettrine[lines=3,findent=2pt]{\color{color1}M}{ }ore than 70 years of particle physics research have led to an elegant and concise theory of particle interactions at the sub-nuclear level, commonly referred to as the Standard Model (SM)~\cite{Mann:2010zz,Weinberg:1996kr}. Based on information extracted from experiments, theorists have combined the theory of electroweak (EW) interactions with quantum chromodynamics (QCD), the theory of strong interactions, and experiments have validated this theory to an extraordinary degree. Any observation that is proven to be inconsistent with SM assumptions would suggest a new type of interaction or particle. In the framework of the SM of particle physics, the fundamental building blocks, quarks and leptons, are grouped in three generations of two members each. The three charged leptons, the electron ($e^-$), the muon ($\mu^-$) and the tau ($\tau^-$), are each paired with a very low mass, electrically neutral neutrino, $\ensuremath{\nu_e}\xspace, \ensuremath{\nu_\mu}\xspace,$ and $\ensuremath{\nu_\tau}\xspace$. The electron, a critical component of matter, was discovered by J.J. Thomson~\cite{Thomson} in 1897. The discovery of the muon in cosmic rays by C. D. Anderson and S. H. Neddermeyer~\cite{Neddermeyer:1937md} in 1937 came as a surprise; similarly surprising was the first observation of $\ensuremath{\tau^+\tau^-}\xspace$ pair production by M. Perl et al.~\cite{Perl:1975bf} at the SPEAR $\ensuremath{e^+e^-}\xspace$ storage ring in 1975. As far as we know, all leptons are point-like particles, i.e., they have no substructure.
The three generations are ordered by the mass of the charged lepton, ranging from 0.511\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace for $\epm$, through 105\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace for $\ensuremath{\mu^{\pm}}\xspace$, to 1,777\ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace for $\taupm$~\cite{Ablikim:2014uzh}. These mass differences lead to vastly different lifetimes, from the stable electron to 2.2\ensuremath{\upmu \mathrm{s}}\xspace for muons, and 0.29\ensuremath{\rm \,ps}\xspace for taus. Charged leptons participate in electromagnetic and weak, but not strong interactions, whereas neutrinos only undergo weak interactions. The SM assumes that these interactions of the charged and neutral leptons are universal, i.e., the same for the three generations. Precision tests of lepton universality have been performed over many years by many experiments. To date, no definite violation of lepton universality has been observed. Among the most precise tests is a comparison of decay rates of $K$ mesons, $K^- \ensuremath{\rightarrow}\xspace e^- \ensuremath{\nub_e}\xspace$ versus $K^-\ensuremath{\rightarrow}\xspace \mu^- \ensuremath{\nub_\mu}\xspace$~\cite{Lazzeroni:2012cx}~\cite{charge}. Furthermore, taking into account precision measurements of the tau and muon masses and lifetimes and the decay rates $\ensuremath{\tau^-}\xspace \ensuremath{\rightarrow}\xspace e^- \ensuremath{\nub_e}\xspace \ensuremath{\nu_\tau}\xspace$ and $\mu^- \ensuremath{\rightarrow}\xspace e^- \ensuremath{\nub_e}\xspace \ensuremath{\nu_\mu}\xspace$, the equality of the weak coupling strengths of the tau and muon was confirmed~\cite{Ablikim:2014uzh}. On the other hand, a recent determination of the proton radius, derived from very precise measurements of the Lamb shift in muonic hydrogen atoms~\cite{Pohl:2010zza}, differs by about 4\% from measurements of normal hydrogen atoms and e-p scattering data. Studies of the origin of this puzzling difference are underway~\cite{Pohl:2013yb}. They are aimed at a better understanding of the proton radius and structure, and may reveal details of the true impact of muons and electrons on these interactions. Recent studies of purely leptonic and semileptonic decays of $B$ mesons of the form $\ensuremath{B^- \rightarrow \tau^- \nutb}\xspace$ and $\ensuremath{\Bbar \rightarrow D^{(*)} \ell^- \nulb}\xspace$, with $\ensuremath{\ell}\xspace = e, \mu,$ or $\tau$, have resulted in observations that seem to challenge lepton universality. These weak decays involving leptons are well understood in the framework of the SM, and therefore offer a unique opportunity to search for unknown phenomena and processes involving new particles, for instance, a yet undiscovered charged partner of the Higgs boson~\cite{Tanaka:1994ay}. Such searches have been performed on data collected by three different experiments: the LHCb experiment at the proton-proton ($pp$) collider at CERN in Europe, and the BABAR and Belle experiments at $\ensuremath{e^+e^-}\xspace$ colliders in the U.S.A. and in Japan. Measurements by these three experiments favor larger than expected rates for semileptonic $B$ decays involving $\tau$ leptons. Currently, the combined significance of these results is at the level of four standard deviations, and the fact that all three experiments report an unexpected enhancement has drawn considerable attention. A confirmation of this violation of lepton universality and an explanation in terms of new physics processes would be a very exciting prospect!
In the following, details of the experimental techniques and preliminary studies to understand the observed effects will be presented, along with prospects for improved sensitivity and complementary measurements at current and future facilities. \section*{Introduction} \label{sec:intro} \input{intro.tex} \subsection*{Standard model predictions of B meson decay rates} \label{sec:bdecays} \input{bdecays.tex} \subsection*{B meson production and detection} \label{sec:experiments} \input{experiments.tex} \subsection*{Measurements of \texorpdfstring{\ensuremath{B^- \rightarrow \tau^- \nutb}\xspace}{B -> TauNu} decays} \label{sec:taunu} \input{taunu.tex} \subsection*{Measurements of \texorpdfstring{\ensuremath{\Bbar \rightarrow D^{(*)} \tau^- \nutb}\xspace}{B -> D(*)TauNu} decays} \label{sec:dxtaunu} \input{dxtaunu.tex} \subsection*{Interpretations of results} \label{sec:interpretations} \input{interpretations.tex} \subsection*{Conclusions and outlook} \label{sec:conclusions} \input{conclusions.tex} \subsection*{Acknowledgements} \label{sec:acknowledgements} We recognize the contributions and dedication of our colleagues in the large international collaborations supporting the operation of the BaBar (M.F.S., R.K., V.L.), Belle (T.K., Y.S.) and LHCb (G.C., B.H.) detectors, the data processing, and the data analyses on which the results presented in this Review are based. None of this would have been achieved without the efforts of the teams at SLAC, KEK and CERN who provided excellent beam conditions and delivered high luminosities of the \ensuremath{e^+e^-}\xspace and pp storage rings over many years. We acknowledge support from the Organisation for Scientific Research (NWO) of the Netherlands, the US National Science Foundation and Department of Energy, the Natural Sciences and Engineering Research Council (NSERC) of Canada, the Excellence Cluster of the DFG of Germany: Origin and Structure of the Universe, and the Japan Society for the Promotion of Science (JSPS).
\section{Introduction} The discrepancy of the lepton anomalous magnetic moment ($g-2$) is one of the leading candidates that indicate new physics beyond the standard model (SM). Both in the electron and muon sectors, anomalies have been reported as \begin{align} \laq{g2e} \Delta a_e &= a_e^{\rm EXP} - a_e^{\rm SM} = (-8.7 \pm 3.6) \times 10^{-13}, \\ \laq{g2m} \Delta a_\mu &= a_\mu^{\rm EXP} - a_\mu^{\rm SM} = (27.4 \pm 7.3) \times 10^{-10}, \end{align} where $a_\mu^{\rm SM}$ is the SM prediction of the muon $g-2$~\cite{Davier:2017zfy,Keshavarzi:2018mgv}, and $a_\mu^{\rm EXP}$ is its experimental result~\cite{Bennett:2006fi,Roberts:2010cj}. Recently, a new discrepancy, $\Delta a_e$, was reported in the electron sector, due to the new measurement of the fine structure constant. See Refs.~\cite{Hanneke:2008tm,Hanneke:2010au} for the experimental value of the electron $g-2$, Ref.~\cite{Aoyama:2014sxa} for its theoretical prediction, and Ref.~\cite{Parker:2018vye} for the new result for the fine structure constant. It is challenging to explain both anomalies theoretically. In a wide class of new physics models, contributions to the lepton $g-2$ are scaled by the lepton mass squared. Supposing the muon $g-2$ anomaly is a sign of new physics, the electron $g-2$ is expected to receive a contribution, \begin{align} \laq{ratio} \frac{\Delta a_e}{\Delta a_\mu} \sim \frac{m_e^2}{m_\mu^2} \simeq 2.4\times 10^{-5}. \end{align} This is too small to explain the result \eq{g2e}. Thus, it seems to require very light new particles, which easily conflict with experimental constraints, e.g., from the LHC. In addition, the sign of Eq.~\eq{g2e} is opposite to that of Eq.~\eq{g2m}. Extra mechanisms may flip the sign. For instance, flavor violations in the lepton sector can solve these problems, though they are tightly constrained. New physics models have been studied to explain both anomalies~\cite{Davoudiasl:2018fbb,Crivellin:2018qmi,Liu:2018xkx,Dutta:2018fge,Han:2018znu}. Within the context of supersymmetry (SUSY), lepton flavor violations have been examined~\cite{Dutta:2018fge}. SUSY contributions to the electron $g-2$ are enhanced by the tau Yukawa coupling via the mixings of the selectrons with the staus, instead of introducing very light SUSY particles. Further, the sign is chosen appropriately by the mixings. However, it was argued that the lepton-flavor-violating decay $\tau \to e\gamma$ restricts the system. In this letter, we propose a new mechanism to explain both anomalies within the minimal supersymmetric standard model (MSSM). We assume minimal flavor violation (MFV) for the lepton sector, and thus, the model is free from lepton flavor violations. The key observation is that there are threshold corrections to the lepton Yukawa couplings. They are non-linear in the SUSY particle masses, so that even if the SUSY particle masses follow the MFV hypothesis, the relation \eq{ratio} can be changed drastically. In particular, the SUSY electron Yukawa coupling can be enhanced, and its sign can be opposite to the muon one. The scenario predicts flavor-dependent slepton masses. We will discuss the Higgs mediation scenario as an explicit model~\cite{Yamaguchi:2016oqz}.
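The scaling argument above is quickly quantified (plain arithmetic; the lepton masses are the standard values):
\begin{verbatim}
m_e, m_mu = 0.5110, 105.66    # lepton masses in MeV
scale = (m_e / m_mu) ** 2     # ~2.4e-5, cf. Eq. (ratio)
print(scale * 27.4e-10)       # ~6.4e-14: the naively scaled Delta a_e,
                              # more than 10x below the observed -8.7e-13
                              # and with the wrong sign
\end{verbatim}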
\section{Muon and electron $g-2$} \lac{1} The SUSY Yukawa couplings of leptons, $y_{i}$, are matched with the SM ones, $m_{i}/v$, non-trivially because of radiative corrections $\Delta_i$ and the ratio of the Higgs vacuum expectation values $\tan\beta\equiv \left<H^0_u\right>/\left<H^0_d\right>$, as~\cite{Carena:1999py,Marchetti:2008hw,Hofer:2009xb} \begin{align} \laq{yukawas} y_{i} \simeq {m_{i}\over v}{ \sqrt{1+\tan^2\beta} \over 1+\Delta_i}, \end{align} where the Higgs vacuum expectation value is $v^2 = \left<H^0_u\right>^2 +\left<H^0_d\right>^2\simeq (174 {\rm \, GeV} )^2$. In this letter, we focus on a scenario with a large size of the Higgsino mass parameter, $\mu$, and large $\tan\beta$. Then, the radiative corrections are dominated by threshold corrections from Bino-slepton loop diagrams. In the mass-insertion approximation, they become~\cite{Marchetti:2008hw}\footnote{ In the numerical analysis, we use a general formula for evaluating $\Delta_i$, i.e., without assuming the mass-insertion approximation~\cite{Girrbach:2009uy}. } \begin{align} \Delta_i \simeq \mu \tan\beta \frac{g_Y^2 M_1}{16\pi^2} I(M_1^2, m_{\tilde{ i}_L}^2, m_{\tilde{i}_R}^2), \laq{delta} \end{align} with $i=e, \mu$, and its superpartner $\tilde{i}$. Here, $m_X$ is the mass of $X$, $g_Y$ is the gauge coupling of $\mathop{\rm U}(1)_Y$, and $M_1$ is the Bino mass. The loop function is defined as \begin{align} I(x,y,z) = -\frac{xy\ln (x/y) + yz \ln (y/z) + zx \ln (z/x)}{(x-y)(y-z)(z-x)}, \end{align} which satisfies $I(x,x,x)=1/(2x)$. By taking $m_{\tilde{i}_L}=m_{\tilde{i}_R}= M_1$, one obtains \begin{align} \Delta_i \sim -1 \({\mu \over -100 {\rm \, TeV} }\) \({\tan\beta\over 70}\) \({ 2 {\rm \, TeV} \over M_{1}}\). \end{align} When the Higgsino mass parameter is much larger than the masses of the sleptons and the Bino, $|\Delta_i|$ can be as large as $\mathcal{O}(1)$. It is enhanced by $\mu \tan\beta$ coming from the trilinear coupling $y_i \mu H_u^* \tilde{i}_L\tilde{i}_R$.\footnote{ In general, there are also contributions from $y_i A_i H_d \tilde{i}_L \tilde{i}_R$. We will omit this term for simplicity. The extension with it is straightforward. } The sign of $\Delta_i$ can be either positive or negative depending on that of $\mu$. When RG effects are neglected, $\Delta_i$ depends on the relative size of the soft breaking parameters and $\mu$, and thus, does not change under an overall scaling, i.e., even by increasing the SUSY scale. Figure \ref{fig:del} shows $\Delta_i$ for varying slepton soft masses with $m_{\tilde{i}_L}=m_{\tilde{i}_R}$, $M_1=1.5 {\rm \, TeV} $, $M_2=500 {\rm \, GeV} $, and $\tan\beta=70$. Here, $\mu=-100 {\rm \, TeV} $ (left) and $\mu=-500 {\rm \, TeV} $ (right). The red and blue lines denote $\Delta_\mu$ and $\Delta_e$, respectively. It is found that $\Delta_i$ can be around or smaller than $-1$. In the discontinuity region of the red line, a smuon mass eigenstate becomes tachyonic. The leading $\tan\beta$-enhanced radiative corrections are taken into account in Eq.~\eq{yukawas}, and $|\Delta_i|$ can be large~\cite{Marchetti:2008hw} (cf.~Ref.~\cite{Carena:1999py}).\footnote{ Such a large $|\Delta_i|$ has been discussed in the context of the muon $g-2$~\cite{Borzumati:1999sp,Endo:2013lva,Bach:2015doa, Tran:2018kxv}. } They include a resummation of the radiative corrections in the form of $(g_Y^2\mu\tan\beta/M_{\rm SUSY})^n$ to all orders, where $M_{\rm SUSY}$ is a typical scale of the SUSY particle masses in the loops, while other corrections are suppressed. \begin{figure}[t!]
\begin{center} \includegraphics[width=70mm,bb=0 0 360 349]{Deltaf1.pdf} \includegraphics[width=72mm,bb=0 0 360 340]{Deltaf12.pdf} \end{center} \caption{$\Delta_e$ (blue) and $\Delta_\mu$ (red) for varying the slepton soft mass. Here, $m_{\tilde{i}_R}=m_{\tilde{i}_L}, M_1=1.5 {\rm \, TeV} , M_2=500 {\rm \, GeV} ,\tan\beta=70,$ with $\mu=-100 {\rm \, TeV} $ (left) and $\mu=-500 {\rm \, TeV} $ (right). } \label{fig:del} \end{figure} \begin{figure}[t!] \begin{center} \includegraphics[width=74mm,bb=0 0 360 302]{fig1.pdf} \includegraphics[width=70mm,bb=0 0 360 319]{fig12.pdf} \end{center} \caption{$(a_e)_{\rm SUSY}/m^2_e$ (blue) and $(a_\mu)_{\rm SUSY}/m^2_\mu$ (red) for varying the slepton mass. The other parameters are the same as in Fig.\,\ref{fig:del}. In particular, $\mu=-100 {\rm \, TeV} $ (left) and $\mu=-500 {\rm \, TeV} $ (right). The light blue (red) horizontal band represents the observed discrepancy for the electron (muon) $g-2$ at the $1\sigma$ level. The smuons on the pink vertical band are stable against vacuum decay at the tree level, where the model parameters are evaluated at the scale of the slepton soft mass. } \label{fig:1} \end{figure} When $|\mu| \tan \beta$ is large, the SUSY contributions to the lepton $g-2$ of the $i$-th generation, $(a_i)_{\rm SUSY}$, are dominated by the Bino-slepton diagrams. In the mass-insertion approximation, they are represented as~\cite{Endo:2013lva}\footnote{ In the numerical analysis, we use the formula in Ref.~\cite{Lopez:1993vi,Chattopadhyay:1995ae,Moroi:1995yh} without the mass-insertion approximation for the one-loop contributions. In addition, the formula in Ref.~\cite{vonWeitershausen:2010zr} is used for $\delta_{\rm QED}$. } \begin{align} (a_i)_{\rm SUSY} \simeq \left( \frac{1 - \delta_{\rm QED}}{1 + \Delta_i } \right) {g_Y^2 \over 16\pi^2}{ m_i^2 \mu \tan\beta \, M_1 \over m_{\tilde{i}_L}^2 m_{\tilde{i}_R}^2}\, f_N\left( \frac{m_{\tilde{i}_L}^2}{M_1^2}, \frac{m_{\tilde{i}_R}^2}{M_1^2} \right), \laq{gm2} \end{align} where $f_N(x,y)$ is the loop function defined in Ref.~\cite{Endo:2013lva} and satisfies $f_N(1,1)=1/6$. QED corrections beyond the leading order are taken into account by $\delta_{\rm QED}$~\cite{Degrassi:1998es}. The radiative correction $\Delta_i$ appears because $(a_i)_{\rm SUSY}$ is proportional to the SUSY Yukawa coupling of the lepton. The SUSY contributions \eq{gm2} are scaled by the lepton mass squared $m_i^2$. It is noticed that $(a_i)_{\rm SUSY}$ can be affected drastically by $\Delta_i$ when $\mu M_1$ is large and negative. For $\mu M_1 < 0$, $\Delta_i$ is negative. Since $f_N(x,y)$ is positive, $(a_i)_{\rm SUSY}$ becomes positive (negative) for $1 + \Delta_i < 0\, (>0)$. In addition, \eq{gm2} is enhanced significantly around the cancellation point, \begin{align} \Delta_i = -1. \end{align} Thus, $(a_i)_{\rm SUSY}/m^2_i$ can have a different size and sign for different flavors, depending on $1 + \Delta_i$. It is noticed that lepton-flavor mixings are not necessary, and thus, there are no constraints from lepton flavor violations. In Fig.~\ref{fig:1} we show $(a_i)_{\rm SUSY}/m_i^2$ for the electron (blue line) and the muon (red line). In the horizontal blue band, the observed discrepancy for the electron $g-2$ (see Eq.~\eq{g2e}) is explained at the $1\sigma$ level, and that for the muon (see Eq.~\eq{g2m}) is shown by the red band.
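As a numerical cross-check of the size of the threshold correction entering these results, the loop function and the mass-insertion estimate of $\Delta_i$ can be evaluated directly; the sketch below assumes $g_Y \simeq 0.36$ (a typical value, not quoted in the text) and the degenerate benchmark $m_{\tilde{i}_L}=m_{\tilde{i}_R}=M_1$ of the estimate in the previous section.
\begin{verbatim}
import math

def loop_I(x, y, z):
    # I(x, y, z) as defined above; the singularities at coinciding
    # arguments are removable (I(x, x, x) = 1/(2x)), so exactly equal
    # arguments are split slightly before evaluating the formula.
    if x == y: y *= 1.0 + 1e-4
    if y == z: z *= 1.0 + 2e-4
    if x == z: z *= 1.0 + 3e-4
    num = x*y*math.log(x/y) + y*z*math.log(y/z) + z*x*math.log(z/x)
    return -num / ((x - y) * (y - z) * (z - x))

g_Y = 0.36                           # U(1)_Y gauge coupling (assumed)
mu, tanb, M1 = -100e3, 70.0, 2e3     # GeV, benchmark values
delta = (mu * tanb * g_Y**2 * M1 / (16 * math.pi**2)
         * loop_I(M1**2, M1**2, M1**2))
print(delta)                         # ~ -1.4, i.e. O(-1)
\end{verbatim}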
We find that the electron $g-2$ discrepancy is explained around the cancellation point $\Delta_e = -1$, corresponding to $m_{\tilde{e}_{L,R}}\simeq 2.3 {\rm \, TeV} $ $(6.0 {\rm \, TeV} )$ in the left (right) panel. The selectrons are relatively heavy and easily satisfy the collider constraints. On the other hand, the muon $g-2$ anomaly is explained by lighter smuons, because $(a_\mu)_{\rm SUSY}$ is required to be positive. Here, all the SUSY particles are set to satisfy the current collider/experimental bounds.\footnote{ Light smuons can satisfy the LHC bounds, e.g., by setting the LSP appropriately. } Too large a $|\mu| \tan \beta$ spoils the stability of the electroweak vacuum. In the analysis, we use the formula provided in Ref.~\cite{Endo:2013lva} to derive the vacuum stability condition.\footnote{ The formula fits the result of \texttt{CosmoTransitions 1.0.2}~\cite{Wainwright:2011kj} at the tree level. It may suffer from a large scale uncertainty~\cite{Endo:2015ixx}. In particular, an energy gap exists between the scales of the charge-color breaking vacuum, $\gtrsim 10^8 {\rm \, GeV} $, and the electroweak vacuum, $\sim 100 {\rm \, GeV} $. Since the potential can be lifted at a large renormalization scale, the constraint might be alleviated. } The trilinear coupling associated with $\mu \tan \beta$ is proportional to the SUSY Yukawa coupling of the lepton. In the muon case, the vacuum is stable in the pink vertical band. There is a lower bound on the smuon masses, because the potential is stabilized when the smuons become heavy. In addition, an upper bound is obtained when the pink band appears to the left of the mass discontinuity region (see the right panel of Fig.~\ref{fig:1}). This is because, as the smuon masses increase, $|1 + \Delta_\mu|$ decreases according to Fig.~\ref{fig:del}, and thus the SUSY Yukawa coupling, i.e., the trilinear coupling of the smuons, is enhanced. In Fig.~\ref{fig:1}, it is found that the smuons are required to be heavier than $4.2 {\rm \, TeV} $ for $\mu=-100 {\rm \, TeV} $, while they are restricted to $600 {\rm \, GeV} \lesssim m_{\tilde{\mu}} \lesssim 1 {\rm \, TeV} $ for $\mu=-500 {\rm \, TeV} $ by the vacuum stability condition. In contrast, the vacuum stability constraint for the electron is greatly alleviated and does not affect our scenario, because its Yukawa coupling is tiny. Two sample points are given in Table~\ref{tab:1}. In both cases, the electron $g-2$ discrepancy is explained. For the muon, in order to satisfy the vacuum stability constraint, the smuon should be either heavier (left panel of Fig.~\ref{fig:1}) or lighter (right panel) than the selectron. In the latter case, the muon $g-2$ anomaly is explained while satisfying the vacuum stability constraint.\footnote{ The masses of the stau, stop, and sbottom should also be large enough to avoid the vacuum stability bound, though they are irrelevant to the electron and muon $g-2$. A scenario satisfying this setup will be discussed in the next section. } Our sample points are consistent with the current LHC bounds.\footnote{ A $\tan\beta$ as large as $\gtrsim 70$ would not suffer from a Landau pole below the GUT scale $\sim 10^{16} {\rm \, GeV} $ if the gluino mass is large enough. The SUSY Yukawa coupling of the bottom quark can be suppressed by threshold corrections. } Consequently, we conclude that both the discrepancies of the electron and the muon $g-2$ can be explained simultaneously by choosing the slepton masses appropriately. 
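As a quick cross-check of the sign structure in Eq.~\eq{gm2} (this uses only the formula and the entries of Table~\ref{tab:1}, together with $1-\delta_{\rm QED}>0$), note that $\mu\tan\beta\, M_1 f_N < 0$ at both sample points, so the sign of $(a_i)_{\rm SUSY}$ is fixed by that of $1+\Delta_i$: at Point {\bf II}, $1+\Delta_e = 0.01 > 0$ gives $(a_e)_{\rm SUSY} < 0$, while $1+\Delta_\mu = -22 < 0$ flips the sign and gives $(a_\mu)_{\rm SUSY} > 0$, in agreement with the values listed in the table.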
\begin{table*}[!t] \begin{center} \begin{tabular}{|c|c|c|} \hline & {\bf I} & {\bf II} \\ \hline\hline ${\mu}$& $-100 $& $-500$ \\ ${\tan\beta}$ & 70 & 70 \\ $M_1, M_2$ &1.5, 1.0 & 1.5, 0.6 \\ $m_{\tilde{e}_{L,R}}$ &2.4, 2.3 & 6.0, 6.0\\ $m_{\tilde{\mu}_{L,R}}$ & 5.0, 5.0 & 0.7, 0.7 \\ \hline\hline $\Delta_e$ &$-0.97$& $-0.99$ \\ $\Delta_\mu$ &$-0.23$& $-23$ \\ $(a_{e})_{\rm SUSY}$ & $-8.8 \times 10^{-13}$ & $-7.3 \times 10^{-13}$ \\ $(a_\mu)_{\rm SUSY}$ & $-0.1 \times 10^{-9}$ & $3.1 \times 10^{-9}$ \\ \hline \end{tabular} \end{center} \caption{Two sample points which explain the electron $g-2$ discrepancy. All masses are in units of TeV. The parameters in the upper block are inputs, while the results are given in the lower block.} \label{tab:1} \end{table*} Let us give three comments on the mechanism. First of all, our analysis is almost independent of the Wino mass. This is because the Wino diagrams relevant for $\Delta_i$ and $(a_i)_{\rm SUSY}$ internally exchange the Higgsino. The Higgsino is assumed to be so heavy that its contributions are suppressed. It is interesting to mention that our model can be compatible with the Wino LSP, which is a dark matter candidate. Secondly, let us mention how to test the mechanism. There are two ways: direct production of the SUSY particles and indirect detection. The direct production of the selectrons is challenging, because they tend to be heavy in order to realize $\Delta_e \simeq -1$. Their masses may exceed the reach of future collider experiments. In contrast, the smuons are as light as $\mathcal{O}(0.1-1) {\rm \, TeV} $, which could be tested at the LHC and future experiments. In particular, once the heavier smuon is produced, the branching fraction of its decay to the lighter one with the Higgs boson becomes sizable because of the large trilinear coupling. This may give a characteristic signature for the experiments. The scenario may also be tested by indirect searches. Since $\Delta_i$ is close to $-1$ for the electron or large and negative for the muon, the branching fractions of the (semi-)leptonic $B$ meson decays are affected when the heavy Higgs bosons are relatively light~\cite{Choudhury:1998ze,Babu:1999hn}.\footnote{ The quark sector also receives threshold corrections similarly. The SUSY Yukawa couplings of the down-type quarks can be enhanced with certain squark and gluino/Bino masses, and large $|\mu| \tan\beta$. Such effects may be observed in quark flavor physics. } The decays can proceed by exchanging the heavy Higgs bosons, whose couplings to the leptons are given by the SUSY Yukawa couplings. The SUSY contributions to the muon channels are suppressed by the large $|\Delta_\mu|^2$, whereas those to the electron modes are enhanced by $1/|1 + \Delta_e|^2$. Next, SUSY corrections to the SM Higgs boson decaying into lepton pairs are weak (see Ref.~\cite{Endo:2015oia}). In fact, those to the muon channel are suppressed by the large $|\Delta_\mu|^2$. For the electron channel, although the corrections are enhanced by $1/|1 + \Delta_e|^2$, they are still suppressed by $\cos(\beta-\alpha)$, where $\alpha$ is a Higgs mixing angle, and may be below the sensitivities of future electron-positron colliders. Further, if the Wino-like neutralino is the dark matter, the scenario can be tested by direct/indirect dark matter searches. Lastly, let us comment on the parameter tuning required for $1 + \Delta_e = \mathcal{O}(0.1-1)\%$. This cancellation can be linked with the mass hierarchy between the electron and the muon, $m_e/m_\mu=\mathcal{O}(0.1)\%$. 
The electron mass is realized by the SUSY Yukawa coupling, $y_e$, which is comparable to the muon one, because the Yukawa couplings satisfy the relation \begin{align} {y_e \over y_\mu} \simeq {m_e \over m_\mu} {1+\Delta_\mu \over 1+\Delta_e}. \end{align} In general, the small electron mass may be chosen by an anthropic selection~\cite{Agrawal:1997gf}. Then, in our scenario, the selectron mass might be chosen so as to obtain the tiny electron mass. \section{Higgs mediation scenario} In order to explain the current discrepancies of the electron and muon $g-2$, the smuons are required to be lighter than the selectrons. In this section, we provide UV models that realize such a slepton spectrum. Let us assume minimal flavor violation (MFV) for the slepton soft-breaking masses~\cite{Hall:1990ac, Ciuchini:1998xy, Buras:2000dm, DAmbrosio:2002vsn, Paradisi:2008qh},\footnote{ The smuons can also be embedded in $N=2$ SUSY multiplets~\cite{Shimizu:2015ara,Yin:2016pkz}. Here, SUSY breaking effects are suppressed due to the $N=2$ non-renormalization theorem. The smuons tend to be lighter than the other sleptons.} \begin{align} m_{\tilde{i}_L}^2 &\simeq d_L+c_L y_i^2, \notag \\ m_{\tilde{i}_R}^2 &\simeq d_R+c_R y_i^2, \laq{model} \end{align} where higher-order terms in $y_i$ are omitted. The first terms on the right-hand side, $d_L$ and $d_R$, are flavor-blind contributions, e.g., from SUSY-breaking mediation via gauge interactions. The second terms, proportional to $c_L$ and $c_R$, depend on the lepton Yukawa couplings, i.e., on the lepton flavors. Such contributions are generated by SUSY-breaking mediation via the Higgs sector, as we will discuss below. Here, the lepton Yukawa matrix is diagonalized without loss of generality, and hence the slepton soft mass matrices are aligned with the Yukawa matrix. Then, there is no lepton-flavor violation.\footnote{ This is not the case beyond the MSSM, e.g., when strongly-coupled right-handed neutrinos are introduced to explain the neutrino masses~\cite{Hisano:1995cp}. This may be supported by the thermal leptogenesis~\cite{Fukugita:1986hr}. Even in this case, one can introduce a flavor symmetry in the lepton Yukawa couplings, $y^N_i H_u L_i N_i$, where $N_i$ are the right-handed neutrinos. The neutrino oscillations are realized if the neutrino mass term, $W=\sum_{ij} M^N_{\rm ij} N_i N_j$, breaks the flavor symmetry. Flavor-violating effects from the neutrino Yukawa couplings should be suppressed. Alternatively, the neutrino masses can be obtained by introducing the dimension-five operator, $W= 1/M(L_j H_u)(L_i H_u)$, at a high energy scale. Then, the scenario does not change. The baryogenesis works with active neutrino oscillations when an inflaton decays either to the left-handed leptons flavor-dependently or to the Higgs boson~\cite{Hamada:2018epb}. } In this Letter, we do not assume anything special for the squarks and the gluino. Their masses depend on the details of the UV models. Let us discuss a Higgs mediation scenario that realizes the flavor-dependent mass contributions, $c_L$ and $c_R$. The scenario was first identified in a model with non-universal Higgs masses~\cite{Yamaguchi:2016oqz}, where radiative corrections involving large negative Higgs squared masses provide positive contributions to the squark and slepton masses that depend on the Yukawa couplings, i.e., on the flavors. 
By taking $m_{H_u}^2\simeq m_{H_d}^2 < 0$, the slepton masses are estimated by RG running as (cf.\ Ref.~\cite{Yanagida:2018eho}) \begin{align} \laq{Higmed} c_R\simeq 2c_L \simeq \frac{1}{4\pi^2} \bar{m}^2 \log{\(\frac{ M_{\rm GUT} }{\bar{m}}\)}, \end{align} in the leading-logarithmic approximation. Here, $M_{\rm GUT}\sim 10^{16} {\rm \, GeV} $ is the GUT scale, and $\bar{m}^2 \equiv -m_{H_d}^2$. A tachyonic mass for the pseudo-Higgs boson is avoided by assuming~\cite{Yamaguchi:2016oqz, Yin:2016shg} \begin{align} \mu \sim -\bar{m},~\tan\beta \gtrsim 50. \end{align} This setup is favored for realizing large $|\Delta_i|$. On the other hand, $d_L$ and $d_R$ depend on flavor-blind mediation mechanisms.\footnote{ The anomaly mediation has been discussed within the context of the Higgs mediation, which is called the Higgs-anomaly mediation~\cite{Yin:2016shg,Yanagida:2016kag,Yanagida:2018eho}. The anomaly mediation provides flavor-blind masses. Such a setup can be realized by sequestering the sfermions and gauginos away from the SUSY breaking sector, while the two Higgs multiplets are not sequestered. Then, the soft mass parameters vanish for the sfermions and the gauginos at the input scale, but not for the Higgs. The former masses are generated at loop level. Although this scenario can explain the muon $g-2$ anomaly, it cannot explain both the electron and muon $g-2$ anomalies simultaneously. This is because the squarks of the first two generations become tachyonic when $\bar{m}$ becomes too large compared with the gaugino masses. Such a difficulty is avoided if we take into account an additional flavor-blind mediation. The additional contribution can induce large squark masses.} In the following, we do not specify the mechanism and leave them as free parameters. \begin{table*}[!t] \begin{center} \begin{tabular}{|c|c|c|c|} \hline & {\bf I} & {\bf II} & {\bf III} \\ \hline\hline $\bar{m}$& $100 $& $350$ & $350$ \\ ${\tan\beta}$ & 70 & 80 & 80\\ $\sqrt{d _L}$ & 2.0 & 0.28& 0.28 \\ $\sqrt{d_R}$ & 2.0 & 0.28& 0.28 \\ $M_1, M_2$ &1.0, 0.6 & 1.6, 0.6 & 1.6, 0.6 \\ \hline\hline $m_{\tilde{e}_{1,2}}$ & 2.0, 2.0 & 6.4, 4.5& 6.4, 4.5\\ $m_{\tilde{\mu}_{1,2}}$ & 4.9, 3.7 & 0.99, 0.64 & 16, 12\\ $m_{\tilde{\tau}_{1,2}}$ & 56, 39 & 220, 150 & 220, 150\\ \hline\hline $\Delta_e$ &$-0.96$& $-0.99$ & $-0.99$ \\ $\Delta_\mu$ &$-0.23$& $-15$ & $-0.19$ \\ $|y_e|$ &$0.005$& $0.023$ & $0.023$ \\ $|y_\mu|$ &$0.06$& $0.003$ & $0.062$ \\ $(a_{e})_{\rm SUSY} $& $-6.9 \times 10^{-13} $ & $-5.6 \times 10^{-13} $ & $-5.6 \times 10^{-13}$ \\ $(a_\mu)_{\rm SUSY}$& $-0.2 \times 10^{-9}$ & $2.6 \times 10^{-9}$ & $-0.01 \times 10^{-9}$ \\ \hline \end{tabular} \end{center} \caption{Higgs mediation sample points. All masses are in units of TeV. The model input parameters are provided in the upper block, and the results are given in the middle and lower blocks. The selectron, smuon, and stau masses are shown in the middle block. } \label{tab:2} \end{table*} There are two types of mass spectra for the smuons and selectrons which are consistent with the Higgs mediation. According to the previous section, when $m_{\tilde{\mu}}\gg m_{\tilde{e}}$, one obtains $|\Delta_\mu|\ll1$ and $|y_\mu|\gtrsim |y_e|$. On the other hand, when $m_{\tilde{\mu}}\ll m_{\tilde{e}}$, $|\Delta_\mu|$ can be so large that $|y_\mu|$ becomes smaller than $|y_e|$. Both spectra can be realized by the Higgs mediation. In fact, when $\bar{m}^2>0$ they satisfy the relation \begin{align} \laq{Higgs} \(\frac{m^2_{\tilde{\mu}_{L,R}}-m^2_{\tilde{e}_{L,R}}}{y_\mu^2-y_e^2}\)>0. 
\end{align} Let us provide three sample points of the Higgs mediation scenario in Table \ref{tab:2}. The model parameter $\bar{m}$ is input at the GUT scale, and the flavor-dependent contributions to the slepton masses are derived by solving the RG equations, i.e., by using Eq.~\eq{Higmed}. On the other hand, the flavor-blind contributions, $d_L$ and $d_R$, as well as the gaugino masses, $M_1$ and $M_2$, are free parameters in our analysis. Their values at the scale $\bar{m}$ are also provided in Table \ref{tab:2}. Then, the soft masses and the SUSY Yukawa couplings are derived by using Eq.~\eq{model} with the threshold corrections, $\Delta_i$. Points {\bf I} and {\bf III} explain the electron $g-2$ discrepancy, while the muon $g-2$ anomaly is not explained. On the other hand, both are explained at Point {\bf II}. In all cases, the vacuum stability condition is satisfied for the stau as well as the smuon, because the staus become heavy in this scenario. Note that Points $\bf II$ and $\bf III$ have the same dimensionful input parameters, even though the results are different. This is because multiple sets of the smuon SUSY-breaking masses satisfy Eq.~\eq{yukawas}. The dimensionless parameters, in particular $y_\mu$, then become different due to the large threshold corrections. Before closing this section, let us comment on the stop and sbottom masses. They are likely to be as large as $\mathcal{O}(10-100) {\rm \, TeV} $ from the Higgs mediation, similarly to the stau. Such a setup can be consistent with the SM Higgs boson mass and the vacuum stability condition in the squark sector, in particular by choosing the gluino mass appropriately~\cite{Vega:2015fna}. However, we face a severe little hierarchy problem due to the large stop masses. The discussion of this problem is beyond the scope of this Letter and will be presented elsewhere. \section{Conclusions} We proposed an MSSM scenario to explain both the electron and muon $g-2$ discrepancies without introducing lepton-flavor mixings. The discrepancies are different in scale and sign: the electron $g-2$ requires SUSY contributions that are larger than those for the muon $g-2$, once normalized by the lepton mass squared, and of opposite sign. In our scenario, this is realized by the threshold corrections to the SUSY Yukawa interactions together with a flavor-dependent slepton mass spectrum. The electron Yukawa coupling is enhanced by these corrections, and its sign can be opposite to that of the muon. In order to explain both anomalies, the smuons are required to be (much) lighter than the selectrons. We discussed that such a mass spectrum is consistent with the Higgs mediation scenario. \vspace{1em} \noindent {\it Acknowledgements}: \\ W.Y. thanks the hospitality of the KEK theory center where this work was initiated. This work was supported by JSPS KAKENHI Nos.~16K17681 (M.E.) and 16H03991 (M.E.), and NRF Strategic Research Program NRF-2017R1E1A1A01072736 (W.Y.).
\section{Introduction} In this paper, we study the so-called Metropolis-Hastings reversiblizations in a continuous-time and finite state space setting. This work is largely motivated by \cite{Choi16}, in which the author introduced two Metropolis-Hastings (MH) kernels $M_1$ and $M_2$ to study non-reversible Markov chains in discrete time. While $M_2$ is a self-adjoint kernel, it may not be Markovian, which makes further probabilistic analysis of $M_2$ difficult. In a continuous-time setting, however, we will show that a similar construction for $M_2$ still gives a valid Markov generator, and this observation motivates us to study fine theoretical properties of $M_2$. This paper is therefore devoted to the study of $M_1$ and $M_2$ and offers relevant comparison results between $M_1$, $M_2$ and the proposal chain. It turns out that $M_2$ enjoys superior hitting time and mixing time properties when compared with $M_1$, and so from a Markov chain Monte Carlo perspective, $M_2$ offers acceleration when compared with the classical MH algorithm $M_1$. The rest of this paper is organized as follows. In Section \ref{sec:M1M2}, we fix our notation and define the two MH generators $M_1$ and $M_2$ that we study throughout the paper. The main results can be found in Section \ref{sec:geom} and Section \ref{sec:hitmixcompare}. In Section \ref{sec:geom} we provide interesting geometric interpretations of $M_1$, $M_2$ and their convex combinations as $\ell^1$ minimizers between the proposal chain and the set of generators that are reversible with respect to the target distribution. In Section \ref{sec:hitmixcompare} we compare various hitting time and mixing time parameters of $M_1$ and $M_2$. The final section is devoted to two concrete examples. More specifically, in the first example, in Section \ref{sec:MIS}, we consider the special case of Metropolised independent sampling and offer an explicit spectral analysis of $M_1$ and $M_2$, while in the second example, in Section \ref{sec:bd}, we study a birth-death proposal chain that allows for an effective comparison of the fastest strong stationary times of $M_1$ and $M_2$. \section{Metropolis-Hastings kernels: $M_1$ and $M_2$}\label{sec:M1M2} In this section, we give the construction of continuous-time Metropolis-Hastings (MH) Markov chains. To fix our notation, we let $\mathcal{X}$ be a finite state space and $\mu$ be a target distribution on $\mathcal{X}$. It is well known that the classical MH algorithm offers a way to construct a discrete-time Markov chain that is reversible with respect to $\mu$. For pointers on this subject, we refer readers to \cite{MRRTT53,RR04} and the references therein. Here we adapt the basic idea and recast the classical discrete-time MH algorithm in a continuous-time setting so as to construct what we call the first MH Markov chain. We note that a similar construction of continuous-time Metropolis-type algorithms can be found in \cite{DM09}. \begin{definition}[The first MH generator]\label{def:M1} Given a target distribution $\mu$ on a finite state space $\mathcal{X}$ and a proposal continuous-time irreducible Markov chain with generator $Q$, the first MH Markov chain has generator given by $M_1 = M_1(Q,\mu) = (M_1(x,y))_{x,y \in \mathcal{X}}$, where $$M_1(x,y) := \begin{cases} \min\left\{Q(x,y),\dfrac{\mu(y)}{\mu(x)}Q(y,x)\right\}, &\mbox{if } x \neq y; \\ - \sum_{z: z \neq x} M_1(x,z), & \mbox{if } x = y. 
\end{cases}$$ \end{definition} Note that the above definition closely resembles the classical MH algorithm, in which we simply replace the transition probabilities of the MH algorithm by the transition rates $Q$ of the proposal chain. By mirroring the transition effect of $M_1$ and capturing the opposite movement, we can construct another MH generator, which we call the second MH generator. More precisely, we define it as follows. \begin{definition}[The second MH generator]\label{def:M2} Given a target distribution $\mu$ on a finite state space $\mathcal{X}$ and a proposal continuous-time irreducible Markov chain with generator $Q$, the second MH Markov chain has generator given by $M_2 = M_2(Q,\mu) = (M_2(x,y))_{x,y \in \mathcal{X}}$, where $$M_2(x,y) := \begin{cases} \max\left\{Q(x,y),\dfrac{\mu(y)}{\mu(x)}Q(y,x)\right\}, &\mbox{if } x \neq y; \\ - \sum_{z:z \neq x} M_2(x,z), & \mbox{if } x = y. \end{cases}$$ \end{definition} Comparing Definitions \ref{def:M1} and \ref{def:M2}, we see that in the former we take $\min$ while in the latter we take $\max$ for the off-diagonal entries. This is what we mean by $M_2$ mirroring the transition effect of $M_1$. As another remark, we note that in the discrete-time setting, $M_2$ as defined in \cite{Choi16} may not be a Markov kernel. In the continuous-time setting, however, $M_2$ as defined in Definition \ref{def:M2} is a valid Markov generator. To allow for an effective comparison between these generators, we now introduce the notion of Peskun ordering of continuous-time Markov chains. This partial ordering was first introduced by \cite{Pesk73} for discrete-time Markov chains on a finite state space. It was further generalized by \cite{Tie98} to general state spaces, and by \cite{LM08} to continuous-time Markov chains. \begin{definition}[Peskun ordering] Suppose that we have two continuous-time Markov chains with generators $Q_1$ and $Q_2$ respectively, and that both chains share the same stationary distribution $\pi$. $Q_1$ is said to dominate $Q_2$ off-diagonally, written $Q_1 \succeq Q_2$, if for all $x \neq y \in \mathcal{X}$, we have $$Q_1(x,y) \geq Q_2(x,y).$$ \end{definition} For a given target distribution $\mu$ and proposal generator $Q$, define the time-reversal generator of $Q$ with respect to $\mu$ by $$ Q^*(x,y)=\frac{\mu(y)Q(y,x)}{\mu(x)},\quad x,y\in \mathcal{X}. $$ $Q$ is said to be $\mu$-reversible if and only if $Q=Q^*$. For convenience, let $\bar{Q}=(Q+Q^*)/2$. We also denote the inner product with respect to $\mu$ by $\langle\cdot, \cdot \rangle_\mu$; that is, for any functions $f,\ g:\ \mathcal{X}\rightarrow\mathbb{R}$, $$ \langle f,g\rangle_\mu=\sum_{x\in \mathcal{X}}f(x)g(x)\mu(x). $$ In the following, we collect a few elementary observations and results on the behaviour of the generators $Q,\ Q^*,\ M_1$ and $M_2$. \begin{lemma}\label{lem:M1M2} Given a target distribution $\mu$ on $\mathcal{X}$ and a proposal chain with generator $Q$, we have \begin{enumerate} \item $M_1$ and $M_2$ are $\mu$-reversible. \label{it:1} \item(Peskun ordering) $M_2 \succeq M_1$. \label{it:2} \item \label{it:3} For any function $f:\ \mathcal{X}\rightarrow \mathbb{R}$, $$\langle M_2 f,f \rangle_{\mu} \leq \langle M_1 f,f \rangle_{\mu}.$$ \end{enumerate} If we take $\mu = \pi$, the stationary distribution of the proposal chain, then we have \begin{enumerate}[resume] \item $Q + Q^* = M_1 + M_2$. \label{it:4} \item(Peskun ordering) $M_2 \succeq Q \succeq M_1$. 
\label{it:5} \item For any function $f:\ \mathcal{X}\rightarrow \mathbb{R}$, $$\langle M_2 f,f \rangle_{\pi} \leq \langle Q f,f \rangle_{\pi} \leq \langle M_1 f,f \rangle_{\pi}.$$ \label{it:6} \end{enumerate} \end{lemma} \begin{proof} For item \eqref{it:1}, it is easy to see that for $x \neq y$, $$ \mu(x)M_2(x,y) = \max\{\mu(x)Q(x,y),\mu(y)Q(y,x)\} = \max\{\mu(y)Q(y,x),\mu(x)Q(x,y)\} = \mu(y)M_2(y,x). $$ So $M_2$ is $\mu$-reversible. Similarly, the $\mu$-reversibility of $M_1$ can be derived by replacing $\max$ with $\min$ in the above argument. Next, we prove item \eqref{it:2}, which trivially holds since $$ M_2(x,y) = \max\left\{Q(x,y),\dfrac{\mu(y)}{\mu(x)}Q(y,x)\right\} \geq \min\left\{Q(x,y),\dfrac{\mu(y)}{\mu(x)}Q(y,x)\right\} = M_1(x,y). $$ Item \eqref{it:3} follows readily from \cite[Theorem $5$]{LM08} since both $M_1$ and $M_2$ are $\mu$-reversible. Next, we prove item \eqref{it:4}. If $\mu=\pi$, we see that $$Q(x,y) + Q^*(x,y) = \min\left\{Q(x,y),Q^*(x,y)\right\} + \max\left\{Q(x,y),Q^*(x,y)\right\} = M_1(x,y) + M_2(x,y).$$ We proceed to prove item \eqref{it:5}, which follows from $$M_2(x,y) = \max\left\{Q(x,y),Q^*(x,y)\right\} \geq Q(x,y) \geq \min\left\{Q(x,y),Q^*(x,y)\right\} = M_1(x,y).$$ Finally, we prove item \eqref{it:6}. For any function $f$, we see that $$\langle Q f,f \rangle_{\pi} = \left\langle \bar{Q} f,f \right\rangle_{\pi}.$$ As we have $M_2 \succeq \bar{Q} \succeq M_1$ and they are all reversible generators, the desired results follow from \cite[Theorem $5$]{LM08}. \end{proof} The above lemma will be frequently exploited to develop the comparison results in Section \ref{sec:hitmixcompare}. \section{Geometric interpretation of $M_1$ and $M_2$}\label{sec:geom} This section is devoted to offering a geometric interpretation of both $M_1$ and $M_2$. Suppose that we are given a target distribution $\mu$ on $\mathcal{X}$ and a proposal chain with generator $Q$ and stationary distribution $\pi$. Our result is largely motivated by the work of \cite{BD01}, who was the first to study geometric consequences of $M_1$ in discrete time. As we will show in our main result, Theorem \ref{thm:geomM1M2} below, it turns out that both $M_1$ and $M_2$ (as well as their convex combinations) minimize a certain $\ell^1$ distance to the set of $\mu$-reversible Markov generators on $\mathcal{X}$. As a result, they are natural transformations that map a given Markov generator to the set of $\mu$-reversible Markov generators. Let us now fix a few notations and define a metric to quantify the distance between two Markov generators. We write $\mathcal{R}(\mu)$ for the set of conservative $\mu$-reversible Markov generators and $\mathcal{S}(\mathcal{X})$ for the set of Markov generators on $\mathcal{X}$. For any $Q_1, Q_2 \in \mathcal{S}(\mathcal{X})$, we define a metric $d_{\mu}$ on $\mathcal{S}(\mathcal{X})$ by $$d_{\mu}(Q_1,Q_2) := \sum_{x \in \mathcal{X}} \sum_{y: x \neq y} \mu(x) |Q_1(x,y)-Q_2(x,y)|.$$ To see that $d_{\mu}$ defines a metric, note that $d_{\mu}(Q_1,Q_2) = 0$ implies $Q_1(x,y) = Q_2(x,y)$ for all off-diagonal entries, and since each row sums to zero, we also have $Q_1(x,x) = Q_2(x,x)$ for all $x \in \mathcal{X}$. The distance between $Q$ and $\mathcal{R}(\mu)$ is then defined to be \begin{align}\label{eq:l1metric} d_{\mu}(Q,\mathcal{R}(\mu)) := \inf_{M \in \mathcal{R}(\mu)} d_{\mu}(Q,M). 
\end{align} With the above notations in mind, we are now ready to state our main result in this section: \begin{theorem}\label{thm:geomM1M2} The convex combinations $\alpha M_1 + (1-\alpha)M_2$ for $\alpha \in [0,1]$ minimize the distance $d_{\mu}$ between $Q$ and $\mathcal{R}(\mu)$. That is, $$d_{\mu}(Q,\mathcal{R}(\mu))= d_{\mu}(Q,\alpha M_1 + (1-\alpha)M_2).$$ Moreover, $M_1$ (resp.~$M_2$) is the unique closest element of $\mathcal{R}(\mu)$ that is coordinate-wise no larger (resp.~no smaller) than $Q$ off-diagonally. \end{theorem} \begin{rk} Taking $\mu = \pi$, the stationary distribution of $Q$, and $\alpha = 1/2$ in Theorem \ref{thm:geomM1M2}, and using $Q + Q^* = M_1 + M_2$ from Lemma \ref{lem:M1M2}, we see that $$d_{\pi}(Q,\mathcal{R}(\pi)) = d_{\pi}(Q,\bar{Q}).$$ Thus, the additive reversiblization $\bar{Q}$ is a natural transformation of $Q$ that minimizes the distance $d_{\pi}$ between $Q$ and the set of $\pi$-reversible generators. \end{rk} To illustrate this result, we consider the simplest possible two-state case, with $\mathcal{X} = \{0,1\}$ and $$Q_{(a,b)} = \begin{bmatrix} -a & a \\ b & -b \end{bmatrix},$$ where $a,b > 0$. Thus the generator $Q_{(a,b)}$ can be parameterized as $(a,b)$ on $\mathcal{S}(\mathcal{X}) = \{Q_{(a,b)};~(a,b) \in \mathbb{R}^+ \times \mathbb{R}^+\}$. The intersection of $\mathcal{S}(\mathcal{X})$ and the line $\mu(1)b - \mu(0)a = 0$ is therefore the set of $\mu$-reversible generators $\mathcal{R}(\mu)$, that is, $\mathcal{R}(\mu) = \{Q_{(a,b)};~\mu(1)b - \mu(0)a = 0, a,b > 0\}$. These are illustrated in Figure \ref{fig:geometric} below. In Figure \ref{fig:geometric}, there are two points $(a_1,b_1)$ and $(a_2,b_2)$. The former point lies above $\mathcal{R}(\mu)$ while the latter lies below the straight line $\mathcal{R}(\mu)$, and hence they represent two non-reversible Markov chains. We see that $M_1$ projects vertically for $(a_1,b_1)$ and horizontally for $(a_2,b_2)$. On the other hand, $M_2$ mirrors the action of $M_1$ and does the opposite: it projects horizontally for $(a_1,b_1)$ and vertically for $(a_2,b_2)$. In addition, we can compute the distance $d_{\mu}$ explicitly between these generators as in Theorem \ref{thm:geomM1M2}: \begin{align*} d_{\mu}(Q_{(a_1,b_1)}, M_1(Q_{(a_1,b_1)},\mu) ) &= |\mu(1) b_1 - \mu(0) a_1|, \\ d_{\mu}(Q_{(a_1,b_1)}, M_2(Q_{(a_1,b_1)},\mu) ) &= |\mu(0) a_1 - \mu(1) b_1|, \end{align*} and \begin{align*} &\quad d_{\mu}(Q_{(a_1,b_1)}, \alpha M_1(Q_{(a_1,b_1)},\mu) + (1-\alpha)M_2(Q_{(a_1,b_1)},\mu) )\\ &= (1-\alpha)|\mu(0) a_1 - \mu(1) b_1| + \alpha |\mu(1) b_1 - \mu(0) a_1| \\ &= |\mu(0) a_1 - \mu(1) b_1|, \end{align*} where $\alpha \in [0,1]$. By Theorem \ref{thm:geomM1M2}, they all minimize the distance between $Q_{(a_1,b_1)}$ and $\mathcal{R}(\mu)$. \begin{figure} \includegraphics[width=0.7\linewidth]{geometric} \caption{$M_1$ and $M_2$ as $\ell^1$ projections in the $2 \times 2$ case} \label{fig:geometric} \end{figure}
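To make the projection picture fully explicit, one can evaluate Definitions~\ref{def:M1} and \ref{def:M2} directly in this two-state example (an elementary check, using only the definitions). For the point $(a_1,b_1)$, which satisfies $\mu(0)a_1 < \mu(1)b_1$, we obtain \begin{align*} M_1(Q_{(a_1,b_1)},\mu) = Q_{\left(a_1,\ \frac{\mu(0)}{\mu(1)}a_1\right)}, \qquad M_2(Q_{(a_1,b_1)},\mu) = Q_{\left(\frac{\mu(1)}{\mu(0)}b_1,\ b_1\right)}, \end{align*} since $\min\big\{a_1, \frac{\mu(1)}{\mu(0)}b_1\big\} = a_1$ and $\min\big\{b_1, \frac{\mu(0)}{\mu(1)}a_1\big\} = \frac{\mu(0)}{\mu(1)}a_1$, with $\max$ in place of $\min$ for $M_2$. Both images lie on the line $\mu(1)b = \mu(0)a$, confirming the vertical and horizontal projections described above.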
We now proceed to give a proof of Theorem \ref{thm:geomM1M2}. \begin{proof}[Proof of Theorem \ref{thm:geomM1M2}] The proof is inspired by the proof of Theorem $1$ in \cite{BD01}. We first define two helpful half spaces: \begin{align*} H^{<} = H^{<}(Q,\mu) &:= \big\{(x,y);~\mu(x)Q(x,y) < \mu(y)Q(y,x)\big\}, \\ H^{>} = H^{>}(Q,\mu) &:= \big\{(x,y);~\mu(x)Q(x,y) > \mu(y)Q(y,x)\big\}. \end{align*} We now show that for $N \in \mathcal{R}(\mu)$, $d_{\mu}(Q,N) \geq d_{\mu}(Q,M_2)$. First, we note that \begin{align*} d_{\mu}(Q,N) \geq \sum_{(x,y) \in H^{<}} \big[\mu(x) |Q(x,y) - N(x,y)| + \mu(y) |Q(y,x) - N(y,x)|\big]. \end{align*} As $N$ is $\mu$-reversible, setting $N(x,y) = Q(x,y) + \epsilon_{xy}$ gives $N(y,x) = \frac{\mu(x)}{\mu(y)}(Q(x,y) + \epsilon_{xy})$. Plugging these expressions back yields \begin{align*} d_{\mu}(Q,N) &\geq \sum_{(x,y) \in H^{<}} \Big[\mu(x) |\epsilon_{xy}| + \mu(y) \left|Q(y,x) - \frac{\mu(x)}{\mu(y)}(Q(x,y) + \epsilon_{xy})\right|\Big] \\ &= \sum_{(x,y) \in H^{<}} \big[\mu(x) |\epsilon_{xy}| + \left|\mu(y) Q(y,x) - \mu(x)Q(x,y) - \mu(x) \epsilon_{xy}\right|\big] \\ &\geq \sum_{(x,y) \in H^{<}} \left|\mu(y) Q(y,x) - \mu(x)Q(x,y) \right| = d_{\mu}(Q,M_2), \end{align*} where we use the reverse triangle inequality $|a-b| \geq |a| - |b|$ in the second inequality. For uniqueness, if $N$ is off-diagonally no smaller than $Q$, then $\epsilon_{xy} \geq 0$. If any of them is strictly positive, we have $d_{\mu}(Q,N) > d_{\mu}(Q,M_2)$. Alternatively, the uniqueness can be seen by observing that if $N(x,y) > Q(x,y)$ then $N(y,x) > \dfrac{\mu(x)}{\mu(y)}Q(x,y)$. Similarly, we can show $d_{\mu}(Q,N) \geq d_{\mu}(Q,M_1)$ by substituting $H^{>}$ for $H^{<}$. To see that $d_{\mu}(Q,M_1) = d_{\mu}(Q,M_2)$, we have \begin{align*} d_{\mu}(Q,M_2) &= \sum_{(x,y) \in H^{<}} \left|\mu(y) Q(y,x) - \mu(x)Q(x,y) \right| \\ &= \sum_{(y,x) \in H^{>}} \left|\mu(y) Q(y,x) - \mu(x)Q(x,y) \right| = d_{\mu}(Q,M_1). \end{align*} As for the convex combinations of $M_1$ and $M_2$, we see that \begin{align*} d_{\mu}(Q,\alpha M_1 + (1-\alpha)M_2) &= (1-\alpha)\sum_{(x,y) \in H^{<}} \left|\mu(y) Q(y,x) - \mu(x)Q(x,y) \right| \\ &\quad + \alpha \sum_{(x,y) \in H^{>}} \left|\mu(y) Q(y,x) - \mu(x)Q(x,y) \right|\\ &= (1-\alpha) d_{\mu}(Q,M_2) + \alpha d_{\mu}(Q,M_1) = d_{\mu}(Q,M_1). \end{align*} \end{proof} \section{Performance comparisons between $\bar{Q}$, $M_1$ and $M_2$}\label{sec:hitmixcompare} The evaluation of the performance of Markov chains depends on the comparison criterion. Popular comparison criteria that have appeared in the literature include mixing times and hitting times, the spectral gap, the asymptotic variance and large deviations; see, e.g., \cite{Pesk73,RR97,CLP99,DHN00,GM00,SGS10,CH13,Bie16,HM17,HM18,FHY92,CCHP12,Hwang05} and the references therein. In this section we give some comparison theorems based on these parameters for the chains $\bar{Q}$, $M_1$ and $M_2$. Recall that we are given an irreducible proposal chain $X = (X_t)_{t \geq 0}$ with generator $Q$, transition semigroup $(P_t)_{t \geq 0}$ and stationary distribution $\pi$. We are primarily interested in the behaviour of the following parameters: \begin{itemize} \item(Hitting times) We write \begin{align*} \tau_{A} &= \tau_{A}(Q) := \inf\{t \geq 0; X_t \in A\},\\ \tau_{A}^+& = \tau_{A}^+(Q) := \inf\{t > 0; X_t \in A \text{ and there exists } s \in (0,t) \text{ such that } X_s \neq X_0\} \end{align*} for the first hitting time of and the first return time to the set $A \subseteq \mathcal{X}$ by the chain $X$, respectively, and the usual convention $\inf \emptyset = \infty$ applies. We also adopt the common notation $\tau_y := \tau_{\{y\}}$ (resp.~$\tau_y^+ := \tau_{\{y\}}^+$) for $y \in \mathcal{X}$. 
The commute time $t_{com}^{x,y}$ between two states $x,\ y$ is $$t_{com}^{x,y} = t_{com}^{x,y}(Q) := \E_x(\tau_y(Q)) + \E_y(\tau_x(Q)).$$ Another hitting time parameter of interest is the average hitting time $t_{av}$, which is defined to be $$t_{av} = t_{av}(Q,\pi) := \sum_{x,y} \mathbb{E}_x(\tau_y) \pi(x)\pi(y).$$ In fact, $t_{av}$ equals the sum of the reciprocals of the non-zero eigenvalues of $-Q$, and it also has a close connection with the notion of strong ergodicity; see for example \cite{Mao04,CuiMao10}. \item(Total variation mixing time) For $\epsilon > 0$, we write the total variation mixing time $t_{mix}(\epsilon)$ as $$t_{mix}(\epsilon) = t_{mix}(Q,\pi,\epsilon) := \inf\big\{t \geq 0; \sup_x ||P_t(x,\cdot) - \pi||_{TV} < \epsilon\big\},$$ where for any probability measures $\nu$ and $\pi$ on $\mathcal{X}$, $||\nu - \pi||_{TV} := \frac{1}{2} \sum_x |\nu(x) - \pi(x)|$ is the total variation distance between the two measures. A commonly used choice is $t_{mix}(1/4)$, where we take $\epsilon = 1/4$. \item(Spectral gap) Denote the spectral gap of $Q$ by $$\lambda_2 = \lambda_2(Q,\pi) := \inf\big\{\langle -Qf,f \rangle_{\pi}:\ \pi(f) = 0, \pi(f^2) = 1\big\}.$$ The relaxation time $t_{rel}$ is then the reciprocal of $\lambda_2$, that is, $$t_{rel} = t_{rel}(Q,\pi) := \dfrac{1}{\lambda_2}.$$ Note that in the finite state space setting, $\lambda_2$ is the second smallest eigenvalue of $-\bar{Q}$. \item(Asymptotic variance) For a mean-zero function $f$, i.e., $\pi(f)=0$, the central limit theorem for Markov processes \cite[Theorem $2.7$]{KLO12} gives that $t^{-1/2} \int_0^t f(X_s)ds$ converges in distribution to a mean-zero Gaussian distribution with variance $$\sigma^2(f,Q,\pi) := -2 \langle f,g \rangle_{\pi},$$ where $g$ solves the Poisson equation $Qg = f$. \item(Large deviations) Let the occupation measure of the Markov chain $X$ be $$ L_t=\frac{1}{t}\int_{0}^{t}\delta_{X_s} ds, $$ and the rate function be $$ I(Q,\nu)=\sup_{u>0}\Big(-\sum_{x\in \mathcal{X}}\nu(x)\frac{Qu(x)}{u(x)}\Big),\quad \nu\in \mathcal{P}(\mathcal{X}), $$ where the supremum is taken over positive functions $u$ on $\mathcal{X}$ and $\mathcal{P}(\mathcal{X})$ is the set of probability distributions on $\mathcal{X}$. It follows from the large deviation principle that for large $t$ and $A\subseteq \mathcal{P}(\mathcal{X})$, $$ \mathbb{P}(L_t\in A)\approx \text{exp}\left(-t\inf_{\nu\in A}I(Q,\nu)\right). $$ We refer readers to \cite{Ho00} for further references on the subject of large deviations of Markov chains. \item(Capacity) For any disjoint subsets $A,B$ of $\mathcal{X}$, we define the capacity between $A$ and $B$ to be $$\mathrm{cap}(A,B) = \mathrm{cap}(A,B,Q,\pi) := \sum_{x \in A} \pi(x) \mathbb{P}_x(\tau_A^+ > \tau_B^+).$$ If $Q$ is reversible with respect to $\pi$, the classical Dirichlet principle for the capacity gives $$\mathrm{cap}(A,B) = \inf\{\langle -Qf,f \rangle_{\pi}:\ f|_{A} = 1, f|_{B} = 0\}.$$ Note that in \cite{Do94} and \cite{GL14}, the authors derived the Dirichlet principle for non-reversible Markov chains. 
\end{itemize} With the above parameters in mind, we are now ready to state our first comparison result between $M_1$ and $M_2$: \begin{theorem}[Comparison theorem between $M_1(Q,\mu)$ and $M_2(Q,\mu)$]\label{thm:compareM1QmuM2Qmu} Given a target distribution $\mu$ on a finite state space $\mathcal{X}$ and an irreducible proposal chain with generator $Q$, we have the following comparison results between $M_1 = M_1(Q,\mu)$ and $M_2 = M_2(Q,\mu)$: \begin{enumerate} \item(Hitting times)\label{it:hit} For $\lambda > 0$ and $A \subseteq \mathcal{X}$, we have $$\mathbb{E}_{\mu}(e^{-\lambda \tau_A(M_1)}) \leq \mathbb{E}_{\mu}(e^{-\lambda \tau_A(M_2)}).$$ In particular, $\mathbb{E}_{\mu}(\tau_A(M_1)) \geq \mathbb{E}_{\mu}(\tau_A(M_2))$. Furthermore, $t_{av}(M_1,\mu) \geq t_{av}(M_2,\mu)$. \item(Total variation mixing time)\label{it:tmix} There exists a positive constant $C_{\mu}$ that depends on $\mu$ such that $$t_{mix}(M_2,\mu,1/4) \leq C_{\mu}t_{mix}(M_1,\mu,1/4).$$ That is, $t_{mix}(M_2,\mu,1/4) \lesssim_{\mu} t_{mix}(M_1,\mu,1/4).$ \item(Spectral gap)\label{it:l2} We have $\lambda_2(M_1,\mu)\leq \lambda_2(M_2,\mu)$. That is, the exponential $\ell^2$-convergence rate of the chain $M_2$ is faster than that of the chain $M_1$, or $t_{rel}(M_1,\mu) \geq t_{rel}(M_2,\mu).$ \item(Asymptotic variance)\label{it:asymvar} For $h \in \ell^2_0(\mu) = \{h;~\mu(h) = 0\}$, $$\sigma^2(h,M_1,\mu) \geq \sigma^2(h,M_2,\mu).$$ \item(Large deviations)\label{it:larde} For any $\nu \in \mathcal{P}(\mathcal{X})$, $I(M_1,\nu)\leq I(M_2,\nu)$. That is, deviations of the chain $M_2$ from the invariant distribution are asymptotically less likely than those of $M_1$. \item(Capacity)\label{it:cap} For any disjoint $A,B \subseteq \mathcal{X}$, $$\mathrm{cap}(A,B,M_1,\mu) \leq \mathrm{cap}(A,B,M_2,\mu).$$ In particular, if we take $A=\{x\}$ and $B=\{y\}$, we have $$ t_{com}^{x,y}(M_1) \geq t_{com}^{x,y}(M_2). $$ \end{enumerate} \end{theorem} Theorem \ref{thm:compareM1QmuM2Qmu} shows that $M_2$ has mixing properties superior to those of $M_1$ in almost all aspects: $M_2$ has smaller mean hitting times, average hitting time, commute times, total variation mixing time, relaxation time and asymptotic variance, and a larger rate function and capacity between any two disjoint sets. As a result, it seems to suggest that for Markov chain Monte Carlo purposes one should use $M_2$ whenever possible, since it is faster than its classical Metropolis-Hastings counterpart $M_1$. In Section \ref{sec:MIS}, we offer an explicit spectral analysis of both $M_1$ and $M_2$ in the Metropolised independent sampling setting. In the next result, we take $\mu = \pi$, the stationary distribution of the proposal chain $Q$, and offer comparison results between $M_1, M_2$ and $\bar{Q}$. Recall that all these generators minimize the distance between $Q$ and $\mathcal{R}(\pi)$ by Theorem \ref{thm:geomM1M2}. They, however, behave differently with respect to different parameters: \begin{theorem}[Comparison theorem between $M_1$, $M_2$ and $\bar{Q}$]\label{thm:compareM1M2additive} Suppose we are given a target distribution $\pi$ on a finite state space $\mathcal{X}$ and a proposal chain with generator $Q$. 
If $\pi$ is the stationary distribution of the chain $Q$, then we have the following comparison results between $M_1 = M_1(Q,\pi)$, $M_2 = M_2(Q,\pi)$ and $\bar{Q} = (Q+Q^*)/2$: \begin{enumerate} \item(Hitting times)\label{hit} For $\lambda > 0$ and $A \subseteq \mathcal{X}$, we have $$ \mathbb{E}_{\pi}(e^{-\lambda \tau_A(M_1)}) \leq \mathbb{E}_{\pi}(e^{-\lambda \tau_A(\bar{Q})}) \leq \min\big\{\mathbb{E}_{\pi}(e^{-\lambda \tau_A(Q)}),\ \mathbb{E}_{\pi}(e^{-\lambda \tau_A(M_2)})\big\}. $$ In particular, for any $A\subseteq\mathcal{X}$, $$ \mathbb{E}_{\pi}(\tau_A(M_1)) \geq \mathbb{E}_{\pi}(\tau_A(\bar{Q})) \geq \max\big\{\mathbb{E}_{\pi}(\tau_A(Q)),\ \mathbb{E}_{\pi}(\tau_A(M_2))\big\}. $$ Furthermore, $$ t_{av}(M_1,\pi) \geq t_{av}(\bar{Q},\pi)\geq \max\big\{t_{av}(Q,\pi),\ t_{av}(M_2,\pi)\big\}. $$ \item(Total variation mixing time) There exist positive constants $C_{\pi}^{(1)}, C_{\pi}^{(2)}$ that depend on $\pi$ such that $$t_{mix}(M_2,\pi,1/4) \leq C_{\pi}^{(1)}t_{mix}(\bar{Q},\pi,1/4) \leq C_{\pi}^{(2)}t_{mix}(M_1,\pi,1/4).$$ That is, $t_{mix}(M_2,\pi,1/4) \lesssim_{\pi} t_{mix}(\bar{Q},\pi,1/4) \lesssim_{\pi} t_{mix}(M_1,\pi,1/4).$ \item(Spectral gap) We have $$ \lambda_2(M_1,\pi)\leq \lambda_2(\bar{Q},\pi)\leq \lambda_2(M_2,\pi). $$ That is, $t_{rel}(M_1,\pi)\geq t_{rel}(\bar{Q},\pi)\geq t_{rel}(M_2,\pi).$ \item(Asymptotic variance)\label{asva} For $h \in \ell^2_0(\pi) = \{h;~\pi(h)=0\}$, $$ \sigma^2(h,M_1,\pi) \geq \sigma^2(h,\bar{Q},\pi) \geq \max\big\{\sigma^2(h,Q,\pi),\ \sigma^2(h,M_2,\pi)\big\}. $$ \item(Large deviations)\label{lade} For any $\nu\in\mathcal{P}(\mathcal{X})$, $$ I(M_1,\nu)\leq I(\bar{Q},\nu)\leq \min\big\{I(Q,\nu),\ I(M_2,\nu)\big\}. $$ \item(Capacity)\label{capa} For any disjoint $A,B \subseteq \mathcal{X}$, $$ \mathrm{cap}(A,B,M_1,\pi) \leq \mathrm{cap}(A,B,\bar{Q},\pi) \leq \min\big\{\mathrm{cap}(A,B,Q,\pi),\ \mathrm{cap}(A,B,M_2,\pi)\big\}. $$ In particular, if we take $A=\{x\}$ and $B=\{y\}$, we have $$ t_{com}^{x,y}(M_1) \geq t_{com}^{x,y}(\bar{Q}) \geq \max\big\{t_{com}^{x,y}(Q),\ t_{com}^{x,y}(M_2)\big\}. $$ \end{enumerate} \end{theorem} \begin{rk} In Theorem \ref{thm:compareM1M2additive}, we provide a comparison theorem between $M_1$, $M_2$ and $\bar{Q}$ based on different parameters. We believe that similar results should hold between $Q$ and $M_2$, and conjecture that $M_2$ should mix faster than $Q$, simply because we are taking the maximum of $Q$ and $Q^*$ for the off-diagonal entries. We are, however, not able to prove this due to the non-reversibility of $Q$. \end{rk} The rest of this section is devoted to the proofs of Theorem \ref{thm:compareM1QmuM2Qmu} and Theorem \ref{thm:compareM1M2additive}. \subsection{Proof of Theorem \ref{thm:compareM1QmuM2Qmu}} Many of our results follow from the Peskun ordering between $M_1$ and $M_2$ as well as Lemma \ref{lem:M1M2}. We first prove item \eqref{it:hit}. As $M_2 \succeq M_1$ and both are $\mu$-reversible, \cite[Theorem $3.1$]{HM18} gives the desired result on the Laplace transform order of hitting times. Next, we prove item \eqref{it:tmix}. 
First, we fix $A \subseteq \mathcal{X}$ such that $\mu(A) \geq 1/4$; by item \eqref{it:hit}, we have $$\mathbb{E}_{\mu}(\tau_A(M_2)) \leq \mathbb{E}_{\mu}(\tau_A(M_1)) \leq \sup_x \mathbb{E}_{x}(\tau_A(M_1)) \leq \sup_{x,A: \mu(A) \geq 1/4} \mathbb{E}_{x}(\tau_A(M_1)).$$ By \cite[Theorem $1.3$]{Oliveria12}, there exists a universal constant $C^{(1)}$ such that $\sup_{x,A: \mu(A) \geq 1/4} \mathbb{E}_{x}(\tau_A(M_1)) \leq C^{(1)} t_{mix}(M_1,\mu,1/4).$ On the other hand, letting $x^* := \arg \max_x \mathbb{E}_{x}(\tau_A(M_2))$ and $\mu_{min} := \min_x \mu(x)$, we have $$\mu(x^*) \mathbb{E}_{x^*}(\tau_A(M_2)) \leq \mathbb{E}_{\mu}(\tau_A(M_2)) \leq C^{(1)} t_{mix}(M_1,\mu,1/4),$$ which becomes $$\sup_{x,A: \mu(A) \geq 1/4} \mathbb{E}_{x}(\tau_A(M_2)) \leq \dfrac{C^{(1)}}{\mu_{min}} t_{mix}(M_1,\mu,1/4).$$ Using \cite[Theorem $1.3$]{Oliveria12} again, there exists a universal constant $C^{(2)}$ such that $$\sup_{x,A: \mu(A) \geq 1/4} \mathbb{E}_{x}(\tau_A(M_2)) \geq C^{(2)} t_{mix}(M_2,\mu,1/4).$$ The desired result follows by taking $C_{\mu} = \frac{C^{(1)}}{C^{(2)}\mu_{min}}$. Now, we prove item \eqref{it:l2}. Using the definition of the spectral gap and Lemma \ref{lem:M1M2}, we see that \begin{align*} \lambda_2(M_2,\mu) &= \inf\{\langle -M_2f,f \rangle_{\mu}:\ \mu(f) = 0, \mu(f^2) = 1\} \\ &\geq \inf\{\langle -M_1f,f \rangle_{\mu}:\ \mu(f) = 0, \mu(f^2) = 1\} \\ &= \lambda_2(M_1,\mu). \end{align*} By \cite[Chapter 9]{Chenb92}, this leads to \begin{align*} \sup_{||f||_{\ell^2(\mu)} \leq 1} ||e^{M_2t}f - \mu(f)||_{\ell^2(\mu)} = e^{-\lambda_2(M_2,\mu)t} &\leq e^{-\lambda_2(M_1,\mu)t} = \sup_{||f||_{\ell^2(\mu)} \leq 1} ||e^{M_1t}f - \mu(f)||_{\ell^2(\mu)}. \end{align*} Item \eqref{it:asymvar} readily follows from \cite[Theorem $6$]{LM08}. We proceed to prove item \eqref{it:larde}. Denote $R=M_2-M_1$. It is easy to see that $R$ is also a $\mu$-reversible generator. Since $M_1$, $M_2$ and $R$ are $\mu$-reversible generators, from \cite[Theorem IV.14]{Ho00}, \begin{align*} I(M_2,\nu)&=-\sum_{x,y}\sqrt{\frac{\nu(x)}{\mu(x)}}\mu(x)M_2(x,y)\sqrt{\frac{\nu(y)}{\mu(y)}}\\ &=-\sum_{x,y}\sqrt{\frac{\nu(x)}{\mu(x)}}\mu(x)\Big(M_1(x,y)+R(x,y)\Big)\sqrt{\frac{\nu(y)}{\mu(y)}}\\ &\geq I(M_1,\nu), \quad \text{for }\nu\in \mathcal{P}(\mathcal{X}). \end{align*} Finally, we prove item \eqref{it:cap}. We use again Lemma \ref{lem:M1M2} and the Dirichlet principle for the capacity of reversible Markov chains in \cite[Chapter 2, Theorem 6.1]{Liggb85} to obtain \begin{align*} \mathrm{cap}(A,B,M_1,\mu) &= \inf\big\{\langle -M_1f,f \rangle_{\mu}:\ f|_{A} = 1, f|_{B} = 0\big\} \\ &\leq \inf\big\{\langle -M_2f,f \rangle_{\mu}:\ f|_{A} = 1, f|_{B} = 0\big\} \\ &= \mathrm{cap}(A,B,M_2,\mu). \end{align*} In particular, taking $A = \{x\}$ and $B = \{y\}$, we have $$\mu(x)\mathbb{P}_x(\tau_y(M_1) < \tau_x^+(M_1)) \leq \mu(x)\mathbb{P}_x(\tau_y(M_2) < \tau_x^+(M_2)).$$ Together with $|M_1(x,x)| \leq |M_2(x,x)|$ and \cite[Chapter $2$ Corollary $8$]{AF14}, the desired result follows since \begin{align*} t_{com}^{x,y}(M_2) &= \dfrac{1}{|M_2(x,x)|\mu(x)\mathbb{P}_x(\tau_y(M_2) < \tau_x^+(M_2))}\\ &\leq \dfrac{1}{|M_1(x,x)|\mu(x)\mathbb{P}_x(\tau_y(M_1) < \tau_x^+(M_1)) }\\ &= t_{com}^{x,y}(M_1). \end{align*} \quad $\square$ \subsection{Proof of Theorem \ref{thm:compareM1M2additive}} Since $M_1,\ \bar{Q},\ M_2$ are $\pi$-reversible and $M_2\succeq \bar{Q}\succeq M_1$, the proof of the results for them is omitted, as it is essentially the same as the proof of Theorem \ref{thm:compareM1QmuM2Qmu} with $\mu$ replaced by $\pi$. 
It remains to prove the results for the chain $Q$. For hitting times, it follows from \cite[Theorem 3.3]{HM18} that $$ \mathbb{E}_{\pi}(e^{-\lambda \tau_A(\bar{Q})}) \leq \mathbb{E}_{\pi}(e^{-\lambda \tau_A(Q)})\quad \text{and}\quad \mathbb{E}_\pi(\tau_A(\bar{Q}))\geq \mathbb{E}_\pi(\tau_A(Q)), $$ for any $A\subseteq\mathcal{X}$. Hence, item \eqref{hit} holds. Next, we prove item \eqref{asva}. In fact, applying \cite[Theorem 4.3]{HM18.2} to the continuous-time case, i.e., replacing $I-P$ by $Q$ in its proof, gives that $$ \sigma^2(h,\bar{Q},\pi) \geq \sigma^2(h,Q,\pi),\quad \text{for}\ h\in \ell^2_0(\pi). $$ For large deviations, \cite[Proposition 3.2]{Bie16} gives the desired result. Finally, we use the Dirichlet principle for the capacity in \cite{GL14} to prove item \eqref{capa}. More specifically, from \cite[Lemma 3.2]{GL14}, we can see that \begin{align*} \text{cap}(A,B,Q,\pi)&=\inf_{f|_A=1,f|_B=0}\sup_{g|_A,g|_B \text{ constants}} \big\{2\langle Q^*f,g \rangle_\pi-\langle g,(-\bar{Q})g\rangle_\pi\big\} \\ &\geq \inf_{f|_A=1,f|_B=0}\langle f, (-\bar{Q})f\rangle_\pi\\ &=\text{cap}(A,B,\bar{Q},\pi), \end{align*} where we take $g=-f$ in the inequality. \quad $\square$ \section{Examples}\label{sec:ex} In this section, two examples are provided to illustrate our main results. In the first example, in Section \ref{sec:MIS}, we give an eigenanalysis for the case of Metropolised independent sampling, while in the second example, in Section \ref{sec:bd}, we consider the case when $Q$ is a birth-death chain and compare the fastest strong stationary times of $M_1$ and $M_2$. \subsection{Metropolised independent sampling and spectral analysis of $M_2$}\label{sec:MIS} In this section, we offer an explicit spectral analysis of both $M_1$ and $M_2$ for Metropolised independent sampling on a finite state space $\mathcal{X} = \llbracket 1,m \rrbracket = \{1,2,\ldots,m\}$ with $m \in \mathbb{N}$. This section is inspired by the work of \cite{Liu96}, who offered the first explicit eigenanalysis of $M_1$ for Metropolised independent sampling. We will show that similar results can be obtained for $M_2$ using the techniques therein. Suppose that $\mathbf{p} = (p_y)_{y \in \mathcal{X}}$ is a probability distribution on $\mathcal{X}$, and denote by $P = (P(x,y))_{x,y \in \mathcal{X}}$, with $P(x,y) = p_y$, the transition matrix of the form $$P = \begin{bmatrix} p_1 & p_2 & \ldots & p_m \\ \vdots & \vdots & \vdots & \vdots \\ p_1 & p_2 & \ldots & p_m \end{bmatrix}.$$ In addition, we take the proposal chain to be the continuized chain of $P$, with generator $Q := P - I$, where $I$ is the identity matrix of size $m \times m$. For a given target distribution $\mu = (\mu(x))_{x \in \mathcal{X}}$, we define $$w_x := \dfrac{\mu(x)}{p_x}$$ and assume without loss of generality (by relabelling the state space) that $w_1 \geq w_2 \geq \ldots \geq w_m$. 
As a result, both $M_1(Q,\mu)$ and $M_2(Q,\mu)$ take the form \begin{align*} M_1(Q,\mu) &= \begin{bmatrix} p_1 + \gamma_1-1 & \frac{\mu(2)}{w_1} & \frac{\mu(3)}{w_1} & \ldots & \frac{\mu(m)}{w_1} \\ p_1 & p_2 + \gamma_2 - 1 & \frac{\mu(3)}{w_2} & \ldots & \frac{\mu(m)}{w_2} \\ p_1 & p_2 & p_3 + \gamma_3 - 1 & \ldots & \frac{\mu(m)}{w_3} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ p_1 & p_2 & p_3 & \ldots & p_m - 1 \end{bmatrix}, \\ M_2(Q,\mu) &= \begin{bmatrix} p_1-1 & p_2 & p_3 & \ldots & p_m \\ \frac{\mu(1)}{w_2} & p_2 + \beta_2 - 1 & p_3 & \ldots & p_m \\ \frac{\mu(1)}{w_3} & \frac{\mu(2)}{w_3} & p_3 + \beta_3 - 1 & \ldots & p_m \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac{\mu(1)}{w_m} & \frac{\mu(2)}{w_m} & \frac{\mu(3)}{w_m} & \ldots & p_m + \beta_m - 1 \end{bmatrix}, \end{align*} where for $x \in \llbracket 1,m-1 \rrbracket$ and $i \in \llbracket 2,m \rrbracket$, $$\gamma_x := \sum_{j = x}^{m} \left(\dfrac{\mu(j)}{w_j} - \dfrac{\mu(j)}{w_x}\right) \geq 0, \quad \beta_i := \sum_{j=1}^i \left(\dfrac{\mu(j)}{w_j} - \dfrac{\mu(j)}{w_i}\right) \leq 0.$$ In our result below, we show that $(\beta_i-1)_{i \in \llbracket 2,m \rrbracket}$ (resp.~$(\gamma_x-1)_{x \in \llbracket 1,m-1 \rrbracket}$) are the eigenvalues of $M_2(Q,\mu)$ (resp.~$M_1(Q,\mu)$). \begin{proposition}[Eigenanalysis of $M_1$ and $M_2$ for Metropolised independent sampling]\label{prop:eigenM1M2} Given a target distribution $\mu$ on $\mathcal{X} = \llbracket 1,m \rrbracket$ and a proposal chain with generator $Q = P - I$, the non-zero eigenvalue-eigenvector pairs of $M_2(Q,\mu)$ are $(\beta_i-1,\mathbf{v}_i)_{i \in \llbracket 2,m \rrbracket}$, while those of $M_1(Q,\mu)$ are $(\gamma_x-1,\mathbf{w}_x)_{x \in \llbracket 1,m-1 \rrbracket}$, where \begin{align*} \mathbf{v}_i &= \bigg(-\mu(i),-\mu(i),\ldots,-\mu(i),\underbrace{\sum_{j \leq i-1} \mu(j)}_{\text{i}^{th} position},0,\ldots,0 \bigg)^T, \\ \mathbf{w}_x &= \bigg(0,0,\ldots,0,\underbrace{\sum_{j \geq x+1} \mu(j)}_{\text{x}^{th} position},-\mu(x),\ldots,-\mu(x) \bigg)^T. \end{align*} \end{proposition} \begin{rk} In fact, the spectral information of $M_1(Q,\mu)$ can be obtained from that of $M_2(Q,\mu)$ by changing the index $i$ to $m-i+1$ and replacing $\beta$ by $\gamma$. \end{rk} \begin{rk} As explicit eigenvalue information is available for $M_2(Q,\mu)$, by means of Theorem \ref{thm:compareM1QmuM2Qmu} item \eqref{it:l2} we have $$e^{(\max_{i \in \llbracket 2,m \rrbracket} \beta_i-1)t} = \sup_{||f||_{\ell^2(\mu)} \leq 1} ||e^{M_2t}f - \mu(f)||_{\ell^2(\mu)} \leq \sup_{||f||_{\ell^2(\mu)} \leq 1} ||e^{M_1t}f - \mu(f)||_{\ell^2(\mu)} = e^{(\gamma_1-1)t}.$$ \end{rk} \subsection{Proof of Proposition \ref{prop:eigenM1M2}} In this section, we prove Proposition \ref{prop:eigenM1M2} for $M_2$ by adapting techniques similar to those in \cite{Liu96}. The case of $M_1$ has already been treated in \cite[Theorem $2.1$]{Liu96}. First, we write $\mathbf{1} := (1,1,\ldots,1)^T$ for the column vector of all ones and define $G$ by $$G := M_2(Q,\mu) -Q = \begin{bmatrix} 0 & 0 & 0 & \ldots & 0 \\ \frac{\mu(1)}{w_2}-p_1 & \beta_2 & 0 & \ldots & 0 \\ \frac{\mu(1)}{w_3}-p_1 & \frac{\mu(2)}{w_3}-p_2 & \beta_3 & \ldots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \frac{\mu(1)}{w_m}-p_1 & \frac{\mu(2)}{w_m}-p_2 & \frac{\mu(3)}{w_m}-p_3 & \ldots & \beta_m \end{bmatrix}.$$ Note that $\mathbf{1}$ is a common right eigenvector of both $M_2(Q,\mu) + I$ and $M_2(Q,\mu) + I - G = P$ with eigenvalue $1$. 
Since $M_2(Q,\mu) + I - G = P$ is of rank one, the rest of the eigenvalues of $M_2(Q,\mu) + I$ and $G$ have to be the same, and hence the non-zero eigenvalues of $M_2(Q,\mu)$ are $(\beta_i-1)_{i \in \llbracket 2,m \rrbracket}$. To determine the eigenvectors, we begin with the eigenvectors of $G$. \begin{lemma} For $i \in \llbracket 2,m \rrbracket$, $$\mathbf{u}_i = (\mathbf{u}_i(x))_{x \in \llbracket 1,m \rrbracket} = \bigg(0,0,\ldots,0,\underbrace{\sum_{j \leq i} \mu(j)}_{\text{i}^{th} position},\mu(i),\ldots,\mu(i) \bigg)^T$$ is an eigenvector of $G$ associated with the eigenvalue $\beta_i$. \end{lemma} \begin{proof} We consider the $k$-th entry of the vector $G\mathbf{u}_i$, denoted by $G\mathbf{u}_i(k)$, in the following three cases: $k < i$, $k = i$ and $k > i$. In the first case, when $k < i$, $$G\mathbf{u}_i(k) = 0 = \beta_i \mathbf{u}_i(k).$$ In the second case, when $k = i$, we have $$G\mathbf{u}_i(k) = \beta_i \sum_{j \leq i} \mu(j) = \beta_i \mathbf{u}_i(k).$$ Finally, in the last case, when $k > i$, we check that \begin{align*} G\mathbf{u}_i(k) &= G(k,i) \sum_{j \leq i} \mu(j) + \mu(i) \sum_{j = i+1}^k G(k,j)\\ &= \left(\dfrac{\mu(i)}{w_k} - p_i\right) \sum_{j \leq i} \mu(j) + \mu(i) \sum_{j = i+1}^{k-1} \left( \dfrac{\mu(j)}{w_k} - \dfrac{\mu(j)}{w_j} \right) + \mu(i) \beta_k \\ &= \mu(i)\sum_{j=1}^i \left(\dfrac{\mu(j)}{w_j} - \dfrac{\mu(j)}{w_i}\right) = \beta_i \mu(i) = \beta_i \mathbf{u}_i(k). \end{align*} \end{proof} We now proceed to show that $\mathbf{v}_i = \mathbf{u}_i - \mu(i) \mathbf{1}$ is an eigenvector. First, we note that $$\mathbf{p} \mathbf{u}_i = p_i \sum_{j \leq i} \mu(j) + \mu(i) \sum_{j=i+1}^m p_j = \mu(i) (1-\beta_i),$$ and so $$\left(M_2(Q,\mu) + I \right)\mathbf{u}_i = G \mathbf{u}_i + \mathbf{1} \mathbf{p} \mathbf{u}_i = \beta_i \mathbf{u}_i + \mu(i) (1-\beta_i) \mathbf{1}.$$ Since $\mathbf{1}$ is a right eigenvector of $M_2(Q,\mu) + I$ with eigenvalue $1$, for any $t$ we have $$\left(M_2(Q,\mu) + I\right)(\mathbf{u}_i - t \mathbf{1}) = \beta_i \left(\mathbf{u}_i - \dfrac{t - \mu(i)(1-\beta_i)}{\beta_i}\mathbf{1}\right).$$ Solving $t = \dfrac{t - \mu(i)(1-\beta_i)}{\beta_i}$ gives $t = \mu(i)$, and hence $\mathbf{v}_i = \mathbf{u}_i - \mu(i) \mathbf{1}$ is an eigenvector. \subsection{Comparing the fastest strong stationary times of birth-death Metropolis chains $M_1$ and $M_2$}\label{sec:bd} In our second example, we consider the case when the state space is $\mathcal{X} = \llbracket 0,n \rrbracket$ and the proposal chain $Q$ is an ergodic birth-death chain, that is, $Q(x,y) > 0$ if and only if $|y-x| = 1$. In this setting, it is easy to see that both $M_1(Q,\mu)$ and $M_2(Q,\mu)$ are birth-death chains. For this example we are primarily interested in the so-called fastest strong stationary time $T_{sst}(Q)$ starting from $0$, which is a randomized stopping time that satisfies, for $t > 0$, $$\max_{y \in \mathcal{X}} \bigg\{1- \dfrac{e^{Qt}(0,y)}{\pi(y)}\bigg\} = \mathbb{P}_0(T_{sst}(Q) > t).$$ As $Q$ is a birth-death chain, classical results (see e.g. \cite[Corollary $4.2$]{DSC06} or \cite[Theorem $1.4$]{Fill}) tell us that $T_{sst}(Q)$ under $\mathbb{P}_0$ has the law of a convolution of exponential distributions with parameters given by the non-zero eigenvalues of $-Q$. 
This fact, together with the Courant-Fischer min-max theorem for eigenvalues, gives rise to the following comparison results: \begin{proposition}[Fastest strong stationary times of birth-death Metropolis chains $M_1$ and $M_2$] Given a target distribution $\mu$ on a finite state space $\mathcal{X}$ and a birth-death proposal chain with generator $Q$, writing the eigenvalues of $-M_1 = -M_1(Q,\mu)$ and $-M_2 = -M_2(Q,\mu)$ in ascending order as $0 = \lambda_1(M_j) \leq \lambda_2(M_j) \leq \ldots \leq \lambda_{|\mathcal{X}|}(M_j)$ for $j = 1,2$, we have, for $i \in \llbracket 1,|\mathcal{X}| \rrbracket$, $$\lambda_i(M_1) \leq \lambda_i(M_2).$$ Consequently, there is a Laplace transform order between the fastest strong stationary times of $M_1$ and $M_2$ starting at $0$; that is, for $\alpha > 0$, $$\mathbb{E}_0(e^{-\alpha T_{sst}(M_2)}) \geq \mathbb{E}_0(e^{-\alpha T_{sst}(M_1)}).$$ In particular, the means and variances of the fastest strong stationary times are ordered as \begin{align*} \mathbb{E}_0(T_{sst}(M_2)) &\leq \mathbb{E}_0(T_{sst}(M_1)), \\ \mathrm{Var}_0(T_{sst}(M_2)) &\leq \mathrm{Var}_0(T_{sst}(M_1)). \end{align*} \end{proposition} \begin{proof} First, by Lemma \ref{lem:M1M2} we have $\langle -M_2 f,f \rangle_{\mu} \geq \langle -M_1 f,f \rangle_{\mu}$. By the Courant-Fischer min-max theorem for eigenvalues \cite[Theorem $4.2.6$]{HJ13}, this leads to $$\lambda_i(M_1) \leq \lambda_i(M_2).$$ Consequently, for $\alpha > 0$, $$\dfrac{\lambda_i(M_1)}{\lambda_i(M_1) + \alpha} \leq \dfrac{\lambda_i(M_2)}{\lambda_i(M_2) + \alpha}.$$ Taking the product yields $$\mathbb{E}_0(e^{-\alpha T_{sst}(M_2)}) = \prod_{i=2}^{|\mathcal{X}|} \dfrac{\lambda_i(M_2)}{\lambda_i(M_2) + \alpha} \geq \prod_{i=2}^{|\mathcal{X}|} \dfrac{\lambda_i(M_1)}{\lambda_i(M_1) + \alpha} = \mathbb{E}_0(e^{-\alpha T_{sst}(M_1)}),$$ where the equalities follow from the fact that both $M_1$ and $M_2$ are birth-death chains, so their fastest strong stationary times are distributed as convolutions of exponential distributions with parameters given by the non-zero eigenvalues. In particular, we have \begin{align*} \mathbb{E}_0(T_{sst}(M_2)) = \sum_{i=2}^{|\mathcal{X}|} \dfrac{1}{\lambda_i(M_2)} &\leq \sum_{i=2}^{|\mathcal{X}|} \dfrac{1}{\lambda_i(M_1)} = \mathbb{E}_0(T_{sst}(M_1)), \\ \mathrm{Var}_0(T_{sst}(M_2)) = \sum_{i=2}^{|\mathcal{X}|} \dfrac{1}{\lambda_i(M_2)^2} &\leq \sum_{i=2}^{|\mathcal{X}|} \dfrac{1}{\lambda_i(M_1)^2} = \mathrm{Var}_0(T_{sst}(M_1)). \end{align*} \end{proof} \noindent \textbf{Acknowledgements}. The authors would like to thank the anonymous referee for constructive comments that improved the presentation of the manuscript. Michael Choi acknowledges the support from the Chinese University of Hong Kong, Shenzhen grant PF01001143. Lu-Jing Huang acknowledges the support from NSFC No.~11771047 and Probability and Statistics: Theory and Application (IRTL1704). The authors would also like to thank Professor Yong-Hua Mao for his hospitality during their visit to Beijing Normal University, where this work was initiated. \bibliographystyle{abbrvnat}
\section{Introduction} Non-perturbative and phase-sensitive nonlinear optics opens new windows to physical phenomena in gases, liquids, and solids on the attosecond timescale\cite{Krausz2014}. These attosecond ($10^{-18}$~s) processes in atomic\cite{Hentschel:2001,Sansone:2011,Chini:2014} and solid state semiconductor systems\cite{Garg2016,Ghimire_2014,Sederberg2020,Ghimire2019,You:17, Li2020, Higuchi:14} are most commonly probed by way of light emitted from high harmonic generation (HHG). Conductors\cite{Boolakee2022,Bionta2021} and plasmonics\cite{Hommelhoff2006,Putnam2017,Piglosiewicz2014,PhysRevLett.109.244803,Heide2020,Kruger2011,Lemell2003} similarly exhibit these attosecond phenomena but are probed by phase-sensitive current generation. Such ultrafast physical phenomena in both high and low bandgap materials can potentially be used for generating and controlling ultrafast transient electron currents for petahertz ($10^{18}$~Hz) electronics and exploring light-matter interactions at attosecond timescales\cite{Krausz2014,Higuchi2017,Schultze2013,Schiffrin2013,Boolakee2022}. Despite the large energy difference in optical fields required to drive these processes, the requirement of stable few-cycle optical pulses for exciting sub-cycle dynamics is common across this wide range of research topics. Probing these dynamics requires pulses with a well defined carrier envelope phase (CEP), i.e. a repeatable and controllable waveform, to provide a known electric potential on the timescale of the cycle of light. Non-perturbative HHG processes in gases and semiconductors typically require high power lasers with \textmu J~to~mJ energies and $\leq$10 kHz repetition rates. But similar phase dependent processes are present in solid state conductors, which require significantly lower optical energies, $\leq$5 nJ, to probe and are therefore compatible with high repetition rate sources, $\ge$80 MHz. To elucidate the phase sensitivity in the HHG process, the CEP is locked to a well defined value and slowly scanned to measure the phase-dependent spectral shifts\cite{Hentschel:2001,Sansone:2011,Chini:2014,Sederberg2020,Ghimire2019,Ghimire_2014,You:17, Li2020, Higuchi:14}. This technique relies on the high fluxes achieved with low repetition rate sources to produce measurable spectral shifts with a grating spectrometer. On the other hand, when exploring these processes in conductors, one can take advantage of the high repetition rate and high frequency modulation and demodulation techniques to perform similar measurements\cite{Putnam2017,Boolakee2022}. With a non-zero \textit{f}$_{\text{ceo}}$, the CEP cycles through 2$\pi$ at a well defined rate. By using a lock-in at \textit{f}$_{\text{ceo}}$, and simple symmetry arguments\cite{Boolakee2022}, one can determine the phase dependent current generation as well as the contributions of the interband and intraband currents to the total signal. The power of this technique lies in the high frequency modulation and demodulation, allowing for high signal to noise measurements that can detect exceedingly small current modulations using patterned electrodes. However, this approach relies on plasmonic enhancements in conductors and does not produce HHG light. Addressing subcycle attosecond dynamics in HHG with high repetition rate sources requires a significant advancement in the methods used to detect these CEP dependent signals due to the low pulse energies of the generated HHG light.
Here we introduce high-sensitivity frequency comb techniques found in sub-cycle current generation in conductors to study phase-sensitive harmonic generation from solid dielectrics with low energy (nJ) pulses at 100 MHz. This advance is enabled by our technique of low-noise and scalable short pulse generation that overcomes the conventionally limited powers of 100~MHz Er:fiber combs to produce 10~nJ, 20~fs pulses at 1550~nm\cite{Lesko2021}. We focus these pulses into 500~\textmu m ZnO (11-20) to 2~TW/cm$^{2}$, producing harmonics in a near continuum to wavelengths as short as 200~nm, without the need for high pressure hollow core fibers, pulse picking, or complicated vacuum apparatuses. To measure CEP dependent spectral modulations, we leverage the low noise properties of our Er:fiber comb to characterize extremely small amplitude modulation sidebands in the ultraviolet (UV) harmonics that arise from the nonzero $\textit{f}_{\text{ceo}}$. This approach, which we call CAMS (carrier-envelope amplitude modulation spectroscopy), provides 85~dB of signal-to-noise ratio (SNR) at a 1~Hz resolution bandwidth (RBW), allowing us to measure the effect of the CEP cycling in a 4 cycle pulse. Analyzing multiple harmonics with CAMS reveals the impact of the crystalline symmetry on the periodic spectral modulations that arise from the pump CEP. We further confirm the non-perturbative nature of our generated light by gating our pulse, effectively shortening it, and observing increased modulation and non-perturbative power scaling. The use of a solid state target and a fiber laser system results in a simple, robust, and vacuum free apparatus to measure these strong field effects. We anticipate systems like this will be useful not only for measuring field sensitive physics in solids and potentially gases, but also for broadband spectroscopy in a dual comb modality. \section{Experimental Setup} An outline of the experimental setup is shown in Figure 1a. The setup is based on a commercial, 100~MHz low noise polarization maintaining (PM) Er:fiber oscillator at 1550~nm (Menlo Systems). The $\textit{f}_{\text{ceo}}$ of the oscillator is stabilized by a conventional \textit{f}-2\textit{f} interferometer to a maser referenced signal at $\textit{f}_{\text{ceo}}=1$~MHz. This provides a well defined cycling of the CEP, with the per-pulse CEP increment given by $(2\pi\textit{f}_{\text{ceo}})/\textit{f}_{\text{rep}} \sim 63$~mrad/pulse for the 100~MHz oscillator. The $\textit{f}_{\text{rep}}$ is not locked but maintains enough stability over a typical 10 second average to negligibly impact the rate of CEP cycling. The oscillator pulses are amplified and spectrally broadened to support few cycle pulses, as described in Ref. \cite{Lesko2021}. Briefly, the oscillator output is stretched in a normal dispersion fiber and amplified to 20~nJ in a purpose-built erbium/ytterbium doped fiber amplifier. The pulses are then compressed with a grating compressor, spectrally broadened in PM normal dispersion highly nonlinear fiber (ND-HNLF), and compressed again using fused silica (FS) and third order dispersion mirrors. The dispersion is optimized with the FS wedges for a near transform limited pulse at the back face of the generation crystal. The result is 10~nJ, 20~fs few-cycle pulses at the back face of the crystal, measured by second harmonic generation frequency-resolved optical gating (SHG-FROG, Supplemental Figure 1).
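As a quick numerical check on the figures quoted above, the per-pulse CEP increment and the focused peak intensity can be reproduced with a few lines of Python (a sketch only; the peak power and $1/e^2$ focal radius are the values quoted in the next section, and the on-axis Gaussian-beam relation $I_0 = 2P_{\rm pk}/(\pi w_0^2)$ is assumed):
\begin{verbatim}
import numpy as np

f_rep, f_ceo = 100e6, 1e6                 # Hz
print(2 * np.pi * f_ceo / f_rep * 1e3)    # CEP slip: ~62.8 mrad/pulse

P_pk = 0.675e6                            # quoted peak power (W)
w0 = 4.5e-6                               # quoted 1/e^2 focal radius (m)
I0 = 2 * P_pk / (np.pi * w0 ** 2)         # on-axis intensity (W/m^2)
print(I0 * 1e-4 / 1e12)                   # ~2.1 TW/cm^2
\end{verbatim}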
The pulses are then tightly and achromatically focused by an off-axis parabolic mirror (OAP) to a 1/e$^2$ radius of 4.5 \textmu m in a 500~\textmu m thick a-plane single crystal ZnO (11-20). Our SHG-FROG measurements (supplemental Figure 1) indicate that linear compression in the ZnO dominates over nonlinear compression processes. Spatially, the focus is placed at the back of the crystal to avoid re-absorption of light generated above the ZnO bandgap. The peak power is estimated to be $\sim$0.675~MW corresponding to a peak intensity of $>$2~TW/cm$^2$. While the peak power of the pulse is enough to consider self-focusing, the lack of observed self-collapse and spectral broadening of the driving pulse suggests this is not a dominant process for producing high intensities. The interplay between possible self-focusing and nonperturbative harmonic generation would require further study. The UV and visible (UV/Vis, 200~-~650~nm) light generated with the ZnO crystal is collected by an OAP with high reflectance in the UV (Acton \#1200, 120~-~600~nm), and sent to a purpose-built monochromator. Our monochromator is based on an 1800~g/mm grating blazed for 250~nm (Richardson Gratings) mounted on a rotation stage and a fast, UV-sensitive photo-multiplier tube (PMT, H6780-03 Hamamatsu). The estimated resolving power is $\lambda/\delta\lambda=125$ at 250~nm. The DC photocurrent produced by the PMT is amplified using a low noise current pre-amplifier (Femto) and measured simultaneously on an oscilloscope and spectrum analyzer. At each grating position, the dark photocurrent is measured and subtracted, while the $\textit{f}_{\text{ceo}}$ power is normalized to the $\textit{f}_{\text{rep}}$ power. The power spectral density is calibrated by measuring the third harmonic yield with a notch filter and power meter. Due to detector saturation, the spectra of the $3^{rd}$ and $4^{th}$ harmonics are taken with a neutral density filter in line before being scaled to match the optical power measured and concatenated with the higher harmonics. A UV Glan-Thompson polarizer is used in a rotation mount to measure the polarization of the generated light. \begin{figure}[h!] \centering\includegraphics[width=\textwidth]{Slide2.PNG} \caption{Overview of solid state high harmonic generation (HHG) driven by a frequency comb: \textbf{(a)} Experimental setup utilizing short pulse generation at 1550 nm with a low-noise Er:fiber comb to produce high-power, 20~fs pulses. The pulses drive HHG in a 500~\textmu m thick, a-plane cut ZnO (11-20). Generated UV and visible light is detected by a monochromator and photomultiplier tube. \textbf{(b)} Spectra resulting from HHG in ZnO. HHG oriented along the centrosymmetric axis (0001, blue) yields predominantly odd harmonics while the noncentrosymmetric axis (1-100, red) yields both even and odd harmonics. The peak at $\sim$385~nm appears to be consistent with photoluminescence on the centrosymmetric axis demonstrated in\cite{Hollinger20,Hollinger21}. } \end{figure} \section{Results} Spectra of UV and visible light from 200-650 nm generated in the ZnO crystal are presented in Figure 1b. The cut of the ZnO crystal (a-plane, 11-20) enables the crystal to be oriented such that excitation primarily occurs along either the centrosymmetric axis (0001, blue) or the noncentrosymmetric axis (1-100, red).
Due to crystal symmetry, generation along the centrosymmetric axis predominantly yields odd harmonics of the fundamental 1550 nm driving laser, as well as photoluminescence at $\sim$385~nm\cite{Hollinger20,Hollinger21}. Some even harmonics are also observed along the centrosymmetric axis, but are significantly weaker relative to the odd harmonics. The observation of weak even harmonics may arise from harmonic generation at the crystal surface where the symmetry of the crystal is broken, from off-axis generation due to the tight focus of the driving laser, or from co-propagating surface generated second harmonic. In contrast, generation along the noncentrosymmetric axis yields both even and odd harmonics, resulting in more continuous spectral coverage compared to the centrosymmetric axis. In both crystal orientations, generation up to the 7th harmonic ($\sim$221~nm) is observed. Given the intensity of the driving pulse, harmonics beyond the 7th harmonic are expected\cite{Wang2017,Liu:17}, but lie beyond the wavelength range of the detector. In both crystal orientations, the harmonics are observed to be primarily co-polarized with the driving laser field. The observation of co-polarized harmonics is consistent with previous HHG experiments in ZnO\cite{Ghimire2011,Jiang_2019}. With the 10~nJ, 100~MHz driving laser, >10$^{10}$ photons/second/nm are produced at 221~nm, the highest observed harmonic (see Supplemental Figure 2). With a short and tightly focused pulse, we begin to observe CEP dependent spectra, similar to systems with much higher pulse energies and lower repetition rates. To detect and measure this effect, we spectrally filter the generated light using a monochromator and detect the RF spectrum with a high bandwidth UV sensitive PMT (Figure 1a). Because $\textit{f}_{\text{ceo}} \neq 0$, the CEP cycles at a well defined rate and changes the harmonic generation process from pulse to pulse (Figure 2a). The change in the CEP is imprinted onto the spectrum (Figure 2b(i)) and measured in the time domain as modulations on the repetition rate of the comb (Figure 2b(ii)). The Fourier transformation of the monochromator signal reveals the amplitude modulation depth ($\beta$) and frequency, which is proportional to the rate at which the CEP cycles (i.e. $\textit{f}_{\text{ceo}}$). This method of Carrier-envelope Amplitude Modulation Spectroscopy (CAMS) allows for a narrow resolution bandwidth (1~Hz) to measure modulations as small as $-85$~dBc relative to the $f_{rep}$ tone, as shown in Figure 2c. Notably, we only observe $\textit{f}_{\text{ceo}}$ and 2$\textit{f}_{\text{ceo}}$ in the RF spectra, corresponding to the periodic dependence (2$\pi$ and $\pi$ respectively) of the UV light on the CEP. The effect of the CEP periodicity is seen in Figure 2d. When the noncentrosymmetric axis is used, the $\textit{f}_{\text{ceo}}$ tone is significantly increased. This is due to the lack of degeneracy between a CEP of 0 and $\pi$ (i.e. a cosine and -cosine pulse) in the noncentrosymmetric material. UV generation on the centrosymmetric axis results in a suppression of the $\textit{f}_{\text{ceo}}$ tone. The $\textit{f}_{\text{ceo}}$ does not disappear entirely, due to the small amount of surface generated second harmonic\cite{Li2020} and tight focusing. Both of these effects slightly break the degeneracy of the two CEP values (0 and $\pi$), resulting in $\textit{f}_{\text{ceo}}$ being present on both axes.
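To make the CAMS detection principle concrete, the following Python sketch (with hypothetical modulation depths chosen purely for illustration; the measured depths are presented in Figure 3) simulates a record of per-pulse harmonic yields whose value depends on the CEP, and recovers tones at $\textit{f}_{\text{ceo}}$ and 2$\textit{f}_{\text{ceo}}$ in its Fourier transform:
\begin{verbatim}
import numpy as np

f_rep, f_ceo = 100e6, 1e6
n = np.arange(100000)                # pulse index
phi = 2 * np.pi * f_ceo / f_rep * n  # CEP of pulse n
b1, b2 = 1e-3, 3e-4                  # hypothetical 2*pi- and pi-periodic depths
y = 1 + b1 * np.cos(phi) + b2 * np.cos(2 * phi)

spec = np.abs(np.fft.rfft(y)) / len(y)        # amplitude spectrum
freqs = np.fft.rfftfreq(len(y), d=1 / f_rep)
for f in (f_ceo, 2 * f_ceo):
    k = np.argmin(np.abs(freqs - f))
    print(f / 1e6, 20 * np.log10(spec[k] / spec[0]))  # tone level (dBc)
\end{verbatim}
For these assumed depths the recovered tones sit near $20\log_{10}(b/2)$, i.e. approximately $-66$ and $-76$~dBc, well within the $-85$~dBc sensitivity floor demonstrated above.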
Narrow-band RF detection also allows for two additional measurements. With a lock-in detection scheme, the phase difference between the amplitude modulations at $\textit{f}_{\text{ceo}}$ and 2$\textit{f}_{\text{ceo}}$ can be measured as a function of wavelength (Supplemental Figure 3). This measurement could potentially be used to give information on the chirp of the UV pulses, similar to a traditional CEP scan using a grating spectrometer\cite{You:17}. In future experiments, without spectral filtering by a grating monochromator, one could observe the center of mass shifts (timing jitter) of the UV pulses relative to the driving laser envelope. The center of mass shift could yield information about the phase delay of the generated UV light with respect to the driving laser's CEP. Here, the analysis at higher harmonics of the $\textit{f}_{\text{rep}}$ could be beneficial, where the timing jitter has a stronger impact on the signal than the amplitude modulation. \begin{figure}[h!] \centering\includegraphics[width=\textwidth]{Slide3.png} \caption{Amplitude modulation from carrier envelope phase (CEP) dependent harmonic generation measured by Carrier-envelope Amplitude Modulation Spectroscopy (CAMS). \textbf{(a)} Schematic describing the timing of the driving laser pulses with CEP cycling, as well as the resulting CEP dependent UV generation. \textbf{(b)} Schematic describing the detection method of the CEP dependent spectrum. (i) The CEP dependence can be seen across the spectra, with certain regions yielding larger signals. (ii) Isolating one wavelength region, this CEP dependent spectral intensity can be seen as modulation on the \textit{f}$_{\text{rep}}$ tone. (iii) A Fourier transform reveals the modulation depth of the CEP dependent intensity ($\beta$) as well as the CEP cycling frequency (\textit{f}$_{\text{ceo}}$). \textbf{(c)} At the center of the harmonics (data from the third harmonic, $\sim$500~nm), we achieve $>$85 dB sensitivity to CEP effects, allowing for measurements of small modulations. We observe \textit{f}$_{\text{ceo}}$ and 2\textit{f}$_{\text{ceo}}$ tones in the spectrum. \textbf{(d)} RF spectra at 439~nm showing the effect of symmetry breaking on the relative amplitudes of \textit{f}$_{\text{ceo}}$ and 2\textit{f}$_{\text{ceo}}$. The presence of \textit{f}$_{\text{ceo}}$ and 2\textit{f}$_{\text{ceo}}$ corresponds to a 2$\pi$ and $\pi$ periodicity of the signal on the CEP, respectively.} \end{figure} With CAMS, we measure the modulation depth $\beta$ across the spectrum (from 200 nm~to~700 nm) for both the centrosymmetric (Figure 3a) and noncentrosymmetric (Figure 3b) axes. As noted above, the increased symmetry breaking on the noncentrosymmetric axis gives much larger $\beta$ across the spectrum, corresponding to increased modulation with a 2$\pi$ periodicity. On both crystallographic axes, the modulation depths of the lower order harmonics follow a trend of being more prominent at the wings of each harmonic and sharply diminished at the center. This is because, at a $\sim$3.9 cycle pulse length, the positions of the harmonic centers are: i) not shifting dramatically with change of CEP, and ii) being averaged over the entire 2$\pi$ of the CEP by our integration time (set by the $\textit{f}_{\text{ceo}}$ and resolution bandwidth). One clue as to the probable nonperturbative character of the generated spectra is the flat modulation depth between the 5th and 7th harmonics on the centrosymmetric axis, where only $\textit{f}_{\text{ceo}}$ is present, despite there being no distinguishable 6th harmonic.
One would expect to see 2$\textit{f}_{\text{ceo}}$ between these harmonics from heterodyne gain (a 5\textit{f} - 7\textit{f} interference) if this were a perturbative process. Furthermore, we do not observe any $\textit{f}_{\text{ceo}}$ tone anywhere on the 1550 nm fundamental, which would originate from a cascaded $\chi^{(2)}$ (perturbative) process. \begin{figure}[h!] \centering\includegraphics[width=\textwidth]{Slide4.png} \caption{CAMS spectra of measured RF power modulation (shown as n\textit{f}$_{\text{ceo}}$/\textit{f}$_{\text{rep}}$) across the UV/Vis spectrum on the centrosymmetric \textbf{(a)} and noncentrosymmetric \textbf{(b)} axes. } \end{figure} To further investigate these arguments of perturbative and non-perturbative processes generating the UV/Vis light, we use polarization assisted amplitude gating (PASSAGE)\cite{Timmers:16} (Figure 4a) to slightly reduce the number of cycles that contribute to the nonlinear process. To implement PASSAGE, two achromatic $\lambda/2$ waveplates impart a $\lambda$ shear, while an achromatic $\lambda/4$ waveplate imparts ellipticity. We compensate for the additional dispersion of the waveplates by adding/removing UVFS glass. The time-varying polarization effectively reduces the number of cycles contributing to the generation of UV light. The impact of such polarization gating has been observed with gases, which exhibit a strong dependence on polarization\cite{Timmers:16}. However, solids such as MgO and ZnO exhibit anisotropy, and similar polarization dependent effects on harmonic generation are also expected\cite{Ghimire2011,Jiang_2019,Hollinger21}. When we measure the modulation depth (\textit{f}$_{\text{ceo}}$/\textit{f}$_{\text{rep}}$) as a function of the $\lambda$/4 plate angle, we see an increase of 23.3~dB at 45~degrees (Figure 4b). Furthermore, the shape of the curve, with a single peak of the modulation depth versus ellipticity (Figure 4b), suggests that a time-dependent elliptical profile on the driving pulse is largely maintained, despite the birefringence of the a-cut ZnO crystal\cite{Hollinger21,Jiang_2019}. Since the efficiency of HHG in ZnO, and in most solids in general, possesses weaker sensitivity to the polarization\cite{Hollinger21,Jiang_2019} of the driving pulse than gas phase HHG, the peak in the modulation depth as a function of ellipticity is present but not sharp. Nonetheless, our measurements show that PASSAGE, and more generally polarization gating, can still be applied to solid state HHG in ZnO, yielding an increased sensitivity to the driving laser CEP. The overall yield of UV light is decreased by approximately a factor of 8, which is in agreement with the scaling seen in the gas\cite{Timmers:16}, and consistent with a non-perturbative picture. In a perturbative picture, the 5th harmonic light would be reduced by a factor of $>$10000 based on a simple intensity$^5$ scaling from the reduction of the driving pulse, far below our detection range. To further show that the generated UV light has non-perturbative character, we measure the change in modulation depth ($\Delta\beta$) from PASSAGE across part of the UV spectrum (Figure 4c). If our generation of UV light were governed by perturbative nonlinear optics, we would expect the harmonics to increase in width due to the effectively shorter (sub-cycle) driving pulse. The $\Delta\beta$ (normalized heterodyne gain from n\textit{f} - (n-1)\textit{f} interference) from this spectral widening would correspond to an increase in signal closer to the center of each harmonic.
This would arise from the increase in spectral overlap closer to the center of the harmonics. However, this trend in $\Delta\beta$ is not observed between the 5th and 7th harmonics (Figure 4c). Deviation of the observed $\Delta\beta$ trend from that expected for a purely perturbative harmonic generation mechanism provides further evidence for a non-perturbative generation mechanism.\par \begin{figure}[h!] \centering\includegraphics[width=\textwidth]{Slide5.png} \caption{Increasing the carrier envelope phase effects in ZnO. \textbf{(a)} We utilize polarization assisted amplitude gating (PASSAGE) to reduce the number of cycles contributing to UV generation. By introducing (I) a temporal shear with a $\lambda$ plate, we are able to impart (II) a time dependent ellipticity with a $\lambda$/4 plate. This reduces the effective driving pulse to a $<$1 cycle pulse. Finally, we can optimize the CEP and chirp with FS wedges (III). \textbf{(b)} Modulation depth ($\beta$) as a function of the $\lambda$/4 angle, showing a 23.5~dB increase in the UV modulation depth when fewer cycles are used in the generation process. \textbf{(c)} Measured increase in modulation ($\Delta\beta$) from using PASSAGE on the centrosymmetric axis between the 7th and 5th harmonics. The shape of the increase in modulation does not match a perturbative picture of widening harmonics.} \end{figure} The CEP-sensitive spectral modulations detected by CAMS are measured with $>$85 dB (at 1 Hz RBW) of dynamic range. The sensitivity afforded by CAMS exceeds that of a traditional UV spectrometer based on commercially available cameras\cite{fellers_davidson} by several orders of magnitude. Leveraging this high sensitivity, one can envision extending the technique to study CEP dependent processes in semiconductors, where the CEP dependent signal would be drastically smaller for the intraband currents than the interband currents\cite{You:17}. Furthermore, one could probe materials that are thought to not have CEP sensitivity in their interband or intraband currents\cite{You22017}. These intraband and interband current phenomena have been studied in conductor systems\cite{Boolakee2022} with similar lock-in techniques, but they have not been studied with such sensitivity in semiconductor solids and gases. \section{Conclusions} In summary, we present non-perturbative single pass solid state HHG based on a robust, low noise, and compact 100 MHz Er:fiber frequency comb. We utilize high frequency modulation/demodulation techniques to measure the spectral modulation from the CEP cycling with 85~dB SNR. This allows us to measure the spectral modulations from CEP-sensitive HHG UV spectral shearing with a 4 cycle pulse. This simple, robust, high intensity and high repetition rate source will be useful for investigating field sensitive physics in semiconductor solids and gases that benefit from detection of weak signals and the intrinsic fast averaging at the 100 MHz rate. Furthermore, the broadband UV/Vis spectrum that is generated with the noncentrosymmetric axis of ZnO will match broadband atmospheric UV absorbers such as NO$_2$ and SO$_2$ for dual comb spectroscopy. With the amount of light generated at the 5th harmonic, we estimate it will be possible to measure spectra across 100 THz of optical bandwidth with 10 GHz resolution for averaging times <1 hour. Work towards such experiments, including the construction of a second frequency comb system, is ongoing.
\begin{backmatter} \bmsection{Funding} This research was supported by the Defense Advanced Research Projects Agency SCOUT Program, the Air Force Office of Scientific Research (FA9550-16-1-0016) and NIST. \bmsection{Acknowledgments} Mention of specific products or trade names is for technical and scientific information and does not constitute an endorsement by NIST. D.M.B.L. acknowledges award 70NANB18H006 from NIST. K.F.C. acknowledges support from the National Research Council. The authors thank J. Ye and C. Zhang for loaning VUV optics, as well as T. Allison, J. Biegert, M. Chini, S. Ghimire and F. Quinlan, for valuable comments and discussion. \bmsection{Disclosures} The authors declare no conflicts of interest. \bmsection{Data availability} Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request. \end{backmatter}
\section{Introduction} \label{sec:intro} The disk-halo interface is host to diverse baryonic processes that regulate the buildup of stellar mass in star-forming galaxies \citep{ShapiroField1976,Bregman1980,Norman1989,deAvillez2000}. Supernova activity in galactic disks generates wind-blown bubbles in the interstellar medium (ISM; \citealt{MacLowMcCray1988,TenorioTagle1988,Korpi1999}), some of which are sufficiently powerful to evacuate thermalized supernova ejecta and entrain cold material through this interface into the circumgalactic medium \citep[CGM; e.g.,][]{TomisakaIkeuchi1986,Veilleux1995,Cooper2008,Fielding2018}. At the same time, the material required to feed ongoing star formation must likewise pass through this region, originating either beyond or in distant regions of the galactic halo, or condensing from previously ejected stellar and interstellar material \citep[e.g.,][]{LehnerHowk2011,Marasco2012,KimOstriker2018}. In the Milky Way, disk-halo material is observed in emission across a broad range of phases, including hot, diffuse gas traced by X-ray emission \citep{Egger1995,Kerp1999,KuntzSnowden2000}, a warm, denser phase traced by H$\alpha$ emission \citep[e.g.,][]{WeinerWilliams1996,Bland-Hawthorn1998,Haffner2003}, the cool, neutral material that emits at 21 cm \citep[e.g.,][]{Bajaja1985,WakkervanWoerden1991,Kalberla2005,McClure-Griffiths2009}, and a subdominant cold phase arising in molecular clouds \citep{Gillmon2006,Heyer2015,Rohser2016}. Gas with temperatures spanning much of this range has likewise been observed in metal-line absorption toward distant QSOs or UV-bright Galactic stars \citep[e.g.,][]{Richter2001a,Richter2001c,Wakker2001,Howk2003,Yao2009,LehnerHowk2011,Werk2019}. Indeed, detection of absorption due to the \ion{Ca}{2} $\lambda \lambda 3934, 3969$ and \ion{Na}{1} $\lambda \lambda 5891, 5897$ transitions toward stars in the Galactic disk and halo provided the first evidence for the existence of the ISM \citep[e.g.,][]{Hartmann1904,Hobbs1969,Hobbs1974}, and for the presence of interstellar material above the Galactic plane \citep{MunchZirin1961}. The propensity of these transitions to arise in warm (temperature $T < 10,000$ K) and cold ($T<1000$ K) gas phases, respectively, makes them effective tracers of the neutral ISM \citep{Crawford1992,Welty1996,Richter2011,Puspitarini2012}. Over the past several decades, detailed study of these absorption transitions has, e.g., provided important constraints on the distances and temperatures of massive \ion{H}{1} cloud complexes \citep{Wakker2001,BenBekhti2008,BenBekhti2012}; revealed the small-scale structure of neutral material in the Milky Way halo \citep{Smoker2015,Bish2019}; and placed novel constraints on the physics and composition of interstellar dust \citep[e.g.,][]{Phillips1984,Sembach1994,Welty1996,Murga2015}. The comprehensive analysis of \ion{Ca}{2} and \ion{Na}{1} transitions in several hundred QSO spectra by \citet{BenBekhti2012} demonstrated that this absorption has comparable Milky Way sky coverage to that of \ion{H}{1} detected in emission, and that approximately half of the absorber sample have positions and velocities consistent with those of known \ion{H}{1} complexes. Absorption from \ion{Ca}{2} is also known to trace cool circumgalactic material in the halos of external galaxies.
Several studies have used spectroscopy of background QSO sightlines to identify this transition in association with known foreground systems, reporting detections within projected separations $R_{\perp} \lesssim 30$ kpc \citep[e.g.,][]{BoksenbergSargent1978,Boksenberg1980,Blades1981,Bergeron1987,Zych2007}. Taking advantage of Sloan Digital Sky Survey (SDSS) spectroscopy of more than 100,000 quasars, \citet{ZhuMenard2013} analyzed the mean \ion{Ca}{2} signal induced as a function of projected separation from nearly one million foreground galaxies, tracing significantly detected absorption from $R_{\perp} \sim 7$ to $200$\,kpc. Detections of circumgalactic \ion{Na}{1}, on the other hand, have been rarer: prior to the advent of the SDSS, fewer than ten galaxy-absorber pairs, all within $R_{\perp} < 15$ kpc, were reported in the literature \citep[e.g.,][]{Bergeron1987,Womble1990,Stocke1991,Richter2011}. The mining of SDSS QSO spectra for individual \ion{Na}{1} systems increased this sample by a modest factor \citep[e.g.,][]{Cherinka2011,York2012,Straka2015}; however, the limited signal-to-noise and spectral resolution of these data are ill-suited to detailed study of either of these transitions in individual QSO sightlines. Instead, \ion{Na}{1} absorption has long been leveraged to study ISM kinematics in ``down-the-barrel" galaxy spectroscopy, revealing ubiquitous, large-scale outflows in massive, starbursting and active galactic nucleus (AGN)-host systems \citep[e.g.,][]{Heckman2000,Martin2005,Rupke2005,Rupke2017,Veilleux2020,Rupke2021}, as well as in more typical star-forming galaxies with stellar masses $10 < \log M_*/M_{\odot} < 11$ \citep{ChenTremonti2010,Concas2019,RobertsBorsani2020}. In this work, we use medium-resolution ($\mathcal{R} \approx 8000$) optical spectroscopy of 21 bright quasars confirmed to lie exceptionally close to known foreground systems at redshifts $0.03 < z < 0.20$ to study cold ($T\lesssim10,000$\,K) disk-halo material traced by \ion{Ca}{2} $\lambda \lambda 3934,3969$ and \ion{Na}{1} $\lambda \lambda 5891, 5897$ absorption. These sightlines were drawn from a unique sample of quasars surveyed by the SDSS, for which unassociated foreground nebular emission lines were identified in their SDSS fiber spectra. These systems, called Galaxies on Top of Quasars (or GOTOQs), were first discovered by \citet{Noterdaeme2010}, and later studies have since uncovered a sample of 103 such objects \citep{York2012,Straka2013,Straka2015}. \citet{Kulkarni2022} recently presented {\it HST}/COS spectroscopy of eight GOTOQs (including five in the present study), confirming that these systems give rise to damped or subdamped Ly$\alpha$ absorption in all cases. \citet{Straka2015} performed photometric analysis of the SDSS imaging of the full sample of 103 pairs, constraining galaxy luminosities, stellar masses, and impact parameters ($R_{\perp}$), and used emission-line fluxes measured from the SDSS spectroscopy to assess the galaxies' star formation activity at the location of the fiber. Here we combine these measurements with our sensitive follow-up optical spectroscopy to explore the incidence and kinematics of \ion{Ca}{2} and \ion{Na}{1} absorption within $R_{\perp} < 13$ kpc of a sample of external galaxies for the first time. 
We use our sample to trace the dependence of the absorption strengths of these transitions on the stellar masses ($M_*$) and local star formation rates (SFRs) of the foreground host systems, as well as their relationship to the dust reddening along the sightlines. The relatively high (echellette) spectral resolution of our dataset, in combination with the uniquely small impact parameters of the QSOs we target, permit novel insights into the ubiquity of galactic fountain flows in the nearby star-forming galaxy population. We describe our sample selection and echellette spectroscopy in Section~\ref{sec:obs}, and describe salient properties of the foreground host galaxies in our sample as measured by \citet{Straka2015} in Section~\ref{sec:fg_galaxies}. We detail our methods of measuring foreground galaxy redshifts and absorption-line equivalent widths, column densities, and kinematics in Section~\ref{sec:analysis}. Section~\ref{sec:results} presents our results on the relationship between these absorption-line properties and $R_{\perp}$, dust reddening, foreground galaxy $M_*$, and local SFR. In Section~\ref{sec:model}, we develop a simple model of the \ion{Ca}{2}- and \ion{Na}{1}-absorbing properties of the Milky Way's ISM and demonstrate that such a model fails to explain the large column densities and kinematic widths we measure. We discuss the implications of these findings in light of complementary studies of \ion{Ca}{2} and \ion{Na}{1} absorption detected toward background QSO sightlines and in down-the-barrel galaxy spectroscopy in Section~\ref{sec:discussion}. We adopt a $\Lambda$CDM cosmology with $H_0 = 70~\rm km~s^{-1}~Mpc^{-1}$, $\Omega_{\rm M} = 0.3$, and $\Omega_{\rm \Lambda}=0.7$. Magnitudes quoted are in the AB system. \section{Sample Selection and Observations} \label{sec:obs} Our target quasar sample is drawn from the parent sample of 103 GOTOQs discovered in SDSS spectra by \citet{Noterdaeme2010}, \citet{York2012}, \citet{Straka2013}, and \citet{Straka2015}. The latter study performed photometric analysis of the SDSS imaging of all QSO-galaxy pairs, measuring galaxy luminosities, impact parameters, and stellar masses. They also calculated SFRs from the extinction-corrected H$\alpha$ and [\ion{O}{2}] luminosities measured within the SDSS fiber spectroscopy of the background QSOs. The intervening galaxies in this parent sample span a range of redshifts $0 < z < 0.84$, have impact parameters 0.4\,kpc $< R_{\perp} <$ 12.7 kpc, and span a wide range in stellar mass ($7.3 < \log M_*/M_{\odot} < 11.5$). We used the following criteria to select targets for follow-up echellette-resolution spectroscopy: (1) a continuum-emitting counterpart to the foreground system was identified by \citet{Straka2015}; (2) the foreground galaxy redshift must be such that the \ion{Na}{1} D doublet falls outside of spectral regions with significant atmospheric absorption (at observed wavelengths $\lambda_{\rm obs} = 6850$--6950 \AA\ and 7580--7710 \AA); and (3) the quasar must be sufficiently bright to yield a $2\sigma$ rest equivalent width ($W_r$) detection limit of $\approx 0.02$ \AA\ at $\lambda_{\rm obs} = 6000$--7500 \AA\ in an exposure time of $\leq 1$\,hour. This latter constraint corresponds to an $r$-band magnitude limit of $m_r \lesssim 19.1$ for the background quasar. Approximately 36 GOTOQs in the \citet{Straka2015} parent sample satisfy all of these criteria. We completed follow-up spectroscopy of 21 of these targets. 
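For reference, the conversion from an angular QSO-galaxy offset to a projected separation under the adopted cosmology can be sketched as follows (an illustrative snippet, not the pipeline used to produce the catalog values; the 1.5\arcsec\ offset is a hypothetical example):
\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # adopted cosmology

def r_perp(theta_arcsec, z):
    """Projected proper separation (kpc) at redshift z."""
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)
    return (theta_arcsec * u.arcsec * scale).value

print(r_perp(1.5, 0.1))   # ~2.8 kpc for a 1.5" offset at z = 0.1
\end{verbatim}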
Table~\ref{tab.gotoqs} lists their coordinates, as well as the QSO redshifts, impact parameters, and other properties of the foreground galaxies as reported in \citet{Straka2015}. SDSS color images of each system are included in Figure~\ref{fig:ims}. Our observations were carried out using the Echellette Spectrograph and Imager \citep[ESI;][]{Sheinis2002} on the Keck II Telescope on 2017 March 6 UT and 2017 June 22-23 UT. Seeing conditions ranged between FWHM $\sim 0.4\arcsec$--$0.8\arcsec$ over the course of the program. We used the $0.5\arcsec$ wide longslit with ESI, which affords an FWHM resolution of $\mathcal{R} \approx 8000$ ($37.3~\rm km~s^{-1}$), a spectral dispersion of $10~\rm km~s^{-1}$, and a typical wavelength coverage of $\rm 3990-\!10130\,\AA$. We exposed for between 20 and 70 minutes total per object, dividing each observation into two to four individual exposures. The data were reduced using the XIDL ESIRedux data reduction pipeline\footnote{\url{https://www2.keck.hawaii.edu/inst/esi/ESIRedux/}}. The pipeline includes bias subtraction, flat-fielding, wavelength calibration, the tracing of order curvature, object identification, sky subtraction, cosmic ray rejection, and relative flux calibration. We also used it to apply a vacuum and heliocentric correction to each spectrum. \begin{figure*} \includegraphics[clip,width=1.0\textwidth,trim={0 1cm 0 1cm}]{fig_GOTOQ_images.pdf} \caption{SDSS $gri$ color imaging of all GOTOQs for which we have obtained ESI spectroscopy \citep{York2000}. Each panel is $25\arcsec \times 25\arcsec$. The images are labeled with the corresponding GOTOQ ID at the upper left. The dashed white circle indicates the size of the $3\arcsec$ diameter fiber used for the SDSS spectroscopy of each system. \label{fig:ims}} \end{figure*} \begin{figure*} \includegraphics[clip,width=1.0\textwidth]{fig_Mstar_Rperp_Mstar_SFR.pdf} \caption{ {\it Left:} Distribution of $\log M_*/M_{\odot}$ vs.\ $R_{\perp}$ for our sample. Points are color-coded by the $R_{\perp}$ value for each system across all panels, as indicated by the color bar at right. {\it Middle:} Distribution of $\log M_*/M_{\odot}$ vs.\ $R_{\perp}/R_{\rm eff,est}$ for our foreground GOTOQ sample. {\it Right:} Distribution of $\rm SFR_{local}$ vs.\ $\log M_*/M_{\odot}$ for our foreground GOTOQ sample. The SFR values shown here should be considered lower limits on the total SFR of each system due to fiber losses. The grayscale histogram shows the distribution of total SFR vs.\ $M_*$ for all galaxies included in the MPA-JHU catalog of these values for SDSS DR7 \citep{Brinchmann2004}. The turquoise line shows a linear fit to the minimum in the galaxy distribution between star-forming and quiescent systems by \citet{Moustakas2013} for $z=0$. 
\label{fig:rperp_mstar_sfr}} \end{figure*} \begin{deluxetable*}{lccccccccc} \tablewidth{0pt} \tablecaption{Observed GOTOQ Sample\label{tab.gotoqs}} \tablehead{ \colhead{Sight Line} & \colhead{R.A.} & \colhead{Decl.} & \colhead{$z_{\rm QSO}$\tablenotemark{a}} & \colhead{$z_{\rm H\alpha}$\tablenotemark{b}} & \colhead{$R_{\perp}$\tablenotemark{a}} & \colhead{$m_r$(QSO)\tablenotemark{a}} & \colhead{$\log M_*/M_{\odot}$\tablenotemark{a}} & \colhead{SFR(H$\alpha$)\tablenotemark{a}} & \colhead{$E(B-V)_{(g-i)}$\tablenotemark{a}} \\ & \colhead{(J2000)} & \colhead{(J2000)} & & & \colhead{\hfill(kpc)} & \colhead{(mag)} & & \colhead{($M_{\odot}~\rm yr^{-1}$)} & \colhead{(mag)} } \startdata GOTOQJ0013--0024 & 00:13:42.45 & --00:24:12.60 & 1.641 & 0.15537 & 3.40 & 18.58 & 10.4 & 60.30 & 0.24 \\ GOTOQJ0851+0719 & 08:51:13.74 & +07:19:59.80 & 1.650 & 0.13010 & 5.63 & 17.97 & 8.4 & 0.35 & $-0.06$ \\ GOTOQJ0902+1414* & 09:02:50.47 & +14:14:08.29 & 0.980 & 0.05044 & 3.59 & 18.43 & 9.7 & 0.03 & 0.05 \\ GOTOQJ0950+5442* & 09:50:13.74 & +54:42:54.65 & 0.700 & 0.04586 & 0.98 & 18.33 & 8.8 & \nodata & $-0.09$ \\ GOTOQJ1005+5302 & 10:05:14.21 & +53:02:40.04 & 0.560 & 0.13547 & 3.60 & 18.79 & 9.6 & 0.19 & $-0.15$ \\ GOTOQJ1044+0518* & 10:44:30.26 & +05:18:57.32 & 0.900 & 0.10781 & 3.51 & 17.77 & 8.8 & 0.54 & 0.21 \\ GOTOQJ1135+2414* & 11:35:55.66 & +24:14:38.10 & 1.450 & 0.03426 & 3.88 & 19.22 & 9.4 & 0.01 & $-0.10$ \\ GOTOQJ1158+3907* & 11:58:22.85 & +39:07:12.96 & 1.160 & 0.18337 & 4.66 & 18.04 & 7.4 & 0.15 & 0.07 \\ GOTOQJ1220+2837 & 12:20:37.23 & +28:37:52.03 & 2.200 & 0.02762 & 6.88 & 17.91 & 8.7 & \nodata & 0.04 \\ GOTOQJ1238+6448 & 12:38:46.68 & +64:48:36.60 & 1.560 & 0.11859 & 7.00 & 17.93 & 7.7 & 2.21 & 0.10 \\ GOTOQJ1241+6332 & 12:41:57.55 & +63:32:41.63 & 2.620 & 0.14270 & 10.57 & 17.96 & 10.6 & 0.40 & 0.22 \\ GOTOQJ1248+4035 & 12:48:14.43 & +40:35:35.13 & 2.110 & 0.15132 & 4.00 & 19.11 & 10.2 & \nodata & 0.12 \\ GOTOQJ1328+2159 & 13:28:24.33 & +21:59:19.66 & 0.330 & 0.13524 & 12.68 & 18.96 & 9.1 & 0.16 & $-0.16$ \\ GOTOQJ1429+0120 & 14:29:17.69 & +01:20:58.93 & 1.130 & 0.08395 & 3.43 & 18.70 & 9.9 & 0.11 & 0.13 \\ GOTOQJ1457+5321* & 14:57:19.00 & +53:21:59.27 & 1.200 & 0.06594 & 4.24 & 18.16 & 9.3 & 0.04 & 0.00 \\ GOTOQJ1459+3713 & 14:59:38.50 & +37:13:14.70 & 1.220 & 0.14866 & 4.40 & 19.02 & 9.6 & 3.55 & $-0.04$ \\ GOTOQJ1525+0202* & 15:25:14.08 & +02:02:54.68 & 1.220 & 0.09019 & 3.12 & 18.73 & 8.3 & \nodata & 0.14 \\ GOTOQJ1605+5107 & 16:05:21.26 & +51:07:40.95 & 1.230 & 0.09899 & 3.80 & 18.58 & 9.3 & 0.19 & 0.19 \\ GOTOQJ1656+2541 & 16:56:43.35 & +25:41:36.80 & 0.243 & 0.03451 & 1.13 & 18.16 & 8.9 & 0.03 & 0.42 \\ GOTOQJ1659+6202 & 16:59:58.94 & +62:02:18.14 & 0.230 & 0.11026 & 7.15 & 17.80 & 9.8 & 0.26 & $-0.03$ \\ GOTOQJ1717+3203 & 17:17:04.14 & +32:03:20.93 & 0.660 & 0.20016 & 7.51 & 18.68 & 9.9 & 1.15 & 0.00 \\ \enddata \tablenotetext{a}{These quantities are drawn from the analysis of \citet{Straka2015}. Values of $\log M_*/M_{\odot}$ and SFR(H$\alpha$) were calculated for the foreground galaxy. As discussed in Section~\ref{sec:fg_galaxies}, the latter estimates should be considered lower limits due to the likelihood of fiber losses, and are referred to as $\rm SFR_{\rm local}$ throughout the text. 
Values of $E(B-V)_{(g-i)}$ refer to the background QSO and are estimated by comparing each QSO's $(g-i)$ color to the median $(g-i)$ color for QSOs at the same redshift as reported in \citet{Schneider2007}.} \tablenotetext{b}{This is the foreground galaxy redshift calculated as described in Section~\ref{subsec:redshifts}.} \tablenotetext{*}{For sight lines marked with an asterisk, we use SDSS spectra rather than ESI spectra to determine a precise emission-line redshift. } \end{deluxetable*} \section{Foreground Galaxy Properties} \label{sec:fg_galaxies} For this analysis, we draw on stellar mass estimates reported by \citet{Straka2015} for the parent GOTOQ sample. Stellar masses were determined via spectral energy distribution (SED) model fits to photometry of the host galaxies measured in the five SDSS passbands with the photometric redshift code \texttt{HYPERZ} \citep{Bolzonella2000,Straka2015}. The left panel of Figure~\ref{fig:rperp_mstar_sfr} shows the $\log M_*/M_{\odot}$ distribution of our foreground host sample vs.\ $R_{\perp}$. These systems span a wide overall range of stellar masses ($7.4 \le \log M_*/M_{\odot} \le 10.6$), with a median $\log M_*/M_{\odot} = 9.3$. Our sightlines sample this parameter space relatively thoroughly within $R_{\perp} < 9$ kpc; however, we caution that our constraints at $R_{\perp} > 10$ kpc are sparse. Under the assumption that the absorption strength of our transitions of interest at a given $R_{\perp}$ may depend on the relative extent of a galaxy's stellar component, we use the observed relation between $M_*$ and effective radius ($R_{\rm eff}$) for late-type galaxies to estimate $R_{\rm eff}$ for each host. We use the best-fit $R_{\rm eff}$-$M_*$ relation estimated by \citet{vanderWel2014} for systems having $0 < z < 0.5$: \begin{equation} R_{\rm eff, est} = 10^{0.86} \left (\frac{M_*}{5 \times 10^{10} M_{\odot}} \right )^{0.24}{\rm kpc}. \end{equation} These values fall in the range $1.2~\mathrm{kpc} \le R_{\rm eff,est} \le 6.8~\mathrm{kpc}$ for our sample. These authors also assess the intrinsic scatter in this relation, estimating $\sigma(\log R_{\rm eff}) = 0.16$. The true size of any given galaxy in our sample may therefore differ from this best-fit estimated size by a few kiloparsecs; however, we note that estimates of galaxy halo virial radii (often used in CGM studies in the same way we will use $R_{\rm eff, est}$ below) are typically subject to a greater degree of uncertainty. We normalize the $R_{\perp}$ value for each system by its $R_{\rm eff, est}$ estimate and compare this quantity to $\log M_*/M_{\odot}$ in the middle panel of Figure~\ref{fig:rperp_mstar_sfr}. Due to the correlation between $M_*$ and $R_{\rm eff,est}$, the sightlines with $R_{\perp}/R_{\rm eff,est} > 2$ tend to probe the lower-$M_*$ hosts in our sample (i.e., those with $\log M_*/M_{\odot} \lesssim 9$). We likewise make use of the SFRs estimated by \citet{Straka2015} for these systems from the extinction-corrected H$\alpha$ luminosities measured in the SDSS fiber spectra. Extinction corrections were determined from the ratio of H$\alpha$ to H$\beta$ line luminosities and adopted an SMC extinction curve \citep{Straka2015}. The \citet{Kennicutt1998} empirical calibration was then applied to the intrinsic H$\alpha$ luminosities.
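As an aside, the size-mass relation adopted above is simple to evaluate numerically; the following minimal sketch reproduces the quoted range of $R_{\rm eff,est}$ at the extremes and median of our stellar-mass distribution:
\begin{verbatim}
def r_eff_est(log_mstar):
    """Estimated effective radius (kpc) for a late-type galaxy."""
    return 10**0.86 * (10**log_mstar / 5e10)**0.24

for log_m in (7.4, 9.3, 10.6):   # sample minimum, median, maximum
    print(log_m, round(r_eff_est(log_m), 1))
# -> ~1.2, ~3.3, and ~6.9 kpc, close to the quoted 1.2-6.8 kpc range
# (the small difference at the high-mass end reflects rounding of log M*)
\end{verbatim}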
As noted by \citet{Straka2015}, because the SDSS fibers used to observe these galaxies were typically placed such that a significant fraction of their H$\alpha$ line emission was lost (see Figure~\ref{fig:ims}), these SFRs should be considered lower limits on the total star formation activity of the hosts. Moreover, the fraction of line emission missed by the fiber is likely larger for systems with larger impact parameters. We explore this effect in Appendix~\ref{sec:appendix_SFRfrac}, modeling the distribution of star formation in each foreground system as an exponential disk with a scale radius consistent with $R_{\rm eff,est}$. This simple analysis implies that the SDSS fibers capture $\gtrsim10\%$ of the H$\alpha$ emitted by the majority of the galaxies probed within $R_{\perp} < 5$ kpc, but may miss $\gtrsim 90-99\%$ of the H$\alpha$ emission from systems at larger spatial offsets. The measured H$\alpha$ luminosities and SFRs instead provide accurate assessments of the star formation activity close to the absorbing material detected along our QSO sightlines (i.e., the ``local" SFR). For this reason, we refer to this quantity as $\rm SFR_{local}$ below. The distribution of $\rm SFR_{local}$ and $\log M_*/M_{\odot}$ values for our foreground galaxy sample is shown in the rightmost panel of Figure~\ref{fig:rperp_mstar_sfr} with colored points. The grayscale histogram shows the distribution of {\it total} SFR and $\log M_*/M_{\odot}$ for the SDSS DR7 galaxy population \citep{Brinchmann2004}. The turquoise curve shows a linear fit to the minimum in the bimodal galaxy distribution estimated by \citet{Moustakas2013} and extrapolated to $z=0$. Several of our foreground galaxies lie below this line, in the parameter space primarily occupied by non-star-forming, early-type systems. This is likely because we have not measured their total, integrated SFRs (as described above). The modeling we perform in Appendix~\ref{sec:appendix_SFRfrac} implies that all of our systems likely have total SFRs $> 0.1~M_{\odot}~\rm yr^{-1}$, and that those systems with $R_{\perp} > 5$ kpc may have total SFRs $\gtrsim 10~M_{\odot}~\rm yr^{-1}$. The latter galaxies should therefore be considered starbursting systems. The location of our sample in this parameter space may likewise be affected by overestimation of the galaxy stellar masses due to systematics associated with SED modeling of the shallow SDSS photometry. The $1\sigma$ uncertainty intervals for the $\log M_*/M_{\odot}$ values reported by \citet{Straka2015} for our sample have a mean of 0.54 dex, and range up to 2.0 dex. \section{Line Profile Analysis} \label{sec:analysis} \subsection{Foreground Galaxy Redshifts} \label{subsec:redshifts} Because we are interested in the detailed kinematic structure of absorption detected along our target sight lines, and because \citet{Straka2015} reported redshifts with only four significant figures, we draw on our ESI spectra to measure more precise redshifts for our GOTOQ sample. We inspected each ESI spectrum for the presence of narrow emission features at the observed wavelengths of H$\alpha$ and [\ion{O}{3}] $\lambda 5008$ for the associated foreground galaxy. We identified both transitions in eight sight lines, and identified only H$\alpha$ in an additional six systems. 
The remaining seven sight lines (indicated with asterisks in Table~\ref{tab.gotoqs}) lack narrow emission features at the expected locations of H$\alpha$ and [\ion{O}{3}]; this is most likely because the ESI slit placement was insufficiently close to the foreground system. For these sightlines, we use their SDSS DR16 spectra \citep{York2000,Ahumada2020} to re-assess the GOTOQ redshift. We determine the continuum level of each QSO by fitting a spline function to feature-free spectral regions using the \texttt{lt\_continuumfit} GUI, available with the Python package \texttt{linetools}\footnote{\url{https://linetools.readthedocs.io/en/latest/}} \citep{linetools2016}. This tool presents the user with an automatically generated continuum spline, fit to a set of knots whose flux levels are determined from the mean flux in a series of spectral ``chunks". We performed a visual inspection of these knots, adjusting their placement in cases where their location was unduly affected by nearby absorption or emission features. We subtracted this continuum level from each spectrum and performed a Gaussian fit to the residual flux in a spectral region within either $\pm300~\rm km~s^{-1}$ (for ESI spectra) or $\pm 600~\rm km~s^{-1}$ (for the SDSS spectra) of the observed wavelength of H$\alpha$. We used the Levenberg-Marquardt least-squares fitter available within the \texttt{astropy.modeling} package \citep{astropy2018} to determine the best-fit Gaussian wavelength centroid for this line. The typical magnitude of the redshift uncertainty implied by the covariance matrix for the fitted parameters is 2--$5~\rm km~s^{-1}$ for the ESI spectra and 4--$15~\rm km~s^{-1}$ for the SDSS spectra. The fitted redshift values are all within a maximum of $\pm112\rm ~km~s^{-1}$ of those published for the foreground systems by \citet{Straka2015}. We refer to the redshifts determined via this method as $z_{\rm H\alpha}$\ in the following text. \subsection{Absorption-line Profile Characterization and Modeling} \label{subsec:abs_modeling} We then characterized the absorption strength and kinematics of the \ion{Ca}{2} H \& K and \ion{Na}{1} transitions associated with each GOTOQ. We used the \texttt{XAbsSysGui}, available with \texttt{linetools}, to perform a visual inspection of these transitions. In cases in which an absorption feature is clearly evident within $\pm300~\rm km~s^{-1}$ of $z_{\rm H\alpha}$, we use this GUI to manually select the velocity window to be used for the computation of the $W_r$ of each line. We also noted the occasional presence of blended absorption features that are unassociated with $z_{\rm H\alpha}$. In cases of transitions lacking clear absorption features, velocity windows were set to $\pm 150~\rm km~s^{-1}$ by default, but were adjusted as necessary to exclude unassociated blends. These windows were used to calculate upper limits on $W_r$. Spectral regions covering the \ion{Ca}{2} H \& K and \ion{Na}{1} doublet transitions in the rest frame of the corresponding foreground galaxies for five systems in our sample are shown in Figure~\ref{fig:velplot1}. Similar figures showing the remaining sight lines are included in Appendix~\ref{sec:appendix_spectra}. Our ESI spectra have signal-to-noise ratios (S/Ns) in the range 20--$34~\rm pix^{-1}$ with a median ${\rm S/N} = 24~\rm pix^{-1}$ within $\lesssim 400~\rm km~s^{-1}$ of the GOTOQ \ion{Na}{1} transitions. 
The spectral S/N within 200--$400~\rm km~s^{-1}$ of the \ion{Ca}{2} K transitions ranges between 2 and $22~\rm pix^{-1}$, with a median ${\rm S/N} = 10~\rm pix^{-1}$. We used the velocity windows mentioned above to compute the $W_r$ for each \ion{Ca}{2} and \ion{Na}{1} transition. In the following, we refer to absorbers yielding a significantly detected $W_r$ in at least one transition as ``systems". We also used the apparent optical depth method \citep{SavageSembach1991} to compute the column density of each transition and its uncertainty. For those systems in which both doublet lines are significantly detected and unblended, we computed the mean of the column densities of both doublet lines, weighted by their respective uncertainties, and report this value as $N_{\rm aod}$. For those systems in which only the transition with the larger oscillator strength (\ion{Ca}{2} K or \ion{Na}{1} 5891) is significantly detected, we adopt its apparent optical depth column density as the value of $N_{\rm aod}$. For those sightlines in which the stronger line is not detected, we report $3\sigma$ upper limits on the column density computed from the apparent optical depth method for the stronger transition only. All velocity limits, $W_r$ and $N_{\rm aod}$ values, and the associated uncertainties ($\sigma_{W_r}$ and $\sigma_{N_{\rm aod}}$) are reported in Tables~\ref{tab.CaIIabsinfo} and \ref{tab.NaIabsinfo}. \begin{figure*}[!ht] \includegraphics[width=1.0\textwidth,clip]{fig_velplots_p1.pdf} \caption{Regions of five of our ESI GOTOQ spectra showing the locations of \ion{Ca}{2} H \& K and \ion{Na}{1} $\lambda \lambda 5891, 5897$ transitions associated with the foreground galaxy. The velocity is defined relative to the GOTOQ redshift estimated from a Gaussian fit to its H$\alpha$ emission as described in Section~\ref{subsec:redshifts} ($z_{\rm H\alpha}$). The gray horizontal line indicates the continuum level, and the gray shaded region shows the velocity window selected for computation of $W_r$, $N_{\rm aod}$ and $\Delta v_{90}$. The blue and red bars show the pixels that contain $>5\%$ of the total apparent optical depth of the line (determined by stepping inward from the profile edges), and the length of these bars corresponds to $\Delta v_{90}$. Best-fit profile models are shown with cyan (for \ion{Ca}{2}) and orange (for \ion{Na}{1}) curves for systems with significantly detected absorption (see Section~\ref{subsec:abs_modeling} for details). \label{fig:velplot1}} \end{figure*} \citet{Straka2015} measured unblended $W_r$(\ion{Na}{1} 5891) values using the corresponding SDSS spectra for 16 of our 21 sightlines: 12 of these are upper limits consistent with our constraints; two are detections consistent with our values; and two of the \citet{Straka2015} $W_r$(\ion{Na}{1} 5891) values are larger by $1.4\!-\!2.6\sigma$. These authors likewise presented measurements of $W_r$(\ion{Ca}{2} K) for each of our sightlines, five of which are upper limits consistent with our constraints. The remainder are detections that are all larger than our measurements, and 10 of these differ by $>1.0\sigma$. This offset may arise from the use of a larger velocity window by \citet[][although the adopted window is not specified in that work]{Straka2015} and\slash or the inclusion of noise features for some systems.
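For concreteness, the apparent optical depth integration can be sketched as follows (an illustrative Python snippet, not our analysis code; the toy profile and the \ion{Ca}{2} K atomic data shown are for demonstration only):
\begin{verbatim}
import numpy as np

def aod_column(vel_kms, flux_norm, f_osc, wave_A):
    """Apparent column density (cm^-2) from a normalized profile."""
    flux = np.clip(flux_norm, 1e-3, None)  # guard against zero/negative flux
    tau = -np.log(flux)                    # apparent optical depth
    return 3.768e14 * np.trapz(tau, vel_kms) / (f_osc * wave_A)

# Toy Gaussian absorption profile, 30% deep, sigma = 25 km/s:
v = np.linspace(-150, 150, 61)
I = 1 - 0.3 * np.exp(-0.5 * (v / 25) ** 2)
print(np.log10(aod_column(v, I, 0.6267, 3934.77)))  # ~12.5 (Ca II K)
\end{verbatim}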
We characterize the velocity spread of each significantly detected absorption-line system in a model-independent way using a modified version of the $\Delta v_{90}$ measurement described in \citet{ProchaskaWolfe1997}. We first smooth the apparent optical depth profile of each system with a boxcar of width $37.3\rm ~km~s^{-1}$ and replace any negative apparent optical depth values with a value of zero. We then step inward from the left and right edges of each profile, summing the apparent optical depths to identify the pixels containing $>5\%$ of the integrated optical depth of the system. The corresponding value of $\Delta v_{90}$ is the velocity width between these left- (at relative velocity $\delta v_{90,\rm left}$) and rightmost pixels (at relative velocity $\delta v_{90, \rm right}$). This measurement is listed in Tables~\ref{tab.CaIIabsinfo} and \ref{tab.NaIabsinfo}, and we make use of both these values and our estimates of $\delta v_{90,\rm left}$ and $\delta v_{90,\rm right}$ in the kinematic analyses to follow. We performed Voigt profile modeling of each significantly detected absorption system using the publicly available \texttt{veeper} Python package\footnote{\url{https://github.com/jnburchett/veeper}}. The \texttt{veeper}, developed by coauthor J.~Burchett, determines best-fit values of the column density ($N_{\rm vp}$), Doppler parameter ($b_{\rm D}$), and central velocity relative to $z_{\rm H\alpha}$\ ($\delta v$) via least-squares minimization. Parameter space was explored using the iterative \texttt{MPFIT} software, originally written in IDL by C.~Markwardt\footnote{\url{http://cow.physics.wisc.edu/~craigm/idl/idl.html}} and then rewritten in Python by M.~Rivers\footnote{\url{http://cars.uchicago.edu/software}}. The user sets initial guesses for each parameter by eye and may then inspect the resulting fit using an interactive GUI. The permitted values of $b_{\rm D}$ were limited to the range $1~\mathrm{km~s^{-1}} < b_{\rm D} < 85\rm ~km~s^{-1}$. Both transitions of each ion were fit simultaneously, and we adopted a Gaussian line spread function with $\sigma = 15.8~\rm km~s^{-1}$ across the full spectral range. Each absorber was fit twice: once with a single velocity component, and again with two velocity components initially offset by $\pm10~\rm km~s^{-1}$. We adopted the best-fit parameters of the two-component fit if it yielded a lower reduced-$\chi^2$ ($\chi^2_r$) value than the one-component fit and reasonable values for the formal 1$\sigma$ parameter uncertainties calculated from the covariance matrix (i.e., $\sigma_{\log N_{\rm vp}} < 0.5$). While some of these systems may have more than two absorbing structures along the line of sight, we did not attempt more complex profile modeling (e.g., with three or more components) because we generally achieved low $\chi^2$ values with our one- or two-component fits ($\chi^2_r = 0.67\!-\!4.52$), and because our primary findings and conclusions would not be affected by invoking more complex analyses. There are two absorption-line systems for which both our one-component and two-component \texttt{veeper} fitting fails to yield useful parameter constraints (i.e., $\sigma_{\log N_{\rm vp}} > 1$ or $\sigma_{b_{\rm D}} > 50\rm ~km~s^{-1}$): the \ion{Ca}{2} absorber toward GOTOQJ1328+2159, and the \ion{Na}{1} absorber toward GOTOQJ1429+0120. We posit that this is due to noise features in these profiles that cause the two doublet lines to exhibit unphysical doublet ratios.
In these cases, we fix the value of the total column density to $N_{\rm aod}$ and perform a one-component \texttt{veeper} fit allowing only the $b_{\rm D}$ and $\delta v$ parameters to vary. We also note that the 1$\sigma$ parameter uncertainties calculated from the covariance matrix for each absorber fit are formally allowed to overlap regions of parameter space that are excluded from exploration during the fitting process. This results in values of $\sigma_{b_{\rm D}}>b_{\rm D}$ for a few of the weaker components in our two-component fits, implying 1$\sigma$ confidence intervals that extend to negative values. The Doppler parameters are thus not well constrained in these cases; however, the corresponding uncertainties on $\log N_{\rm vp}$ and $\delta v$ should reflect the range of each parameter corresponding to $\Delta \chi^2 = 1$ when all other parameters are allowed to vary to keep $\chi^2$ as low as possible (i.e., they are ``marginalized'' uncertainty intervals). Best-fit Voigt profile models for each securely detected absorber in our sample are shown in Figure~\ref{fig:velplot1} and in Appendix~\ref{sec:appendix_spectra}. The resulting best-fit values of each model parameter, along with their uncertainties, are listed in Tables~\ref{tab.CaIIabsinfo} and \ref{tab.NaIabsinfo}. The two systems for which we adopt a fixed column density in our profile fitting are indicated with an asterisk in the $\log N_{\rm vp}$ table columns. In the analysis to follow, we primarily use our $N_{\rm vp}$ values where available. We note that while the values of $\log N_{\rm aod}$ and the total $\log N_{\rm vp}$ (summed over all components) agree to within $\pm0.2$ dex for all of our \ion{Ca}{2} absorbers and for the vast majority of our \ion{Na}{1} systems, there are three sightlines for which the total $\log N_{\rm vp}$(\ion{Na}{1}) exceeds $\log N_{\rm aod}$(\ion{Na}{1}) by 0.35--0.66 dex (J1238+6448, J1248+4035, and J1717+3203). As these absorbers are among the strongest systems in our sample, these offsets are likely due to saturation effects. 
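Returning to the kinematic measurements, a minimal sketch of the $\Delta v_{90}$ procedure described above is given here. The array names are ours; a uniform velocity grid is assumed, and the boxcar width in pixels is an illustrative value chosen so that it corresponds to ${\approx}37.3\rm ~km~s^{-1}$ at the adopted dispersion.
\begin{verbatim}
import numpy as np

def delta_v90(vel, tau, nsm=3):
    """Delta v_90 from an apparent optical depth profile tau(vel).

    Boxcar-smooth the profile, zero out negative optical depths, then
    step inward from each edge until 5% of the integrated optical
    depth is enclosed on each side (Prochaska & Wolfe 1997, modified).
    """
    kernel = np.ones(nsm) / nsm
    tau_sm = np.convolve(tau, kernel, mode="same")
    tau_sm[tau_sm < 0] = 0.0
    cum = np.cumsum(tau_sm)
    total = cum[-1]
    i_left = np.searchsorted(cum, 0.05 * total)   # 5% in from the left edge
    i_right = np.searchsorted(cum, 0.95 * total)  # 5% in from the right edge
    return vel[i_left], vel[i_right], vel[i_right] - vel[i_left]
\end{verbatim}
The returned endpoints correspond to $\delta v_{90,\rm left}$ and $\delta v_{90,\rm right}$, and their difference to $\Delta v_{90}$.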
\begin{deluxetable*}{lccccccccc} \tablewidth{700pt} \tabletypesize{\scriptsize} \tablecaption{\ion{Ca}{2} Absorption-line Equivalent Widths, Kinematics, and Best-fit Voigt Profile Model Parameters\label{tab.CaIIabsinfo}} \tablehead{ \colhead{Sight Line} & \colhead{$R_{\perp}$} & \colhead{$W_r$(\ion{Ca}{2} K)\tablenotemark{a}} & \colhead{Velocity Limits} & \colhead{$\log N_{\rm aod}$(\ion{Ca}{2})\tablenotemark{a}} & \colhead{$\Delta v_{90}$(\ion{Ca}{2} K)} & \colhead{$\log N_{\rm vp}$(\ion{Ca}{2})} & \colhead{$b_{\rm D}$(\ion{Ca}{2})} & \colhead{$\delta v$(\ion{Ca}{2})} & \colhead{$\chi_r^{2}$(\ion{Ca}{2})} \\ \colhead{} & \colhead{\hfill(kpc)} & \colhead{(\AA)} & \colhead{($\rm km~s^{-1}$)} & \colhead{($\rm cm^{-2}$)} & \colhead{($\rm km~s^{-1}$)} & \colhead{($\rm cm^{-2}$)} & \colhead{($\rm km~s^{-1}$)} & \colhead{($\rm km~s^{-1}$)} & \colhead{} } \startdata J0013--0024 & 3.4 & $0.96\pm0.07$ & [$-83,131$] & $13.02\pm0.09$ & $130$ & $13.17\pm0.12$ & \nodata & \nodata & $1.58$ \\ & & \nodata & \nodata & \nodata & \nodata & $12.71\pm0.22$ & $11.4\pm15.9$ & $-28.5\pm6.2$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.99\pm0.15$ & $47.3\pm17.1$ & $28.1\pm15.7$ & \nodata \\ J0851+0719 & 5.6 & $ < 0.12$ & [$-150,150$] & $ < 12.14$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J0902+1414 & 3.6 & $ < 0.20$ & [$-42,70$] & $ < 12.41$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J0950+5442 & 1.0 & $0.64\pm0.08$ & [$-139,108$] & $12.93\pm0.05$ & $150$ & $12.96\pm0.05$ & $29.6\pm5.1$ & $-6.0\pm3.1$ & $0.99$ \\ J1005+5302 & 3.6 & $0.21\pm0.03$ & [$-95,88$] & $12.40\pm0.06$ & $150$ & $12.35\pm0.07$ & $55.0\pm11.3$ & $-9.7\pm7.5$ & $0.98$ \\ J1044+0518 & 3.5 & $0.32\pm0.03$ & [$-78,105$] & $12.56\pm0.04$ & $100$ & $12.62\pm0.04$ & \nodata & \nodata & $1.00$ \\ & & \nodata & \nodata & \nodata & \nodata & $11.68\pm0.20$ & $8.9\pm42.8$ & $-21.3\pm11.6$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.57\pm0.04$ & $20.0\pm5.1$ & $45.3\pm2.3$ & \nodata \\ J1135+2414 & 3.9 & $ < 0.24$ & [$-75,94$] & $ < 12.49$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1158+3907 & 4.7 & $0.15\pm0.04$ & [$-59,94$] & $12.27\pm0.12$ & $100$ & $12.31\pm0.10$ & $54.0\pm17.3$ & $63.1\pm11.6$ & $1.19$ \\ J1220+2837 & 6.9 & $0.33\pm0.07$ & [$-50,47$] & $12.75\pm0.08$ & $50$ & $12.91\pm0.14$ & $13.5\pm6.6$ & $-4.5\pm3.2$ & $1.33$ \\ J1238+6448 & 7.0 & $0.86\pm0.04$ & [$-150,122$] & $13.11\pm0.02$ & $180$ & $13.15\pm0.02$ & \nodata & \nodata & $0.97$ \\ & & \nodata & \nodata & \nodata & \nodata & $12.65\pm0.04$ & $22.1\pm4.3$ & $-80.0\pm2.2$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.98\pm0.02$ & $30.9\pm2.5$ & $40.5\pm1.5$ & \nodata \\ J1241+6332 & 10.6 & $0.65\pm0.03$ & [$-42,113$] & $13.03\pm0.02$ & $80$ & $13.09\pm0.02$ & $35.8\pm2.5$ & $33.9\pm1.6$ & $2.30$ \\ J1248+4035 & 4.0 & $0.58\pm0.03$ & [$-73,94$] & $12.92\pm0.03$ & $90$ & $13.02\pm0.05$ & \nodata & \nodata & $1.65$ \\ & & \nodata & \nodata & \nodata & \nodata & $12.89\pm0.04$ & $17.6\pm3.1$ & $-3.8\pm1.4$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.45\pm0.17$ & $7.2\pm4.3$ & $61.7\pm2.7$ & \nodata \\ J1328+2159 & 12.7 & $0.24\pm0.03$ & [$-42,55$] & $12.53\pm0.05$ & $60$ & \nodata* & $13.3\pm6.0$ & $7.1\pm2.8$ & $1.22$ \\ J1429+0120 & 3.4 & $0.24\pm0.06$ & [$-48,72$] & $12.57\pm0.11$ & $60$ & $12.66\pm0.20$ & $10.0\pm8.9$ & $10.4\pm3.7$ & $1.19$ \\ J1457+5321 & 4.2 & $ < 0.20$ & [$-100,119$] & $ < 12.39$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1459+3713 & 4.4 & $ < 1.05$ & 
[$-150,150$] & $ < 13.47$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1525+0202 & 3.1 & $ < 0.21$ & [$-67,150$] & $ < 12.38$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1605+5107 & 3.8 & $0.45\pm0.08$ & [$-100,41$] & $12.84\pm0.07$ & $80$ & $12.86\pm0.06$ & $26.6\pm6.9$ & $-16.1\pm3.9$ & $1.50$ \\ J1656+2541 & 1.1 & $ < 0.42$ & [$-59,47$] & $ < 12.80$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1659+6202 & 7.2 & $0.40\pm0.03$ & [$-81,99$] & $12.76\pm0.03$ & $80$ & $12.96\pm0.19$ & \nodata & \nodata & $0.83$ \\ & & \nodata & \nodata & \nodata & \nodata & $12.63\pm0.04$ & $15.5\pm5.1$ & $-4.5\pm2.3$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.69\pm0.36$ & $4.0\pm2.1$ & $29.2\pm1.6$ & \nodata \\ J1717+3203 & 7.5 & $0.62\pm0.02$ & [$-114,77$] & $12.99\pm0.02$ & $90$ & $13.18\pm0.06$ & \nodata & \nodata & $1.09$ \\ & & \nodata & \nodata & \nodata & \nodata & $12.52\pm0.10$ & $19.5\pm8.3$ & $-30.3\pm5.6$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $13.08\pm0.08$ & $10.0\pm1.7$ & $11.4\pm1.9$ & \nodata \\ \enddata \tablecomments{Best-fit Voigt profile model parameters for \ion{Ca}{2} systems fit with a single absorbing component are listed in a single table row. For systems fit with two components, we list the total $\log N_{\rm vp}$ of the system in the first row for each sightline and include best-fit $\log N_{\rm vp}$, $b_{\rm D}$, and $\delta v$ values for the individual components in the following two rows.} \tablenotetext{a}{Upper limits are reported at the $3\sigma$ level.} \tablenotetext{*}{To model this absorber, we fixed the column density to the measured value of $N_{\rm aod}$ as described in Section~\ref{subsec:abs_modeling}.} \end{deluxetable*} \begin{deluxetable*}{lccccccccc} \tablewidth{700pt} \tabletypesize{\scriptsize} \tablecaption{\ion{Na}{1} Absorption-line Equivalent Widths, Kinematics, and Best-fit Voigt Profile Model Parameters\label{tab.NaIabsinfo}} \tablehead{ \colhead{Sight Line} & \colhead{$R_{\perp}$} & \colhead{$W_r$(\ion{Na}{1} 5891)\tablenotemark{a}} & \colhead{Velocity Limits} & \colhead{$\log N_{\rm aod}$(\ion{Na}{1})\tablenotemark{a}} & \colhead{$\Delta v_{90}$(\ion{Na}{1} 5891)} & \colhead{$\log N_{\rm vp}$(\ion{Na}{1})} & \colhead{$b_D$(\ion{Na}{1})} & \colhead{$\delta v$(\ion{Na}{1})} & \colhead{$\chi_r^{2}$(\ion{Na}{1})} \\ \colhead{} & \colhead{\hfill(kpc)} & \colhead{(\AA)} & \colhead{($\rm km~s^{-1}$)} & \colhead{($\rm cm^{-2}$)} & \colhead{($\rm km~s^{-1}$)} & \colhead{($\rm cm^{-2}$)} & \colhead{($\rm km~s^{-1}$)} & \colhead{($\rm km~s^{-1}$)} & \colhead{} } \startdata J0013--0024 & 3.4 & $0.62\pm0.03$ & [$-60,98$] & $12.60\pm0.02$ & $100$ & $12.78\pm0.07$ & \nodata & \nodata & $0.67$ \\ & & \nodata & \nodata & \nodata & \nodata & $12.37\pm0.05$ & $25.0\pm5.8$ & $-10.2\pm3.6$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.57\pm0.10$ & $5.4\pm1.9$ & $36.8\pm2.1$ & \nodata \\ J0851+0719 & 5.6 & $ < 0.12$ & [$-150,150$] & $ < 11.80$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J0902+1414 & 3.6 & $0.21\pm0.03$ & [$-40,90$] & $12.11\pm0.05$ & $60$ & $12.34\pm0.09$ & $6.4\pm1.5$ & $1.4\pm1.6$ & $0.89$ \\ J0950+5442 & 1.0 & $0.50\pm0.05$ & [$-125,119$] & $12.46\pm0.03$ & $150$ & $12.46\pm0.03$ & $52.1\pm5.3$ & $-2.8\pm3.5$ & $0.92$ \\ J1005+5302 & 3.6 & $ < 0.12$ & [$-150,150$] & $ < 11.78$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1044+0518 & 3.5 & $ < 0.10$ & [$-150,150$] & $ < 11.69$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1135+2414 & 3.9 & $ < 0.10$ & 
[$-100,47$] & $ < 11.71$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1158+3907 & 4.7 & $ < 0.10$ & [$-81,99$] & $ < 11.69$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1220+2837 & 6.9 & $0.79\pm0.02$ & [$-61,72$] & $12.74\pm0.01$ & $70$ & $12.87\pm0.02$ & $18.3\pm1.1$ & $-7.8\pm0.5$ & $4.52$ \\ J1238+6448 & 7.0 & $0.73\pm0.03$ & [$-14,111$] & $12.75\pm0.01$ & $70$ & $13.41\pm0.27$ & \nodata & \nodata & $0.80$ \\ & & \nodata & \nodata & \nodata & \nodata & $12.44\pm0.06$ & $10.5\pm3.4$ & $33.0\pm1.5$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $13.36\pm0.31$ & $6.0\pm1.1$ & $67.0\pm0.7$ & \nodata \\ J1241+6332 & 10.6 & $0.85\pm0.03$ & [$-61,122$] & $12.76\pm0.01$ & $100$ & $12.84\pm0.01$ & \nodata & \nodata & $2.57$ \\ & & \nodata & \nodata & \nodata & \nodata & $11.99\pm0.05$ & $16.6\pm3.8$ & $-16.0\pm2.0$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.77\pm0.01$ & $23.6\pm1.0$ & $50.0\pm0.6$ & \nodata \\ J1248+4035 & 4.0 & $0.36\pm0.02$ & [$-59,61$] & $12.40\pm0.02$ & $60$ & $12.75\pm0.05$ & $7.0\pm0.5$ & $-1.3\pm0.5$ & $0.87$ \\ J1328+2159 & 12.7 & $0.11\pm0.04$ & [$-39,97$] & $11.79\pm0.14$ & $70$ & $11.73\pm0.11$ & $13.0\pm15.3$ & $6.3\pm5.7$ & $0.83$ \\ J1429+0120 & 3.4 & $0.18\pm0.02$ & [$-39,63$] & $12.09\pm0.04$ & $50$ & \nodata* & $10.1\pm5.1$ & $-2.4\pm2.0$ & $1.32$ \\ J1457+5321 & 4.2 & $0.26\pm0.04$ & [$-95,86$] & $12.16\pm0.07$ & $100$ & $12.12\pm0.06$ & $36.6\pm8.2$ & $-26.3\pm4.9$ & $0.89$ \\ J1459+3713 & 4.4 & $ < 0.14$ & [$-150,150$] & $ < 11.86$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1525+0202 & 3.1 & $ < 0.14$ & [$-150,150$] & $ < 11.85$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1605+5107 & 3.8 & $0.26\pm0.04$ & [$-89,99$] & $12.21\pm0.05$ & $90$ & $12.23\pm0.08$ & $9.1\pm5.4$ & $3.3\pm1.9$ & $1.16$ \\ J1656+2541 & 1.1 & $ < 0.12$ & [$-50,150$] & $ < 11.77$ & \nodata & \nodata & \nodata & \nodata & \nodata \\ J1659+6202 & 7.2 & $0.24\pm0.03$ & [$-78,102$] & $12.17\pm0.04$ & $60$ & $12.28\pm0.03$ & $13.8\pm3.5$ & $6.3\pm1.4$ & $0.89$ \\ J1717+3203 & 7.5 & $0.65\pm0.03$ & [$-78,80$] & $12.71\pm0.01$ & $60$ & $13.16\pm0.11$ & \nodata & \nodata & $1.06$ \\ & & \nodata & \nodata & \nodata & \nodata & $13.11\pm0.11$ & $8.8\pm0.9$ & $8.7\pm1.0$ & \nodata \\ & & \nodata & \nodata & \nodata & \nodata & $12.19\pm0.43$ & $5.4\pm9.6$ & $-34.0\pm2.6$ & \nodata \\ \enddata \tablecomments{Best-fit Voigt profile model parameters for \ion{Na}{1} systems fit with a single absorbing component are listed in a single table row. For systems fit with two components, we list the total $\log N_{\rm vp}$ of the system in the first row for each sightline and include best-fit $\log N_{\rm vp}$, $b_{\rm D}$, and $\delta v$ values for the individual components in the following two rows.} \tablenotetext{a}{Upper limits are reported at the $3\sigma$ level.} \tablenotetext{*}{To model this absorber, we fixed the column density to the measured value of $N_{\rm aod}$ as described in Section~\ref{subsec:abs_modeling}.} \end{deluxetable*} \begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{fig_ewCaII_Rperp.pdf} \includegraphics[width=0.5\textwidth]{fig_ewNaI_Rperp.pdf} \caption{Total system $W_r(\mbox{\ion{Ca}{2} K})$ (left) and $W_r$(\ion{Na}{1} 5891) (right) vs.\ projected distance from the associated host galaxy. Colored points show constraints from our ESI spectroscopy. Upper limits, indicated with open squares, are shown in cases for which $W_r < 3\sigma_{W_r}$ and represent 3$\sigma$ limits. 
Gray points show measurements reported in \citet{Straka2015} from their analysis of SDSS spectroscopy probing the parent sample of GOTOQs. We exclude absorbers that were flagged as blended by \citet{Straka2015}. We also exclude any systems in which the \ion{Ca}{2} K transition falls more than 20 \AA\ blueward of the Ly$\alpha$ emission line of the corresponding QSO to avoid blending from the Ly$\alpha$ forest. Black solid lines show best-fit linear relations between $\log W_r$ and $R_{\perp}$ (see Section~\ref{subsec:Wr_Rperp}), and medium gray contours show the inner $\pm34$\% of the locus of fits drawn at random from the posterior probability density function of each linear model. The light gray region extends the boundaries of the medium gray $1\sigma$ region by the best-fit value of $\sigma_C$ to approximately indicate the degree of intrinsic scatter implied by the data. Our $W_r$ measurements exhibit no apparent anticorrelation with projected distance from the foreground host. \label{fig:ew_rperp}} \end{figure*} \begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{fig_ewCaII_Rperp_renorm.pdf} \includegraphics[width=0.5\textwidth]{fig_ewNaI_Rperp_renorm.pdf} \caption{Total system $W_r$(\ion{Ca}{2} K) (left) and $W_r$(\ion{Na}{1} 5891) (right) vs.\ projected distance from the associated host galaxy, normalized by the galaxy's estimated effective radius. Symbols, lines and contours are as described in the Figure~\ref{fig:ew_rperp} caption. The relation between $\log W_r$(\ion{Ca}{2} K) and $R_{\perp}/R_{\rm eff,est}$ for the combined ESI and SDSS sample has a slope $m = -0.035^{+0.028}_{-0.030}$, indicative of a weak anticorrelation between these quantities. The relation between $\log W_r$(\ion{Na}{1} 5891) and $R_{\perp}/R_{\rm eff,est}$ exhibits no significant anticorrelation. \label{fig:ew_rperp_renorm}} \end{figure*} \section{\ion{Ca}{2} and \ion{Na}{1} Absorption Properties of the Disk-Halo Interface}\label{sec:results} Here we examine the incidence of \ion{Ca}{2} and \ion{Na}{1} absorption in the disk-halo environment and assess the relation between their absorption strengths and $R_{\perp}$. \subsection{$\log W_r$-$R_{\perp}$ Relations}\label{subsec:Wr_Rperp} We show our measurements of the total system $W_r$(\ion{Ca}{2} K) and $W_r$(\ion{Na}{1} 5891) vs.\ $R_{\perp}$ in Figure~\ref{fig:ew_rperp} with colored points. Detections and 3$\sigma$ upper limits on $W_r$ for these transitions measured from SDSS spectra of the parent GOTOQ sample by \citet{Straka2015} are shown in gray. Within our ESI sample, absorption detections span the full range of $R_{\perp}$ probed, with nondetections arising only at $R_{\perp} < 6$~kpc. We note here that several of our foreground galaxies observed at $R_{\perp} > 6$~kpc may have higher global SFRs ($\gtrsim10\!-\!100~M_{\odot}~\rm yr^{-1}$; see Appendix~\ref{sec:appendix_SFRfrac}) than those observed at $R_{\perp} < 6$ kpc. Under the assumption that galaxies that are more actively star-forming will have larger $W_r$(\ion{Ca}{2}) and $W_r$(\ion{Na}{1}) across a broad range of impact parameters, this potential bias may drive an enhancement in our observed $W_r$ values at large $R_{\perp}$. While we cannot reliably quantify the global SFRs of our galaxy sample with current data, we can instead draw on the measurements of global $M_*$ described in Section~\ref{sec:fg_galaxies} to assess the degree to which an analogous relation between $M_*$ and $W_r$ may impact the distributions of datapoints shown in Figure~\ref{fig:ew_rperp}. 
The left-hand panel of Figure~\ref{fig:rperp_mstar_sfr} demonstrates that our sample contains equal numbers of galaxies with stellar masses falling above and below the median value ($\log M_*/M_{\odot} = 9.3$) at $R_{\perp}>6$ kpc. This suggests that the bias described above does not have a major impact on our analysis of the relation between absorber properties and $R_{\perp}$; however, we caution that new data enabling the measurement of the global SFRs in our foreground galaxies are needed to fully disentangle the relationships between $R_{\perp}$, global star-formation activity, and $W_r$. The blue curves in Figure~\ref{fig:ew_rperp} show the power-law relation (with 1$\sigma$ uncertainties) fit to the mean \ion{Ca}{2} K absorption signal measured in SDSS spectra of QSO sightlines vs.\ the projected separation of these QSOs from known foreground systems by \citet{ZhuMenard2013}. Because this latter analysis included all sightlines having 3 kpc $< R_{\perp} <$ 10 kpc in a single bin, the fitted relation is insensitive to potential changes in the power-law slope at very small separations. Nevertheless, the absorbers in our dataset do not exhibit larger $W_r$ at smaller projected separations as implied by this fit. We caution that the foreground galaxy sample identified by \citet{ZhuMenard2013} has higher stellar masses than those we study here (i.e., the median stellar mass in the former sample is $\log M_*/M_{\odot}\sim10.3$), which could explain the larger $W_r$ implied by their fitted relation at $R_{\perp}\sim3$--4 kpc. Figure~\ref{fig:ew_rperp_renorm} shows the same $W_r$ measurements presented in Figure~\ref{fig:ew_rperp} vs.\ $R_{\perp}/R_{\rm eff, est}$. We remind the reader that those galaxies probed at $R_{\perp}/R_{\rm eff, est}>2$ have systematically lower stellar masses than those probed at $R_{\perp}/R_{\rm eff, est}<2$. Moreover, the modeling described in Appendix~\ref{sec:appendix_SFRfrac} suggests the former systems exhibit a broad range of global SFRs, spanning between ${\sim}0.5~M_{\odot}~\rm yr^{-1}$ and $>100~M_{\odot}~\rm yr^{-1}$. While absorption nondetections are more evenly distributed across this parameter space than across the range in $R_{\perp}$, the relation between $W_r$ and $R_{\perp}/R_{\rm eff, est}$ does not exhibit a clear anticorrelation for either ion. To quantitatively test for correlations (or a lack thereof) in these quantities, we model these datasets assuming a linear relation between $\log W_r$ and either $R_{\perp}$ or $R_{\perp}/R_{\rm eff, est}$: \begin{equation}\label{eq:linear} \log W_r = b + m R_{\perp}. \end{equation} We follow \citet{Chen2010a} and \citet{Rubin2018a} to compute the likelihood function for this model. Briefly, each securely detected $W_r$ value contributes a term $-\chi^2/2$ (plus the corresponding Gaussian normalization) to the logarithm of the likelihood. For nondetections, each term in the product used to compute the likelihood is the integral, from $-\infty$ to the value of the $W_r$ upper limit, of a Gaussian function similar in form to that used to calculate $\chi^2$ (see \citealt{Rubin2018a} for the full likelihood function). We also assume that the relation in Equation~\ref{eq:linear} has an intrinsic cosmic variance, $\sigma_C$, such that the Gaussian variance adopted for each measurement in the likelihood function is $s_i^2 = \sigma_i^2 + \sigma_C^2$, with $\sigma_i$ equal to the measurement uncertainty in each $\log W_r$ value. 
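To make this construction concrete, a minimal sketch of the censored-data log-likelihood follows. The array names and the parameterization of the intrinsic scatter via $\ln \sigma_C$ are ours; see \citealt{Rubin2018a} for the full expression.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def ln_likelihood(theta, R, logWr, sig, is_upper):
    """Censored-data likelihood for the model log Wr = b + m*R.

    theta    : (m, b, ln_sigC)
    R        : R_perp or R_perp / R_eff,est
    logWr    : measured log Wr (detections) or log of the 3-sigma limit
    sig      : measurement uncertainty on each log Wr
    is_upper : boolean mask flagging nondetections
    """
    m, b, ln_sigC = theta
    model = b + m * R
    s2 = sig**2 + np.exp(ln_sigC)**2   # intrinsic scatter in quadrature
    det = ~is_upper
    # Detections: Gaussian log-likelihood, -chi^2/2 plus normalization
    lnL = np.sum(-0.5 * (logWr[det] - model[det])**2 / s2[det]
                 - 0.5 * np.log(2 * np.pi * s2[det]))
    # Nondetections: integrate the Gaussian up to the upper limit (CDF)
    z = (logWr[is_upper] - model[is_upper]) / np.sqrt(2 * s2[is_upper])
    lnL += np.sum(np.log(0.5 * (1 + erf(z))))
    return lnL
\end{verbatim}
This log-likelihood, combined with the uniform priors given below, defines the posterior that we sample.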
We use the Python software package \texttt{emcee} to perform affine-invariant ensemble Markov Chain Monte Carlo sampling of the posterior probability density function (PPDF) for this model \citep{Foreman-Mackey2013}. We adopt uniform priors for all three parameters within the intervals $-5.0 < m < 5.0$ (with $m$ in units of $\rm kpc^{-1}$ or unitless, as appropriate), $-10.0 < b < 10.0$, and $-10.0 < \ln \sigma_C < 10.0$. We implement 100 ``walkers'', each of which takes 5000 steps (the first 1000 of which are discarded as burn-in), to thoroughly sample the PPDF. We interpret the median and $\pm34$th percentiles of the marginalized PPDF for each parameter as its best value and uncertainty interval. We show the resulting best-fit relations between $\log W_r$ and either $R_{\perp}$ or $R_{\perp}/R_{\rm eff, est}$ for the combined ESI and SDSS datasets in Figures~\ref{fig:ew_rperp} and \ref{fig:ew_rperp_renorm}, respectively, with solid black lines. The medium gray contours show the inner $\pm34$\% of the locus of fits for 1000 sets of parameters drawn at random from the PPDF of each data-model comparison. The light gray contours indicate the boundaries of the inner $\pm34$\% locus, extended on either side by the best-fit value of $\sigma_C$. We also list the best-fit parameters and their uncertainty intervals for each dataset in Table~\ref{tab:linear_fits}. Three of the four best-fit values of the slope ($m$) are consistent with zero, confirming the lack of any significant correlation of either $\log W_r(\mbox{\ion{Ca}{2} K})$ or $\log W_r(\mbox{\ion{Na}{1} 5891})$ with $R_{\perp}$, as well as of $\log W_r(\mbox{\ion{Na}{1} 5891})$ with $R_{\perp}/R_{\rm eff,est}$. The $\log W_r(\mbox{\ion{Ca}{2} K})$-$R_{\perp}/R_{\rm eff,est}$ relation has a slope $m=-0.035^{+0.028}_{-0.030}$, weakly suggestive of an anticorrelation between these variables. Given our finding in Section~\ref{subsec:abs_modeling} that the \citet{Straka2015} $W_r$ values are frequently larger than those we measure for the same sightlines, we also perform the same modeling including only our ESI dataset. The resulting best-fit model parameters are listed in Table~\ref{tab:linear_fits}. Here again, three of the four best-fit slopes are consistent with zero. Moreover, the $\log W_r(\mbox{\ion{Na}{1} 5891})$-$R_{\perp}$ relation has a slope that is marginally \emph{positive} ($m=+0.058^{+0.046}_{-0.042}\,{\rm kpc}^{-1}$). Altogether, we interpret these results as further confirmation of a lack of any anticorrelation between $W_r$ and $R_{\perp}$ or $R_{\perp}/R_{\rm eff,est}$. Keeping in mind the caveat that these findings may be affected by a bias in our galaxy sample toward higher global SFRs at larger $R_{\perp}$ (as discussed toward the beginning of this section), we note that the lack of a strong dependence of our $W_r$ values on projected distance is, to our knowledge, unique in the QSO-galaxy pair literature. The vast majority of these studies have instead reported a statistically significant decline in the $W_r$ of a wide range of ionic transitions (including transitions of \ion{H}{1}, \ion{C}{2}, \ion{C}{3}, \ion{C}{4}, \ion{Si}{2}, \ion{Si}{3}, \ion{Mg}{2} and \ion{Ca}{2}) with $R_{\perp}$ \citep[e.g.,][]{LanzettaBowen1990,Kacprzak2008,Chen2010a,Nielsen2013,Werk2013,ZhuMenard2013,Burchett2016,Kulkarni2022}. 
However, these works have typically probed a much larger range of projected separations ($R_{\perp} \gtrsim 100$ kpc) than is explored here, and many of them include few (if any) sightlines with $R_{\perp} < 15$~kpc \citep[e.g.,][]{LanzettaBowen1990,Chen2010a,Werk2013}. The findings of \citet{Kacprzak2013}, a study of \ion{Mg}{2} absorption along a sample of seven GOTOQ sightlines selected from \citet{Noterdaeme2010} and \citet{York2012}, confirm that sightlines with impact parameters $\gtrsim 10$ kpc drive the well-known anticorrelation between $W_r$(\ion{Mg}{2} 2796) and $R_{\perp}$, while the $W_r$(\ion{Mg}{2} 2796) values for sightlines within this projected distance exhibit no significant dependence on $R_{\perp}$. On the other hand, \citet{Kulkarni2022} noted that the strong anticorrelation between $N$(\ion{H}{1}) and $R_{\perp}$ exhibited by their sample of 113 galaxies associated with DLAs and sub-DLAs (assembled from their study of eight GOTOQs and the literature across $0 < z < 4.4$) appears to extend well within $R_{\perp} < 10$ kpc. This apparent disagreement with both \citet{Kacprzak2013} and the present study may be driven by a variety of factors, including the use of different ionic transitions and quantities characterizing absorption-line strength (i.e., $W_r$ vs.\ $N$), and differing absorber-galaxy pair selection criteria. \begin{deluxetable*}{llllc} \tablecaption{Best-fit Parameters for Linear $\log W_r - R_{\perp}$ Models\label{tab:linear_fits}} \tabletypesize{\footnotesize} \tablehead{ \colhead{Data Set} & \colhead{Relation} & \colhead{$m$} & \colhead{$b$} & \colhead{$\sigma_C$}\\ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{}} \startdata ESI \& \citet{Straka2015} & $\log W_r(\mbox{\ion{Ca}{2} K})$-$R_{\perp}$ & $-0.009\pm 0.015~\rm kpc^{-1}$ & $-0.32\pm0.09$ & $0.34_{-0.04}^{+0.05}$\\ & $\log W_r(\mbox{\ion{Ca}{2} K})$-$R_{\perp}/R_{\rm eff,est}$ & $-0.035_{-0.030}^{+0.028}$ & $-0.30\pm0.07$ & $0.33_{-0.04}^{+0.05}$\\ & $\log W_r(\mbox{\ion{Na}{1} 5891})$-$R_{\perp}$ & $+0.006_{-0.027}^{+0.026}~\rm kpc^{-1}$ & $-0.84_{-0.18}^{+0.15}$ & $0.46_{-0.07}^{+0.10}$ \\ & $\log W_r(\mbox{\ion{Na}{1} 5891})$-$R_{\perp}/R_{\rm eff,est}$ & $-0.028_{-0.053}^{+0.046}$ & $-0.76_{-0.14}^{+0.12}$ & $0.46_{-0.07}^{+0.10}$\\ \hline ESI Only & $\log W_r(\mbox{\ion{Ca}{2} K})$-$R_{\perp}$ & $+0.022_{-0.028}^{+0.031}~\rm kpc^{-1}$ & $-0.64_{-0.20}^{+0.17}$ & $0.35_{-0.07}^{+0.10}$\\ & $\log W_r(\mbox{\ion{Ca}{2} K})$-$R_{\perp}/R_{\rm eff,est}$ & $-0.006_{-0.061}^{+0.063}$ & $-0.52_{-0.15}^{+0.14}$ & $0.34_{-0.07}^{+0.10}$\\ & $\log W_r(\mbox{\ion{Na}{1} 5891})$-$R_{\perp}$ & $+0.058_{-0.042}^{+0.046}~\rm kpc^{-1}$ & $-1.05_{-0.30}^{+0.25}$ & $0.52_{-0.11}^{+0.16}$ \\ & $\log W_r(\mbox{\ion{Na}{1} 5891})$-$R_{\perp}/R_{\rm eff,est}$ & $+0.016_{-0.097}^{+0.098}$ & $-0.78_{-0.23}^{+0.20}$ & $0.54_{-0.11}^{+0.16}$\\ \enddata \end{deluxetable*} \subsection{Column Densities and Covering Fractions}\label{subsec:results_cf} Figure~\ref{fig:logN_rperp} shows the total system column densities (including all velocity components) of \ion{Ca}{2} (left) and \ion{Na}{1} (right) in each GOTOQ sightline in our sample vs.\ $R_{\perp}$ (top row) and vs.\ $R_{\perp}/R_{\rm eff,est}$ (bottom row). As with the $W_r$ values discussed above, the measured column densities do not appear to exhibit any dependence on either $R_{\perp}$ or $R_{\perp}/R_{\rm eff, est}$. 
We assess the covering fraction ($f_{\rm C}$) of these absorbers by dividing the number of systems with column densities above a given threshold by the total number of sightlines (excluding sightlines whose $3\sigma$ upper limits lie above the threshold, as these do not constrain the presence of absorption at that level). These thresholds are chosen to lie just above the majority of 3$\sigma$ upper limits for each ion; i.e., $N(\mbox{\ion{Ca}{2}})>10^{12.5}~\rm cm^{-2}$ and $N(\mbox{\ion{Na}{1}})>10^{12.0}~\rm cm^{-2}$. We adopt the $\pm34$th percentile Wilson score intervals as uncertainty intervals for each covering fraction. Overall, we measure covering fractions $f_{\rm C}(\mbox{\ion{Ca}{2}})= 0.63^{+0.10}_{-0.11}$ and $f_{\rm C}(\mbox{\ion{Na}{1}})= 0.57^{+0.10}_{-0.11}$. We also compute covering fractions within two bins in $R_{\perp}$ and $R_{\perp}/R_{\rm eff,est}$ and show the results with filled boxes in Figure~\ref{fig:logN_rperp}. These covering fractions do not vary significantly (i.e., by $>2\sigma$) as a function of either of these measures of projected distance. \begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{fig_logN_CaII_Rperp.pdf} \includegraphics[width=0.5\textwidth]{fig_logN_NaI_Rperp.pdf} \includegraphics[width=0.5\textwidth]{fig_logN_CaII_Rperp_renorm.pdf} \includegraphics[width=0.5\textwidth]{fig_logN_NaI_Rperp_renorm.pdf} \caption{{\it Top Row:} Total system column density of \ion{Ca}{2} (left) and \ion{Na}{1} (right) vs.\ projected distance from the associated GOTOQs. Open squares with downward arrows represent 3$\sigma$ upper limits calculated using the apparent optical depth method. The filled boxes indicate the $\pm34$th percentile Wilson score confidence intervals, with respect to the right axes, for the covering fraction of absorbers having $\log N(\mbox{\ion{Ca}{2}})>12.5$ and $\log N(\mbox{\ion{Na}{1}})>12.0$, respectively. {\it Bottom Row:} Same as above, vs.\ $R_{\perp}/R_{\rm eff,est}$. \label{fig:logN_rperp}} \end{figure*} It is notable that the overall $f_{\rm C}$ values for \ion{Ca}{2} and \ion{Na}{1} are statistically consistent with each other, given that \ion{Ca}{2} is known to trace a wider range of gas densities and temperatures \citep{Phillips1984,Vallerga1993,BenBekhti2012,Murga2015}. If we instead adopt equivalent column density thresholds for both ions ($N > 10^{12.5}~\rm cm^{-2}$), we find a value $f_{\rm C}(\mbox{\ion{Na}{1}})=0.33^{+0.11}_{-0.09}$, which is $1.9\sigma$ below $f_{\rm C}$(\ion{Ca}{2}). This difference accords with a picture in which \ion{Na}{1}-absorbing structures are smaller in size and/or less abundant than \ion{Ca}{2}-absorbing clouds \citep[e.g.,][]{Bish2019}. These values are also broadly consistent with the incidence of intermediate and high-velocity \ion{Ca}{2} and \ion{Na}{1} absorbers detected toward a sample of 408 QSO sightlines probing the Milky Way disk-halo interface and halo by \citet{BenBekhti2012}, in spite of their use of more sensitive column density thresholds: these authors measured $f_{\rm C} = 0.5$ for a threshold $N(\mbox{\ion{Ca}{2}})\ge 10^{11.4}~\rm cm^{-2}$ and $f_{\rm C} = 0.35$ for a threshold $N(\mbox{\ion{Na}{1}})\ge 10^{10.9}~\rm cm^{-2}$. Similar covering fractions for these ions were measured toward multiple stellar sightlines probing intermediate-velocity material $\sim3$ kpc above the Milky Way's disk by \citet{Bish2019} (i.e., $f_{\rm C}(\log N(\mbox{\ion{Ca}{2}})> 11.5) = 0.63^{+0.07}_{-0.14}$ and $f_{\rm C}(\log N(\mbox{\ion{Na}{1}})>11.3) = 0.26^{+0.06}_{-0.08}$). 
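For reference, the $\pm34$th percentile Wilson score intervals quoted throughout this work can be computed as in the following minimal sketch; the function name and the example counts are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def wilson_interval(k, n, z=1.0):
    """Wilson score interval for a binomial proportion k/n.

    z = 1.0 gives the ~68% (1-sigma, i.e. +/-34th percentile) interval.
    """
    phat = k / n
    denom = 1.0 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * np.sqrt(phat * (1 - phat) / n
                                 + z**2 / (4 * n**2))
    return center - half, center + half

# e.g., 12 detections out of 19 constraining sightlines (illustrative):
# wilson_interval(12, 19) -> approximately (0.52, 0.73)
\end{verbatim}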
These comparisons imply that our GOTOQ sightlines have overall higher column densities than those measured in both the \citet{BenBekhti2012} and \citet{Bish2019} samples. We speculate that this may be due to the limited path through the Milky Way probed by the stellar and QSO sightlines used in these studies. In particular, because the focus of these works is on characterizing extraplanar material, they have explicitly excluded absorbers having velocities consistent with that of the Milky Way's disk rotation curve (i.e., ISM absorbers) from their analyses. The intermediate- and high-velocity clouds targeted by \citet{BenBekhti2012} are typically found to be located within $<2.5$ kpc and $\sim5$--20 kpc away from the Milky Way's disk, respectively, in cases in which distance information is available (see \citealt{Richter2017} and references therein). Our GOTOQ sightlines, by contrast, are sensitive to all absorbers above our column density detection threshold ($\log N(\mbox{\ion{Ca}{2}}) \gtrsim 12.1$--12.4 and $\log N(\mbox{\ion{Na}{1}}) \gtrsim 11.9$) regardless of velocity or location along the line of sight. This bias is compounded by a lack of Milky Way halo sightlines located at low Galactic latitudes: existing sightline samples probe relatively short paths through the disk and extraplanar region due to their height above the disk plane \citep[e.g.,][]{Bish2021}. Finally, we note that our \ion{Ca}{2} and \ion{Na}{1} covering fractions are significantly lower than the unity covering fraction measured for \ion{Mg}{2} absorbers having $W_r(\mbox{\ion{Mg}{2}}~2796)>1$ \AA\ detected along the seven GOTOQ sightlines studied by \citet{Kacprzak2013}. These absorbers have larger $W_r$ values than any in our sample and probe a broader range of gas phases that are known to extend well beyond galactic disks into their halos \citep[e.g.,][]{BergeronStasinska1986,Chen2010a,Nielsen2013,Lan2014}. Figure~\ref{fig:logN_CaII_NaI} compares our total column density constraints for \ion{Na}{1} and \ion{Ca}{2} in individual sightlines. We find that, in general, larger column densities of \ion{Na}{1} are associated with larger column densities of \ion{Ca}{2}. The purple filled region in this figure indicates the range in the average ratio $\langle N$(\ion{Na}{1})/$N$(\ion{Ca}{2})$\rangle \approx0.2$--0.9 measured along high-latitude Milky Way halo sightlines by \citet{Murga2015}. This latter work analyzed the coadded spectra of many thousands of extragalactic sources, and the absorption signal they report arises from material at all velocities along the line of sight (including contributions from both the Milky Way's ISM and CGM). Our measurements largely fall within this range, suggesting that the gaseous environments probed by our QSO sample are similar to those arising in the Milky Way. \begin{figure}[h] \includegraphics[width=0.5\textwidth]{fig_logN_CaII_logN_NaI.pdf} \caption{Total system column density of \ion{Ca}{2} vs.\ that of \ion{Na}{1} in individual GOTOQ sightlines. Sightlines along which we do not securely detect one or both of these ions are indicated with open squares placed at the corresponding 3$\sigma$ upper limit. The purple filled region indicates the range in the average ratio $N$(\ion{Na}{1})/$N$(\ion{Ca}{2}) observed toward high Galactic latitude sightlines probing the Milky Way by \citet{Murga2015}. 
\label{fig:logN_CaII_NaI}} \end{figure} \subsection{Absorption Kinematics}\label{subsec:results_kinematics} The best-fit component velocities (relative to $z_{\rm H\alpha}$) of each absorption system with a total $W_r > 3\sigma_{W_r}$ are shown in Figure~\ref{fig:vel_rperp} vs.\ projected distance from the associated galaxy host. The uncertainty interval on each point is set to extend from $\delta v_{\rm 90,left}$ to $\delta v_{\rm 90, right}$ to indicate the velocity space covered by each system. We remind the reader that $z_{\rm H\alpha}$\ need not be equivalent to the average redshift of each host galaxy but rather indicates the redshift of nebular emission along the same sightline as that used to probe the absorbing gas. We therefore interpret absorption with velocities very close ($|\delta v| \lesssim 10~\rm km~s^{-1}$) to $z_{\rm H\alpha}$\ as interstellar material lying in the host galaxy's disk and rotating with its \ion{H}{2} regions. We assume that absorbers with larger velocity offsets (or extents) may be extraplanar in nature and/or part of ongoing bulk outflow from or inflow toward the disk. This rough velocity criterion is motivated by the theoretical considerations laid out in Section~\ref{sec:model}, where it will be further refined (to account for the $M_*$ and $R_{\perp}$ of each system, as well as uncertainties in foreground galaxy orientation). For comparison, the detailed study of the spatially resolved velocity distributions of disk and extraplanar absorbers in the nearby galaxy M33 by \citet{Zheng2017} identified \ion{Si}{4} components having velocities within $\pm20\rm ~km~s^{-1}$ of the local \ion{H}{1} 21 cm emission peak as ``disk'' absorbers, and uncovered numerous extraplanar absorbers at relative velocities $\pm 30\!-\!110\rm ~km~s^{-1}$. Among the 20 \ion{Ca}{2} velocity components included in Figure~\ref{fig:vel_rperp}, only three (15\%) have $|\delta v| > 50~\rm km~s^{-1}$; 11 (55\%) have $|\delta v| > 20~\rm km~s^{-1}$; and 14 (70\%) have $|\delta v| > 10~\rm km~s^{-1}$. The remaining six components have velocity centroids consistent with galactic disk rotation. The $\Delta v_{90}$ values for the \ion{Ca}{2} absorbers, however, lie in the range $50~\mathrm{km~s^{-1}}\le \Delta v_{90} \le 180\rm ~km~s^{-1}$, and thus imply the presence of outflowing/inflowing absorbing material in every case. The \ion{Na}{1} absorbers exhibit component velocity offsets at yet lower rates: among the 17 components shown, only one (6\%) has $|\delta v| > 50~\rm km~s^{-1}$; six (35\%) have $|\delta v| > 20~\rm km~s^{-1}$; and only eight (47\%) have $|\delta v| > 10~\rm km~s^{-1}$. These profiles are all likewise kinematically broad ($50~\mathrm{km~s^{-1}} \le \Delta v_{90}(\mbox{\ion{Na}{1} 5891}) \le 150\rm ~km~s^{-1}$), suggesting that the ongoing fountain motions traced by \ion{Ca}{2} also include a cold component. For reference, Figure~\ref{fig:vel_rperp} shows the radial velocity that would be required to escape a dark matter halo having $M_h = 10^{10}M_{\odot}$, assuming that $R_\perp$ is equal to the total distance ($R$) from the halo center (rather than the projected distance), and that $v_{\rm esc} = \sqrt{2GM_h/R}$. 
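For reference, at a total distance of $R = 10$~kpc this corresponds to $v_{\rm esc} = \sqrt{2GM_h/R} \approx 93\rm ~km~s^{-1}$.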
Our foreground systems have a range in stellar mass $7.4 \lesssim \log M_*/M_{\odot} \lesssim 10.6$, implying halo masses in the range $10.3 \lesssim \log M_h/M_{\odot} \lesssim 12.0$ \citep{Moster2013}; thus, the escape velocity of an $M_h = 10^{10} M_{\odot}$ halo may safely be considered the minimum required for these absorbers to escape from any system in our sample. With the caveat that our spectroscopy is sensitive only to motion along the line of sight (such that our $\delta v$ values are likely somewhat lower than the three-dimensional velocity of the gas), we find that none of the absorbers in our sample have central velocities close to that required for escape from their host systems. Moreover, none of the velocity limits of the profiles (indicated by the [$\delta v_{\rm 90, left}$, $\delta v_{\rm 90, right}$] intervals) extend beyond this escape velocity limit. \begin{figure*}[ht] \includegraphics[width=\textwidth]{fig_vel_Rperp.pdf} \caption{Absorption velocity offsets relative to $z_{\rm H\alpha}$\ for \ion{Ca}{2} (left) and \ion{Na}{1} (middle) for each securely detected absorption system vs.\ projected distance from the galaxy host. Systems fit with single velocity components are shown with solid light blue and orange squares. The primary and secondary components of systems fit with two velocity components are shown with light blue/orange squares outlined with dark blue/red and open squares outlined with dark blue/red, respectively. Error bars show the span of velocities included in the $\Delta v_{90}$ measurement for the \ion{Ca}{2} K and \ion{Na}{1} 5891 lines. The gray curves indicate the radial velocity required to escape an $M_h = 10^{10}M_{\odot}$ halo (a conservative minimum escape velocity given the stellar mass distribution of our sample) as a function of total distance from the halo center. The projected velocity of all detected absorption is well below this threshold. The rightmost panel shows the distribution of best-fit $b_{\rm D}$ values for all components in our \ion{Ca}{2} (cyan) and \ion{Na}{1} (orange) absorbers. The median value of each distribution is shown with a vertical dashed line. \label{fig:vel_rperp}} \end{figure*} The rightmost panel of Figure~\ref{fig:vel_rperp} shows the distribution of the best-fit $b_{\rm D}$ values for our \ion{Ca}{2} and \ion{Na}{1} absorption component sample. The median value of the former is $18.5~\rm km~s^{-1}$, while the median value of $b_{\rm D}$(\ion{Na}{1}) is $10.5~\rm km~s^{-1}$, close to the resolution of our spectrograph. In contrast, the QSO absorption-line study of \ion{Ca}{2} and \ion{Na}{1} absorption in Milky Way disk-halo clouds by \citet{BenBekhti2012} measured median Doppler parameter values of $3.3\rm ~km~s^{-1}$ for \ion{Ca}{2} and $2.1\rm ~km~s^{-1}$ for \ion{Na}{1}, with maximum values of $\approx 10\rm ~km~s^{-1}$ for both ions. This suggests that the absorbing components in our sample are likely composed of multiple individual ``clouds'', and that our $b_{\rm D}$ values are predominantly reflective of turbulent velocity dispersions among these clouds (with a subdominant contribution from thermal broadening). Figure~\ref{fig:vel_CaII_NaI} shows the best-fit $\delta v$ value for each \ion{Ca}{2} component vs.\ the corresponding value of $\delta v$(\ion{Na}{1}) for each system. Systems for which we have fit \ion{Ca}{2} (or \ion{Na}{1}) with a single component and the other ion with two components appear twice, each with the same $y$- (or $x$-) axis value. 
There are three systems for which we fit two velocity components to both ions; in these cases, we match components in order of increasing velocity. We do not require that the $\delta v$ values for \ion{Ca}{2} and \ion{Na}{1} fall within some minimum velocity offset to include them here; instead, we use this figure to assess the degree to which our fitted \ion{Na}{1} and \ion{Ca}{2} components exhibit similar velocities. The component velocities align closely along many of our sightlines: the quantity $|\delta v$(\ion{Ca}{2}) $-$ $\delta v$(\ion{Na}{1})$|$ has a median value of $10.9\rm ~km~s^{-1}$, and exceeds $25\rm ~km~s^{-1}$ for only four of the 17 component pairs considered. However, the Pearson correlation coefficient for these measurements is 0.23 with a $P$-value of $37\%$, indicating a relatively high likelihood that uncorrelated data could yield a similar or more extreme coefficient. If we consider only those systems for which we adopt consistent numbers of components for both \ion{Ca}{2} and \ion{Na}{1}, we measure a Pearson correlation coefficient of 0.35 with a $P$-value of $29\%$. Given that our spectroscopy likely cannot resolve the individual absorbing structures producing the observed line profiles, as well as the significant probability that \ion{Na}{1} occurs in fewer of these structures than does \ion{Ca}{2} \citep[e.g.,][]{BenBekhti2012,Bish2019}, our simple approach to modeling these profiles likely obscures the velocity alignment of these ions. Even with this limitation, our dataset points to a relatively high degree of velocity coherence between the two gas phases we trace. In Milky Way studies, the kinematics of these ions are typically compared via analysis of the $N(\mbox{\ion{Ca}{2}})/N(\mbox{\ion{Na}{1}})$ ratio as a function of velocity relative to the local standard of rest \citep[LSR; e.g.,][]{RoutlySpitzer1952,Sembach1994,BenBekhti2012}. This ratio has average values of $N(\mbox{\ion{Ca}{2}})/N(\mbox{\ion{Na}{1}}) \approx 0.69$ at velocities close to the LSR and increases at larger velocity offsets (likely due to the so-called Routly-Spitzer effect; \citealt{RoutlySpitzer1952, Sembach1994}). While these measurements are not directly analogous to those presented in Figure~\ref{fig:vel_CaII_NaI}, they are similarly suggestive of kinematic coherence of these ions. \begin{figure}[h] \includegraphics[width=\columnwidth]{fig_vel_CaII_NaI.pdf} \caption{Fitted absorption velocity offsets relative to $z_{\rm H\alpha}$\ for securely detected \ion{Ca}{2} systems vs.\ those for securely detected \ion{Na}{1} systems. Error bars show the uncertainties in these fitted values. Absorbers fit with single velocity components in both transitions are shown with solid blue squares. Absorbers in which \ion{Ca}{2} was fit with one component and \ion{Na}{1} was fit with two components are indicated with solid blue and open squares outlined in orange. Absorbers in which \ion{Na}{1} was fit with one component and \ion{Ca}{2} was fit with two components are indicated in a similar fashion, as listed in the legend. Velocities for absorbers in which both ions were fit with two components are shown with open red squares. 
\label{fig:vel_CaII_NaI}} \end{figure} \subsection{Relation between $W_r$ and Dust Reddening} \ion{Na}{1} and \ion{Ca}{2} absorption is known to be correlated with dust across a variety of astrophysical environments, including in the Milky Way ISM and halo \citep[e.g.,][]{Sembach1993,MunariZwitter1997,Poznanski2012,Murga2015} and in external galaxies \citep[e.g.,][]{Wild2005,ChenTremonti2010,Phillips2013,Baron2016,Rupke2021}. However, current evidence suggests that the strength and form of the relationship between $E(B-V)$ and $W_r$(\ion{Na}{1}) in particular depend on the environment probed and/or on the approach to measuring these quantities \citep[e.g.,][]{Rupke2021}. Here we investigate the relationship between dust reddening and $W_r$(\ion{Na}{1}) and $W_r$(\ion{Ca}{2}) in our GOTOQ sample, and compare it to that derived for the Milky Way. We adopt the estimate of $E(B-V)_{(g-i)}$ reported by \citet{Straka2015} for the QSOs in our sample as a proxy for the dust column density associated with each foreground host. These estimates are based on the observed-frame $(g-i)$ color excess of each QSO relative to the median $(g-i)$ for QSOs at the same redshift in the fourth edition of the SDSS Quasar Catalog \citep{Schneider2007}. In a study of the relation between QSO colors and the presence and strength of foreground \ion{Mg}{2} absorbers in the SDSS QSO sample, \citet{York2006} found that the QSO color excess $\Delta (g-i)$ is tightly correlated with the dust reddening $E(B-V)$ associated with foreground absorbers and measured from composite QSO spectra shifted into the absorber rest frame. These authors adopted an SMC reddening law \citep{Prevot1984} to calculate the expected relation $E(B-V)_{(g-i)} = \Delta (g-i)(1+z_{\rm abs})^{-1.2}/1.506$, and found that the average $E(B-V)_{(g-i)}$ in samples of $\gtrsim 100$ objects corresponds closely to the $E(B-V)$ of their composite spectra: $\langle E(B-V)_{(g-i)}\rangle = 0.98 \times E(B-V) - 0.002$. However, \citet{York2006} also demonstrated that $\Delta (g-i)$ values for individual quasars with no detected foreground absorbers exhibit significant scatter with FWHM $\approx 0.27$\,mag\footnote{This quantity is estimated by fitting a Gaussian model to a digitized version of the data in Figure 3 of \citet{York2006}.} (with a mean value $\Delta (g-i) = -0.013$). This implies an intrinsic dispersion $\sigma_{\rm intr} (\Delta (g-i)) = {\rm FWHM}/2.355 \approx 0.12$. To estimate the total uncertainty in each $E(B-V)_{(g-i)}$ value, we consider both this intrinsic scatter and uncertainty due to measurement error. \citet{Straka2015} stated that the maximum error in their measurements of apparent magnitudes for both the QSOs and foreground galaxies in their GOTOQ sample is $0.05$ mag. We therefore assume a measurement error of $\sigma_{\rm meas}(\Delta (g-i)) = 0.07$ (i.e., $0.05$ mag errors in each of the two bands, added in quadrature). We multiply both $\sigma_{\rm meas}$ and $\sigma_{\rm intr}$ by the quantity $(1+z_{\rm abs})^{-1.2}/1.506$ and add the results in quadrature to compute a total $\sigma_{\rm tot}(E(B-V)_{(g-i)})$ for each GOTOQ sightline. Figure~\ref{fig:ew_EBV} shows $E(B-V)_{(g-i)}$ estimates for our sample with error bars indicating $\sigma_{\rm tot}(E(B-V)_{(g-i)})$ vs.\ the total $W_r$ of \ion{Ca}{2} K and \ion{Na}{1} 5891 for each system. Light blue and orange points indicate sightlines lacking any intervening absorbers (other than the system associated with $z_{\rm H\alpha}$). 
Red points indicate sightlines along which between one and nine unassociated intervening absorbers were detected in their SDSS spectra by \citet{Straka2015}. These seven QSOs may be subject to some additional reddening from these intervening absorbers, although \citet{Straka2015} found that dust in the GOTOQs themselves is likely the dominant source of attenuation for these systems. We first note that no relationship between $E(B-V)_{(g-i)}$ and either $W_r$(\ion{Ca}{2} K) or $W_r$(\ion{Na}{1} 5891) is evident in our GOTOQ sample. The distributions of $W_r$ values in subsamples having $E(B-V)_{(g-i)} < 0.05$ and $E(B-V)_{(g-i)} > 0.05$ have medians of $W_r$(\ion{Ca}{2} K) $=0.28$ \AA\ and $0.38$ \AA, respectively, with dispersions of 0.18--0.31 \AA, and medians of $W_r$(\ion{Na}{1} 5891) $=0.18$\,\AA\ and $0.22$ \AA, with dispersions of 0.29--0.31 \AA\ (adopting the measured values of $W_r$ for all sightlines, rather than upper limits for nondetections). We are therefore not sensitive to any significant shift in these distributions between low and high reddening values. \begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{fig_ewCaII_EBVgi.pdf} \includegraphics[width=0.5\textwidth]{fig_ewNaI_EBVgi.pdf} \caption{ Total system $W_r$(\ion{Ca}{2} K) (left) and $W_r$(\ion{Na}{1} 5891) (right) vs.\ the dust reddening measured along the QSO sightline, $E(B-V)_{(g-i)}$. Upper limits, indicated with open squares, are shown in cases in which $W_r < 3\sigma_{W_r}$, and represent 3$\sigma$ limits. Sightlines shown in light blue and orange have no intervening systems that are unassociated with the known foreground galaxy. Sightlines indicated in red exhibit one or more unassociated intervening systems in their SDSS spectra. The solid blue curves show the best-fit relations between the $W_r$ of these ions due to the Milky Way's ISM/halo and dust reddening measured by \citet{Murga2015}. The dashed blue curves represent the $\pm1\sigma$ uncertainties in these fits. The purple curves show the best-fit relation (and the $\pm 1\sigma$ uncertainty in the relation) between $W_r$(\ion{Na}{1} 5891) and dust reddening in the Milky Way measured by \citet{Poznanski2012}. The small purple circles/triangles show $W_r(\mbox{\ion{Na}{1} 5891})$ values/3$\sigma$ upper limits measured from high-resolution QSO spectra by \citet{Poznanski2012}, plotted vs.\ the reddening toward that coordinate in the \citet{Planck2016} dust map. Data points outlined in dark blue are offset by $>3\sigma$ from the closest point on the best-fit \citet[][left]{Murga2015} and \citet[][right]{Poznanski2012} relations. \label{fig:ew_EBV}} \end{figure*} We also assess the degree to which our dataset is consistent with the average relationships between dust reddening and \ion{Ca}{2}/\ion{Na}{1} absorption strength in the Milky Way. These relationships have been investigated both in works using high-resolution spectroscopy of samples of $<100$ QSOs or early-type stars \citep[e.g.,][]{Richmond1994,MunariZwitter1997}, and more recently in studies taking advantage of the ${>}100,000$ QSO spectra and ${>}800,000$ galaxy spectra obtained over the course of the SDSS \citep{Abazajian2009}. These latter works \citep{Poznanski2012,Murga2015} grouped these spectra into bins based on the dust reddening of each source implied by the \citet{SFD98} map of the dust distribution across the sky. 
They then constructed the median stack of the spectra in each bin and measured the $W_r$ of \ion{Ca}{2} H \& K (in the case of \citealt{Murga2015}) and the $W_r$ for both \ion{Na}{1} doublet transitions in each stack. The best-fit relations between $E(B-V)$ and $W_r$ of the relevant transition reported in these studies are included as solid curves in Figure~\ref{fig:ew_EBV}. Dashed curves show the same relations with the best-fit parameters offset by their $\pm1\sigma$ uncertainties. Also included in the right-hand panel of Figure~\ref{fig:ew_EBV} are $W_r$(\ion{Na}{1}) measurements reported by \citet{Poznanski2012} for a small sample of high-resolution QSO spectra. We estimate the reddening of these sources by querying the \citet{Planck2016} dust map available with the \texttt{dustmaps} Python package \citep{Green2018}. Most of the measurements for our GOTOQ sample are formally consistent with these relationships, given the large uncertainties in our $E(B-V)_{(g-i)}$ estimates. However, their distribution appears to exhibit significant scatter around these relationships, and indeed more dispersion than the \citet{Poznanski2012} sample of individual $W_r$(\ion{Na}{1}) measurements. To quantitatively identify outliers in our sample, we first determine the closest point ($x_j$, $y_j$) on each best-fit relation to each data point (i.e., such that the Euclidean distance $d_j = \sqrt{(E(B-V)_{(g-i), j} - x_j)^2 + (W_{r,j} - y_j)^2}$ is minimized). For sightlines that did not yield significant detections of a given ion, we use the formally measured value of $W_r$ (rather than its upper limit) to compute $d_j$. We then determine the significance of the distance $d_j$ by computing \[ \mathcal{N}(\sigma_{d, j}) = \sqrt{\left (\frac{E(B-V)_{(g-i), j} - x_j}{\sigma_{\rm tot}(E(B-V)_{(g-i), j})} \right )^2 + \left (\frac{W_{r,j} - y_j}{\sigma_{W_{r,j}}} \right )^2}. \] The seven systems for which $\mathcal{N}(\sigma_{d, j}) > 3$ relative to the best-fit \citet{Murga2015} relation for \ion{Ca}{2} are outlined in dark blue in Figure~\ref{fig:ew_EBV} (left). All of these systems lie at $W_r$ values $\approx 0.2$--0.5 \AA\ higher than implied by the QSO's dust reddening level. We outline in dark blue the five \ion{Na}{1} systems for which $\mathcal{N}(\sigma_{d, j}) > 3$ relative to the best-fit \citet{Poznanski2012} relation in the right panel of Figure~\ref{fig:ew_EBV}. Again, most of these systems have higher $W_r$(\ion{Na}{1} 5891) values than would be predicted by \citet{Poznanski2012}. The overall high incidence of these outliers (comprising 33\% and 24\% of our \ion{Ca}{2} and \ion{Na}{1} samples, respectively) implies that these best-fit relations may underpredict the amount of low-ion metal absorption associated with low values of $E(B-V)$. If we apply a $14\%$ recalibration to the $E(B-V)$ values used in \citet{Poznanski2012} and \citet{Murga2015} as recommended by \citet{SchlaflyFinkbeiner2011}, the number of \ion{Na}{1} outliers remains the same, and the number of \ion{Ca}{2} outliers is reduced to six (or 29\% of our sample). Studies of dust across a range of environments, from the SMC \citep{Welty2006,Welty2012} to the ISM of QSO host galaxies \citep{Baron2016}, have likewise indicated that $E(B-V)$ values of $\gtrsim 0.05$ mag are associated with higher $W_r$(\ion{Na}{1}) than implied by \citet{Poznanski2012}. 
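For concreteness, a minimal sketch of the uncertainty propagation and outlier metric described above follows. The function names, grid limits, and the assumption that each best-fit literature relation is available as a callable returning $W_r$ for a given $E(B-V)$ are ours and purely illustrative.
\begin{verbatim}
import numpy as np

def sigma_tot_EBV(z_abs, sig_meas=0.07, sig_intr=0.12):
    """Total uncertainty on E(B-V)_(g-i), propagating the
    measurement and intrinsic scatter in Delta(g-i)."""
    scale = (1 + z_abs) ** -1.2 / 1.506
    return np.hypot(sig_meas * scale, sig_intr * scale)

def outlier_significance(ebv, wr, sig_ebv, sig_wr, relation):
    """N(sigma_d): significance of the Euclidean distance to the
    closest point on a best-fit W_r(E(B-V)) relation, located by
    brute force on a fine grid."""
    x = np.linspace(0.0, 0.5, 2001)          # E(B-V) grid
    y = relation(x)
    j = np.argmin((ebv - x) ** 2 + (wr - y) ** 2)  # closest (x_j, y_j)
    return np.hypot((ebv - x[j]) / sig_ebv, (wr - y[j]) / sig_wr)
\end{verbatim}
A sightline is flagged as an outlier when the returned significance exceeds 3.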
As the \citet{Poznanski2012} relation is commonly invoked to estimate the reddening of both type I and II supernovae in combination with measurements of $W_r$(\ion{Na}{1}) in spectroscopy of these objects \citep[e.g.,][]{SmithAndrews2020,Bruch2021,Dastidar2021}, it is important to appreciate potential biases that may arise from this calibration \citep[e.g.,][]{Phillips2013}. Moreover, given the wide range in stellar masses of our GOTOQ host galaxies, we suggest that our sample may better represent the varied dust and ISM properties of supernova host galaxies than samples focused purely on the Milky Way, SMC, or QSO host systems. \subsection{Relations between Absorption Strength and Host Galaxy Properties}\label{subsec:ew_dv_Mstar_SFR} Here we investigate the relationships between the $W_r$ of \ion{Ca}{2} and \ion{Na}{1} absorption and the stellar masses and local star formation activity of the associated foreground host galaxies. Figure~\ref{fig:ew_SFR_Mstar} shows our total system $W_r$(\ion{Ca}{2} K) and $W_r$(\ion{Na}{1} 5891) measurements vs.\ $\log M_*/M_{\odot}$ (top row) and $\rm SFR_{\rm local}$ (bottom row). Our $W_r$(\ion{Ca}{2} K) values appear to exhibit correlations with both $\rm SFR_{\rm local}$ and $M_*$. The Pearson correlation coefficient for the relationship between our directly measured $W_r$(\ion{Ca}{2} K) values and foreground galaxy local SFR is $\rho_{\rm P} = 0.61$ with a $P$-value $=0.009$, indicative of a moderately strong positive correlation and a very low probability that these variables are uncorrelated. If we exclude the system with the highest-$\rm SFR_{local}$ value (of $60.3~M_{\odot}~\rm yr^{-1}$) from this analysis, we find $\rho_{\rm P} = 0.50$ with a $P$-value $=0.05$, confirming that this correlation is not driven solely by a single extreme system. For the relationship between $W_r$(\ion{Ca}{2} K) and $\log M_*/M_{\odot}$, we find $\rho_{\rm P} = 0.35$ with $P=0.12$, which does not rule out the null hypothesis that these variables are uncorrelated. Our $W_r$(\ion{Na}{1} 5891) measurements, shown in the right panels of Figure~\ref{fig:ew_SFR_Mstar}, yield correlation coefficients of $\rho_{\rm P} = 0.35$ and $0.27$ when considered vs.\ $\rm SFR_{local}$ and $\log M_*/M_{\odot}$, respectively, with associated $P$-values in the range $0.17 \le P \le 0.24$. These values likewise do not rule out a lack of correlation between these quantities. We also assess the covering fraction of strong \ion{Ca}{2} and \ion{Na}{1} absorbers as a function of $\rm SFR_{local}$ and $M_*$. Here, we consider strong systems to have $W_r > 0.2$\,\AA\ and divide our sample into bins at the median values $\log M_*/M_{\odot} = 9.3$ and ${\rm SFR_{local}} = 0.2~M_{\odot}~\rm yr^{-1}$. We calculate the incidence and corresponding uncertainty intervals of strong absorbers in each bin as described in Section~\ref{subsec:results_cf} and show the results with filled boxes in Figure~\ref{fig:ew_SFR_Mstar}. Our $f_{\rm C}$ estimates do not differ significantly at low vs.\ high $\rm SFR_{\rm local}$ or stellar mass. Instead, we find that even systems having $\log M_*/M_{\odot} < 9.3$ have $f_{\rm C}(W_r($\ion{Ca}{2} K$)>0.2~\mathrm{\AA}) = 0.63^{+0.14}_{-0.18}$ and $f_{\rm C}(W_r($\ion{Na}{1} 5891$)>0.2~\mathrm{\AA}) = 0.40^{+0.16}_{-0.14}$. 
We measure similar covering fractions for systems with $\mathrm{SFR_{local}}<0.2~M_{\odot}~\rm yr^{-1}$: $f_{\rm C}(W_r($\ion{Ca}{2} K$)>0.2~\mathrm{\AA}) = 0.57^{+0.17}_{-0.18}$ and $f_{\rm C}(W_r($\ion{Na}{1} 5891$)>0.2~\mathrm{\AA}) = 0.33^{+0.17}_{-0.13}$. These fractions suggest that both transitions may be utilized to trace ISM kinematics in down-the-barrel spectroscopy across the galaxy population, including in systems with $\log M_{*}/M_{\odot} \lesssim 9.0$ \citep[e.g.,][]{SchwartzMartin2004}. Finally, we investigate the relationship between our $\delta v$ measurements for individual absorption components (presented in Section~\ref{subsec:results_kinematics}) and both $\log M_*/M_{\odot}$ and $\rm SFR_{local}$. We show the former in Figure~\ref{fig:vel_Mstar}. While we do not uncover notable trends in either of these relations, this figure highlights the relatively high velocity offsets ($\delta v \sim 33$--$80\rm ~km~s^{-1}$) of all primary and secondary components detected close to the two lowest-$M_*$ foreground systems in our sample (having $\log M_*/M_{\odot} < 8$). Among the 27 single/primary component velocities shown, only one other system has a primary component velocity offset $>33\rm ~km~s^{-1}$. Because such large $\delta v$ values are unusual at $\log M_*/M_{\odot} > 8.5$, and given the high equivalent widths of the absorption associated with one of these sightlines (GOTOQJ1238+6448 has $W_r$(\ion{Ca}{2} K) $= 0.86 \pm 0.04$ \AA\ and $W_r$(\ion{Na}{1} 5891) $= 0.73\pm 0.03$ \AA), we speculate that these absorbers may in fact be associated with other nearby systems that did not give rise to detectable line emission in the SDSS or ESI spectra. Alternatively, this absorption may be tracing either outflowing material or ongoing accretion. Regardless of whether we exclude these very low-$M_*$ systems from our sample, we measure a statistically significant correlation between the local star formation activity in our foreground galaxies and $W_r$(\ion{Ca}{2} K) (i.e., the subsample having $\log M_{*}/M_{\odot} > 8$ yields $\rho_{\rm P} = 0.72$ and $P=0.002$). This finding is reminiscent of the positive correlation between H$\alpha$ flux and $W_r$(\ion{Ca}{2} K) identified among the GOTOQ parent sample by \citet{Straka2015} and is suggestive of a physical link between star formation activity and the strength/velocity spread of \ion{Ca}{2} absorption in the ISM and halo. We discuss the implications of this finding in Section~\ref{subsec:discussion_CaIISFR}. \begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{fig_ewCaII_Mstar.pdf} \includegraphics[width=0.5\textwidth]{fig_ewNaI_Mstar.pdf} \includegraphics[width=0.5\textwidth]{fig_ewCaII_SFR.pdf} \includegraphics[width=0.5\textwidth]{fig_ewNaI_SFR.pdf} \caption{ Total system $W_r$(\ion{Ca}{2} K) (left) and $W_r$(\ion{Na}{1} 5891) (right) vs.\ $\log M_*/M_{\odot}$ (top row) and $\rm SFR_{local}$ (bottom row) measured for the foreground host system. Large colored points indicate constraints from our ESI spectroscopy. Upper limits, indicated with open squares, are shown in cases in which $W_r < 3\sigma_{W_r}$ and represent 3$\sigma$ limits. Gray points show measurements reported in \citet{Straka2015} for the parent GOTOQ sample. The filled boxes indicate the $\pm34$th percentile Wilson score confidence intervals, with respect to the right axes, for the covering fraction of absorbers having $W_r > 0.2$ \AA\ in our ESI sample. 
\label{fig:ew_SFR_Mstar}} \end{figure*} \begin{figure}[h] \includegraphics[width=\columnwidth]{fig_vel_both_Mstar.pdf} \caption{Fitted component velocity offsets relative to $z_{\rm H\alpha}$\ for \ion{Ca}{2} (light blue) and \ion{Na}{1} (orange) absorbers. Systems fit with single velocity components are shown with solid light blue and orange squares. The primary and secondary components of systems fit with two components are shown with filled and open squares outlined in a complementary color. Error bars show the span of velocities included in the $\Delta v_{90}$ measurement for the \ion{Ca}{2} K and \ion{Na}{1} 5891 lines. Symbols have been offset horizontally by $\pm 0.1$ for clarity. \label{fig:vel_Mstar}} \end{figure} \section{A Simple Model of the ISM Contribution to GOTOQ \ion{Ca}{2} and \ion{Na}{1} Column Densities and Kinematics}\label{sec:model} Our QSO sightline sample is unusual in the context of CGM studies for its close impact parameters (over the range $R_{\perp} = 1$--13 kpc). A minority of these sightlines lie within the estimated half-light radius of the foreground host, and, as a consequence of our selection technique, all of our sample sightlines lie within the extent of emission from \ion{H}{2} regions and/or an ionized gas layer. Moreover, it is well known that the \ion{H}{1} component of disk galaxies is greater in radial extent than that of the stellar or ionized gas component (e.g., the ratio $R_{\rm HI}/R_{25}\gtrsim1.5$--2; \citealt{BroeilsRhee1997, Swaters2002,Begum2008,Wang2013,Wang2016}). Each of our GOTOQ sightlines is therefore very likely to be probing the warm and/or cold neutral medium within this disk, along with any outflowing or infalling material along the line of sight. Here we consider the extent to which (1) the column densities we measure are consistent with those of a neutral gas disk having a \ion{Ca}{2} and \ion{Na}{1} distribution similar to that observed in the Milky Way; and (2) the kinematics of our absorber sample are consistent with those predicted for the ISM of galaxies with similar stellar masses. \begin{figure*}[ht] \includegraphics[width=0.5\textwidth]{fig_logN_CaII_Rperp_MWmodel.pdf} \includegraphics[width=0.5\textwidth]{fig_logN_NaI_Rperp_MWmodel.pdf} \includegraphics[width=0.5\textwidth]{fig_vel_CaII_Mstar_MW_model.pdf} \includegraphics[width=0.5\textwidth]{fig_vel_NaI_Mstar_MW_model.pdf} \caption{{\it Top Row:} Total system column density of \ion{Ca}{2} (left) and \ion{Na}{1} (right) vs.\ projected distance from the associated GOTOQs. Symbols correspond to those used in Figure~\ref{fig:logN_rperp}. The filled regions indicate the range in column densities predicted for a Milky Way-like ISM observed from an external viewpoint using the simple model described in Section~\ref{subsec:modeling_columndensities}, and assuming three different disk inclinations ($i=0^{\circ}, 50^{\circ}$, and $75^{\circ}$, shown in red, purple, and turquoise, respectively). The horizontal dotted lines show the value of the central perpendicular column density of the Milky Way disk model, adjusted by a factor $1 / \cos i$. {\it Bottom Row:} Fitted component velocity offsets relative to $z_{\rm H\alpha}$\ for \ion{Ca}{2} (left) and \ion{Na}{1} (right) absorbers. Systems fit with single velocity components are shown with solid light blue and orange squares. The primary and secondary components of systems fit with two components are shown with filled and open squares outlined in a complementary color. 
Error bars show the span of velocities included in the $\Delta v_{90}$ measurement for the \ion{Ca}{2} K and \ion{Na}{1} 5891 lines. Colored boxes indicate the maximum projected line-of-sight velocity width predicted using simple tilted-ring models with extraplanar layers placed at $z = \pm 0.82$ kpc (for \ion{Ca}{2}) and $\pm0.43$~kpc (for \ion{Na}{1}). The maximum rotation velocity ($V_{\infty}$) of each model is set by the stellar mass Tully-Fisher relation, and the $R_{\rm V}$ parameter is varied to model both steeply rising (purple boxes) and gradually increasing (turquoise boxes) rotation curves. \label{fig:logN_rperp_MWmodel}} \end{figure*} \subsection{Column Densities}\label{subsec:modeling_columndensities} It is common in the literature to describe the interstellar density distribution of a given ion as an exponential function that decreases with height $\vert z\vert$ above the Milky Way disk plane: $n(z) = n_0 \mathrm{e}^{-\vert z \vert/h}$ \citep[e.g.,][]{Jenkins1978,Bohlin1978,EdgarSavage1989,Sembach1994,Savage2003,SavageWakker2009}. The scale height, $h$, and the mid-plane density, $n_0$, may then be constrained by fitting this function to observations of ionic column densities toward samples of Milky Way disk and halo stars (and/or quasars). \citet{Sembach1993} and \citet{Sembach1994} carried out such a study focusing on \ion{Ca}{2} and \ion{Na}{1}, finding $n_0(\mbox{\ion{Ca}{2}}) = 6.85_{-0.41}^{+0.76} \times 10^{-10}~\rm cm^{-3}$, $h(\mbox{\ion{Ca}{2}}) = 0.82_{-0.09}^{+0.07}$\,kpc, $n_0(\mbox{\ion{Na}{1}}) = 1.27_{-0.18}^{+0.20} \times 10^{-9}~\rm cm^{-3}$, and $h(\mbox{\ion{Na}{1}}) = 0.43_{-0.08}^{+0.12}$\,kpc. We adopt these values to build our ISM model. We further assume that the disk density declines exponentially with radius, with the scale radius measured from 21 cm mapping of the Milky Way \ion{H}{1} distribution ($R_{\rm S} = 3.75$ kpc; \citealt{KalberlaKerp2009}). We may therefore write our adopted disk density distribution as \begin{equation}\label{eq:disk_density} n(r, z) = n_0 \exp \left [ -\frac{r}{R_{\rm S}} - \frac{\vert z \vert}{h}\right ]. \end{equation} Given this density distribution, the total column density observed along a quasar sightline passing through the disk oriented at an inclination $i$ at a location ($x$, $y$) may be calculated via the integral $N(x, y) = \int n(r, z) ds$, with $ds$ representing the differential length element along the line of sight, and with $r$ and $z$ being dependent on $s$ \citep[e.g.,][]{ProchaskaWolfe1997}. To compute this integral, we adopt a simplified version of the tilted-ring model framework that is commonly used to model \ion{H}{1} kinematics and surface brightnesses in disk galaxies \citep[e.g.,][]{Rogstad1974,Bosma1978,deBlok2008,Oh2011,Kamphuis2015,Oh2018}. In the standard, two-dimensional approach, a galaxy's disk is modeled as a series of concentric ellipses. Each ellipse has an independent central coordinate ($x_{\rm C}$, $y_{\rm C}$), position angle ($\phi$), and inclination ($i$). Here, we set these parameters to the same value for every ellipse $j$. To create a three-dimensional model, we replicate this initial set of rings, assigning each set $k$ a thickness $\Delta z = 0.05$ kpc and height $z$ such that the model extends to $z = \pm 10$ kpc. We assign each ring an ionic volume density according to Equation~\ref{eq:disk_density} and compute the corresponding column density $N_{j, k} = n_{j, k} \Delta z / \cos i$. 
We then calculate the ($x$, $y$) coordinates of each ring, interpolating the values of $N_{j, k}$ onto a fixed Cartesian grid. Finally, we sum these column densities over all layers to compute $N(x,y)$. We generate three such models at inclinations $i = 0^{\circ}$, $50^{\circ}$, and $75^{\circ}$. We then compute the range in $N(x,y)$ values predicted at a given $R_{\perp} = \sqrt{x^2 + y^2}$ for $0~\mathrm{kpc} < R_{\perp} < 15~\mathrm{kpc}$. We show the resulting column density distributions in the upper panels of Figure~\ref{fig:logN_rperp_MWmodel}, along with the total system column density measurements for our sample (described in Section~\ref{subsec:results_cf}). For reference, we also show the value $N = 2 n_0 h / \cos i$ with horizontal dotted lines. For sightlines in which \ion{Ca}{2} is securely detected, our measurements are typically well above the maximum column densities predicted for a moderately inclined disk (with $i=50^{\circ}$). Even in the extreme case of a disk inclined to $75^{\circ}$, all six of our sightlines at $R_{\perp} > 6$ kpc yield \ion{Ca}{2} measurements significantly above the projected range of column densities at similarly large impact parameters. Our \ion{Na}{1} column densities overall exhibit somewhat greater consistency with our model predictions over the full range of $R_{\perp}$ of our sample; nevertheless, several of our measurements lie well above those predicted for $i = 50^{\circ}$. Given the simplicity of this modeling, as well as our lack of knowledge of the orientation of our foreground galaxy sample, we cannot use this approach to estimate in detail the contribution of an ISM component to the column densities measured along each sightline. Indeed, the numerous upper limits we place on $N$(\ion{Na}{1}) within $R_{\perp} < 5$ kpc suggest that our simple model likely overpredicts the \ion{Na}{1} column density in some of our foreground systems and/or does not properly capture the patchiness of \ion{Na}{1} absorption in the ISM. Moreover, we have assumed here that the volume densities and scale heights of these ions do not vary with overall galaxy stellar mass or SFR. If, for example, volume density is correlated with mass (as obliquely suggested by the findings presented in Section~\ref{subsec:ew_dv_Mstar_SFR}), our models would tend to overpredict the ISM contribution to the observed column densities, given the stellar mass distribution of our sample. If the volume density of these ions is instead strongly correlated with global SFR, our modeling may underpredict their ISM column densities in light of the analysis presented in Appendix~\ref{sec:appendix_SFRfrac}. However, we emphasize that our model predictions for moderately inclined disks lie well below ($>0.9$ dex) every measured $N$(\ion{Ca}{2}) value in our sample at $R_{\perp} > 6$ kpc. We furthermore consider the former scenario to be more likely, given that our empirical constraints on $M_*$ are significantly more secure than those on the global SFRs of our sample. In view of this likelihood, we interpret the failure of our ISM model to reproduce the large \ion{Ca}{2} column densities (as well as the largest \ion{Na}{1} column densities) we observe as an indication that there is a significant contribution to these columns from material that is not interstellar. These systems must instead arise at least in part from an extraplanar, or circumgalactic, component. 
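As a simple cross-check on the disk model invoked above, the sketch below evaluates the column density of Equation~\ref{eq:disk_density} by direct numerical integration along an inclined sightline, rather than via the layered ring construction. It adopts the \ion{Ca}{2} parameters of \citet{Sembach1994} and \citet{KalberlaKerp2009} quoted earlier; the geometry convention (sky coordinate $x$ along the major axis) and function names are our own choices for this illustration.

\begin{verbatim}
import numpy as np

KPC_CM = 3.086e21                  # cm per kpc
N0_CAII, H_CAII = 6.85e-10, 0.82   # n0 [cm^-3], scale height h [kpc]
R_S = 3.75                         # radial scale length [kpc]

def column_density(x, y, incl_deg, n0, h, r_s=R_S,
                   s_max=30.0, n_steps=4001):
    # Integrate n(r, z) = n0 exp(-r/r_s - |z|/h) along the sightline
    # hitting the sky plane at (x, y) [kpc], for a disk inclined by
    # incl_deg; x lies along the major axis.
    i = np.radians(incl_deg)
    s = np.linspace(-s_max, s_max, n_steps)  # LOS path length [kpc]
    z = s * np.cos(i) + y * np.sin(i)        # height above the plane
    y_g = -s * np.sin(i) + y * np.cos(i)     # in-plane coordinate
    r = np.hypot(x, y_g)                     # galactocentric radius
    n = n0 * np.exp(-r / r_s - np.abs(z) / h)
    return np.trapz(n, s) * KPC_CM           # N [cm^-2]

# A face-on sightline through the disk center recovers N = 2 n0 h,
# i.e. log N(Ca II) ~ 12.5:
print(np.log10(column_density(0.0, 0.0, 0.0, N0_CAII, H_CAII)))
\end{verbatim}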
Such absorbers are known to arise in the Milky Way in association with intermediate- and high-velocity \ion{H}{1} clouds, which are understood to lie at distances $\sim 0.5$--20 kpc from the disk \citep{KuntzDanly1996,Wakker2001,Thom2006,Wakker2007,Wakker2008}. We infer that the phenomena giving rise to these extraplanar or halo clouds are active across our foreground galaxy sample. \subsection{Kinematics}\label{subsec:modeling_kinematics} We may also use this framework to predict the distribution of line-of-sight velocities exhibited by the neutral gas disk component of our foreground galaxy sample. We again begin with a single set of tilted rings, assigning each ring a rotation velocity \begin{equation}\label{eq:vrot_r} V_{\rm rot} (r) = V_{\infty} \tanh (r / R_{\rm V}), \end{equation} with $V_{\infty}$ equal to the maximum rotation velocity, and $R_{\rm V}$ setting the steepness of the rotation curve in the central regions of the disk. As described in \citet{Rogstad1974} and \citet{Begeman1989}, the line-of-sight component of this velocity is $V_{\rm LOS} (x, y) = V_{\rm sys} + V_{\rm rot}(r) \sin i \cos \theta$, with $\theta$ representing the azimuthal angle counterclockwise from the major axis in the disk plane, and $V_{\rm sys}$ representing the recession velocity of the system. We then generate two additional, equivalent sets of tilted rings, placing them at heights $z = \pm h$ above and below the first set. This placement ensures that the map of line-of-sight velocity differences ($\Delta V_{\rm LOS}$) between these two layers is representative of the maximum velocity offsets that can be produced by a thick galactic disk exhibiting solid-body rotation. To generate a kinematic model for each foreground galaxy in our sample, we use the stellar mass Tully-Fisher relation derived by \citet{Bloom2017} from spatially resolved H$\alpha$ kinematics of nearby galaxies over the stellar mass range $8.0 < \log M_*/M_{\odot} < 11.5$ in the SAMI Galaxy Survey \citep{Allen2015}: \begin{equation}\label{eq.TFR} \log (V_{\rm rot,TF}/\mathrm{km~s^{-1}}) = 0.31\log (M_*/M_{\odot}) - 0.93.\footnote{This relation is derived from stellar masses calculated by \citet{Taylor2011} for the GAMA Survey. This work adopted the same cosmology and the same stellar population synthesis models as used in \citet{Straka2015} for stellar mass estimation.} \end{equation} Here, $V_{\rm rot,TF}$ is the velocity measured at $2.2R_{\rm eff}$. This relation was determined from a fit to kinematic data for galaxies with low values of a quantitative asymmetry indicator, and thus may be considered an upper limit on the rotation velocity for lower-$M_*$, dispersion-dominated systems \citep{Bloom2017}. We calculate the $V_{\rm rot,TF}$ implied by this relation for each foreground galaxy, and then set $V_{\infty} = V_{\rm rot,TF}$. Because the $R_{\rm V}$ parameter in Equation~\ref{eq:vrot_r} is unconstrained for our sample, we generate two models for each system, one with $R_{\rm V} = 2$~kpc (creating a steeply rising rotation curve) and one with $R_{\rm V} = 10$ kpc (creating a gradually increasing rotation curve). We compute the distribution of $\Delta V_{\rm LOS}$ for both of these models, assuming $i=75^{\circ}$. Finally, we determine the maximum value of $\Delta V_{\rm LOS}$ predicted at the $R_{\perp}$ of the corresponding GOTOQ (max[$\Delta V_{\rm LOS}$]). We have indicated these values with colored vertical bars in the bottom panels of Figure~\ref{fig:logN_rperp_MWmodel}. 
Each bar is centered at $\delta v = 0\rm ~km~s^{-1}$ and extends to $\pm {\rm max}[\Delta V_{\rm LOS}]/2$. Note that these bars do not indicate the absolute velocity offset of the material in the layers from $V_{\rm sys}$ (which would extend to many tens of kilometers per second). Instead, because our $z_{\rm H\alpha}$\ measurements assess $V_{\rm LOS}(x, y)$ (rather than $V_{\rm sys}$), we are concerned only with the maximum potential velocity offset of extraplanar layers from the former quantity. As is evident from Figure~\ref{fig:logN_rperp_MWmodel}, the magnitude of max[$\Delta V_{\rm LOS}$] increases with increasing $M_*$ and is larger for \ion{Ca}{2} relative to \ion{Na}{1} due to its larger scale height. This quantity is also to some extent dependent on $R_{\perp}$, as sightlines that probe locations at which the rotation velocity is increasing steeply with radius are predicted to trace overall larger values of $\Delta V_{\rm LOS}$ (although we find that our predictions are not significantly affected by our choice of $R_{\rm V}$). However, regardless of the mass or $R_{\perp}$ of the system, we observe both \ion{Ca}{2} and \ion{Na}{1} absorption over a broader range of velocities than is predicted in this simple framework along nearly every sightline in our sample. The eight sightlines fit with a single \ion{Ca}{2} component all exhibit $\Delta v_{90}$ values (i.e., the span of the error bars in the bottom panels of Figure~\ref{fig:logN_rperp_MWmodel}) larger than max[$\Delta V_{\rm LOS}$] by $\ge 30\rm ~km~s^{-1}$. Similarly, the nine sightlines fit with a single \ion{Na}{1} component exhibit $\Delta v_{90}$(\ion{Na}{1}) values greater than the corresponding max[$\Delta V_{\rm LOS}$] by $\ge 32\rm ~km~s^{-1}$. The vast majority of sightlines fit with two \ion{Ca}{2} components or two \ion{Na}{1} components exhibit component velocity differences ($|\delta v_1 - \delta v_2|$) greater than the predicted max[$\Delta V_{\rm LOS}$] by $\ge 20\rm ~km~s^{-1}$. The foregoing discussion does not account for the artificial broadening of our observed line profiles due to the finite resolution of our spectrograph (with FWHM $\approx 37.3\rm ~km~s^{-1}$). \citet{Prochaska2008} performed a detailed comparison of $\Delta v_{90}$ values measured from both ESI and Keck/HIRES spectra of the same QSO sightlines probing foreground damped Ly$\alpha$ systems, finding that $\Delta v_{90}$ measurements obtained from the ESI spectra were larger than those measured with HIRES by about half the FWHM spectral resolution element. We therefore expect that our $\Delta v_{90}$ measurements may be biased high by $\approx 19\rm ~km~s^{-1}$; however, this level of bias does not reconcile our measurements with the max[$\Delta V_{\rm LOS}$] predictions described above. In light of the failure of this simple model to reproduce the broad absorption profiles observed, we conclude that the gas kinematics must be broadened by ongoing gas outflow from and/or infall onto the galaxy disks. Moreover, given that the analysis presented in Section~\ref{subsec:results_kinematics} demonstrated that the bulk of the absorbing material remains within the gravitational potential well of each host, we ascribe the observed motions to Galactic Fountain-like activity. We discuss the novelty and implications of this conclusion in Section~\ref{subsec:gf}. 
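For concreteness, the sketch below re-implements the essential ingredients of this kinematic model (the Tully-Fisher normalization of Equation~\ref{eq.TFR}, the rotation curve of Equation~\ref{eq:vrot_r}, and corotating layers at $z = \pm h$), together with the $\Delta v_{90}$ statistic. It is a schematic approximation rather than the exact ring-by-ring construction used to produce Figure~\ref{fig:logN_rperp_MWmodel}, and the function names and geometry convention are ours.

\begin{verbatim}
import numpy as np

def v_rot_tf(log_mstar):
    # Stellar-mass Tully-Fisher relation (Bloom et al. 2017); km/s.
    return 10.0 ** (0.31 * log_mstar - 0.93)

def v_los_layer(x, y, z, v_inf, incl_deg, r_v):
    # LOS velocity (relative to V_sys) of a flat, corotating layer
    # at height z [kpc], seen at sky position (x, y) [kpc]; x lies
    # along the major axis of a disk inclined by incl_deg.
    i = np.radians(incl_deg)
    y_g = (y - z * np.sin(i)) / np.cos(i)  # deprojected in-plane coord
    r = np.hypot(x, y_g)
    v_rot = v_inf * np.tanh(r / r_v)       # tanh rotation curve
    cos_theta = x / np.maximum(r, 1e-9)    # azimuth from major axis
    return v_rot * np.sin(i) * cos_theta

def max_dv_los(r_perp, log_mstar, h, incl_deg=75.0, r_v=2.0):
    # Maximum |V_LOS(+h) - V_LOS(-h)| over sky azimuth at fixed R_perp.
    phi = np.linspace(0.0, 2.0 * np.pi, 721)
    x, y = r_perp * np.cos(phi), r_perp * np.sin(phi)
    v_inf = v_rot_tf(log_mstar)
    dv = (v_los_layer(x, y, +h, v_inf, incl_deg, r_v)
          - v_los_layer(x, y, -h, v_inf, incl_deg, r_v))
    return np.abs(dv).max()

def delta_v90(vel, tau):
    # Width enclosing the central 90% of the integrated apparent
    # optical depth tau(v), sampled on a velocity grid vel [km/s].
    cum = np.cumsum(tau) / np.sum(tau)
    return (vel[np.searchsorted(cum, 0.95)]
            - vel[np.searchsorted(cum, 0.05)])

# Example: a log(M*/Msun) = 9.3 host probed at R_perp = 5 kpc,
# with Ca II layers at z = +/-0.82 kpc:
print(max_dv_los(5.0, 9.3, h=0.82))
\end{verbatim}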
\section{Discussion}\label{sec:discussion} \subsection{The Relationship between Absorption Detected along GOTOQ Sightlines and in Galaxy Spectroscopy} The rest-frame optical wavelengths of the \ion{Ca}{2} and \ion{Na}{1} transitions studied here have historically made them signatures of choice for studies of the Milky Way ISM \citep[e.g.,][]{Hobbs1969,Hobbs1974,Sembach1993,Welty1996,BenBekhti2012} and the CGM of nearby galaxies \citep{BoksenbergSargent1978,Boksenberg1980,Bergeron1987,Zych2007,Richter2011,ZhuMenard2013}. Analysis of the \ion{Na}{1} D doublet in nearby galaxy spectroscopy has also provided some of the most important evidence for the ubiquity of cold gas outflows among star-forming systems \citep[e.g.,][]{Heckman2000,SchwartzMartin2004,Rupke2005,Martin2005,ChenTremonti2010,RobertsBorsani2020}. Much of the literature focusing on this signature targeted galaxies known to be undergoing starburst activity \citep[e.g., by using an infrared luminosity selection criterion;][]{Heckman2000,Rupke2005,Martin2005}, establishing that outflows occur with an incidence that increases with IR luminosity (to $\approx 80\%$ among low-redshift ULIRGs; \citealt{Rupke2005}), and that their typical velocities increase from $10$--$30\rm ~km~s^{-1}$ among starbursting dwarfs to $100$--$1000\rm ~km~s^{-1}$ among ULIRGs \citep{Martin2005}. Study of \ion{Na}{1} outflow signatures in more typical star-forming galaxies was facilitated by the galaxy spectroscopy obtained over the course of the SDSS \citep[e.g.,][]{ChenTremonti2010}. While these spectra typically lack the S/N required for analyses of \ion{Na}{1} kinematics in individual galaxies, multiple studies have taken the approach of coadding many tens or hundreds of spectra to constrain the mean outflow absorption profile as a function of, e.g., stellar mass, inclination, or specific SFR \citep[e.g.,][]{ChenTremonti2010,Concas2019}. Figure~\ref{fig:ewboth_Mstar} compares a subset of these findings with some of the results of our GOTOQ study. We focus here on measurements reported by \citet{ChenTremonti2010}, as they are most directly comparable to the $W_r$ measured along our GOTOQ sightlines. In detail, \citet{ChenTremonti2010} divided their $z\sim0.1$ galaxy sample into face-on (with inclinations $i < 60^{\circ}$) and edge-on ($i > 60^{\circ}$) subsamples, then binned each of these subsamples by stellar mass over the range $10.3 \lesssim \log M_* / M_{\odot} < 11.2$. After coadding the spectra in each of these bins, they performed stellar continuum modeling to remove the component of the \ion{Na}{1} absorption profile arising in stellar atmospheres. They then modeled the residual \ion{Na}{1} absorption with two velocity components: a ``systemic'' component with a central velocity fixed to that of the system and an ``outflow'' component with a central velocity that was allowed to vary freely. They reported the total $W_r$ (including both doublet lines) of the systemic components ($W_{r, \rm systemic}$) fit to their edge-on subsamples, and reported the total $W_r$ of the outflow components ($W_{r, \rm outflow}$) fit to their face-on subsamples. The approximate parameter space covered by these measurements as a function of $M_*$ is indicated in Figure~\ref{fig:ewboth_Mstar} with filled magenta and turquoise shapes, respectively. \citet{ChenTremonti2010} note that both $W_{r, \rm systemic}$ and $W_{r, \rm outflow}$ increase strongly with $M_*$, and these trends are reflected in the overall slopes of these regions. 
Here we compare these values with the total \ion{Na}{1} rest equivalent width $W_{r, \rm tot}(\mbox{\ion{Na}{1}}) = W_r(\mbox{\ion{Na}{1} 5891}) + W_r(\mbox{\ion{Na}{1} 5897})$ measured along each of our sightlines (excluding GOTOQJ0851+0791, for which one of the doublet lines is severely blended). This comparison reveals that all foreground galaxies in our sample having stellar masses within or close to the range studied by \citet{ChenTremonti2010} exhibit higher values of $W_{r, \rm tot}$(\ion{Na}{1}) than were measured in either the outflowing or systemic components of systems with comparable $M_*$ values. In detail, we consider here the five GOTOQs having $\log M_*/M_{\odot} > 9.8$. Our measurements for four of these systems are close to a factor of 10 higher than $W_{r, \rm systemic}$ at approximately equivalent stellar masses, and are $\approx 0.4$--1.0 \AA\ higher than $W_{r, \rm outflow}$. This offset may be due in part to the different experimental designs of these two studies: our QSO sightlines probe all absorption along the line of sight, both behind and in front of the foreground host; whereas down-the-barrel spectroscopy is sensitive only to material foreground to the galaxy's stellar populations. We posit that were we able to use down-the-barrel spectroscopy to probe material arising on the {\it far} side of the galaxies' stellar components (i.e., the gas along the line of sight that is beyond the galaxies' stars from the point of view of the observer), this would result in a potential increase in the observed $W_r$ by a factor of two (as indicated with the open regions in Figure~\ref{fig:ewboth_Mstar}). This is likely an overestimate, particularly for $W_{r, \rm systemic}$, as saturated absorbing components with velocities that overlap those observed on the front side would not add to the observed total equivalent width. Even so, our GOTOQ $W_{r, \rm tot}$ values still lie well above the predicted equivalent widths implied by the \citet{ChenTremonti2010} measurements. The $3\arcsec$ diameter fibers used for the SDSS spectroscopy extend to radii of $R_{\perp} = 2.5$ kpc at the median redshift of the \citet{ChenTremonti2010} sample ($z=0.09$), whereas the five GOTOQs relevant to this comparison are being probed at impact parameters $R_{\perp} = 3.4, 3.4, 4.0, 7.5$, and $10.6$ kpc, or $R_{\perp}/R_{\rm eff, est} = 0.5, 1.5, 0.7, 0.7$, and 1.6. While these sightlines are not passing through the galaxy centers, they are nevertheless likely probing star-forming regions in their disks. The elevated absorption strengths we observe may imply either that (1) there is a significant contribution to the GOTOQ \ion{Na}{1} profiles from inner halo material distributed toward the galaxy outskirts; (2) our GOTOQ sightlines do not fully sample the distribution of $W_{r, \rm tot}$(\ion{Na}{1}) values for the galaxy population as a whole due to their small numbers; or (3) the \citet{ChenTremonti2010} absorption strengths are suppressed due to resonantly scattered \ion{Na}{1} emission or to overestimation of the contribution of stellar atmospheres to the coadded line profiles. Galactic fountain models invoking feedback-driven condensation and cooling of material from hot coronal gas may provide a theoretical explanation for the putative detection of excess cold clouds located close to the disk but at projected separations $R_{\perp}/R_{\rm eff} > 0.5$ from a galaxy's axis of symmetry \citep{Marasco2012,Fraternali2013}. 
However, given our relatively small sample, this dataset cannot distinguish between the three scenarios laid out above. We simply note here that \ion{Na}{1} emission from scattering has been found to be weakest in edge-on galaxies and thus should have a minimal effect on $W_{r, \rm systemic}$ \citep{ChenTremonti2010,RobertsBorsani2020}. In addition, some degree of ``contamination'' of the spectral libraries used to model the stellar continuum by interstellar material in the Milky Way remains a distinct possibility. Comparison to continuum models constructed from purely theoretical stellar spectra suggests this can result in an underestimate of the $W_{r, \rm tot}$(\ion{Na}{1}) arising from the ISM of $\approx 0.5\!-\!0.7$ \AA\ (K.~S.\ Parker et al., {\it in prep}). Finally, we comment that the opposite effect has been found in comparisons of the $W_r$ of outflows traced by \ion{Mg}{2} $\lambda \lambda 2796, 2803$ absorption in galaxy spectra \citep{Rubin2014} to the strength of circumgalactic \ion{Mg}{2} absorption at impact parameters $10~\mathrm{kpc} \lesssim R_{\perp} \lesssim 170~\mathrm{kpc}$ \citep{Chen2010a}. The enhanced $W_r$(\ion{Mg}{2}) detected down-the-barrel relative to those typically detected along QSO sightlines suggests that the bulk of the outflowing material does not reach distances of more than ${\sim}10$ kpc. The $W_r$(\ion{Mg}{2}) values measured along GOTOQ sightlines at impact parameters $R_{\perp} < 6$ kpc by \citet{Kacprzak2013} are closer in strength to those observed in galaxy spectra ($1.75~\mathrm{\AA}<W_r$(\ion{Mg}{2} 2796) $<3.11~\mathrm{\AA}$), implying that the region $\sim6$--10 kpc from a galaxy's center may be an important interface for the stalling of \ion{Mg}{2}-absorbing wind material. Comparison of our $W_r$(\ion{Na}{1}) measurements to the \citet{ChenTremonti2010} results places no such constraint on the potential extent of \ion{Na}{1}-absorbing winds. \begin{figure}[h] \includegraphics[width=\columnwidth]{fig_ewNaIboth_Mstar.pdf} \caption{The total system \ion{Na}{1} equivalent width ($W_{r, \rm tot}$(\ion{Na}{1}) = $W_r$(\ion{Na}{1} 5891) $+$ $W_r$(\ion{Na}{1} 5897)) vs.\ $\log M_*/M_{\odot}$ measured in our ESI spectroscopy. Upper limits, indicated with open squares, are shown in cases in which $W_{r, \rm tot} < 3\sigma_{W_{r, \rm tot}}$, and represent $3\sigma$ limits. Measurements of GOTOQJ0851+0791 have been excluded, as the \ion{Na}{1} 5897 profile in that sightline is contaminated by blending. The turquoise and magenta filled regions show the distribution of $W_r$ values measured by \citet{ChenTremonti2010} in coadded SDSS galaxy spectra for the blueshifted and systemic components of the \ion{Na}{1} absorption profile, respectively. The open regions show the locus of values covered by the \citet{ChenTremonti2010} measurements if they are corrected upward by a factor of two to account for the potential contribution of material located on the far side of the stellar continua. \label{fig:ewboth_Mstar}} \end{figure} \subsection{The Relationship between $W_r$(\ion{Ca}{2} K) and Star Formation}\label{subsec:discussion_CaIISFR} We identified a strong, statistically significant correlation between $W_r$(\ion{Ca}{2} K) and the local SFR of the absorber host measured within the SDSS fiber (see Figure~\ref{fig:ew_SFR_Mstar}). To our knowledge, our study presents the first evidence for such a relationship (although a correlation between H$\alpha$ flux and $W_r$(\ion{Ca}{2} K) was noted in \citealt{Straka2015}). 
However, evidence that strong \ion{Ca}{2} absorbers have significant dust content was uncovered more than a decade ago \citep{Wild2005,Zych2009}. By coadding SDSS QSO spectra in the rest frame of strong \ion{Ca}{2} systems, \citet{Wild2007} detected and characterized associated [\ion{O}{2}] $\lambda\lambda 3727, 3729$ emission arising within the SDSS fibers, measuring an average SFR of 0.1--$0.5~M_{\odot}~\rm yr^{-1}$. Additional evidence for a connection with star formation was contributed by \citet{ZhuMenard2013}, whose stacking analysis detected a stronger mean \ion{Ca}{2} absorption signal in the halos of star-forming galaxies relative to red-sequence hosts at fixed stellar mass. The relation is reminiscent of the now well-established evidence for a correlation between \ion{Mg}{2} absorption strength and SFR. This is seen both in down-the-barrel studies \citep{Rubin2014,Bordoloi2014}, as well as in QSO absorption-line systems and in QSO-galaxy pair experiments. \citet{Menard2011} assessed the [\ion{O}{2}] luminosity of \ion{Mg}{2} absorbers as a function of their $W_r$, reporting a 15$\sigma$ correlation between these quantities, and showing that the distribution function of $W_r$(\ion{Mg}{2}) can be related to the [\ion{O}{2}] luminosity function using a simple scaling. \citet{Lan2014} studied the host galaxy properties of \ion{Mg}{2} absorbers detected in the SDSS QSO sample as a function of their $W_r$, finding that stronger absorbers are surrounded by higher numbers of star-forming galaxies within $R_{\perp} < 50$ kpc. \citet{LanMo2018} confirmed this connection, measuring larger average $W_r$(\ion{Mg}{2}) within $50$ kpc of emission-line galaxies selected from SDSS-IV/eBOSS with higher SFRs. Taken together, these studies provide strong evidence for star formation activity as a primary origin of strong \ion{Mg}{2} absorption. Our findings, together with the literature reviewed above, are suggestive of a similarly strong link between star formation and \ion{Ca}{2}-absorbing material, which in turn implies that \ion{Ca}{2} may prove an effective tracer of winds in down-the-barrel studies. Very few such works have made use of \ion{Ca}{2} H \& K for this purpose, due to the blue spectral coverage required, as well as to the strength of these transitions in stellar atmospheres and the potential for confusion from H$\epsilon$ $\lambda 3971$ line emission. However, if these systematics could be successfully mitigated via detailed stellar population and emission-line modeling \citep[e.g.,][]{Westfall2019}, \ion{Ca}{2} may prove a useful probe of the spatially resolved outflow kinematics of warm, neutral gas in advance of the availability of UV-sensitive IFUs that will map the motions of more highly ionized material in absorption \citep[e.g.,][]{Tumlinson2019}. \subsection{Galactic Fountains in External Galaxies}\label{subsec:gf} The Galactic Fountain model was originally introduced to explain the origin of high-velocity clouds (HVCs) of \ion{H}{1} in the Milky Way halo \citep{ShapiroField1976}. In this picture, a dynamic, hot corona is continually fed and heated by supernova ejecta. Material lofted above the disk rises and cools, moving outward radially along the pressure gradient of the corona. Thermal instabilities trigger the condensation of neutral clouds from the hot gas, which purportedly fall back toward the disk on ballistic trajectories. 
There have been numerous theoretical investigations exploring the implications of this model for the ionized component of Milky Way HVCs \citep[e.g.,][]{Marasco2013}; the metallicities of HVCs \citep{MarascoFraternali2017}; and the X-ray emitting properties of the Milky Way's coronal plasma \citep{JoungMacLow2006,Henley2015}. A now significant body of theoretical work has also invoked this model to predict the properties of the gaseous components of external galaxies. Surveys of 21 cm \ion{H}{1} emission in nearby star-forming systems having rotation velocities $\gtrsim 80\rm ~km~s^{-1}$ have revealed ubiquitous extraplanar layers of neutral gas that extend to $\gtrsim 1$ kpc above the disk plane \citep{vanderHulstSancisi1988,Fraternali2002,Oosterloo2007,Marasco2019}, and that typically lag behind the disk rotation speed \citep{Fraternali2002,Barbieri2005}. This ``anomalous'' component arises within the inner few kiloparsecs of the disk, and has typical masses of $\sim10^{8-9} M_{\odot}$ \citep{Marasco2019}. \citet{FraternaliBinney2008} argued against an external (or circumgalactic) origin for this material, given that if it were to fall onto the host galaxy disks over a freefall time, the implied accretion rates would be orders of magnitude larger than the SFRs of nearby spirals. This inconsistency in itself is strongly suggestive of a fountain origin for extraplanar \ion{H}{1} gas \citep{FraternaliBinney2008,Marasco2019}. While models that adopt purely ballistic trajectories for supernova ejecta fail to reproduce the observed lag in the rotation of extraplanar layers \citep{FraternaliBinney2006}, a modification of these models that accounts for the interaction between feedback-driven outflow and cool accretion flows successfully explains both the surface brightness and kinematics of the extraplanar material in the two well-studied spirals NGC 891 and NGC 2403 \citep{FraternaliBinney2008}. The models described above adopt a Gaussian distribution for the velocities of clouds ejected from the galactic disk, with the dispersion adjusted to achieve the closest match between the predicted and observed \ion{H}{1} surface brightnesses. Another common approach to the modeling of extraplanar \ion{H}{1} layers is to adopt a single value for the layer velocity perpendicular to the disk \citep[e.g.,][]{Marasco2019}. Both approaches predict velocity widths of $\gtrsim50\!-\!150\rm ~km~s^{-1}$ along individual lines of sight through 21 cm emission-line maps of moderately inclined galaxies \citep{Marasco2019}. The $\Delta v_{90}$ widths we have measured imply similar velocity spreads of $50\!-\!180\rm ~km~s^{-1}$ across our sample, over the full range of impact parameters we probe ($R_{\perp} = 1\!-\!13$~kpc). We have seen from the analysis presented in Section~\ref{subsec:modeling_kinematics} that these velocity widths far exceed those predicted for the interstellar component of these galaxies. Instead, they are qualitatively consistent with the kinematics predicted by commonly invoked fountain models. Our line profile modeling additionally demonstrates that these velocity widths frequently arise from multiple, distinct structures (i.e., components) with velocity offsets of $>40\rm ~km~s^{-1}$, and arise from both the warm neutral material traced by \ion{Ca}{2} and a colder phase traced by \ion{Na}{1}. 
Furthermore, whereas extraplanar \ion{H}{1} is typically well fit by models that adopt exponential scale lengths of $R_{\rm g} = 1\!-\!7$ kpc for the surface density of the layer (i.e., $\Sigma(R)\propto e^{-R/R_{\rm g}}$; \citealt{Marasco2019}), our absorption velocity widths suggest that the physical processes driving galactic fountain flows persist to $R_{\perp} \sim 7$ kpc and beyond. Finally, we emphasize the novel stellar mass distribution of our sample in this context: six of the 14 foreground galaxies giving rise to securely detected \ion{Ca}{2} have $M_*$ values that imply rotation velocities in the range $V_{\rm rot,TF} = 20\!-\!80\rm ~km~s^{-1}$. The consistently high $\Delta v_{90}$ values we measure across this parameter space provide novel evidence for galactic fountain activity in such low-mass systems. A handful of alternative observational approaches have offered additional evidence for galactic fountain flows in external galaxies. The H$\alpha$ and radio continuum emission from the extraplanar layers of the nearby edge-on spiral NGC 891, along with the properties of the dust complexes that pervade them, have long been interpreted as consistent with galactic fountain model predictions \citep{Rand1990,Dettmar1990,BregmanHouck1997,HowkSavage1997,Kamphuis2007}. These studies also offer direct evidence for the multiphase nature of the galactic fountain material, with dust-bearing clouds likely tracing a similar phase to that giving rise to \ion{Na}{1} \citep{HowkSavage2000}. Such multiphase ``interstellar thick disks'' are now known to be ubiquitous \citep{ZschaechnerRand2015,Boettcher2016,Bizyaev2017,Li2021} and are observed to exhibit metallicities ranging from a factor of two lower than the host galaxy disk to slightly above that observed in the host \citep[e.g.,][]{Howk2018}. Most recently, \citet{Rupke2021} took advantage of echellette-resolution optical IFU spectroscopy of eight nearby AGN-dominated galaxies to trace the down-the-barrel kinematics of these layers, identifying ongoing outflow and inflow in nearly every system via the Doppler shift of \ion{Na}{1}, with the projected areas subtended by these flows covering up to 25\% of the optically bright stellar disks. Modern theoretical studies of galactic fountain flows have used high-resolution numerical simulations to make detailed predictions for the temperature distribution and kinematics of extraplanar material, drawing physical links between recent-past star formation activity and the launch of expanding superbubbles \citep[e.g.,][]{Creasey2013,Martizzi2016,Girichidis2016,KimOstriker2018,Vijayan2020,Kado-Fong2020}. While the vast majority of these studies do not simulate material in the coolest phases traced by \ion{Na}{1}, recent work by \citet{Girichidis2021} and \citet{FarberGronke2021} investigated the formation and survival of such cool, dust-enshrouded material explicitly. The former study found that magnetized, hot wind material can effectively trigger the condensation of a molecular phase from a high-density ($n \gtrsim 0.5~\rm cm^{-3}$), warm ($T\sim10^{3-4}$ K) cloud \citep{Girichidis2021}; while the latter found that this phase can survive over numerous cloud-crushing times if the cloud is sufficiently large, and that dust grains can likewise survive in $\gtrsim 100$ pc clouds if the dust destruction temperature $T_{\rm dest} > 10^4$ K \citep{FarberGronke2021}. 
These theoretical advances, along with ongoing efforts to link the results of parsec-resolution numerical simulations of galactic disks to simulations encompassing dark matter halo scales \citep[e.g., SMAUG;][]{Kim2020}, will enable detailed comparison of the predictions of these models to the observed kinematics and absorption/emission-line strengths of extraplanar material. Such comparisons are crucial to affirming these theoretical efforts, as the associated predictions have not yet been rigorously compared to the numerous in-hand observational constraints. \section{Summary and Conclusions} We have analyzed medium-resolution optical spectroscopy of 21 bright quasars known \emph{a priori} to lie exceptionally close to foreground galaxies having redshifts $0.03 < z <0.20$ with the purpose of assessing the strength and kinematics of \ion{Ca}{2} H \& K and \ion{Na}{1} $\lambda\lambda 5891, 5897$ absorption arising in their ISM and disk-halo interface. The foreground systems were identified serendipitously via intervening nebular emission lines in SDSS spectra of the quasars by \citet{Straka2013,Straka2015}, who located their photometric counterparts in SDSS imaging and measured impact parameters in the range $1~\mathrm{kpc} < R_{\perp} < 13~\mathrm{kpc}$. The foreground galaxies span a broad range of stellar masses ($7.4 \le \log M_*/M_{\odot} \le 10.6$), and the strength of the H$\alpha$ emission detected in the SDSS fibers implies that their global SFRs lie both within and well above the star-forming sequence at $z\sim 0$. Our spectroscopy, with a velocity resolution $\rm FWHM\approx37.3\rm ~km~s^{-1}$, is sensitive to absorbers having $W_r(\mbox{\ion{Ca}{2} K}) \gtrsim 0.2$ \AA\ and $W_r(\mbox{\ion{Na}{1} 5891}) \gtrsim 0.15$~\AA. We used Voigt profile modeling to derive column densities, Doppler parameters, and component velocities for each securely detected system. We also calculated a nonparametric measure of the profile velocity widths ($\Delta v_{90}$). Our analysis has revealed the following: \begin{itemize} \item We find no evidence for an anticorrelation between the $W_r$ values we measure and either $R_{\perp}$ or $R_{\perp}/R_{\rm eff, est}$ (i.e., the impact parameter normalized by an estimate of the effective radius of the foreground host galaxy). Modeling of the relation between $\log W_r(\mbox{\ion{Ca}{2} K})$ (or $\log W_r(\mbox{\ion{Na}{1} 5891})$) and either measure of projected separation as linear yields slopes that do not significantly differ from zero. This is unusual in the context of the QSO-galaxy pair studies literature, which in the vast majority of cases report statistically significant anticorrelations between $W_r$ and $R_{\perp}$ at larger impact parameters than we probe ($15~\mathrm{kpc} \lesssim R_{\perp} \lesssim 100~\mathrm{kpc}$). \item Strong absorption with column densities $N(\mbox{\ion{Ca}{2}})> 10^{12.5}~\rm cm^{-2}$ ($N(\mbox{\ion{Na}{1}})> 10^{12.0}~\rm cm^{-2}$) occurs with an incidence $f_{\rm C}(\mbox{\ion{Ca}{2}})=0.63^{+0.10}_{-0.11}$ ($f_{\rm C}(\mbox{\ion{Na}{1}})=0.57^{+0.10}_{-0.11}$) within our sample. We find no evidence for a dependence of these covering fractions on $R_{\perp}$ or $R_{\perp}/R_{\rm eff, est}$. These $f_{\rm C}$ values are consistent with the incidence of significantly weaker intermediate- and high-velocity \ion{Ca}{2} and \ion{Na}{1} absorbers (with $N(\mbox{\ion{Ca}{2}})> 10^{11.4}~\rm cm^{-2}$ and $N(\mbox{\ion{Na}{1}})> 10^{10.9}~\rm cm^{-2}$) detected in the Milky Way \citep{BenBekhti2012}. 
This implies that our sightlines exhibit overall stronger absorption than those probing Milky Way extraplanar/halo clouds, likely due to their longer path lengths through both the ISM and CGM. \item The velocities of our \ion{Ca}{2} and \ion{Na}{1} component samples exhibit overall small offsets relative to the H$\alpha$ emission velocities measured along the same sightlines ($z_{\rm H\alpha}$). Among 20 \ion{Ca}{2} (and 17 \ion{Na}{1}) components, only three (one) have fitted relative velocities $|\delta v| > 50\rm ~km~s^{-1}$. The portions of each line profile contributing $90\%$ of the apparent optical depth all extend to a maximum $\delta v < 120\rm ~km~s^{-1}$ and, thus, trace material that must remain gravitationally bound to even the lowest-$M_*$ system in the sample. However, the corresponding $\Delta v_{90}$ widths lie in the range $50-180\rm ~km~s^{-1}$, indicating the absorption has contributions from both interstellar and extraplanar material. \item We find no evidence for a correlation between the dust reddening measured along our QSO sightlines and the $W_r$ of the \ion{Ca}{2} K or \ion{Na}{1} 5891 transitions. Between a quarter and a third of our absorber sample are 3$\sigma$ outliers from the best-fit relations between these quantities measured toward extragalactic probes of the Milky Way halo. \item We find no evidence for a strong dependence of the $W_r$ of either ion on the $M_*$ of our foreground galaxies. Instead, we measure an overall high incidence of $W_r > 0.2$ \AA\ absorbers ($f_{\rm C} \sim 0.4\!-\!0.6$) across the full $M_*$ range of our sample. We additionally report a significant ($>3\sigma$) correlation between $W_r(\mbox{\ion{Ca}{2} K})$ and the local SFR implied by the H$\alpha$ emission-line luminosity measured from SDSS fiber spectra of the sightlines. These findings suggest that (1) \ion{Na}{1} is an effective probe of disk-halo gas kinematics across the full $M_*$ range of our sample; and that (2) down-the-barrel spectroscopy of the \ion{Ca}{2} transition will be sensitive to star formation-driven outflows of warm, neutral gas. \end{itemize} The \ion{Na}{1} absorption strengths we measured along our sample sightlines are significantly larger than the $W_r$ of either outflowing or interstellar material close to the systemic velocity measured in coadded SDSS spectra of galaxies with similar stellar masses. In addition, our measured column densities of both \ion{Ca}{2} and \ion{Na}{1} are too large to arise from a Milky Way-like ISM. Instead, the columns and large velocity widths ($\Delta v_{90} = 50\!-\!180\rm ~km~s^{-1}$) of these absorbers require a significant contribution from material with velocities offset by $\delta v >20\rm ~km~s^{-1}$ from the galaxies' \ion{H}{2} regions, but which is gravitationally bound to each system. Galactic Fountain models provide a natural explanation for these kinematics and column densities at least in a qualitative sense. Assuming this interpretation is apt, our analysis provides novel evidence for Galactic Fountain activity in low-$M_*$, nearby galaxies. It further suggests that fountain-driven gas motions arise at large projected separations from the nuclei of the host galaxies ($R_{\perp} \gtrsim 7$ kpc). 
While some groups are now pursuing important, direct comparison between fountain flows as observed in 21 cm emission and \ion{H}{1} emission-line kinematics predicted in cosmological simulations \citep[e.g.,][]{El-Badry2018,Oman2019,Watts2020,Manuwal2021}, the QSO absorption-line measurements we present here offer a complementary, and in some ways simpler, point of comparison for Galactic Fountain model predictions. Such comparisons are crucial to improving our understanding of the cycling of multiphase gas flows through galaxy disks. \begin{acknowledgments} The authors are grateful for support for this project from NSF grants AST-1715630 and AST-2009417. K.L.C. acknowledges partial support from NSF grant AST-1615296 and appreciates the observational support of K.~Emerson and T.~Wells, University of Hawai`i at Hilo undergraduate students at the time. V.P.K. acknowledges partial support from NSF grant AST-2009811. J.X.P. acknowledges support from NSF grant AST-1911140. J.K.W. acknowledges support from NSF-AST 1812521 and an RCSA Cottrell Scholar grant, ID number 26842. The authors also wish to thank the anonymous referee, whose suggestions helped to improve this work. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M.~Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is \url{http://www.sdss.org}. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. 
\end{acknowledgments} \vspace{5mm} \facilities{Keck(ESI), SDSS} \software{astropy \citep{astropy2013,astropy2018}, linetools \citep{linetools2016}, veeper, MPFIT} \clearpage
[21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. \begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. 
\head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ packages are loaded is unimportant. The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ is also loaded; instead, add the option to \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. 
\head{Sorting and compressing citations} Do not use the \texttt{cite} package with \texttt{natbib}; rather use one of the options \texttt{sort} or \texttt{sort\&compress}. These also work with author--year citations, making multiple citations appear in their order in the reference list. \head{Long author list on first citation} Use the option \texttt{longnamesfirst} to have the first citation automatically give the full list of authors. Suppress this for certain citations with |\shortcites{|\emph{key-list}|}|, given before the first citation. \head{Local configuration} Any local recoding or definitions can be put in \texttt{natbib.cfg}, which is read in after the main package file. \head{Options that can be added to \texttt{\char`\\ usepackage}} \begin{description} \item[\ttfamily round] (default) for round parentheses; \item[\ttfamily square] for square brackets; \item[\ttfamily curly] for curly braces; \item[\ttfamily angle] for angle brackets; \item[\ttfamily colon] (default) to separate multiple citations with semicolons (the option name is a historical misnomer); \item[\ttfamily comma] to use commas as separators; \item[\ttfamily authoryear] (default) for author--year citations; \item[\ttfamily numbers] for numerical citations; \item[\ttfamily super] for superscripted numerical citations, as in \textsl{Nature}; \item[\ttfamily sort] orders multiple citations into the sequence in which they appear in the list of references; \item[\ttfamily sort\&compress] as \texttt{sort}, but in addition multiple numerical citations are compressed if possible (as 3--6, 15); \item[\ttfamily longnamesfirst] makes the first citation of any reference the equivalent of the starred variant (full author list) and subsequent citations normal (abbreviated list); \item[\ttfamily sectionbib] redefines |\thebibliography| to issue |\section*| instead of |\chapter*|; valid only for classes with a |\chapter| command; to be used with the \texttt{chapterbib} package; \item[\ttfamily nonamebreak] keeps all the authors' names in a citation on one line; causes overfull hboxes but helps with some \texttt{hyperref} problems. \end{description} \end{document}
\section{Introduction} \IEEEPARstart{T}{he} ever-increasing requirements of wireless services in \gls{MandE}, as well as in healthcare and wellbeing, are transforming the way data is communicated and processed. Future networks are anticipated to support a massive number of connected devices requesting a variety of different services such as mobile video streaming, \gls{arvr}, as well as mission-critical applications. Such services require data, computation, and storage to be handled more often, with ultra-high success rates and minimal latency. \Gls{mec} has emerged as an infrastructure that enables data processing and storage at the network edge as a means to cut down the latency between the network nodes and the remote servers that typically existed in cloud computing architectures \cite{ShiEC16}. Instead, edge computing can be provided as a service at the network edge to minimize the service latency and network complexity, and to reduce the device nodes' energy and battery consumption. Edge networking in cellular systems aims to efficiently provide the required connectivity, data access, bandwidth, and computation resources to end devices \cite{VodafoneEdgeCellular, SaadEC18}. Edge base stations in proximity of network users will not only relay content from and to the network core, but will also help execute the users' processing tasks, provide customized content and computing services, and control the connectivity and interaction between coupled network nodes. In essence, the performance of edge computing is predominantly assessed through two main components: the communication between the edge server and the end device, and the processing at the edge server. Several aspects of these two components can be optimized. The communication part can be optimized through wireless bandwidth and power allocation, edge server selection, computation task distribution, task splitting, and partial task offloading. For the processing part, computation cycle allocation, task queuing and prioritization, joint computing, and predictive computing are critical factors in optimizing the computing efficiency. The focus of \gls{5g} cellular networks has shifted from merely increasing the data communication rate to providing service-specific performance guarantees in terms of ultra-reliability and low latency. This shift is fueled by the emergence of new use cases that require genuine support for critical and latency-sensitive communication services. Nonetheless, ultra-reliability and low latency are often seen as contradictory requirements \cite{conf:latencyReliability_soret_2014}, compelling the use of a distinctive set of tools to be efficiently realized. Yet these requirements, each challenging per se, are anticipated to be met together for networks of diverse topologies and heterogeneous services. This article discusses the feasibility and potential of providing edge computing services with latency and reliability guarantees. In particular, it first sheds light on the services that can be offered by edge computing networks. It then looks into how \gls{urllc} contributes to and benefits from edge computing. The article proceeds by presenting selected use cases that reflect the interplay between edge computing and \gls{urllc}. Finally, the article ends with our concluding remarks and future directions.
\section{Edge Computing Services} Legacy network architectures relied on centrally located and centrally controlled servers with high computational and storage power to provide on-demand computing to network devices \cite{Kaibin_MEC_survey}. These servers could support a high number of network nodes over a large geographical area. However, the large distance between the cloud computing server and the end-user device results in higher service latency. Moreover, the centralized architecture limited the ability to provide context-aware services and to preserve the users' data privacy. Future wireless networks are evolving towards supporting a new set of applications that require minimal latency and a high level of service personalization. This has motivated the shift towards distributed networking architectures in which the network resources are available close to users at the network edge. Edge computing aims to provide computing, content, and connectivity services closer to the data source and consumption points. It is applicable to scenarios with different network environments and use cases. This diversity has led to several implementations that did not follow a common standard or support interoperability. The \gls{etsi} has been working on solving this issue by providing an efficient standardized \gls{mec} that can be integrated across several applications and service providers \cite{etsi_mec}. \gls{mec} also enables providers to deploy edge computing services on top of wireless mobile networks. This will allow cellular operators to integrate computing into the services provided to their users. In this regard, the term \emph{edge networking} refers to the action and process of serving a user or device at the network edge. \subsection{Content at the Edge} The idea of leveraging the network edge as a content storage has gained popularity in the last few years \cite{Bastug_caching}. Existing popularity patterns in the contents requested by network users have motivated the development of proactive networks. A proactive server can predict popular contents, prefetch them from the core network, and have them stored and readily available at the network edge, hence cutting down delivery times once users request them. Proactive networks require efficient methods to predict the popularity of the content to be cached, as well as high storage capacity to cache this content. Edge caching not only minimizes the service latency but also the load on the backhaul network by prefetching the popular content in off-peak times \cite{Bastug14,Bennis16,Elbamby14}. Further, we envision that the notion of edge content will be extended to include new types of data that can be served from the network edge to support the new use cases. One application to which the future network edge will provide information is distributed machine learning. The tight latency requirements and the need for minimizing the information exchange mandate the development of distributed machine intelligence schemes in which edge servers play a major role. Edge machine learning \cite{Park:2018aa, Li_edgeInt18} will allow end users to locally develop their own machine learning models instead of relying on centralized approaches. However, machine learning applications rely on information from other network nodes that affect their state and utility. The role of the network edge here will be to bring the information necessary for enhancing or complementing the local model close to the user.
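To make the proactive caching loop concrete, the following minimal Python sketch scores content popularity from observed requests and prefetches the top-ranked items during an off-peak slot. It is purely illustrative: the exponentially weighted predictor and all names (\texttt{EdgeCache}, \texttt{CACHE\_SLOTS}, \texttt{ALPHA}) are our own assumptions rather than a scheme from the cited works.
\begin{verbatim}
# Illustrative sketch of popularity-driven edge prefetching.
from collections import defaultdict

CACHE_SLOTS = 3   # storage budget of the edge server (in items)
ALPHA = 0.8       # weight of past popularity vs. new observations

class EdgeCache:
    def __init__(self):
        self.scores = defaultdict(float)  # predicted popularity per item
        self.cached = set()

    def observe(self, requests):
        """Update popularity estimates from one time slot of requests."""
        counts = defaultdict(int)
        for item in requests:
            counts[item] += 1
        for item in set(self.scores) | set(counts):
            self.scores[item] = (ALPHA * self.scores[item]
                                 + (1 - ALPHA) * counts[item])

    def prefetch(self):
        """In the off-peak slot, cache the items predicted most popular."""
        ranked = sorted(self.scores, key=self.scores.get, reverse=True)
        self.cached = set(ranked[:CACHE_SLOTS])
        return self.cached

cache = EdgeCache()
cache.observe(["mapA", "mapA", "clipB", "mapA", "clipC"])
print(cache.prefetch())  # e.g. {'mapA', 'clipB', 'clipC'}
\end{verbatim}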
\subsection{Computing at the Edge} Processing is becoming as important a commodity for cellular applications as content. Applications ranging from smart factories and self-driving vehicles to virtual and augmented reality are growing by the day and are becoming more resource greedy and less latency tolerant. While part of the computing load of these applications is served using their local processing units, constraints on size, portability, and battery lifetime, or the lack of full access to the task data, limit the ability to locally execute computing tasks. Edge computing promises to pool powerful yet proximate computing resources at the network edge, as well as to provide connectivity and seamless information exchange between neighboring nodes. It is also set to allow for the realization of various \gls{5g} verticals that require low-latency and highly reliable computing, such as \gls{vr} and mission-critical \gls{iot} applications. Yet, there are several components that need to be addressed to realize low-latency and highly reliable edge computing. Executing computing tasks at the edge often requires the task data to be offloaded to the edge server before execution. This introduces a communication delay that adds to the service latency. In addition, how the computing tasks are queued and scheduled at the edge server plays a major role in the queuing and processing latency. Our vision is that the availability of more data and computing power will shape how the edge network performs computing. In a similar vein to proactive content caching, where knowledge of users' preferences and future interests allows for prefetching of their content, data availability and machine learning will help to speed up computing the tasks of network nodes. Predicting vehicles' future locations and paths allows the edge network to proactively render and deliver their high-definition live maps. In \gls{vr} applications, predicting a user's future \gls{fov} allows rendering the corresponding part of its $360^{\circ}$ frame with minimal latency. Several other enablers are vital to achieve ultra-reliable and low-latency computing, such as task replication, parallel computing, and coded computing, which will be addressed in detail in the following section. \begin{figure*}[ht] \centering \includegraphics[width=.92\textwidth]{MEC-Environment.pdf} \caption{Breakdown of key URLLC enablers for edge computing, exemplified over an Industry 4.0/Smart Factory ecosystem that includes cyber-physical systems, \gls{iot} and \gls{mec}.} \label{Fig:mec_env} \end{figure*} \subsection{Control at the Edge} Most of the existing cloud and edge computing architectures rely on centralized decision-making schemes, which require all the network nodes to send their local state data to a central controller. Instead, distributed decision making, in which the decision-making process is distributed among the edge servers, will allow for low-latency and privacy-preserving operation \cite{EdgeMesh17}, which is essential for mission-critical applications. Indeed, controlling the network devices' performance requires policies that adapt to their local states. This can be challenging in scenarios where the local state varies due to a highly dynamic environment or due to the nature of the application, such as in mission-critical applications. \Gls{rl} solutions can provide efficient control policies that maximize the system rewards by finding policies that map these dynamically changing states into actions.
These decision-making policies need to take into account the effect of actions on the environment and update the reward accordingly. In centralized architectures, classical reinforcement learning is often performed offline, without taking reliability into account in the decision making, for example under noisy feedback. Edge control can provide robust decision making, where multi-agent \gls{rl} architectures can be used to provide communication-efficient methods that take latency and reliability into account in dynamic and mission-critical environments. Latency stems from the local state exchanges between edge devices, whose overhead increases exponentially with the number of devices. This can be addressed using \gls{mfg} theory~\cite{MFGJ}, which tackles this by approximating the collection of agents' instantaneous states by their average, mean-field state. \section{URLLC Enablers and Challenges} \subsection{URLLC overview} The prime focus of the recent groundswell of mission-critical applications, such as autonomous vehicles, immersive \gls{vr}/\glsunset{ar}\gls{ar} experiences, industrial automation, and robotics, is to provide services with guaranteed high reliability and low latency. Therein, latency reductions in channel estimation, information exchange among the network elements, decision making, computation task completion, and memory access within devices are of utmost importance. Along with them, guaranteed low latency in operations, ensured connectivity, and the speed, precision, and accuracy of computations are essential to assure the reliability of mission-critical applications. Due to the on-device constraints on storage and processing capability, and the limited availability and accessibility of network resources, it is mandatory to utilize edge servers to maintain the quality of service in mission-critical applications. To support the communication between the user devices of mission-critical applications and the edge servers, \gls{urllc}, which has been introduced as one of the main services in 5G systems, plays a pivotal role. In this section, we identify the key enablers of reliability and low latency in wireless edge computing networks, and the challenges towards realizing each of them. Moreover, in Table \ref{tab:urllc_mec}, we summarize the issues and enablers of providing latency and reliability guarantees in wireless edge computing networks, as well as the applications and use cases these enablers are targeting. \begin{table*}[] \centering \resizebox{\textwidth}{!}{\begin{tabular}{p{1cm}p{3.3cm}p{4.4cm}p{4.9cm}} & Demands/Challenges & Enablers & MEC applications and use cases \\[3pt] \hline \textbf{Low latency} & bandwidth, backhauls & mmWave & extended reality, vehicular edge computing (Sec. \ref{usecaseVR} and Sec.~\ref{usecaseV2V})\\[3pt] & low propagation delay & proximity-based computing & deep reinforcement learning based task offloading (Sec.~\ref{usecaseDQN}, use case 4)\\[3pt] & computing power, task dependency & parallel and coded computing & \cite{Chen_EdgeIoT18,CodedMR15} \\[12pt] & low propagation delay, energy efficiency & proactive computing & use case 6 in Sec.~\ref{usecaseVR1}\\[12pt] & low prediction delay & edge machine learning & edge computing for federated learning (use case 1 in Sec.
\ref{usecaseDistML})\\[3pt] \hline \textbf{High reliability} & channel intermittency & multi-connectivity, task replication & use case 6 in Sec.~\ref{usecaseVR1} and \cite{ElbambyEurasip2018,PerfectoVRStreaming}\\[3pt] & low communication cost, data privacy & federated learning & edge computing for federated learning (use case 4 in Sec.~\ref{usecaseDQN})\\[3pt] & rare event detection & extreme event control & extreme value theoretic edge computing and vehicular federated learning (use cases 2 and 3 in Sec.~\ref{usecaseEVT}) \end{tabular}} \caption{Challenges and enablers of realizing low latency and high reliability in wireless edge computing.} \label{tab:urllc_mec} \end{table*} \subsection{URLLC Enablers for Edge Computing} \subsubsection{Low-Latency Enablers} There are several components that contribute to latency in edge networking. In this regard, enabling low latency requires several techniques to be implemented and integrated together at different levels of edge networking systems. At the communication level, proximity-based computing and \gls{mmwave} links play major roles in reducing the task offloading latency from edge devices to servers, by reducing distance attenuation and by providing broad bandwidth with high directionality, respectively. In addition, \gls{mmwave} also enables wireless backhauling \cite{jnl:mmWBackhaul_CommunMag_2014,conf:vu_mmWBackhaul_EUWireless_2016}, which facilitates the prefetching of popular content by edge servers with low latency. At the processing level, proactive computing provides a significant latency reduction while maximizing resource efficiency by avoiding repetitive and redundant on-demand computing \cite{ElbambyProactive17,Oueis16, ElbambyEurasip2018}. Next, coded computing is effective in reducing parallel computing latency: it removes the dependency on individual processing tasks, thereby minimizing the worst-case latency due to a straggling task. Last but not least, \gls{ml} is crucial in supporting low-latency mission-critical applications, by empowering edge servers and devices to locally carry out their decision making. \vspace*{.2cm} \enablerLat {High capacity mmWave links}: Driven by the spectrum shortage below 6 GHz, communications in the radio frequencies encompassing the electromagnetic spectrum from 30 to 300 GHz, i.e., the \gls{mmwave} or \gls{itu}'s \gls{ehf} band, have been attracting growing attention~\cite{jnl:mmWitWillWork_Rappaport_2013,jnl:5GWhatWillB_Andrews_2014,jnl:5disruptive5G_Boccardi_2014}, to the point of currently being considered the most important technology to achieve the 10 Gbps peak data rates foreseen for the upcoming \gls{5g} systems~\cite{jnl:mmW_FutureMobile_JSAC_2017}. With abundant spectrum available, the main appeal of \gls{mmwave} communications comes from the use of generous bandwidths that \textendash ranging from 0.85 GHz in the 28 GHz band to 5 GHz in the 73 GHz band\textendash are more than ten times greater than \gls{lte}'s 20 MHz cellular channel~\cite{jnl:mmW_5Goverview_Rappaport_2017}, and grant an important channel capacity increase~\cite{mmWaveChannelandCellularCapacity_2014}.
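To put these bandwidths into perspective, a back-of-the-envelope Shannon-capacity comparison is sketched below in Python; the common 10 dB SNR is an arbitrary illustrative assumption, not a value from the cited measurements.
\begin{verbatim}
# Shannon capacity C = B * log2(1 + SNR) for an LTE 20 MHz channel
# versus an mmWave 850 MHz channel (28 GHz band). Illustrative only.
import math

def capacity_gbps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear) / 1e9

for name, bw in [("LTE 20 MHz", 20e6), ("mmWave 850 MHz", 850e6)]:
    print(f"{name}: {capacity_gbps(bw, snr_db=10):.2f} Gbps")
# LTE 20 MHz: 0.07 Gbps
# mmWave 850 MHz: 2.94 Gbps
\end{verbatim}
The more-than-forty-fold bandwidth gap translates directly into the capacity gap, before any gains from beamforming or spatial multiplexing are accounted for.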
However, signal propagation at these frequencies is harsh and inherently different from that at the microwave band~\cite{jnl:mmWave_Survey_2018}, experiencing 1) higher pathloss for equal antenna gains due to a stronger atmospheric attenuation, whereby signals are more prone to being absorbed by foliage and rain, 2) higher penetration losses, as \glspl{mmwave} are blocked when trying to pass through walls, buildings, or obstacles, and 3) higher transmit power consumption than in lower bands to preserve an equal \gls{snr}, unless directional antennas together with advanced signal processing that includes \gls{mimo}~\cite{jnl:mmW_MassiveMIMO_2014} and \gls{bf} techniques are used. Notably, due to the shorter wavelengths in \gls{mmwave} bands it is possible to pack more antennas at the transmitter and receiver devices and, thanks to the spatial degrees of freedom afforded, use analog or hybrid \gls{bf} \textendash fully digital \gls{bf} implies having one dedicated \gls{rf} chain per antenna, which currently discourages its use in \glspl{mmwave} due to the unaffordable power consumption and costs\textendash~to build a radiation pattern with narrow beams which are subsequently steered towards the receivers while the energy radiated through the sidelobes is minimized or negligible. To administer high capacity links with \glspl{mmwave}, transmitters' and receivers' mainlobes need to be precisely aligned towards each other if favored with a clear, unobstructed \gls{los} path. In practice, when a \gls{mue} is in the connected state, \gls{ul} control channels are used to periodically feed back to the \gls{bs} its best transmit beam index; similarly, \gls{dl} control channels are used to report \glspl{mue}' best transmit beams. Data transmission is then performed through the best beam pair. However, during initial access and handover, i.e., in random access, such information on the best beams is not available, which hinders taking full benefit from \gls{bf}. Henceforth, in analog \gls{bf}, to discover and then maintain the best transmit-receive beam pairs, a series of techniques referred to as beam training or beam searching is applied. Then, beam tracking is performed to adapt the beamforming, e.g., due to \glspl{mue}' movement leading to transmitter-receiver beam misalignments. Nevertheless, a full new directional channel discovery process will need to be triggered if the \gls{sinr} drops below a certain threshold due to, e.g., blockages and/or interference~\cite{Mattia:18}. As analog \gls{bf} employs a single \gls{rf} chain, it is challenging to adjust the beam to channel conditions, leading to some performance loss. Moreover, analog \gls{bf} does not provide multiplexing gains, as it can only operate a single data stream. Therefore, to bring all the benefits of \gls{mmwave} while benefiting from multiplexing gains for \gls{mec}, \gls{mimo} hybrid \gls{bf} architectures, which strike a balance between performance, complexity, and power consumption, should be considered. Finally, as adaptive beamforming requires precise \gls{csi}, one of the key challenges for \gls{mmwave} to work as a low-latency enabler for \gls{mec} lies in the availability of expedited \gls{csi} acquisition schemes together with directionality-aware mobility and beam management procedures~\cite{mmWBeamManag_Tutorial_Giordani_2019}. In the next subsection, a series of reliability enablers will be discussed to reduce the delay incurred to counteract the intermittent blockages and temporal disruptions of the \gls{mmwave} channel.
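In its simplest exhaustive form, the beam-training step described above reduces to sweeping all transmit-receive beam pairs and keeping the best one, as in the following minimal Python sketch; the SNR matrix stands in for over-the-air probing and is a made-up example.
\begin{verbatim}
# Exhaustive beam training: probe every TX/RX beam pair, keep the
# pair with the highest measured SNR. Illustrative sketch only.
import itertools

def beam_search(snr):
    """snr[t][r]: measured SNR (dB) for TX beam t and RX beam r."""
    n_tx, n_rx = len(snr), len(snr[0])
    return max(itertools.product(range(n_tx), range(n_rx)),
               key=lambda pair: snr[pair[0]][pair[1]])

snr = [[-3.0, 1.5, -7.2],
       [ 4.1, 9.8, -1.0],   # TX beam 1 / RX beam 1 is the aligned pair
       [-5.5, 0.2,  2.7]]
print(beam_search(snr))      # (1, 1), fed back on the control channels
\end{verbatim}
The sweep cost grows with the product of the two codebook sizes, which is precisely why expedited \gls{csi} acquisition and smarter beam management matter for latency.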
Largely, such reliability-oriented techniques are in line with the idea of overbooking radio resources as a protection against channel vulnerability~\cite{conf:barbarossa_overbookResources_2017}, or with risk-sensitive approaches~\cite{Vu:2018:RSRL}. \vspace*{.2cm} \enablerLat {Proximity-based Computing}: Reducing the distance between the application and the \gls{mec} server is a key latency enabler. This idea is motivated by the concept of bringing the transmitter and the receiver closer to one another, which yields capacity improvements \cite{jnl:5GWhatWillB_Andrews_2014}. With close proximity between the application and the \gls{mec} server, the over-the-air latency, which contributes significantly to the \gls{e2e} latency and sometimes dominates the computing latency, can be greatly reduced. Network densification, the concept of densely deploying small cells, remote radio units, and relay heads, which has attracted considerable research interest in recent years~\cite{UDN_survey,JHParkTWC:15,MobilMFGSG:GC16,MFGSG:GC16,kim:2017:MFCA,Samarakoon16}, plays a major role in proximity-based computing. While boosting capacity and coverage, the dense deployment of access points offers the opportunity of introducing additional computing resources at the network edge. Hence, the user devices in the network are capable of uploading their computational tasks to access points and downloading the corresponding outputs after processing, with high data rates yielding lower latencies. Another proximity-based computing technique is mobility-assisted \gls{mec}. Therein, networks of connected vehicles, \glspl{uav}, and robots with high processing power can assist the computational tasks of the users \cite{Li:2019,Hagenauer:2017}. The high processing power of the above devices, dedicated to the users, provides low computational latencies. Moreover, their flexible connectivity with the users due to mobility, and the high data rates due to proximity, offer lower communication latencies, yielding reduced \gls{e2e} latencies. Computing location swapping is another proximity-based computing method. Therein, groups of users coexist in either physical spaces (located close by) or virtual spaces (interacting and/or sharing computing tasks). In this regard, proximity alone provides low communication latency, yet could yield poorly utilized computational resources. Combining the user groups in virtual space with their physical locations, some users can swap their associated \gls{mec} servers to improve both computing and communication latencies, resulting in better \gls{e2e} performance \cite{Park:2018:VR}. Although proximity-based computing enables low latency in \gls{mec}, the concept itself brings up new challenges for the network design and the resource optimization therein. Increased interference is one of the challenges in both network densification and computing location swapping. Due to the limited availability of both communication and computation resources, increased interference may degrade both uplink and downlink communication, yielding increased \gls{e2e} latency~\cite{romanous15}. In this regard, interference avoidance, management, and mitigation techniques, as well as the use of higher frequency channels, are viable remedies. Another challenge is the frequent handover due to the dynamics of the environment and user mobility~\cite{romanous15,arshad16}. While handover may incur undesirable latencies, the concept of \gls{mxconn} can be utilized, in which users receive computing assistance from several \gls{mec} servers.
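The \gls{e2e} benefit of proximity can also be made concrete with a toy offload-versus-local latency model, sketched below in Python; all parameter values are illustrative assumptions rather than measurements.
\begin{verbatim}
# Toy E2E latency model for the offload-vs-local decision.
def local_latency(task_cycles, device_cps):
    return task_cycles / device_cps

def edge_latency(task_bits, up_bps, task_cycles, server_cps,
                 result_bits, down_bps):
    # offload + remote processing + result download
    return (task_bits / up_bps + task_cycles / server_cps
            + result_bits / down_bps)

task_bits, task_cycles = 2e6, 5e9        # 2 Mb input, 5 Gcycles of work
t_local = local_latency(task_cycles, device_cps=1e9)   # 1 GHz device
t_edge = edge_latency(task_bits, up_bps=100e6,         # 100 Mbps uplink
                      task_cycles=task_cycles,
                      server_cps=20e9,                 # fast edge server
                      result_bits=0.2e6, down_bps=200e6)
print(f"local: {t_local:.2f} s, edge: {t_edge:.2f} s")
# local: 5.00 s, edge: 0.27 s
\end{verbatim}
A shorter distance raises the achievable uplink and downlink rates, shrinking the first and last terms of the edge latency, which is exactly the lever that proximity-based computing pulls.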
\vspace*{.2cm} \enablerLat {Edge Machine Learning}: Inference (or prediction) with low latency is one of the main reasons for \gls{ml} to be popular in \gls{mec}, as well as in several other communication applications such as coding, beamforming, resource optimization, caching, scheduling, routing, and security \cite{wang2018RL,PerfectoVRStreaming,kato17,mao18}. While the majority of the ML-based communication system design literature is rooted in centralized and offline ML techniques, the upturn of mission-critical applications for a massive number of connected devices demands intelligence at the network edge~\cite{Park:2018aa,KaibinIntEdge18}. In contrast to conventional centralized \gls{ml} designs, edge \gls{ml} is capable of generating inferences in an instant at the edge devices, presenting the opportunity to greatly reduce the \gls{e2e} latency in \gls{mec} applications. Such intelligence at the edge devices can 1) predict the uncertainties in channel dynamics, communication and computation resource availability, interference, and network congestion at the local devices; 2) explore and learn about the network environment with minimal additional signaling overheads; and 3) characterize and model the network behavior with which the system performance is analyzed. At the \gls{mec} servers, such prior knowledge provides the opportunity to smartly schedule their computing resources and share the results with the corresponding user devices. Furthermore, in the event of connectivity losses, edge \gls{ml} at the user devices allows decision making within the devices using forecasts of the system behavior, allowing uninterrupted end-user service experiences. This ability to operate offline/off-grid can reduce the number of latency-critical parallel tasks at the \gls{mec} server, whereby the network-wide end-user experience is improved. The challenge of enabling low latency in \gls{mec} via edge \gls{ml} lies in the training latency and inference accuracy therein. In the distributed setting, each edge device lacks access to the large global training dataset, so training over local data can degrade the inference accuracy. To improve the inference accuracy, edge \gls{ml} devices may often need to cooperate with one another or with a centralized helper, which incurs additional overheads and, thus, increased training latency. In this regard, further investigations need to be carried out to optimize the tradeoff between training latency and inference accuracy depending on the design architectures, communication models, and application requirements. \vspace*{.2cm} \enablerLat {Proactive Computing}: Although edge computing is capable of minimizing the latency induced by the high propagation delay of cloud computing, it still experiences delays due to offloading the task data to the edge server and processing it, as well as queuing delays for both operations. While these delays are inevitable in some cases, there exist situations in which the task has already been executed before, for another user at a different time. Take for example an \gls{ar} case in which visitors of a specific spot in an exhibition or museum request the task of augmenting an object to the view of this spot, or the task of object identification by multiple vehicles in an \glspl{its} system. Executing these tasks redundantly each time they are requested is certainly not resource efficient, and causes higher delays to these tasks as well as to other tasks sharing these resources.
Here, executing and caching the results of these tasks in advance, such that they are served with minimal latency when requested, can be a major latency minimizer. The ideas of prefetching tasks \cite{KoPrefetching17} and proactive computing \cite{ElbambyProactive17,Oueis16} aim to develop techniques that learn and predict which tasks are to be requested in the future and pre-compute them. Indeed, the success of proactive computing lies in a well-aimed choice of which tasks to proactively compute and which to leave for real-time processing. Essentially, this involves developing efficient prediction methods that study the popularity patterns of the computing tasks to decide which tasks to prefetch. The idea also relies on the availability of storage capabilities at the edge servers \cite{EjderVR17}. \vspace*{.2cm} \enablerLat {Parallel and Coded Computing}: The computing task data can be distributed over multiple servers in different edge computing scenarios; for example, in a smart vehicle scenario, the navigation map data can be partly stored in several edge servers. Parallel execution of computing tasks over multiple servers significantly impacts the efficiency and speed of task execution. Moreover, it eliminates the need to collect the full task dataset in a single entity. For example, \emph{partial offloading} can be performed, where only a partition of the task is offloaded to where its required input data is available \cite{Kaibin_MEC_survey}. The implementation of parallel computing depends on the correlation between the task partitions, i.e., only partitions that are not dependent on each other can be executed in parallel, whereas dependent tasks have to be executed sequentially. Task dependency graph models and task partitioning \cite{Kaibin_MEC_survey,Chen_EdgeIoT18} are used to tackle the inter-dependency between the different task partitions. A challenge in realizing parallel computing, however, is the resulting high inter-server communication load. Moreover, it suffers from the straggling effect, where a missing result from a single node delays the entire computation process. The concept of coded computing has been shown to address both of these challenges \cite{CodedMR15}. By exploiting the redundancy in the execution of task partitions at different servers, coded multicast messages, e.g., via maximum distance separable (MDS) codes, can be used to deliver the results of the missing partitions simultaneously to multiple servers. This approach significantly reduces the amount of data that has to be communicated between the servers, at the expense of more redundant task executions at each server. Coded computing also helps in minimizing the overall computing latency through minimum latency codes. In a conventional parallel computing task, each server executes a partition of the task and returns its result to the client. In this model, one delayed or failed partition will cause a delay or failure of the entire task. Alternatively, by generating redundant task data that are coded combinations of the original task data and executing these coded tasks, the result can be recovered by decoding the data from only a subset of the servers, eliminating the effect of a delayed or failed result. Optimizing the creation of the redundant coded tasks enables an inverse linear trade-off between the computing latency and the computing load \cite{CodedComputing17}.
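To illustrate the straggler tolerance of such codes, the following Python sketch applies MDS-style coding to a distributed matrix-vector product: the matrix is split into $k$ row blocks, encoded into $n>k$ coded blocks, and the overall result is recovered from any $k$ worker results. This is a minimal illustration of the principle, not the exact schemes of the cited works.
\begin{verbatim}
# MDS-coded matrix-vector multiplication: tolerate n - k stragglers.
import numpy as np

def encode_and_compute(A, x, n, k):
    blocks = np.split(A, k)                  # k row blocks of A
    G = np.vander(np.arange(1.0, n + 1), k,  # any k rows of G are
                  increasing=True)           # invertible (Vandermonde)
    coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]
    return G, [B @ x for B in coded]         # worker i computes coded[i] @ x

def decode(G, finished, results):
    """Recover A @ x from any k finished workers."""
    Y = np.stack([results[i] for i in finished])
    Z = np.linalg.solve(G[finished, :], Y)   # undo the linear coding
    return Z.reshape(-1)

rng = np.random.default_rng(0)
A, x = rng.normal(size=(6, 4)), rng.normal(size=4)
n, k = 5, 3                                  # tolerate up to 2 stragglers
G, results = encode_and_compute(A, x, n, k)
fast_workers = [0, 2, 4]                     # any k results suffice
print(np.allclose(decode(G, fast_workers, results), A @ x))  # True
\end{verbatim}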
\subsubsection{High Reliability Enablers} For \gls{mec} to fulfill its role and run, on devices' behalf, applications that are too demanding to be run locally due to excessive computational and communication power requirements, it needs to be able to operate below stringent latency values that are unachievable in traditional \gls{mcc} systems. In this regard, to exploit both the high capacity of \gls{5g} mobile connections and the extensive computing capabilities located at the edge cloud, the concept of reliability is introduced with a two-fold interpretation. In the first place, we find the classical notion of reliability related to error-robustness guarantees. As such, it can be tackled at different layers, including the reliability of the wireless link at the \gls{phy}. Another fundamental notion of reliability, which has been widely adopted in wireless communications and by standardization bodies such as the \gls{3gpp}, is that of reliability understood as a probabilistic bound on the latency. Understood in its most classical form, it is common that a toll will have to be paid in return for ensuring high reliability, in the form of additional or increased delays. For instance, at the \gls{phy} layer, the use of parity, redundancy, and re-transmission will increase the latency. Also, in multi-user environments, allocating multiple resources to a single user, while clearly beneficial at an individual level, could potentially impact the experienced latency of the remaining users. Next, we will set forth some of the enablers for both notions of reliability. \vspace*{.2cm} \enablerRel{Multi-Connectivity}: Compared to wired transmissions, in wireless environments temporary outages are common due to impairments in the \gls{sinr}. These originate from, among others, the stochasticity of the wireless channels, fluctuating levels of interference, or the mobility of the \glspl{mue}. The term \glsreset{mxconn}\gls{mxconn}~\cite{Mxconn_Architectures_5G} encompasses several techniques developed with the overarching aim of enhancing the effective data rates and the mobility robustness, i.e., the reliability, of wireless links. For that purpose, \gls{mxconn} exploits different forms of diversity to cut down on the number of failed handovers, dropped connections and, generally speaking, \glspl{rlf} that might cause service interruptions~\cite{Soret_ReliabilityLatencyThroughputTradeoffs_2014,Team_fettweis_mxconn_HowReliable_2018}. \gls{mxconn} solutions are classified as intra- or inter-frequency, i.e., depending on whether they operate using the same frequency or, otherwise, combine multiple carrier frequencies. Examples of the former include \gls{comp}~\cite{tech:CoMP_3gppTR36.819} transmissions and \glspl{sfn}~\cite{SFN_Seminal_Eriksson}. \gls{comp} involves a set of techniques that exploit rather than mitigate \gls{ici} to improve the performance at the cell edge. On performing joint processing, dynamic point selection (JP/DPS) or coordinated scheduling and beamforming (CS/CB) in the \glsunset{ul}\gls{ul}/\glsunset{dl}\gls{dl}, \glspl{bs} effectively operate as if assembled in a distributed multiple antenna system. \glspl{sfn} embody a form of synchronous multicell transmission whereby various sources use the same time and frequency resource to non-coherently transmit signals to a receiver. The multiple received copies will then be constructively combined if their propagation delays are tightly bounded or, else, will induce \gls{isi}~\cite{Team_fettweis_mxconn_2017}. As for inter-frequency \gls{mxconn}, \gls{ca}~\cite{tech:CA_3gppTR36.823} and \gls{dc} are its most noteworthy examples.
In \gls{ca}, contiguous or non-contiguous component carriers, possibly allocated to several different \glspl{bs}, are combined, and the scheduling and interference management are orchestrated over these frequency bands, aiming to enhance the resulting system capacity. As for \gls{dc}, this framework provides \gls{mxconn} solutions across frequencies, across \glspl{hetnet} scenarios, and across wireless standards, so that a \gls{ue} will be simultaneously connected, respectively, on two different frequencies, to two different types of \glspl{bs}, or through two different wireless standards~\cite{Team_fettweis_mxconn_SFN_2015}. Recently, the idea of \gls{dc} for \gls{mmwave} and microwave bands has been proposed~\cite{Team_sundeep_mxconn_conf_2016,JHParkTWC:15} as an effective approach to facilitate cellular \gls{mmwave} \gls{ia}~\cite{Team_sundeep_IA_mag_2016} as well as \gls{mmwave} handover~\cite{Team_sundeep_DC_handover_2017}. In like manner, \gls{mmwave} and sub-6~GHz \gls{dc} can team together to augment the reliability of \gls{mmwave}, with the lower band working as a fallback to compensate for eventual \gls{mmwave} channel vulnerability, e.g., to blocking events. Finally, the benefits of integrating communication interface diversity for reliability purposes are also studied in~\cite{team_popovski_latencyUR_2016} in the context of \gls{mtc}. \gls{sfn} operation is proposed in use case 6, detailed in Section \ref{usecaseVR1}. The goal is to protect against \gls{mmwave} channel intermittence by increasing the rate of those links between the \glspl{mmap} and the \glspl{vrp} that would, otherwise, jeopardize the immersive experience. \vspace*{.2cm} \enablerRel{Task Replication}: While \gls{mxconn} can boost the reliability in the presence of channel fluctuations, it requires coordination between the different servers that are connected to the end user. However, when coordination is not possible, reliability can still be enhanced through task replication. Similar to packet replication in data communication, a user can offload a computing task to multiple servers that are not connected to each other and receive the result from whichever has the result ready first. This mechanism provides stronger guarantees of task execution, at the expense of reduced system capacity due to the under-utilization of computing servers. One realization of this concept, proposed in \cite{tail_at_scale} under the name \emph{hedged requests}, is for the user to send one replica of the task to the server that is believed to be most suitable, and then to send another replica to an additional server after some delay. Remaining pending requests are canceled once a result is received from any server. While task replication can be efficient in ensuring reliability in the case of channel dynamics, it incurs a significant additional load. To combat this, one can offload the task to an additional server only when the delay from the first server exceeds a certain threshold \cite{tail_at_scale}. This approach is investigated in \cite{ElbambyEurasip2018}, where it is shown that imposing such a condition can significantly curb the latency variability without inducing much additional load.
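A minimal sketch of this hedged offloading policy is given below using Python's \texttt{asyncio}; the server names, delays, and hedge threshold are made-up placeholders rather than parameters from the cited works.
\begin{verbatim}
# Hedged computing request: offload to the preferred server and, only
# if it has not replied within hedge_delay, replicate to a backup; the
# first result wins and the losing replica is canceled.
import asyncio

async def offload(server, task, delay):
    await asyncio.sleep(delay)       # stand-in for transmission + compute
    return f"{task} done by {server}"

async def hedged_offload(task, hedge_delay=0.05):
    primary = asyncio.create_task(offload("edge-1", task, delay=0.20))
    try:
        return await asyncio.wait_for(asyncio.shield(primary), hedge_delay)
    except asyncio.TimeoutError:
        backup = asyncio.create_task(offload("edge-2", task, delay=0.03))
        done, pending = await asyncio.wait(
            {primary, backup}, return_when=asyncio.FIRST_COMPLETED)
        for replica in pending:      # cancel whichever replica lost
            replica.cancel()
        return done.pop().result()

print(asyncio.run(hedged_offload("renderFrame")))
# renderFrame done by edge-2
\end{verbatim}
\vspace*{.2cm} \enablerRel{Federated Machine Learning}: While performing \gls{ml} inference at the network edge yields low latency, the distributed training of local \gls{ml} models across different edge nodes improves the inference reliability.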
To be specific, each learning agent optimizes its \gls{ml} model during the training phase so as to maximize the inference accuracy over locally available training data. The inference accuracy measured at the training phase is, however, not always identical to the inference accuracy at the test phase, primarily because of unseen training data samples. This accuracy gap is known as the \emph{generalization error}, which measures the inference reliability under unseen data samples~\cite{Bosquet:2004}. A straightforward way to reduce the generalization error is exchanging training data samples among edge nodes. Data exchange, however, incurs extra communication and computation cost, and may not be possible for user-generated private data. To address this problem, \gls{fl} has recently been proposed~\cite{pap:jakub16,Brendan17}, in which edge nodes exchange and aggregate their local \gls{ml} models, thereby preserving data privacy, avoiding extra computation, and reducing the communication overhead when the \gls{ml} model sizes are sufficiently smaller than the data sizes. \Gls{fl} is still a nascent field of research, calling for the co-design of communication, computation, and \gls{ml} architectures~\cite{Park:2018aa,KaibinIntEdge18}. For instance, the original \gls{fl} algorithm has a communication payload size proportional to the \gls{ml} model size, and thus cannot deal with deep neural network models. Proper model compression and parameter quantization techniques are thus needed, while trading the increased communication efficiency off against the reduced accuracy. Furthermore, the server in current \gls{fl} algorithms simply aggregates the uploaded local models, although it has higher computation resources compared to the edge devices. Along with these \gls{fl} architectures, computing task offloading, task scheduling, and resource allocation should be jointly optimized towards achieving reliability under uncertainties in \gls{mec} operations, including unseen data samples, channel fluctuations, and time-varying communication and computation resources. \vspace*{.2cm} \enablerRel{Extreme Event Control}: As mentioned previously, one reliability notion is the probability of violation or failure over a latency bound, which can be mathematically expressed as $\Pr(\mbox{Latency}>L_{\rm bound})$. This probability ranges from $10^{-3}$ to $10^{-9}$, depending on the mission-critical application in 5G networks \cite{MehdiURLLC:18}. To meet the ultra-reliability requirements, we should focus on the extreme events with very low occurrence probabilities. However, in classical communication systems, the design approaches are based on expected metrics, e.g., average rate and average latency, in which the random event realizations with higher probability density function (PDF) values dominate the system performance. In other words, the conventional average-based approaches are inadequate for enhancing the reliability performance; instead, while designing URLLC-enabled MEC systems, we need to take into account the metrics or statistics which are related to or affect the extreme events, such as \begin{itemize} \item the worst-case measurement, e.g., the largest latency in the network, \item the tail/decay behavior of the complementary cumulative distribution function (CCDF), \item the very low bound violation probability, \item the threshold deviation and its higher-order statistics, e.g., variance. \end{itemize}
To analyze these metrics and statistics analytically, extreme value theory (EVT) \cite{EVT:Cole,EVT:Han} is a useful methodology for their mathematical characterization and, thus, provides a powerful framework for extreme event control. Let us introduce the fundamental theorems of EVT as follows, which characterize the aforementioned metrics and their statistics. \begin{theorem}[{\bf Fisher--Tippett--Gnedenko theorem \cite{EVT:Cole}}]\label{Thm: GEV} We consider $n$ independent and identically distributed ({\it i.i.d.})~samples of a random variable $X$, i.e., $X_1,\cdots,X_n\overset{i.i.d.}{\sim}X$, and define $Z_n\coloneqq\max\{X_1,\cdots,X_n\}$. If $Z_n$ converges to a non-degenerate distribution as $n\to\infty$, we can approximate the limit as a generalized extreme value (GEV) distribution, which is characterized by a location parameter $\mu\in\mathbb{R}$, a scale parameter $\sigma >0$, and a shape parameter $\xi \in\mathbb{R}$. \end{theorem} Among them, the shape parameter governs the tail behavior of the GEV distribution \cite{EVT:Han}, which is sorted into three types depending on the value of $\xi$. \begin{enumerate} \item When $\xi>0$, the GEV distribution is \emph{heavy-tailed}: its CCDF decays more slowly than an exponential function. \item When $\xi=0$, the GEV distribution has a \emph{light tail}, in which the CCDF has a thinner tail than an exponential function. \item When $\xi<0$, the GEV distribution is \emph{short-tailed}. That is, the CCDF has a finite upper endpoint at $z=\mu-\sigma/\xi$. \end{enumerate} When $\xi\geq0$, the upper endpoint of the CCDF approaches infinity. \begin{theorem}[{\bf von Mises conditions \cite{EVT:Han}}]\label{Thm: von mises} In Theorem \ref{Thm: GEV}, the characteristic parameters $(\mu,\sigma,\xi)$ of the approximated GEV distribution can be asymptotically found as per $\mu=\lim\limits_{n\to\infty} F_X^{-1}(1-1/n)$, $\sigma=\lim\limits_{n\to\infty} \frac{1}{nf_X(F_X^{-1}(1-1/n))}$, and $\xi=-1-\lim\limits_{x\to\infty}\frac{[1-F_X(x)]f^{'}_X(x)}{[f_X(x)]^2}$. \end{theorem} \begin{theorem}[{\bf Pickands--Balkema--de Haan theorem \cite{EVT:Cole}}]\label{Thm: Pareto} Consider the random variable $X$ in Theorem \ref{Thm: GEV} and a threshold $d$. As $d\to F^{-1}_{X}(1)$, the CCDF of the excess value $Y|_{X>d}=X-d>0$ can be approximated as a generalized Pareto distribution (GPD) whose mean and variance are $\tilde{\sigma}/(1-\xi)$ and $\frac{\tilde{\sigma}^2}{(1-\xi)^2(1-2\xi)}$, respectively. \end{theorem} Analogously to the GEV distribution, the GPD is characterized by a scale parameter $\tilde{\sigma}>0$ and a shape parameter $\xi\in\mathbb{R}$. In Theorems \ref{Thm: GEV} and \ref{Thm: Pareto}, $\xi$ is identical, while $\sigma=\tilde{\sigma}+\xi(\mu-d)$. Note that Theorems \ref{Thm: GEV} and \ref{Thm: von mises} provide a way to characterize the worst-case metric and its tail behavior, whereas Theorem \ref{Thm: Pareto} is directly related to the bound violation and its statistics. Since the characteristic parameters of the GEV distribution and the GPD are identical or related, the results of these three theorems are complementary to one another. Nevertheless, some tradeoffs and dilemmas exist when we apply the results of EVT and estimate the characteristic parameters. For example, we need to trade off data availability against the quality of the tail approximation, which affects the performance, convergence speed, and estimation accuracy.
Specifically, given $N$ {\it i.i.d.}~realizations of $X$ (i.e., $N/n$ realizations of $Z_n$), a larger $n$ theoretically gives a better approximation of the GEV distribution, but slows down the convergence of the parameter estimation due to the lower availability of data samples of $Z_n$. A similar tradeoff between a high threshold $d$ and the availability of threshold-exceeding data can be found in Theorem \ref{Thm: Pareto}. Additionally, if the distribution of $X$, e.g., the delay of a single user, is unknown beforehand, this lack of knowledge makes it difficult to characterize the network-wide largest delay via Theorem \ref{Thm: von mises}. Fortunately, thanks to the mature development of the ML field, the aforementioned issues can be tackled using ML approaches, in which unsupervised learning provides a way to infer a mathematical expression for the unknown distribution, while the lack of available data is addressed in an FL manner by aggregating and averaging the estimated characteristic parameters across all distributed devices. \section{Applications and Use cases} In this section, we elaborate on some of the prospective services and applications for which offloading computing tasks to the edge significantly improves performance in terms of latency and reliability. In particular, we focus on two scenarios where offloading task computing to the network edge will be beneficial: 1) when end users have limited computing capabilities, e.g., \gls{vr} \glspl{hmd}; and 2) when end users have sufficient computing and energy resources, but have access only to a fraction of the information required as computation input, e.g., vehicular edge computing scenarios. We follow by presenting different edge computing use cases in which the \gls{urllc} enablers are utilized. \subsection{Edge Computing Applications} \subsubsection{Extended Reality}\label{usecaseVR} \Gls{xr} is an umbrella term that covers all virtual or combined real-virtual environments, including \gls{vr}, \gls{ar}, and \gls{mr}. These environments differ in the nature of the content a user sees or interacts with. While \gls{vr} describes environments where users are fully immersed in a virtual world, \gls{ar} refers to a view of the real-world environment that is merged with or supplemented by virtual elements or inputs. \gls{ar} can be categorized as a special case of the more general \gls{mr}, which refers to environments that mix together real and virtual elements that can interact with each other. \gls{xr} is anticipated to be one of the leading applications to leverage edge computing. Providing a high-quality \gls{xr} experience comes with a high computation resource demand. At the same time, \gls{xr} applications are highly sensitive to delay. Typically, a maximum \gls{e2e} delay, also known as \gls{mtp} delay, of 15--20 milliseconds can be tolerated in \gls{vr}. Higher delay values trigger what is known as motion sickness, resulting from a visual-motor sensory conflict. This makes it unrealistic to rely on remote cloud servers for processing. On the other hand, processing \gls{xr} locally on the user device has several complications. First, \gls{xr} devices, such as \glspl{hmd} and smartphones, are often equipped with limited compute capabilities. This limitation is due to the device size and manufacturing cost, as well as the need to limit the heat generated from powering the device. Second, running applications on different types of devices, with different hardware, operating systems, and platforms, is a challenging task.
For these reasons, existing standalone \gls{xr} devices often provide limited content quality. Standalone~\gls{vr} headsets operate with reduced frame resolution and frame rate~\cite{VR_Network_Mag}, whereas AR headsets such as Microsoft HoloLens restrict the number of renderable polygons~\cite{AR_EC}. Hence, the success of \gls{xr} requires providing high computation and storage resources close to the end users. In this regard, edge computing is an intuitive solution to provide such services~\cite{conf:Elbamby2018-WCNC}. Today's most powerful \gls{vr} headsets rely on edge computers to perform sophisticated rendering. However, wired connections are still used between the headsets and the edge servers, due to the high rate requirement of \gls{vr} applications. This limits the mobility and convenience of \gls{vr} users and hence decreases the \gls{qoe}. The need for a better \gls{xr} \gls{qoe} and the advancement in wireless communication capabilities motivate the development of wireless \gls{xr} systems that incorporate powerful edge computers and high capacity wireless links~\cite{ABIQualcommVR:17, ParkGC:18,OsvaldoAR:17,conf:Elbamby2018-WCNC,PerfectoVRStreaming}. \Gls{mmwave} communication can provide large spectrum and high data rates, making it a solid candidate for wireless \gls{xr}. Moreover, the directionality of \gls{mmwave} links allows for leveraging multi-user transmission techniques, such as multicasting and broadcasting, to deliver common and correlated content to multiple users in a way that minimizes the communication delay. However, directional \gls{mmwave} links suffer outages due to signal blockage. This affects the link signal quality and increases the channel variability, and hence decreases the link reliability. \gls{mxconn} can be a viable solution to provide robust \gls{mmwave} communication. Using \gls{mxconn}, an \gls{xr} user maintains multiple simultaneous communication links with multiple servers. \vspace*{0.5cm} \subsubsection{Vehicular Edge Computing and V2X/V2V for ADAS}\label{usecaseV2V} Future autonomous driving vehicles, comprised as nodes of the \gls{iov}, a larger mobility network which can be considered an extended application of the \gls{iot} to \glspl{its}~\cite{WinWinMAG_Huawei_2011}, will operate as hubs integrating multiple technologies and consuming and producing massive volumes of data~\cite{RoadmapFutuAuto_McKinsey_2014}. The \glspl{adas} to be equipped in these vehicles, especially those pertaining to the area of traffic safety, heavily depend on reliable and instantaneous decision-making processes that hinge on inputs from multiple sensory data sources, including \gls{lidar}, automotive radar, image processing, computer vision, etc.~\cite{conf:Perfecto2017-EuCNC}. As an example, we can think of successful object identification from \gls{lidar} point clouds, or speed and trajectory prediction for dynamic objects moving within a vehicle's vicinity. Hereof, it is essential that these vehicles are equipped with powerful computing and processing capabilities to swiftly handle high data volumes, rather than solely relying on cloud services that, in the above example, may classify the objects or predict trajectories from raw data with higher accuracy but, possibly, incur unacceptable delays in doing so.
Moreover, for next-generation \gls{adas} it is envisaged that vehicles will communicate with each other as well as with an increasingly intelligent roadway infrastructure through the use of \gls{v2x} and \gls{v2v} communications, ultimately exploiting high capacity \gls{mmwave} links~\cite{jnl:Perfecto2017-JSAC,conf:Perfecto2017-interplay}. Consequently, the cumbersome volume of locally generated data could be exacerbated by the acquisition of data from both the environment and surrounding vehicles. Indeed, vehicular edge computing will play a pivotal role in supporting delay-sensitive as well as future emerging multimedia-rich applications in vehicular networks. This is buttressed by the growing body of literature devoted to content-centric applications of vehicular \gls{mec} \cite{VehicularEdge_ContentCentric_2016,VehicularEdge_PredOffloading_2017,VehicularEdge_ContentDelivery_2018,VehicularEdge_BigDataEE_2018}, frequently combined with \gls{ml} to provide reliability through edge analytics~\cite{ITS_DL_MEC_Walid_2019}, to leverage huge volumes of information~\cite{VehicularEdge_BigDataEE_2018}, or to provide an integrated framework for the dynamic orchestration of networking, caching, and computing resources in next-generation vehicular networks~\cite{Vehicular_DL_Caching+Computing_2018}. Not being nearly as tightly constrained by size or by access to a power supply as their \gls{iot} device or smartphone counterparts, the computational and storage capabilities of vehicular terminals could allow them to run resource-hungry applications locally or collaboratively, using vehicles as the infrastructure for communication and computation, as proposed in~\cite{Vehicular_VehAsInfrastructures_2016}\footnote{However, given the longer product life-span in the automotive industry (according to the US Department of Transportation, as of 2018 the average age of on-the-road vehicles is over 11 years~\cite{CarLifeSpan_2018}), onboard \gls{cpu}/\gls{gpu} processing capabilities could quickly turn obsolete.}. In this regard, provided that computing and processing capabilities may not be the limiting factor, a second advantage of running these applications at the network edge is substantiated by the availability, in edge servers, of data collected from multiple vehicles. Access to this information, raw or preprocessed, can augment individual vehicles' situational awareness by extending their own sensing range. Resorting to edge contents can thus provide \emph{a bigger picture} at acceptable delays. The latter idea is exemplified in the third use case in the upcoming Section \ref{usecaseEVTFL-V2V}, where the information from different vehicles is combined at the network edge following \gls{fl} principles and used to refine a global model of the transmission queue length distribution, for the purpose of providing ultra-reliable low-latency \gls{v2v} communications. \subsection{Use Cases} Next, we present different case studies in which the \gls{urllc} enablers are utilized in edge computing settings. \vspace*{0.2cm} \label{usecaseDistML} \usecase{Edge Computing for Federated Machine Learning}: As addressed in Sect. 3.2.1 and 3.2.2, edge \gls{ml} is envisaged to be a key enabler for \gls{urllc}, in which both the inference and training processes of \gls{ml} models, e.g., \glspl{nn}, are pushed down to the network edge~\cite{Park:2018aa}.
This direction of edge \gls{ml} has been fueled by \gls{fl}~\cite{pap:jakub16,Brendan17,KimWCL:18,Amiri:2019,Ha:19,Wang:2018aa} under a \textit{data split} architecture (see Fig.~\ref{Fig:split_helper_device}), where edge devices collectively train local models with their own user-generated data via a coordinating edge server that aggregates locally computed model updates, referred to as \gls{msi}. The \gls{mec} framework can further improve \gls{fl} by its co-design with training architectures and algorithms. In view of this, on the one hand, each edge device is able to optimize the \gls{msi} type depending on the \gls{nn} model size and channel quality. As done in \gls{fl}, one can exchange the model parameter \gls{msi}, whose payload size is proportional to the model size; this is not feasible for deep \glspl{nn} under poor channel conditions. Alternatively, one can exchange model output \gls{msi}, whose payload size is independent of the model size, referred to as federated distillation (FD)~\cite{Jeong:18}. As shown in Fig.~\ref{Fig:FD_comm}, this fundamentally results in FD having a far smaller communication payload per \gls{msi} exchange than FL, and can thereby better cope with poor channel conditions.
\begin{figure}[t!] \centering \subfigure[Data split.]{ \includegraphics[width=.45\columnwidth]{Fig_split_h-d.pdf} \label{Fig:split_helper_device} } \subfigure[Model split.]{ \includegraphics[width=.45\columnwidth]{Fig_split_model.pdf} \label{Fig:split_model} } \caption{Edge \gls{ml} architectural splits: (a) data split and (b) model split.} \label{Fig:arch_split} \end{figure}
\begin{figure}[t!] \centering \subfigure[Communication cost.]{ \includegraphics[width=.8\columnwidth]{FD_comm_cropped.pdf} \label{Fig:FD_comm} } \subfigure[Test accuracy.]{ \includegraphics[width=.8\columnwidth]{FD_cropped.pdf} \label{Fig:FD} } \caption{Communication cost and inference accuracy of federated learning~(\gls{fl}) and federated distillation (FD) with or without federated augmentation (FAug) in the MNIST classification problem, where each device stores a 5-layer convolutional neural network (CNN). For FAug, the conditional generative adversarial network (GAN) consists of a 4-layer generator \gls{nn} and another 4-layer discriminator \gls{nn}. }\label{Fig:FDFL} \end{figure}
On the other hand, the edge server can assist in the training process by exploiting its extra computation and communication resources. A compelling example is to rectify the non-IID training dataset incurred by the user-generated nature of data, wherein entirely uncorrelated (non-identical) and/or too similar (non-independent) data samples across devices negate the benefit of distributed training~\cite{GoodfellowBook:16}. To this end, in federated augmentation (FAug)~\cite{Jeong:18}, the edge server first collects a few seed samples from the edge devices and oversamples them (e.g., via Google's image search for visual data) through its fast connection to the Internet. Then, the edge server can utilize its high computing power to train a generative model (e.g., a conditional generative adversarial network (GAN)~\cite{CondGAN14}). Downloading the trained generator empowers each device to locally augment deficient data samples until reaching an IID training dataset. With FAug, both FL and FD yield higher test accuracy as shown in Fig.~\ref{Fig:FD}, at the cost of a slight increase in communication overhead as illustrated in Fig.~\ref{Fig:FD_comm}.
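To make the two \gls{msi} types concrete, the following minimal sketch contrasts the server-side aggregation step of FL with that of FD. The sketch is ours and is not tied to any specific framework; function names and toy dimensions are illustrative only. The point to note is that the FL payload scales with the model size, whereas the FD payload is fixed by the number of labels and classes.
\begin{verbatim}
import numpy as np

def fedavg_aggregate(local_weights, num_samples):
    """FL-style MSI: weighted average of locally trained model parameters."""
    w = np.asarray(num_samples, dtype=float)
    w /= w.sum()
    return (w[:, None] * np.stack(local_weights)).sum(axis=0)

def fd_aggregate(local_logits):
    """FD-style MSI: average of per-label mean model outputs, whose size
    is independent of the model size."""
    return np.mean(np.stack(local_logits), axis=0)

rng = np.random.default_rng(0)
models = [rng.normal(size=10) for _ in range(3)]      # 3 devices, 10 params
logits = [rng.normal(size=(4, 4)) for _ in range(3)]  # 4 labels x 4 classes
print(fedavg_aggregate(models, [100, 50, 150]).shape)  # (10,)
print(fd_aggregate(logits).shape)                      # (4, 4)
\end{verbatim}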
Lastly, a very deep \gls{nn} (e.g., the Inception V4 \gls{nn} model consuming 44.3 GB~\cite{Wang:2018:inception}) cannot fit into a single device's memory, and has to be partitioned into multiple segments stored across edge devices and the server, i.e., \textit{model split} (see Fig.~\ref{Fig:split_model}). Here, the model's local and offloaded computations should be orchestrated over wireless links by optimizing the partitioning strategy based on the \gls{nn}'s topology and constituent layers. This calls for a novel \gls{mec} framework that takes into account not only communication and computation resources but also \gls{nn} forward and backward propagation dynamics intertwined with channel dynamics. \vspace*{0.3cm} \label{usecaseEVT} \usecase{Extreme Event-Controlled MEC}: For the extreme event-controlling computation and communication co-design in \cite{CFLiu_MECTaskOffloading_Globecom,CFLiu_URLLC_MECTaskOffloading_TCOM}, we studied a multi-user \gls{mec} scenario as shown in Fig.~\ref{Fig: TCOM system model}, in which multiple \gls{mec} servers with different computation capabilities are deployed. In this setting, each \gls{ue} manages its local resource (i.e., total power budget) for computation and communication, i.e., task offloading, while the \gls{mec} server schedules its computational resources for the \glspl{ue}' offloaded tasks. Herein, we consider the length of the task queue as a latency measure, since queuing latency is reflected by the queue length. Regarding reliability, we focus on the bound violation probability and higher-order statistics of threshold deviation, as highlighted in high reliability enabler 4. In this regard, we first impose a constraint on the queue length\footnote{The notation $Q$ generalizes the lengths of all task queues at the \glspl{ue} and \gls{mec} servers.} bound violation probability as
\begin{figure*}[t] \centering \includegraphics[width=.75\textwidth]{CF-System} \caption{Extreme Event-Controlled MEC architecture.} \label{Fig: TCOM system model} \end{figure*}
\begin{figure*}[t!] \centering \subfigure[]{ \includegraphics[width=.32\textwidth]{PIEEEFig3b} \label{Fig:TCOM_results_1} } \subfigure[]{ \includegraphics[width=.32\textwidth]{PIEEEFig5j} \label{Fig:TCOM_results_2} } \subfigure[]{ \includegraphics[width=.32\textwidth]{PIEEEFig5f} \label{Fig:TCOM_results_3} } \caption{(a) Tail distributions of the excess queue length and the approximated GPD of exceedances, (b) 99th percentile of the queue length, and (c) mean and standard deviation of exceedances over the 99th percentile queue length, versus processing density.} \label{Fig:TCOM_results} \end{figure*}
\begin{align} &\lim\limits_{T\to\infty}\frac{1}{T}\sum\limits_{t=1}^{T}\Pr\big(Q(t)> d \big)\leq \epsilon\ll 1.\label{Eq: Violation-Loc-Prob} \end{align}
Here, $d$ and $\epsilon$ are the given bound and the tolerable violation probability, respectively. Let us further focus on the excess value over the bound $d$, which is denoted by $X(t)|_{Q(t)> d}=Q(t)- d>0$. By applying Theorem \ref{Thm: Pareto}, we approximate the distribution of the exceedances by a GPD with characteristic parameters $(\tilde{\sigma},\xi)$. The mean and variance are $\mathbb{E}\big[X(t)|Q(t)> d \big] \approx\frac{\tilde{\sigma}}{1-\xi}$ and $\mbox{Var}\big(X(t) |Q(t)> d\big) \approx\frac{\tilde{\sigma}^2}{(1-\xi)^2(1-2\xi)}$, respectively. We can see that the smaller $\tilde{\sigma}$ and $\xi$ are, the smaller the mean and variance.
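For concreteness, the following sketch, which is ours and uses synthetic queue-length samples rather than the traces of \cite{CFLiu_URLLC_MECTaskOffloading_TCOM}, fits the GPD parameters $(\tilde{\sigma},\xi)$ to the exceedances over a bound $d$ and compares the two moments above with their empirical counterparts.
\begin{verbatim}
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
queue = rng.exponential(scale=1e4, size=200_000)  # illustrative Q(t) samples
d = np.quantile(queue, 0.99)                      # bound: 99th percentile
excess = queue[queue > d] - d                     # X(t) = Q(t) - d > 0

# Fit the GPD to the exceedances; scipy's parametrization uses
# c = xi (shape) and scale = sigma_tilde, with the location fixed at 0.
xi, _, sigma = genpareto.fit(excess, floc=0)

mean_approx = sigma / (1 - xi)
var_approx = sigma**2 / ((1 - xi)**2 * (1 - 2 * xi))
print(f"xi={xi:.3f}, sigma={sigma:.1f}")
print(f"mean: {mean_approx:.1f} (GPD) vs {excess.mean():.1f} (empirical)")
print(f"var:  {var_approx:.1f} (GPD) vs {excess.var():.1f} (empirical)")
\end{verbatim}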
Since the approximated GPD is fully characterized by the scale and shape parameters, we impose thresholds on these two parameters, i.e., $\tilde{\sigma}\leq \tilde{\sigma}^{\rm th}$ and $\xi\leq \xi^{\rm th}$. Subsequently, applying the two parameter thresholds and $\mbox{Var}(X)=\mathbb{E}[X^2]-\mathbb{E}[X]^2$, we consider the conditional constraints on the mean and second moment of the excess queue length
\begin{align} &\lim\limits_{T\to\infty}\frac{1}{T}\sum\limits_{t=1}^{T}\mathbb{E}\big[X (t)|Q(t)\!>\! d \big]\!\leq\! \frac{\tilde{\sigma}^{\rm th}}{1-\xi^{\rm th}},\label{Eq: GPD-Loc-mean} \\&\lim\limits_{T\to\infty}\frac{1}{T}\sum\limits_{t=1}^{T}\mathbb{E}\big[[X(t)]^2 |Q(t)\!>\! d\big]\!\leq\!\frac{2\big(\tilde{\sigma}^{\rm th}\big)^2}{\big(1-\xi^{\rm th}\big)\big(1-2\xi^{\rm th}\big)}.\label{Eq: GPD-Loc-var} \end{align}
Taking into account the above three requirements for the extreme events, we trade off the \gls{ue}'s computation power and communication power in the extreme event-controlling computation and communication co-design. The effectiveness of characterizing threshold deviation by the Pickands--Balkema--de Haan theorem, i.e., Theorem~\ref{Thm: Pareto}, is verified in Fig.~\ref{Fig:TCOM_results_1}. Therein, $\Pr(Q>d)=3.4\times 10^{-3}$ with $d=3.96\times 10^{4}$. Additionally, compared with the schemes without edge computing and without local computation capability, the extreme event-controlling approach achieves better performance in the considered MEC system in terms of the extreme event-related metrics shown in Fig.~\ref{Fig:TCOM_results_2} and Fig.~\ref{Fig:TCOM_results_3}.
\begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{Draft_SUMUDUv4.pdf} \caption{Operational structure of EVT parametric FL ($\texttt{extFL}$).} \label{fig:v2v_fl_mle} \end{figure*}
\vspace*{0.35cm} \label{usecaseEVTFL-V2V} \usecase{EVT/FL Ultra-Reliable Low-Latency V2V Communication}: The idea of how to combine \gls{evt} and \gls{fl} to enable \gls{urllc} in vehicular communication networks, referred to as $\texttt{extFL}$, is discussed in our preliminary study~\cite{FL_v2x} and illustrated in Fig.~\ref{fig:v2v_fl_mle}. Here, vehicles observe their queue length samples and utilize the tail distribution of queue lengths at the vehicular transmitters over the whole edge network to optimize their transmission decisions such that the worst-case queue lengths are minimized while ensuring reliability in terms of queuing latency. The analytical parametric model of the aforementioned tail distribution is obtained via \gls{evt}. Naturally, the evaluation of the above parameters is carried out by gathering all queue length samples at a central controller, the \gls{mec} server, at the additional cost of communication and computation overheads. In contrast to the centralized approach, here \gls{fl} is used to reduce the communication payload by allowing individual vehicles to learn the tail distribution by exchanging a simplified model (two gradient values) instead of their raw local queue length samples, i.e., enabling URLLC with the aid of ML at the edge devices. The goal is thus to minimize the network-wide power consumption of a set of \glspl{vue} while ensuring low queuing latencies with high reliability. However, there still exist worst-case \glspl{vue} experiencing high latencies with low probability, whose performance losses are captured by extreme events pertaining to vehicles' queue lengths exceeding a predefined threshold with non-negligible probability.
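A minimal sketch of the learning mechanism just outlined and detailed next is given below; the synchronous rounds, the plain gradient ascent and the step size are simplifying assumptions of ours. Each vehicle computes the two gradient values of its local GPD log-likelihood, and the server averages them to update the shared estimate of $(\tilde{\sigma},\xi)$.
\begin{verbatim}
import numpy as np

def gpd_loglik_grad(theta, x):
    """Mean gradient of the GPD log-likelihood w.r.t. (sigma, xi): the two
    gradient values a vehicle shares in place of its raw queue samples."""
    sigma, xi = theta
    z = 1.0 + xi * x / sigma
    d_sigma = np.mean(-1.0 / sigma
                      + (1.0 + xi) * x / (sigma * (sigma + xi * x)))
    d_xi = np.mean(np.log(z) / xi**2 - (1.0 / xi + 1.0) * (x / sigma) / z)
    return np.array([d_sigma, d_xi])

rng = np.random.default_rng(2)
sig_true, xi_true = 2.0, 0.2
# Local exceedance samples at each of 4 vehicles (inverse-CDF GPD draws).
data = [sig_true / xi_true * ((1 - rng.random(500)) ** -xi_true - 1)
        for _ in range(4)]

theta, lr = np.array([1.0, 0.1]), 0.05   # shared global model (sigma, xi)
for _ in range(2000):                    # FL rounds
    grads = [gpd_loglik_grad(theta, x) for x in data]  # local models
    theta = theta + lr * np.mean(grads, axis=0)        # server averaging
print(theta)                             # moves toward (2.0, 0.2)
\end{verbatim}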
The principles of \gls{evt} characterize the tail distribution of the queue lengths exceeding a predefined threshold by a generalized Pareto distribution with two parameters, scale and shape. The concepts of \gls{mle} are used along with \gls{fl} to estimate the scale and shape parameters of the queue tail distribution locally at each \gls{vue} over the queue length samples. Therein, the local estimates and the gradients of the \gls{mle}, known as the \emph{local model} of each \gls{vue}, are occasionally shared with the \gls{mec} server. The \gls{mec} server performs model averaging and shares the \emph{global model} with the \glspl{vue} to update their local estimates. Using the knowledge of the tail distribution over the network, the transmit power of each \gls{vue} is optimized to reduce the worst-case queuing delays. Fig.~\ref{fig:FLvsCEN} compares the amount of data exchanged and the achieved V2V communication reliability of $\texttt{extFL}$ with a centralized tail distribution estimation model, denoted as $\texttt{CEN}$. Note that the $\texttt{CEN}$ method requires all \glspl{vue} to upload all their queue length samples to the RSU and to receive the estimated GPD parameters. In contrast, in $\texttt{extFL}$, \glspl{vue} upload their locally estimated learning models and receive the global estimate of the model. As a result, $\texttt{extFL}$ yields equivalent or better end-user reliability compared to $\texttt{CEN}$ for denser networks, while reducing the amount of data exchanged between the \glspl{vue} and the RSU.
\begin{figure}[t!] \hspace{5pt} \subfigure[]{ \includegraphics[width=.9\columnwidth]{Sumudu_Simu01.pdf} \label{fig:FLvsCEN}} \hspace*{18pt} \subfigure[]{ \includegraphics[width=.8\columnwidth]{Sumudu_Simu02.pdf} \label{fig:v2v_queues}} \caption{Comparison between $\texttt{CEN}$ and $\texttt{extFL}$. (a) The amount of data exchanged between RSU and VUEs (left axis) and the achieved reliability (right axis). (b) Mean and variance of the worst-case VUE queue lengths.} \label{fig:accuracy} \end{figure}
The worst-case \gls{vue} queue lengths, i.e., queue lengths exceeding $\queue_0$, are compared in Fig.~\ref{fig:v2v_queues}. Here, the mean indicates the average queuing latency of the worst-case \glspl{vue}, while the variance highlights the uncertainty of the latency. As the number of \glspl{vue} increases, it can be noted that both the mean and variance in $\texttt{extFL}$ are lower than the ones in $\texttt{CEN}$. The reason for the above improvement is the reduced training latency of $\texttt{extFL}$ over $\texttt{CEN}$.
\begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{DQN-recoloured.pdf} \caption{Illustration of deep reinforcement learning for mobile-edge computing performance optimization.} \label{fig:dqn} \end{figure*}
\vspace*{0.2cm} \label{usecaseDQN} \usecase{Deep Reinforcement Learning for Optimized Edge Computing Task Offloading}: Task offloading decision-making in edge computing networks is challenging in the presence of environmental dynamics. This situation is aggravated in ultra-dense networks, where solutions to break the \emph{curse of dimensionality} are desperately needed. In the works \cite{Chen1801,Chen1802}, a discrete-time Markov decision process was adopted to model the problem of expected long-term \gls{mec} performance optimization in an ultra-dense radio access network, where a number of \glspl{bs} are available for computation task offloading.
For a representative wireless-charging-enabled \gls{mue}, the decision of whether to execute an arriving computation task at the local mobile device or to offload the task for edge server execution via one of the \glspl{bs} should adapt to the environment dynamics in an intelligent manner. These environment dynamics may consist of random computation task arrivals, time-varying communication qualities between the \gls{mue} and the \glspl{bs}, and the sporadic energy availability at the mobile device. The challenge in solving this problem lies in the lack of any a priori knowledge of the statistics of the environment dynamics, along with the high-dimensional state space. A deep reinforcement learning technique provides the means to approach an optimal solution. More specifically, the objective of the \gls{mue} is to minimize an expected infinite-horizon discounted cost given by
\begin{align}\label{expeCost} Q(s, a) = \textsf{E}\!\!\left[ \sum_{t = 1}^\infty (\gamma)^{t - 1} \cdot c\!\left(s^t, a^t\right) | s^1 = s, a^1 = a\right], \end{align}
where $\gamma \in [0, 1)$ is the discount factor, while the immediate cost $c\!\left(s^t, a^t\right)$ after performing an action $a^t$ under a state $s^t$ at each time slot $t$ takes into account the incurred task execution delay and the penalty of failing to process an arriving computation task. Once we obtain the optimal $Q$-function, the optimal action $a^*$ can be taken by the \gls{mue} following $a^* = \arg\min_a Q(s, a)$ under a state $s$. Instead of using conventional $Q$-learning to find the optimal $Q$-function, we resort to a \gls{dqn} \cite{Mnih15} $Q(s, a; \bm\theta)$ to approximate $Q(s, a)$, with $\bm\theta$ being the set of parameters of the neural network. The procedure of deep reinforcement learning for \gls{mec} performance optimization is briefly depicted in Fig.~\ref{fig:dqn}. In Fig.~\ref{simu01}, we compare the average cost performance of the \texttt{Proposed} deep reinforcement learning algorithm with three baselines: 1) \texttt{Local} -- Whenever a computation task arrives, the \gls{mue} executes it at the local mobile device using the queued energy units; 2) \texttt{Server} -- All arriving computation tasks are offloaded to the edge server for computing via the \glspl{bs} with the best communication qualities; and 3) \texttt{Greedy} -- When the computation task queue as well as the energy queue are not empty at a time slot, the \gls{mue} decides to execute the task locally or at the cloud so as to achieve the minimum immediate cost. We configure a \gls{dqn} of one hidden layer with $512$ neurons. The replay memory is assumed to have a capacity of $5000$, and we select the size of the mini-batch as $100$. From Fig.~\ref{simu01}, we can clearly see that, compared to the baselines, the deep reinforcement learning algorithm realizes the best average cost performance. A higher task arrival probability $\rho$ indicates a longer average task execution delay, hence a larger average cost. As the average energy arrival rate increases, the average cost improves due to fewer failures to process arriving computation tasks.
\begin{figure}[t] \centering \includegraphics[width=.95\columnwidth]{DQN_Simu01.pdf} \caption{Average cost per time slot versus average energy arrival rate under \texttt{MILD} ($\rho\!=\!0.3$) and \texttt{HEAVY} ($\rho\!=\!0.5$) task arrival probabilities, respectively represented with solid and dashed lines.} \label{simu01} \end{figure}
\begin{figure*}[ht!]
\centering \includegraphics[width=\textwidth]{MEC-FoV-VR_Scenario.pdf} \caption{Operational structure and building blocks of the edge controller that coordinates the DRNN FoV prediction-aided proactive content quality adaptation for the \gls{mmwave} $360^{\circ}$ VR video streaming.} \label{fig:VR_theater_scenario} \end{figure*}
\begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{Fig-chunk_size_50s_3v_LyapunovCombined_2C.pdf} \caption{(a) Average delay, (b) $99$th percentile delay, (c) HD delivery rate and (d) Jaccard index performance in \texttt{sT-3v}, respectively, as a function of the HD chunk size, for $V\!=\!3$ videos, $K\!=\!2\times\!V$ clusters, $T_H\!=\!5$ frames, and Lyapunov trade-off $V_{\delta}\!=\!1\!\cdot\!10^8$ and $V_{\delta}\!=\!1\!\cdot\!10^9$.} \label{fig:360VR_chunkSize} \end{figure*}
\vspace*{0.2cm} \usecase{Edge ML Enabled 360$^\circ$ VR Multicast Transmission}:\label{usecaseVR2} Our previous work in~\cite{PerfectoVRStreaming} considered merging \gls{ml} and \gls{mmwave} multicasting to optimize the proactive wireless streaming of FoV-based \gls{hd} 360$^\circ$ videos in a multi-user \gls{vr} environment with low latency guarantees. Here, the use of edge \gls{ml} to predict users' \gls{fov} in advance is pivotal to leverage inter-user correlations and curb the latency. These predicted correlations will ultimately drive both how contents are transmitted and the beamforming decisions at the \gls{mmwave} base stations. A VR theater scenario consisting of a network of VR users watching different HD $360^{\circ}$ VR videos streamed in the \gls{mmwave} band over a set of distributed \glspl{sbs} is studied. The \glspl{sbs} report users' \gls{6dof} pose as well as \gls{csi}, and produce multiple spatially orthogonal beams to serve shared \gls{fov} video content to groups of users (multicast) or to individual users (unicast), following the scheduling decisions adopted at the edge controller. By optimizing video frame admission and user scheduling, the goal is to provide a highly reliable broadband service for \gls{vr} users that delivers \gls{hd} videos with a latency below the \gls{mtp} latency limits with very high probability. To achieve this proactive content transmission and perform head movement pattern recognition to predict users' upcoming tiled-\gls{fov}, a sequential learning model based on \glspl{gru}~\cite{GRUseminal,GRUseminal2} is selected. Specifically, \glspl{gru} are a form of \glspl{rnn} that include a double gating mechanism to govern the impact of past hidden states over the new output states and effectively tackle long-term dependencies. To that end, an architecture based on two stacked layers of \gls{gru} cells with a hidden state size of 512, separated by a \gls{relu} activation, is adopted. The output is then fed to a serial-to-parallel (S/P) layer and to a dense neural layer. Given the multi-label nature of the learning model, a sigmoid activation layer maps the $N$-sized dense output to the $N$ logits, one for each tile in the \gls{eqr} projection of the 360$^\circ$ \gls{vr} video frame, which are binarized with a cutoff layer such that
\begin{equation} \widehat {y}_{u,n}^{f_p}= \begin{cases} 1, & \sigma(\bm{W}_{d}\bm{h}_f^{(2)}+\bm{b}_d)_n\geq \gamma_{th},\\ 0, & \text{otherwise}, \end{cases} \end{equation}
where $\bm{W}_{d}$, $\bm{b}_d$ are the weights and biases of the dense fully-connected layer and $\gamma_{th}$ is the threshold value for the cutoff layer.
The predicted FoV for a user $u$ and frame index \smash{$f_p=f+T_H$} is retrieved as $\widehat{\mathcal{N}}_{u}^{f_p}=\{ n\in [1,...,N]\hspace{-1mm}:\widehat {y}_{u,n}^{f_p}=1\}$. Fig.~\ref{fig:VR_theater_scenario} provides an overview of the building blocks of the scheme. The output of the DRNN is fed to a user clustering module, whose output in turn constitutes one of the inputs of a scheduler based on the Lyapunov drift-plus-penalty approach. In addition to our proposed scheme \texttt{MPROAC+}, the performance of three reference baselines with reactive unicast, reactive multicast, and proactive multicast transmission capabilities, namely \texttt{UREAC}, \texttt{MREAC}, and \texttt{MPROAC}, is evaluated. Our proposed approach incorporates a penalty whereby quality is traded in exchange for not violating a maximum latency bound. For simulation purposes, a small theater with capacity for 50 users is selected, with \glspl{sbs} located at ceiling level in its four upper corners. Fig.~\ref{fig:360VR_chunkSize} evaluates the impact of the requested \gls{hd} video quality by representing the average and $99^{th}$ percentile delays, the HD delivery rate, and the Jaccard index measured while 30 users watch one out of the 3 available VR videos, for an increasing requested video chunk size. Fig.~\ref{fig:360VR_chunkSize} clearly shows the tradeoff between frame delay and \gls{hd} streaming rate. As the chunk size increases, the average and 99th percentile delays increase for the different schemes. Moreover, comparing \texttt{UREAC} with the other schemes, it is shown that multicasting brings a $40-50\%$ increase in the \gls{hd} rate and a $33-70\%$ latency reduction through the utilization of the shared \glspl{fov} of different users. By delivering the predicted frames in advance, both \texttt{MPROAC} and \texttt{MPROAC+} minimize the average delay without sacrificing the HD quality rate. Moreover, our proposed \texttt{MPROAC+} scheme is shown to also keep the worst delay values bounded, owing to the constraint imposed on the latency. The tradeoff between frame delay and quality is further illustrated when the results for different values of the Lyapunov parameter $V_\delta$ are compared; as $V_\delta$ increases, the scheduling algorithm prioritizes maximizing users' \gls{hd} delivery rate, whereas at lower values of $V_\delta$ the scheduler prioritizes keeping the delay bounded with high probability, which comes at the expense of a lower \gls{hd} delivery rate. Lastly, the Jaccard similarity in Fig.~\ref{fig:360VR_chunkSize}(d) illustrates the tradeoff between effective and transmitted contents. At low traffic loads, the Jaccard index is low, owing to the large amount of excess data delivered when transmitting an estimated user/cluster-level \gls{fov}. As the traffic load increases, the proactive schemes transmit more real-time frames, which increases the Jaccard index. The Jaccard index decreases again at higher traffic loads as the effect of missed frames increases (once the average delay is close to reaching the deadline, as can be seen in Fig.~\ref{fig:360VR_chunkSize}(a)).
\begin{figure*}[ht] \centering \includegraphics[width=0.70\textwidth]{MEC_VR_Scenario.pdf} \caption{Representation of a group of VR gaming arcades where HD frame computation is offloaded to a MEC platform such that input actions of the \glspl{vrp} might impact the virtual environment shown to a subset of the remaining \glspl{vrp}.
The detailed view of the bottom arcade also illustrates several \gls{los} and \gls{nlos} \gls{mmwave} link states, e.g., link blockage and \gls{vrp}-\gls{mmap} beam misalignment.} \label{fig:VR_scenario} \end{figure*}
\vspace*{0.2cm} \usecase{MEC Enabled Multi-User VR Gaming Arcade}:\label{usecaseVR1} We consider a practical use case of wireless \gls{vr} delivering a low-latency service in a multi-user scenario of users playing \gls{vr} video games in a gaming arcade. This scenario, which is fully detailed in our previous work~\cite{conf:Elbamby2018-WCNC}, is highly demanding due to the tight latency tolerance of \gls{vr} as well as the state dynamics of each user, driven by the game-specific actions taken by themselves or by other players, which affect what content should be shown to them. The users are served wirelessly through multiple \glspl{mmap} wired to edge computing and storage servers. These servers receive the users' \gls{3d} location coordinates, their \gls{3d} pose that consists of roll, pitch, and yaw angles, and their game-related actions. The servers then render the corresponding frames in \gls{hd} resolution and deliver them wirelessly to the users. Hence, the latency consists of the processing latency at the server and the communication latency to deliver the \gls{hd} frames, expressed as
\begin{equation} D_{uf}(t)=\xi_{fu}(D_{uf}^{\textrm{cp}}(t)+D_{uf}^{\textrm{cm}}(t)+\text{\ensuremath{\tau}}_{\textrm{EP}}), \end{equation}
where $\xi_{fu}$ represents a binary indicator that equals $1$ when the \gls{hd} video frame is delivered to \gls{vrp} $u$ and equals $0$ if the \gls{lq} frame is delivered, $D_{uf}^{\textrm{cp}}$ and $D_{uf}^{\textrm{cm}}$ are the computing and communication delays of HD frame $f$ initiated from user $u$, and $\text{\ensuremath{\tau}}_{\textrm{EP}}$ is the processing latency which accounts for the edge server processing, storage processing, and the UL transmission of user pose and action data. Let the computing delay $D_{uf}^{\textrm{cp}}$ be expressed as follows: \vspace{-0.1cm}
\begin{equation} D_{uf}^{\textrm{cp}}(t)=\biggl(\frac{\kappa L_{fu}^{\textrm{HD}}}{c_{e}}+W_{uf}(t)\biggr)z_{fu}(t)(1-y_{fu}(t)), \end{equation}
where $c_{e}$ is the computation capability of edge server $e$, $z_{fu}(t)$ and $y_{fu}(t)$ indicate that the video frame $f$ of user $u$ is scheduled for computing, and is cached in the fog network at time instant $t$, respectively, and $W_{uf}$ is the computation waiting time of HD frame $f$ of user $u$ in the service queue $Q(t)$. Furthermore, let the communication delay $D_{uf}^{\textrm{cm}}$ be given as \vspace{-0.4cm}
\begin{equation} D_{uf}^{\textrm{cm}}(t)\hspace{-0.7mm}=\hspace{-0.7mm}\arg\min_{d_{u}}\hspace{-2mm}\sum_{t'=D_{uf}^{\textrm{cp}}(t)+1}^{D_{uf}^{\textrm{cp}}(t)+d_{u}}\hspace{-0.5mm}\biggl(T_{t}r_{u}(t')\geq L_{fu}^{\textrm{HD}}\biggr), \end{equation}
where the \emph{$\arg\min$} function finds the minimum number of time slots needed for the video frame $f$ to be delivered. Here, we study two enablers to minimize the latency and boost the reliability of the \gls{vr} gaming experience. For the computing latency, we investigate how prior knowledge of users' future pose, obtained using prediction methods, affects the computing latency.
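As a numerical illustration of these delay components, the following sketch, with placeholder figures of ours rather than the simulation parameters of~\cite{conf:Elbamby2018-WCNC}, evaluates the computing delay of an uncached \gls{hd} frame and the slot-based communication delay defined by the $\arg\min$ above.
\begin{verbatim}
import numpy as np

def comm_delay_slots(rates, slot_time, payload_bits):
    """Minimum number of slots whose cumulative delivered bits reach the
    HD frame size, i.e., the argmin in the communication-delay definition.
    Assumes the payload fits within the simulated horizon."""
    delivered = np.cumsum(slot_time * np.asarray(rates))
    return int(np.searchsorted(delivered, payload_bits)) + 1

# Illustrative placeholder numbers:
kappa, L_hd, c_e = 0.1, 50e6, 1e9  # cycles/bit, frame bits, server cycles/s
W, tau_ep = 2e-3, 1e-3             # queue waiting and edge processing (s)
rates = np.full(100, 2e9)          # per-slot mmWave link rate (bit/s)
slot = 0.5e-3                      # slot duration (s)

d_cp = kappa * L_hd / c_e + W      # computing delay, uncached HD frame
d_cm = comm_delay_slots(rates, slot, L_hd) * slot
print(f"total HD frame delay: {(d_cp + d_cm + tau_ep) * 1e3:.2f} ms")
\end{verbatim}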
We leverage results from previous works such as~\cite{Qian16}, which state that the user's future pose in the next hundreds of milliseconds can be predicted with high accuracy, to proactively predict, render, and cache the users' upcoming frames, subject to computation and storage resource availability. For the communication part, the use of \gls{mxconn} is considered to associate a user with more than one \gls{mmap} if the \gls{sinr} with its serving \gls{mmap} falls below a given threshold. Specifically, \gls{sfn} operation is considered, where multiple \glspl{mmap} use the same frequency and time resources to transmit to the intended user. Fig.~\ref{fig:VR} compares the communication and computing latency of our $\texttt{PROPOSED}$ scheme, which considers both enablers of proactive computing and \gls{mxconn}, with $\texttt{BASELINE-1}$, which has neither of the two enablers, and $\texttt{BASELINE-2}$, which considers only proactive computing. By looking into the computing latency in Fig.~\ref{fig:VR}, we can see that the schemes with proactive computing significantly reduce the computing latency, whereas a look at the communication latency shows the gain achieved using \gls{mxconn}. Comparing the communication latency of $\texttt{BASELINE-1}$ and $\texttt{BASELINE-2}$ also shows that proactive computing, while improving the computing performance, slightly increases the communication latency. This is due to the additional data sent as a result of prediction errors, whereby the correct data has to be retransmitted in real time.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{users_vs_C_C_delays_nV4.pdf} \caption{The communication delay (solid lines) and computing delay (dashed lines) for different schemes as the number of players varies for an arcade of 16 mmAPs, each equipped with an edge computing unit.} \label{fig:VR} \end{figure}
\section{Conclusions and Future Outlook} Edge computing is an essential component of future wireless networks, in which several challenges need to be overcome to realize the vision of ultra-reliable and low-latency edge computing. Central to this vision is leveraging multiple high-reliability and low-latency enablers applied to different types of services and use cases. In this article, we have discussed edge networking services and examined key enablers to achieve low-latency and high-reliability networking. Moreover, we have showcased how the network resources can be optimized for a selection of use cases characterized by their shared need for edge networking. As the vision of \gls{5g} starts to materialize beyond its initial inception towards imminent first commercial deployments, we envision a realization of edge computing hand in hand with the development of \gls{urllc} and distributed \gls{ai}, able to deal with dynamic and heterogeneous environments and provide seamless computing, content, and control services, while preserving data privacy and security. \bibliographystyle{IEEEtran}
\section{Introduction} Interpersonal coordination and synchronization between the motion of two individuals have been extensively studied over the past few decades \cite{HKB85,KKSS87, ST94, VMLB11}. Synergetic movements of two or more people mirroring each other frequently occur in many activities such as handling objects, manipulating a common workpiece, dancing, choir singing and movement therapy \cite{HT09, VOD10, MLVGSH12, LMH13, RP13}. It is of great importance not only to reveal the effects of mirroring movements among people on human physiological and mental functions, but also to deeply understand the link between intrapersonal and interpersonal coordination. In social psychology, it has been shown that people prefer to team up with others possessing similar morphological and behavioral features, and that they tend to coordinate their movements unconsciously \cite{F82, LS11}. Moreover, much evidence suggests that motor processes caused by interpersonal coordination are closely related to mental connectedness. In particular, motor coordination between two human subjects contributes to social attachment, particularly when the kinematic features of their movement share similar patterns \cite{WH09,SRdBTA14}. In order to explain the experimental observations of human interpersonal coordination, mathematical models are usually derived to capture the key features of the observed behavior. A classical example is the so-called HKB oscillator, which was introduced in \cite{HKB85} to explain the transition from phase to antiphase synchronization in bimanual coordination experiments (for more details see \cite{KSSH87, JFK98}). The HKB nonlinear oscillator was shown to be able to capture many features of human coordination even beyond the bimanual synchronization experiments it was derived to explain. For example, HKB oscillators were used in \cite{ZATdB14_1,ZATdB14_2} in the context of the \emph{mirror game} \cite{NDA11}, often presented as an important paradigmatic case study for investigating the onset of social motor coordination between two players imitating each other's hand movements. Furthermore, in \cite{KdGRT09} the authors take inspiration from the \emph{dynamic clamp} of cellular and computational neuroscience in order to probe essential properties of human social coordination by reciprocally coupling human subjects to a computationally implemented model of themselves (an HKB oscillator), referred to as the Virtual Player (VP). Such a concept, namely the \emph{human dynamic clamp} (HDC), was further investigated and developed in \cite{DdGTK14} in order to cover a broader repertoire of human behavior, including rhythmic and discrete movements, adaptation to changes of pacing, and behavioral skill learning as specified by the VP. Moreover, HKB oscillators were also used in \cite{ST94} to capture the rhythmic coordination between two individuals swinging hand-held pendulums, in \cite{VMLB11} to model spontaneous interpersonal postural coordination between two people and account for the competition between the coupling to a visual target to track and the coupling to the partner, in \cite{RMIGS07} to qualitatively explain interpersonal movement synchronization between two human beings involved in rhythmic paradigms, and in \cite{AST95} to account for the frequency detuning of the phase entrainment dynamics of two people involved in interlimb coordination.
While the coordination of two human players has been studied in numerous previous investigations, the case of multiple human players has seldom been studied in the existing literature, due to a combination of practical problems in running the experiments and the lack of a formal method able not only to model the considered scenario but also to quantify and characterize the synchronization level of the ensemble. Multiplayer games involve a group of three or more people engaged in a communal coordination task. The variety of scenarios that can be considered is vast, due to the countless activities the players might be involved in (limb movements, finger movements, head movements, walking in a crowd, or more in general music and sport activities), the many ways in which the participants can interact and communicate with each other, and the different ways all the players can be physically located with respect to each other while performing the specified task. Some of the existing works on coordination of multiple human players include studies on choir singers during a concert \cite{HT09}, rhythmic activities such as ``the cup game'' and marching tasks \cite{IR15}, rocking chairs \cite{FR10, RGFGM12} and coordination of rowers' movements during a race \cite{WW95}. In these papers the authors provide several experimental results in order to analyze the behavior of a group of people performing some coordinated activities, but a rigorous mathematical model capable of capturing the observed results and explaining the features of the movement coordination among them is still missing. In particular, in \cite{FR10} the authors study both unintentional and intentional coordination by asking the players to try and synchronize the oscillations of the rocking chairs with their eyes shut or open. Synchronization is observed to spontaneously emerge when players observe each other's movements. Another study in which multiplayer activities are analyzed but a mathematical model is missing is carried out in \cite{YY11}: the authors use the symmetric Hopf bifurcation theory, which is a model-independent approach based on coupled oscillators, to investigate the synchronized patterns of three people during sport activities. Further results about multiplayer activities deal with spontaneous group synchronization of arm movements and respiratory rhythms. For example, in \cite{CBVB14} the authors test whether pre-assigned arm movements performed in a group setting spontaneously synchronize and whether synchronization extends to heart and respiratory rhythms. In their study no explicit directions are given on whether or how the arm swings are to be synchronized among participants, and the experiments are repeated with and without external cues. Interestingly, when an external auditory rhythm is present, both motor and respiratory synchronization are found to be enhanced among the group. The overall coordination level is also observed to increase compared to that detected when the same experiments are carried out in its absence. While in \cite{CBVB14} no mathematical model is presented, in \cite{GDSC96} the effect of an external visual signal on the postural sway of a human being is explicitly modeled by introducing a linear oscillator with adaptive parameters depending on the frequency of the exogenous visual signal itself.
Moreover, when considering the social postural coordination between two human beings \cite{VMLB11}, another model for an external visual stimulus is presented by adding a unidirectional coupling term to the dynamics of a nonlinear oscillator. The main objective of this paper is to propose and analyze a model able to account for the onset of movement synchronization in multiplayer scenarios and explain some of the features observed experimentally in the existing literature. Specifically, we consider a heterogeneous network of HKB nonlinear oscillators as a good model of multiplayer coordination and, as already done in \cite{MLVGSH12} for the case of two agents only, we regard it as a synchronization problem. Each equation is used to model the movement of a different player and is therefore characterized by a different set of parameters to account for human-to-human variability. The effect of different interaction models, linear and nonlinear, will be investigated to understand under what conditions synchronization is observed to emerge. Our analysis suggests that bounded synchronization is indeed a common emergent property in these networks, whose occurrence can also be accounted for analytically in a number of different scenarios. Also, as expected from existing theoretical results, we find that the structure of the interactions among players has an effect on the coordination level detected in the network. Furthermore, the effects of adding an external sinusoidal signal will be studied in order to understand whether synchronization can be improved by means of an entrainment signal \cite{RdBS10}. Our analysis suggests that the synchronization level of the ensemble can indeed increase when the oscillation frequency of the external signal is similar to the natural angular velocity of the agents in the network. However, in all the other cases, the external signal acts as a disturbance and leads to a decrease in the coordination among the agents. We wish to emphasize that the study reported in this paper will form the basis of future experimental investigations which are currently being planned. The paper is organized as follows. In Sect. \ref{sec:preliminaries} some notation that shall be used in later sections is introduced. In Sect. \ref{sec:problemstatement} the equation that describes the network is presented, in terms of both the internal dynamics of each agent and the coupling protocol through which the agents interact with each other. In Sect. \ref{sec:synchmetr} some metrics are introduced to characterize the quality and the level of coordination in human groups. In Sect. \ref{sec:testbed} a testbed scenario of multiplayer coordination in networks of human people is presented, while in Sect. \ref{sec:modres} the key synchronization features experimentally observed are reproduced by considering a heterogeneous network of HKB oscillators, and the effects of three different coupling strategies through which they are interconnected are explored. In Sect. \ref{sec:entrainment} the effects of adding an external entrainment signal are analyzed with respect to the overall synchronization level of the network. In Sect. \ref{sec:mainresults} global bounded synchronization of the network when its nodes are connected through a linear diffusive coupling protocol is analytically proven to be achieved, and some numerical examples are provided in order to both illustrate the effectiveness of our analysis and to show that bounded synchronization can also be achieved when considering different couplings.
Finally, in Sect. \ref{sec:conclusion} a summary of our results and some possible future developments are presented. \section{Preliminaries and background} \label{sec:preliminaries} We denote with $\otimes$ the Kronecker product between two matrices. The operator $\lambda_k \left( \cdot \right)$ defined over a matrix indicates the $k$-th eigenvalue of the matrix itself, and $\lambda_M \left( \cdot \right)$ indicates its maximum eigenvalue when the matrix is real and symmetric, in which case all the eigenvalues are real as well. A \emph{graph} is a tuple $\mathcal{G} = \{ \mathcal{V}, \mathcal{A} \}$ defined by a set of nodes $\mathcal{V} = \{ 1,...,N \}$ and a set of edges $\mathcal{A} \subseteq \mathcal{V} \times \mathcal{V}$. A graph is said to be \emph{undirected} if $(i,j) \in \mathcal{A} \iff (j,i) \in \mathcal{A}$. In an undirected graph, two nodes $i$ and $j$ are said to be \emph{neighbors} if $(i,j) \in \mathcal{A}$. The matrix $A=\{a_{ij} \} \in \mathbb{R}^{N \times N}$, where \begin{equation*} a_{ij}\begin{cases} >0, & \mbox{if } (i,j) \mbox{ are neighbors} \\ =0, & \mbox{otherwise} \end{cases} \end{equation*} is called the \emph{adjacency matrix}, and $a_{ij} \ge 0$ is called the strength of the interaction between the pair $(i, j)$. In particular, a graph is said to be \emph{unweighted} if the interaction between two neighbors is equal to $1$. A \emph{path} between nodes $h$ and $k$ is a sequence of nodes, with $h$ and $k$ as endpoints, such that every two consecutive nodes are neighbors. A graph is said to be \emph{simple} if $a_{ii} = 0 \ \forall i \in \mathcal{V}$, while it is said to be \emph{connected} if there exists a path between any two of its nodes. The matrix $L = \{l_{ij} \} \in \mathbb{R}^{N \times N}$ defined as \begin{equation} l_{ij} : = \begin{cases} \sum_{k=1}^{N} a_{ik}, & \mbox{if } i=j \\ -a_{ij}, & \mbox{if } i \neq j \end{cases} \end{equation} is called the \emph{Laplacian matrix} of the graph (or simply \emph{Laplacian}). The Laplacian of any simple undirected graph is symmetric with zero row sum and is a positive semidefinite matrix with as many null eigenvalues as there are components in the graph. In particular, a connected graph has only one null eigenvalue. Throughout the paper we shall consider a connected simple undirected network of $N$ agents, assuming that any two players interact symmetrically with one another. Before analyzing a multiplayer scenario, it is worth considering the simpler case of only two human players interacting with each other. The system that can be used to model the interaction between them is described as follows \cite{RMIGS07, FJ08}: \begin{equation} \begin{cases} \ddot{x}_1+\left(\alpha x_1^2 + \beta \dot{x}_1^2 -\gamma \right) \dot{x}_1 + \omega_1^2 x_1 = I(x_1,x_2) \\ \ddot{x}_2+\left(\alpha x_2^2 + \beta \dot{x}_2^2 -\gamma \right) \dot{x}_2 + \omega_2^2 x_2 = I(x_2,x_1) \end{cases} \end{equation} where $x_i \in \mathbb{R}$ denotes the position of the $i$-th player, with $i=1,2$. The right-hand side of both equations represents the coupling term between the two players: in particular \begin{equation} I(w,z) := [ a+b\left( w-z \right)^2 ] \left( \dot{w}-\dot{z} \right) \end{equation} The term $\left(\alpha x_i^2 + \beta \dot{x}_i^2 -\gamma \right) \dot{x}_i$ represents the nonlinear damping of the oscillatory movement of player $i$.
Specifically, the sign of $\gamma$ determines whether, in the absence of coupling, the oscillation is persistent ($\gamma>0$) or vanishes ($\gamma<0$) as time goes by: it is trivial to verify this by studying the stability of the origin and checking the sign of the eigenvalues of the Jacobian of the system. Moreover, $\alpha$ and $\beta$ determine the amplitude of such an oscillation, while $\omega_i$ is related to its frequency. It has been proven that this model of two nonlinearly coupled oscillators accounts for the features observed in experimental data from bimanual experiments (see \cite{HKB85} for further details). \section{Human to human coordination as a synchronization problem} \label{sec:problemstatement} In the introduction of this paper we have pointed out that the dynamics of two coupled HKB oscillators has been used to describe different kinds of interpersonal coordination tasks between two people, including bimanual coordination experiments, the mirror game, social postural coordination and rocking chairs. According to the particular scenario considered, the state vector of each oscillator is used to represent position and velocity of the particular body part of interest of either of the players (finger, hand, head, and so forth). Following the same approach, we can consider a scenario in which more than two human beings are performing a multiplayer coordination task, such as rhythmic arm or hand movements, rocking chairs, head tracking of a visual target and so on. In these cases, the state vector of each node represents position and velocity of the particular body part of interest of each player. Therefore, the dynamics of each player when moving in isolation will be described by an HKB equation: \begin{equation} \label{eqn:hkbInternalDynamics} f_i(t,x_i)= \begin{bmatrix} x_{i_2} \\ - (\alpha_i x_{i_1}^2+\beta_i x_{i_2}^2-\gamma_i)x_{i_2} - \omega_i^2 x_{i_1} \end{bmatrix} \end{equation} where $x_i=[x_{i_1} \ x_{i_2}]^T \in \mathbb{R}^2$ is the state vector, with $x_{i_1}, x_{i_2}$ representing position and velocity of the $i$-th human player, respectively. To model the interaction between different players we assume that the dynamics of each of them is affected by some coupling function $u_i$ which depends on the state of its neighbors. In what follows we will explore the effects of three possible selections for such a function. In particular, we are interested in analyzing the differences among the results they provide and in understanding which one leads to synchronization features closest to those observed in existing work on group synchronization in networks of several human players involved in a coordination task \cite{RGFGM12}. \begin{enumerate} \item \emph{Full state coupling}. With this kind of coupling, we assume that players adjust both their velocities and accelerations proportionally to the average mismatch between theirs and those of their neighbors. Mathematically, we have: \begin{equation} \label{eqn:gsldc} u_i = -\frac{c}{\mathcal{N}_i} \sum_{j=1}^{N} a_{ij} \left( x_i-x_j \right) \end{equation} In particular, $\mathcal{N}_i>0$ is the number of neighbors of node $i$, while $c>0$ is the coupling strength among the agents. \item \emph{Partial state coupling}.
Next, we explore the case where players only adjust their accelerations according to the position and velocity mismatches from their neighbors: \begin{equation} \label{eqn:gsipvc} u_i = - \begin{bmatrix} 0 \\ \sum_{j=1}^{N} \frac{a_{ij}}{\mathcal{N}_i} \left[ c_1 \left( x_{i_1}-x_{j_1} \right) + c_2 \left( x_{i_2}-x_{j_2} \right) \right] \end{bmatrix} \end{equation} In particular, $\mathcal{N}_i>0$ is the number of neighbors of node $i$, while $c_1,c_2>0$ represent the position and the velocity coupling strengths, respectively. \item \emph{HKB coupling}. Finally, we consider an interaction model which is the direct extension to multiplayer coordination problems of the interaction function used in the classical HKB setup to model coordination between two players \cite{HKB85,FJHK96}. Specifically, we choose the following nonlinear function: \begin{equation} \label{eqn:gshkbc} u_i = \begin{bmatrix} 0 \\ \frac{c}{\mathcal{N}_i} \sum_{j=1}^{N} a_{ij} [a+b(x_{i_1}-x_{j_1})^2](x_{i_2}-x_{j_2}) \end{bmatrix} \end{equation} Once again, $\mathcal{N}_i>0$ is the number of neighbors of node $i$, while $c>0$ represents the coupling strength among the agents. \end{enumerate} The resulting network model describing the interaction of a group of $N$ players can then be written as \begin{equation} \label{eqn:networkeq} \dot{x}_i(t) = \begin{bmatrix} x_{i_2} \\ - (\alpha_i x_{i_1}^2+\beta_i x_{i_2}^2-\gamma_i)x_{i_2} - \omega_i^2 x_{i_1} \end{bmatrix} + u_i(t) \ \in \mathbb{R}^2 \end{equation} where the coupling function $u_i$ can be chosen as one of those listed above. We now explore under what conditions coordination, and hence synchronization, emerges for each of the three scenarios of interest. We wish to emphasize that, since the node parameters are heterogeneous, complete synchronization as defined in \cite{LDCH10} cannot be achieved. We will consider instead the case where bounded synchronization, as defined below, emerges. Namely, we define the average trajectory as \begin{equation} \bar{x}(t) : = \frac{1}{N} \sum_{j=1}^{N} x_j(t) \end{equation} and the tracking error as \begin{equation} e_i(t) : = x_i(t)-\bar{x}(t) \quad \forall t \ge 0, i=1,...,N \end{equation} We also define the parameter vector for each node $i$ as $\vartheta_i := [\alpha_i \ \beta_i \ \gamma_i \ \omega_i]^T \in \mathbb{R}^4$, and we introduce the stack vectors $x(t) : = [x_1(t)^T \ x_2(t)^T \ ... \ x_N(t)^T]^T \in \mathbb{R}^{2N}$ and $e(t) : = [e_1(t)^T \ e_2(t)^T \ ... \ e_N(t)^T]^T \in \mathbb{R}^{2N}$ and the error norm $\eta(t) : = ||e(t)|| \in \mathbb{R}, \forall t \ge 0$, where $|| \cdot ||$ indicates the Euclidean norm. We say that a network of HKB oscillators achieves coordination if and only if \begin{equation} \label{eqn:gbseqdef} \lim_{t \to \infty} \eta(t) \le \epsilon \end{equation} for any initial condition $x_{i,0}$ and parameter vector $\vartheta_i$ of the nodes in the network, where $\epsilon>0$ is a sufficiently small constant. \section{Synchronization metrics} \label{sec:synchmetr} In order to quantify and analyze the synchronization level in a network of more than two agents, we use the metrics introduced in \cite{RGFGM12} to characterize the quality and the level of coordination in human groups. Let $x_k(t) \in \mathbb{R} \ \forall t \in [0,T]$ be the continuous time series representing the motion of each agent, with $k \in [1,N]$, where $N$ is the number of individuals and $T$ is the duration of the experiment.
Let $x_k(t_i) \in \mathbb{R}$, with $k \in [1,N]$ and $ i \in [1,N_T]$, be the respective discrete time series of the $k$-th agent, obtained after sampling $x_k(t)$, where $N_T$ is the number of time steps and $\Delta T := \frac{T}{N_T}$ is the sampling period. Let $\theta_k(t) \in [-\pi,\pi]$ be the phase of the $k$-th agent, which can be estimated by making use of the Hilbert transform of the signal $x_k(t)$ \cite{KCRPM08}. We define the \emph{cluster phase} or \emph{Kuramoto order parameter}, both in its complex form $q'(t) \in \mathbb{C}$ and in its real form $q(t) \in [-\pi,\pi]$, as \begin{equation} q'(t) := \frac{1}{N} \sum_{k=1}^{N} e^{ \{ j \theta_k(t) \} } \end{equation} \begin{equation} q(t) := {\rm atan2} \left(\Im(q'(t)),\Re(q'(t)) \right) \end{equation} which can be regarded as the average phase of the group at time $t$. Let $\phi_k(t) := \theta_k(t) - q(t)$ be the relative phase between the $k$-th participant and the group phase at time $t$. We can define the relative phase between the $k$-th participant and the group averaged over the time interval $[t_1,t_{N_T}]$, both in its complex form $\bar{\phi}'_k \in \mathbb{C}$ and in its real form $\bar{\phi}_k \in [-\pi,\pi]$, as \begin{equation} \bar{\phi}'_k := \frac{1}{T} \int_{0}^{T} e^{ \{ j \phi_k(t) \} } \ dt \simeq \frac{1}{N_T} \sum_{i=1}^{N_T} e^{ \{ j \phi_k(t_i) \} } \end{equation} \begin{equation} \qquad \bar{\phi}_k := {\rm atan2} \left( \Im(\bar{\phi}'_k), \Re(\bar{\phi}'_k) \right) \end{equation} In order to quantify the degree of synchronization of the $k$-th agent within the group, we define the following parameter \begin{equation} \label{eqn:r1} \rho_k := |\bar{\phi}'_k| \quad \in [0,1] \end{equation} which simply gives information on how much the $k$-th agent is synchronized with the average trend of the group. The closer $\rho_k$ is to $1$, the better the synchronization of the $k$-th agent itself. In order to quantify the synchronization level of the entire group at time $t$, we define the following parameter \begin{equation} \label{eqn:r2} \rho_{g}(t) := \frac{1}{N} \left | \sum_{k=1}^{N} e^{ \{ j [ \phi_k(t)- \bar{\phi}_k ] \} } \right | \quad \in [0,1] \end{equation} which simply represents the group synchronization: the closer $\rho_{g}(t)$ is to $1$, the better the synchronization level of the group at time $t$. Its value can be averaged over the whole time interval $[0,T]$ in order to have an estimate of the mean synchronization level of the group during the total duration of the performance: \begin{equation} \label{eqn:r3} \rho_g := \frac{1}{T} \int_{0}^{T} \rho_{g}(t) \ dt \simeq \frac{1}{N_T} \sum_{i=1}^{N_T} \rho_{g}(t_i) \quad \in [0,1] \end{equation} Finally, if we denote with $\phi_{d_{k,k'}}(t):=\theta_k(t)-\theta_{k'}(t)$ the relative phase between two participants in the group at time $t$, it is possible to estimate their dyadic synchronization, that is, the synchronization level between participants $k$ and $k'$ over the whole round: \begin{equation} \label{eqn:r4} \rho_{d_{k,k'}} := \left | \frac{1}{T} \int_{0}^{T} e^{ \{ j \phi_{d_{k,k'}}(t) \} } \ dt \right | \simeq \left | \frac{1}{N_T} \sum_{i=1}^{N_T} e^{ \{ j \phi_{d_{k,k'}}(t_i) \} } \right | \quad \in [0,1] \end{equation} It is worth pointing out that high dyadic synchronization levels can coexist with low group synchronization values. \section{Testbed example} \label{sec:testbed} As a testbed scenario we consider the synchronization of rocking chair motions studied in \cite{RGFGM12}.
In particular, participants sit on six identical wooden rocking chairs disposed in a circle and are supposed to rock them under two different conditions: \begin{enumerate} \item \emph{Eyes closed:} participants are required to rock at their own preferred frequency while keeping their eyes closed; \item \emph{Eyes open:} participants are required to rock at their own preferred frequency while trying to synchronize their rocking chair movements as a group. \end{enumerate} In the eyes closed condition the participants are not visually coupled, meaning that the oscillation frequency of each of them is not influenced by the motion of the others, whilst in the eyes open condition each player is asked to look at the middle of the circle in order to try and synchronize their motion with that of the others. The six participants first perform a trial while keeping their eyes closed, then perform two eyes open trials, namely $T1$ and $T2$; each of the three trials lasts $3$ minutes. In Fig. \ref{fig:rockChSim} some results about the typical trend of the group synchronization $\rho_g(t)$ and its mean value and standard deviation are represented for each of the three aforementioned trials. In particular, in Fig. \ref{fig:rockChSim}a we can observe that the mean value $\rho_g$ of the group synchronization, represented as a circle, is around $0.4$ in the eyes closed condition, while it is around $0.85$ in the eyes open condition: this means that, when the participants are not visually coupled, synchronization does not emerge, as expected, whilst when they are visually coupled and explicitly told to coordinate their chair movements as a group, the synchronization level significantly increases. In Fig. \ref{fig:rockChSim}b we can observe that in the eyes closed condition the amplitude of the oscillations of the group synchronization is higher than that obtained in the eyes open condition. \begin{figure} \centering \subfloat[mean value and standard deviation]{\includegraphics[width=.5\textwidth]{Fig1a}} \subfloat[typical trend]{\includegraphics[width=.5\textwidth]{Fig1b}} \caption{Group synchronization in the rocking chairs experiments of \cite{RGFGM12} - T1 and T2 refer to two different trials of the eyes open condition} \label{fig:rockChSim} \end{figure} In Table \ref{table:rhoKcompRC} we show typical values of the degree of synchronization $\rho_k$ of the participants involved in the rocking chairs experiments, both for the eyes closed and the eyes open condition. It is easy to see that, as expected, the value of $\rho_k$ is much higher for almost all the participants when they are visually coupled. Interestingly enough, agent $6$ does not undergo an improvement of $\rho_6$ with respect to the eyes closed condition, meaning that this participant has more trouble synchronizing with the group compared to the other ones.
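For reference, the synchronization indices of Sect.~\ref{sec:synchmetr} can be computed directly from sampled motion data. The sketch below is our own implementation, estimating the phases via the Hilbert transform and evaluating $\rho_k$ and $\rho_g$ on a toy group of six noisy oscillators sharing a common frequency.
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def group_sync_metrics(X):
    # X: (N, N_T) array, one sampled motion time series x_k(t_i) per agent.
    theta = np.angle(hilbert(X, axis=1))           # phases theta_k(t)
    q = np.angle(np.exp(1j * theta).mean(axis=0))  # cluster phase q(t)
    phi = theta - q                                # relative phases phi_k(t)
    phi_bar = np.exp(1j * phi).mean(axis=1)        # averaged relative phase
    rho_k = np.abs(phi_bar)                        # per-agent degree rho_k
    rho_g_t = np.abs(np.exp(1j * (phi - np.angle(phi_bar)[:, None]))
                     .mean(axis=0))                # group level rho_g(t)
    return rho_k, rho_g_t.mean()

t = np.linspace(0, 200, 20_000)
rng = np.random.default_rng(3)
X = np.stack([np.sin(0.5 * t + rng.uniform(0, 0.3))
              + 0.05 * rng.normal(size=t.size) for _ in range(6)])
rho_k, rho_g = group_sync_metrics(X)
print(rho_k.round(2), round(rho_g, 2))  # values close to 1: high synchrony
\end{verbatim}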
\begin{table}[ht] \caption{Degree of synchronization of the participants in the rocking chairs experiments of \cite{RGFGM12} - EC: eyes closed, EO: eyes open} \centering \label{table:rhoKcompRC} \begin{tabular}{lll} \hline\noalign{\smallskip} Participant & EC ($\rho_g=0.36$) & EO ($\rho_g=0.80$) \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & $0.36$ & $0.95$ \\ 2 & $0.34$ & $0.92$ \\ 3 & $0.30$ & $0.95$ \\ 4 & $0.35$ & $0.88$ \\ 5 & $0.34$ & $0.67$ \\ 6 & $0.40$ & $0.37$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} In Table \ref{table:dyadSynchRC} we show typical values of the dyadic synchronization $\rho_{d_{k,k'}}$ of the participants involved in the rocking chairs experiments for the eyes open condition. As expected, the lowest values are the ones obtained with respect to the participant that most struggled to synchronize with the group, that is participant $6$. \begin{table}[ht] \caption{Dyadic synchronization of the participants in the rocking chairs experiments of \cite{RGFGM12} for the eyes open condition ($\rho_g=0.80$)} \centering \label{table:dyadSynchRC} \begin{tabular}{llllll} \hline\noalign{\smallskip} Participants & $2$ & $3$ & $4$ & $5$ & $6$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & $0.87$ & $0.86$ & $0.81$ & $0.63$ & $0.19$ \\ 2 & $-$ & $0.85$ & $0.78$ & $0.59$ & $0.21$ \\ 3 & $-$ & $-$ & $0.82$ & $0.61$ & $0.21$ \\ 4 & $-$ & $-$ & $-$ & $0.50$ & $0.18$ \\ 5 & $-$ & $-$ & $-$ & $-$ & $0.14$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Modeling results} \label{sec:modres} In this section we uncover the synchronization features to which the three different interaction protocols introduced earlier lead, taking the rocking chairs experiments of \cite{RGFGM12} as a testbed scenario. We explore whether and how the model of coupled HKB oscillators we propose in this paper can reproduce the key features of the observed experimental results. In so doing we explore: \begin{itemize} \item the effects of choosing different coupling functions; \item how varying the coupling strength affects the synchronization level of the agents. \end{itemize} In what follows we simulate a heterogeneous network of $N=6$ HKB oscillators whose parameters and initial values are heuristically set as described in Table \ref{table:6nodesTable}, and we set $T=200s$. We suppose that the network is simple, connected, unweighted and undirected, and we assume that each node is connected to all the others (complete graph), which we believe well represents the topology implemented in the rocking chairs experiments of \cite{RGFGM12} for the eyes open condition.
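As an illustration of this setup, the following minimal Python sketch integrates such a network under the full state coupling $u_i = -\hat{c}\sum_j a_{ij}(x_i-x_j)$ with $\hat{c}=c/(N-1)$, the form used in the convergence analysis below; the uncoupled node dynamics follow the HKB vector field of Eq. \ref{eqn:hkbInternalDynamics}. The integrator and step size are illustrative choices rather than the exact simulation setup used for the figures.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Parameters and initial values of Table `6nodesTable'.
alpha = np.array([0.46, 0.37, 0.34, 0.17, 0.76, 0.25])
beta  = np.array([1.16, 1.20, 1.73, 0.31, 0.76, 0.86])
gamma = np.array([0.58, 1.84, 0.62, 1.86, 1.40, 0.56])
omega = np.array([0.31, 0.52, 0.37, 0.41, 0.85, 0.62])
x0 = np.array([-1.4, 0.3, 1.0, 0.2, -1.8, -0.3,
                0.2, -0.2, 1.5, 0.1, -0.8, -0.1])
N, c, T = 6, 0.15, 200.0
A = np.ones((N, N)) - np.eye(N)        # unweighted complete graph

def rhs(t, y):
    x = y.reshape(N, 2)                # row i holds (x_i1, x_i2)
    x1, x2 = x[:, 0], x[:, 1]
    # Uncoupled HKB dynamics of each node.
    dx = np.column_stack([
        x2,
        -(alpha * x1**2 + beta * x2**2 - gamma) * x2 - omega**2 * x1])
    # Full state coupling u_i = -(c / (N - 1)) * sum_j a_ij (x_i - x_j).
    u = -(c / (N - 1)) * (A.sum(axis=1)[:, None] * x - A @ x)
    return (dx + u).ravel()

sol = solve_ivp(rhs, (0.0, T), x0, max_step=0.05)
# sol.y[0::2] holds the positions x_i1(t); feed them to sync_indices().
\end{verbatim}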
\begin{table}[ht] \caption{Numerical simulations - parameters and initial values for a network of $N=6$ HKB oscillators} \centering \label{table:6nodesTable} \begin{tabular}{llllll} \hline\noalign{\smallskip} Nodes & $\alpha_i$ & $\beta_i$ & $\gamma_i$ & $\omega_i$ & $x_i(0)$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & $0.46$ & $1.16$ & $0.58$ & $0.31$ & $[-1.4, +0.3]$ \\ 2 & $0.37$ & $1.20$ & $1.84$ & $0.52$ & $[+1.0, +0.2]$ \\ 3 & $0.34$ & $1.73$ & $0.62$ & $0.37$ & $[-1.8, -0.3]$ \\ 4 & $0.17$ & $0.31$ & $1.86$ & $0.41$ & $[+0.2, -0.2]$ \\ 5 & $0.76$ & $0.76$ & $1.40$ & $0.85$ & $[+1.5, +0.1]$ \\ 6 & $0.25$ & $0.86$ & $0.56$ & $0.62$ & $[-0.8, -0.1]$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure} \centering \subfloat[mean value and standard deviation]{\includegraphics[width=.5\textwidth]{Fig2a}} \subfloat[typical trend]{\includegraphics[width=.5\textwidth]{Fig2b}} \caption{Group synchronization in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators - \textbf{a} \emph{NC}: no coupling, \emph{FSC}: full state coupling ($c=0.15$), \emph{PSC}: partial state coupling ($c_1=c_2=0.15$), \emph{HKB}: HKB coupling ($a=b=-1,c=0.15$) - \textbf{b} black dashed line: no coupling, red solid line: full state coupling} \label{fig:syncLev6} \end{figure} In particular, since we are interested in replicating the key features of the rocking chairs experiments in both conditions (eyes open and eyes closed), in Fig. \ref{fig:syncLev6} we show the group synchronization obtained with and without an interaction protocol. In particular, in Fig. \ref{fig:syncLev6}a we show the mean value and standard deviation of $\rho_g(t)$: in each column, they are shown for the three interaction protocols presented earlier and for the case in which the nodes are not connected, respectively. The mean value is indicated with a circle, while the standard deviation from it is indicated with a vertical bar whose extremities are delimited by two horizontal lines. In particular, if we denote by $\rho_g$ the mean value of the group synchronization $\rho_g(t)$, obtained as defined in Eq. \ref{eqn:r3} for each of the four aforementioned cases after a simulation of duration $T$, and by $\sigma_{\rho_g}$ its standard deviation, we have that: \begin{equation} \sigma_{\rho_g} = \sqrt{ \frac{1}{T} \int_{0}^{T} \left( \rho_g(t)-\rho_g \right)^2 \ dt } \simeq \sqrt{ \frac{1}{N_T} \sum_{i=1}^{N_T} \left( \rho_g(t_i)-\rho_g \right)^2 } \end{equation} It is easy to observe that, in the absence of connections among the nodes, which corresponds to $u_i=0 \ \forall i \in [1,N]$, the group synchronization has a mean value approximately equal to $0.4$, while it increases significantly (to approximately $0.9$) when the nodes are connected with any of the three interaction protocols introduced above. These results confirm the observations previously made for the network of six human participants involved in the rocking chairs experiments (see Fig. \ref{fig:rockChSim}a). In particular, we have chosen $c=0.15$ for the full state interaction protocol, $c_1=c_2=0.15$ for the partial state interaction protocol, and $a=b=-1, c=0.15$ for the HKB interaction protocol.
In Fig. \ref{fig:syncLev6}b we show the time evolution of the group synchronization $\rho_g(t)$ when the nodes are not connected at all (black dashed line) and when they are connected through the full state interaction protocol (red solid line): for the sake of clarity we do not show the trend of $\rho_g(t)$ obtained with the partial state and the HKB interaction protocols, since they are qualitatively analogous to the one obtained with the full state interaction protocol. Our simulation results are able to reproduce another key feature observed in \cite{RGFGM12}: when the nodes are uncoupled, which corresponds to the eyes closed condition, the amplitude of the oscillations of $\rho_g(t)$ is higher than the one obtained when the nodes are coupled, which corresponds to the eyes open condition (see Fig. \ref{fig:rockChSim}b). Then, in Table \ref{table:rhoKcomp} we show the degree of synchronization $\rho_k$ obtained for each node of the network, both in the absence of coupling among the agents and in its presence. It is easy to see that, for each node $k$ in the network, $\rho_k$ has much higher values when any of the three interaction protocols is introduced, confirming what was observed in \cite{RGFGM12} when asking the participants to rock their chairs while keeping their eyes open rather than closed. Moreover, we are able to reproduce another interesting feature: although the group synchronization assumes high values when the human players are visually coupled, some agents may struggle to keep up with the general trend of the group, therefore showing lower values of $\rho_k$ (node $5$ in our simulations). \begin{table}[ht] \caption{Degree of synchronization of the nodes in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators - no coupling (NC), full state coupling (FSC) with $c=0.15$, partial state coupling (PSC) with $c_1=c_2=0.15$ and HKB coupling (HKB) with $a=b=-1,c=0.15$} \centering \label{table:rhoKcomp} \begin{tabular}{lllll} \hline\noalign{\smallskip} Node & NC & FSC & PSC & HKB \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & $0.42$ & $0.95$ & $0.93$ & $0.97$ \\ 2 & $0.38$ & $0.92$ & $0.94$ & $0.96$ \\ 3 & $0.45$ & $0.98$ & $0.95$ & $0.97$ \\ 4 & $0.41$ & $0.98$ & $0.96$ & $0.98$ \\ 5 & $0.33$ & $0.33$ & $0.33$ & $0.49$ \\ 6 & $0.36$ & $0.98$ & $0.97$ & $0.98$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Furthermore, in Table \ref{table:dyadSynch} we show the dyadic synchronization $\rho_{d_{k,k'}}$ for all possible couples of nodes in the network: again, our simulation results confirm what was observed in the rocking chairs experiments. Indeed, the couples of nodes with lower dyadic synchronization correspond to pairs in which at least one of the two nodes had trouble synchronizing with the general trend of the group (node $5$ in our simulations). For the sake of clarity we show $\rho_{d_{k,k'}}$ only when connecting the nodes through the full state interaction protocol, since analogous results are obtained with the other two strategies introduced earlier in this paper.
\begin{table}[ht] \caption{Dyadic synchronization in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators - full state coupling ($c=0.15$)} \centering \label{table:dyadSynch} \begin{tabular}{llllll} \hline\noalign{\smallskip} Nodes & $2$ & $3$ & $4$ & $5$ & $6$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} 1 & $0.91$ & $0.98$ & $0.94$ & $0.38$ & $0.98$ \\ 2 & $-$ & $0.92$ & $0.96$ & $0.43$ & $0.93$ \\ 3 & $-$ & $-$ & $0.97$ & $0.39$ & $0.99$ \\ 4 & $-$ & $-$ & $-$ & $0.40$ & $0.98$ \\ 5 & $-$ & $-$ & $-$ & $-$ & $0.41$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} It is easy to foresee that, regardless of the interaction protocol through which the nodes are connected, the value of the coupling strength has a direct impact on the group synchronization, in terms of both its mean value and its standard deviation. We now show quantitatively how $\rho_g$ varies as the coupling strength varies for the three interaction protocols introduced earlier in this paper, when considering a heterogeneous unweighted complete graph of $N=6$ HKB oscillators whose parameters and initial values are described in Table \ref{table:6nodesTable}. Moreover, we set $T=200s$. In Figs. \ref{fig.rhoG_coupStr_FSC} to \ref{fig:rhoG_coupStr_HKB} we show the mean value and standard deviation of the group synchronization $\rho_g(t)$ obtained for different values of the coupling strength when considering full state coupling, partial state coupling and HKB coupling as interaction protocols, respectively. In particular, the blue solid line refers to the mean value of $\rho_g(t)$, while the red dashed lines indicate the standard deviation from it. \begin{figure} \centering \subfloat[group synchronization]{\includegraphics[width=.5\textwidth]{Fig3a}} \subfloat[group synchronization - zoom]{\includegraphics[width=.5\textwidth]{Fig3b}} \caption{Mean value and standard deviation of the group synchronization in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators for different values of the coupling strength $c$ - full state coupling} \label{fig.rhoG_coupStr_FSC} \end{figure} From Fig. \ref{fig.rhoG_coupStr_FSC}a it is clear that, when considering full state coupling as the interaction protocol, the group synchronization increases as the coupling strength $c$ increases: in particular, a relatively small value of the coupling strength ($c \simeq 0.15$, see Fig. \ref{fig.rhoG_coupStr_FSC}b) suffices for the network to synchronize well. In terms of multiplayer games, this means that the stronger the influence that each player has on the others, the better the overall synchronization of the human participants. \begin{figure} \centering \subfloat[$c_2=0, c_1$ variable]{\includegraphics[width=.5\textwidth]{Fig4a}} \subfloat[$c_1=0, c_2$ variable]{\includegraphics[width=.5\textwidth]{Fig4b}} \caption{Mean value and standard deviation of the group synchronization in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators for different values of the coupling strengths $c_1$ and $c_2$ - partial state coupling} \label{fig:rhoG_coupStr_PSC} \end{figure} In Fig. \ref{fig:rhoG_coupStr_PSC}a we show how, when considering partial state coupling as the interaction protocol, the group synchronization varies for increasing values of $c_1$ while keeping $c_2$ constantly equal to $0$, and vice versa in Fig. \ref{fig:rhoG_coupStr_PSC}b. As we can see, the influence that $c_2$ has on the group synchronization is stronger than that of $c_1$.
In terms of multiplayer games, this means that human players react more strongly to changes in the velocity of their neighbors than to changes in their position. This result is also confirmed in Fig. \ref{fig:rhoG_coupStr_PSC_HKB}a, in which we show how the mean value of the group synchronization changes as $c_1$ and $c_2$ are varied simultaneously (darker colors refer to lower values of the average group synchronization, whilst lighter ones refer to higher values). \begin{figure} \centering \subfloat[group synchronization]{\includegraphics[width=.5\textwidth]{Fig5a}} \subfloat[group synchronization - zoom]{\includegraphics[width=.5\textwidth]{Fig5b}} \caption{Mean value and standard deviation of the group synchronization in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators for different values of the coupling strength $c$ while keeping $a=b=-1$ constant - HKB coupling} \label{fig:rhoG_coupStr_HKB} \end{figure} Finally, from Fig. \ref{fig:rhoG_coupStr_HKB}a it is clear that, when considering HKB coupling as the interaction protocol while keeping $a$ and $b$ constantly equal to $-1$, the group synchronization increases as the coupling strength $c$ increases. In particular, as in the case of the full state interaction protocol, a relatively small value of the coupling strength ($c \simeq 0.15$, see Fig. \ref{fig:rhoG_coupStr_HKB}b) suffices for the network to synchronize well. In terms of multiplayer games, this means that the stronger the influence that each player has on the others, the better the overall synchronization of the human participants. This result is also confirmed in Fig. \ref{fig:rhoG_coupStr_PSC_HKB}b, in which we show how the mean value of the group synchronization changes as $a$ and $b$ are varied simultaneously while keeping $c$ constantly equal to $1$ (darker colors refer to lower values of the average group synchronization, whilst lighter ones refer to higher values). As we can see, as the values of $|a|$ and $|b|$ increase, the average group synchronization increases as well. \begin{figure} \centering \subfloat[partial state coupling - $c_1,c_2$ variable]{\includegraphics[width=.5\textwidth]{Fig6a}} \subfloat[HKB coupling - $a,b$ variable while keeping $c=1$ constant]{\includegraphics[width=.5\textwidth]{Fig6b}} \caption{Mean value of the group synchronization $\rho_g(t)$ in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators for different values of the coupling strengths} \label{fig:rhoG_coupStr_PSC_HKB} \end{figure} \section{Entrainment of the network} \label{sec:entrainment} In this section we analyze the effects on the group synchronization of adding an external sinusoidal signal to the dynamics of each node. Our main objective is to understand whether, and possibly under what conditions, such an entrainment signal leads to a better synchronization level of a heterogeneous network of HKB oscillators with respect to the case in which the signal is absent. This will help us understand whether an external auditory or visual stimulus can improve the coordination level in multiplayer games when considering networks of human participants involved in some synchronization task.
Following the approach of \cite{RdBS10}, we model such a scenario in the following way: \begin{equation} \label{eqn:hkbExtSig} f_i(t,x_i)= \begin{bmatrix} x_{i_2} \\ - (\alpha_i x_{i_1}^2+\beta_i x_{i_2}^2-\gamma_i)x_{i_2} - \omega_i^2 x_{i_1} + \zeta \end{bmatrix}+u_i \end{equation} where $\zeta(t)=A_\zeta \sin \left(\omega_\zeta t \right)$ represents the entrainment signal and $u_i(t)$ one of the interaction protocols introduced earlier in this paper. We introduce the \emph{entrainment index} $\rho_E \in [0,1]$ in order to quantify the overall synchronization level between the network and the external signal $\zeta(t)$: \begin{equation} \label{eqn:entrIndexHKB} \rho_{E_k} := \left | \frac{1}{T} \int_{0}^{T} e^{ j [ \theta_k(t)- \theta_\zeta (t) ] } \ dt \right |, \ \rho_{E} := \frac{1}{N} \sum_{k=1}^{N} \rho_{E_k} \end{equation} where $\theta_k(t)$ is the phase of the $k$-th node, $\theta_\zeta(t)$ is the phase of $\zeta(t)$, $T$ is the duration of the experiment and $N$ is the number of nodes in the network. The closer $\rho_E$ is to $1$, the better the synchronization of the group with the entrainment signal. In what follows we simulate a heterogeneous network of $N=6$ HKB oscillators whose parameters and initial values are heuristically set as described in Table \ref{table:6nodesTable}, and we set $T=200s$. We suppose that the network is simple, connected, unweighted and undirected, and we assume that each node is connected to all the others (complete graph), which we believe well represents the topology implemented in the rocking chairs experiments of \cite{RGFGM12}. \begin{figure} \centering \includegraphics[width=.7\textwidth]{Fig7} \caption{Entrainment index in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators - full state coupling ($c=0.15$)} \label{fig:rhoEplotFSC} \end{figure} In Fig. \ref{fig:rhoEplotFSC} we show the entrainment index for different values of the frequency $\omega_\zeta$ and the amplitude $A_\zeta$ of the entrainment signal $\zeta(t)$ when considering full state coupling as the interaction protocol with $c=0.15$ (darker colors refer to lower values of $\rho_E$, whilst lighter ones refer to higher values). It is easy to see that, for each value of $\omega_\zeta$, the entrainment index increases as $A_\zeta$ increases, meaning that the network synchronizes better with $\zeta(t)$ for increasing values of its amplitude. Moreover, for a given value of $A_\zeta$, the highest values of $\rho_E$ are achieved when the frequency of the entrainment signal is close to the average value $\Omega$ of the natural frequencies $\omega_i$ of the nodes (in this case $\Omega \simeq 0.5$). These results confirm the findings of \cite{SRAG07,VSR15}, in which it is shown that spontaneous unintentional synchronization between the oscillation of a handheld pendulum swung by an individual and an external sinusoidal stimulus (which corresponds to our external entrainment signal) emerges only when the frequency of the signal itself is similar to the preferred frequency of the player. For the sake of brevity we do not show how $\rho_E$ varies with $\omega_\zeta$ and $A_\zeta$ when considering partial state coupling and HKB coupling as interaction protocols, since we obtain results analogous to the ones shown in Fig. \ref{fig:rhoEplotFSC} for full state coupling.
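The entrainment index itself is straightforward to evaluate numerically. The short Python sketch below is one possible implementation, assuming the node phases and the phase of $\zeta(t)$ are again extracted via the Hilbert transform; the helper name is illustrative.
\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def entrainment_index(X, zeta):
    """X: (N, N_T) node signals; zeta: (N_T,) sampled entrainment signal.
    Returns (rho_E_k, rho_E) as defined in Eq. (entrIndexHKB)."""
    theta = np.angle(hilbert(X, axis=1))      # node phases theta_k(t)
    theta_z = np.angle(hilbert(zeta))         # phase theta_zeta(t) of zeta(t)
    rho_E_k = np.abs(np.exp(1j * (theta - theta_z)).mean(axis=1))
    return rho_E_k, rho_E_k.mean()
\end{verbatim}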
\begin{figure} \centering \includegraphics[width=.7\textwidth]{Fig8} \caption{Mean value and standard deviation of the group synchronization in a heterogeneous unweighted complete graph of $N=6$ HKB oscillators - \emph{FSC}: full state coupling ($c=0.15$) - \emph{PSC}: partial state coupling ($c_1=c_2=0.15$) - \emph{HKB}: HKB coupling ($a=b=-1, c=0.15$) - green line: no entrainment signal, red line: $\omega_\zeta=0.1, A_\zeta=0.1$, blue line: $\omega_\zeta=0.3, A_\zeta=0.2$, black line: $\omega_\zeta=0.5, A_\zeta=0.3$} \label{fig:rhoGzetaComp} \end{figure} In Fig. \ref{fig:rhoGzetaComp} we show the mean value and standard deviation of the group synchronization $\rho_g(t)$ when considering different parameters of the entrainment signal (green line: no entrainment signal, red line: $\omega_\zeta=0.1, A_\zeta=0.1$, blue line: $\omega_\zeta=0.3, A_\zeta=0.2$, black line: $\omega_\zeta=0.5, A_\zeta=0.3$) for all three coupling protocols we have presented (\emph{FSC}: full state coupling ($c=0.15$) - \emph{PSC}: partial state coupling ($c_1=c_2=0.15$) - \emph{HKB}: HKB coupling ($a=b=-1, c=0.15$)). Since we are interested in understanding whether an additive external sinusoidal signal can improve the synchronization level of the network with respect to the case in which it is absent, the values of the coupling strengths chosen in these simulations for all three interaction protocols are the same as the ones previously used in the absence of an entrainment signal (see Fig. \ref{fig:syncLev6}a). From Fig. \ref{fig:rhoGzetaComp} it is easy to observe that, for all three interaction protocols, the group synchronization of the network improves only when the entrainment index $\rho_E$ has high values (compare the black line with the green one). In the other two cases (blue line and red line), the entrainment signal acts as a disturbance on the dynamics of the nodes and the group synchronization decreases. In terms of multiplayer games for networks of human participants, this means that it is possible to further enhance the coordination level of the participants only when the entrainment signal has an oscillation frequency close to the average of the natural oscillation frequencies of the individuals involved and its amplitude is sufficiently high. \section{Convergence analysis} \label{sec:mainresults} As anticipated earlier, besides finding a mathematical model able to reproduce features observed experimentally in some multiplayer games studied in the existing literature, we are also interested in understanding under what conditions synchronization emerges. In particular, in this section we show that global bounded synchronization can be analytically guaranteed for a heterogeneous network of $N$ diffusively coupled HKB oscillators by making use of two different approaches, namely \emph{contraction theory} and \emph{Lyapunov theory}. \subsection{Contraction theory} Let $|\cdot|$ be a norm defined on a vector $w \in \mathbb{R}^n$ with induced matrix norm $||\cdot||$. As stated in \cite{RdBS13}, given a matrix $P \in \mathbb{R}^{n \times n}$, the \emph{induced matrix measure} is defined as $\mu(P) := \lim_{h \to 0^+} \frac{\left( ||I+hP|| -1 \right)}{h}$. \begin{dfn} Let us consider the system $\dot{w} = F(t,w)$ defined $\forall t \ge 0, \ \forall w \in C \subset \mathbb{R}^n$.
We say that such a system is contracting with respect to a norm $|\cdot|$ with associated matrix measure $\mu (\cdot)$ iff \begin{equation} \label{eqn:ContrDef} \exists \ k>0: \mu \left( J(w,t) \right) \le -k, \quad \forall w \in C, \forall t \ge 0 \end{equation} where $J$ is the Jacobian of the system. \end{dfn} The key stage in the application of contraction theory to synchronization of networks of oscillators is the construction of the so-called \emph{virtual system} \cite{JS04}. \begin{dfn} \label{dfn:vsdfn} Let us consider a heterogeneous network described by Eq. \ref{eqn:networkeq}. The virtual system is defined as the system that has the trajectories of the nodes as particular solutions. \end{dfn} Formally, the virtual system depends on the state variables of the oscillators in the network and on some virtual state variables. Substituting the state variables of a certain node $i$ into the virtual ones returns the dynamics of the $i$-th node of the network. It is worth pointing out that virtual systems were originally defined for networks of identical systems in order to prove complete synchronization: indeed, it is possible to prove that if a virtual system is contracting, then $\lim_{t \to \infty} \eta(t) = 0$. However, it is possible to define virtual systems also for networks of nonidentical oscillators by averaging the values of all the different parameters, in order to prove bounded synchronization (see the heterogeneous network of repressilators in \cite{RdB09}). In \cite{RdB09} a simple algorithm is provided that allows one to check whether the virtual system of a certain heterogeneous network of $N$ agents is contracting, which leads to global bounded synchronization of the network itself. In particular, rather than verifying Eq. \ref{eqn:ContrDef} directly in order to check whether the virtual system is contracting, the algorithm consists in checking the truth of some statements regarding the individual elements of the Jacobian of the virtual system and imposing some conditions: \begin{enumerate} \item build the Jacobian $J$ of the virtual system; \item check whether the following statements are true or false \begin{itemize} \item S1: $J(i,i)<0$; \item S2: $J(i,i)=-\rho_i, \ 0<\rho_i<\infty$; \item S3: $J(i,j)\neq 0 \Rightarrow J(j,i)=0$; \end{itemize} \item generate a set of conditions for synchronization (CFS) according to the truth or the falsity of the previous statements. \end{enumerate} In particular, denoting by $n_{0_i}$ the number of zero elements in the $i$-th row of the Jacobian of the virtual system, the CFS are generated in the following way: \begin{itemize} \item $S1, S2, S3 \Rightarrow |J(i,j)|<\frac{\rho_i}{n-n_{0_i}-1}$; \item $S1, S2, \bar{S3} \Rightarrow |J(i,j)|>\frac{\rho_i}{n-n_{0_i}-1}, |J(j,i)|<\frac{\rho_j}{n-n_{0_j}-1}$ or vice versa; \item $S1, \bar{S2}, S3 \Rightarrow |J(i,j)|<\frac{|J(i,i)|}{n-n_{0_i}-1}$; \item $S1, \bar{S2}, \bar{S3} \Rightarrow |J(i,j)|>\frac{|J(i,i)|}{n-n_{0_i}-1}, |J(j,i)|<\frac{|J(j,j)|}{n-n_{0_j}-1}$ or vice versa. \end{itemize} Note that if statement \emph{S1} is not true, it is not possible for the virtual system to be contracting. \begin{thm} \label{thm:ctheorythm} Consider a heterogeneous network of $N$ HKB oscillators interconnected via full state coupling as described in Eq. \ref{eqn:gsldc}. Let us also assume that the network topology is a connected, simple, undirected and unweighted complete graph.
If the following hypothesis is satisfied \begin{equation} \label{eqn:coupStrenCT2} \frac{N-1}{N} \left( 2\tilde{\alpha}z_{1_{max}}z_{2_{max}}+\tilde{\omega}^2+\tilde{\gamma} \right) < c < \frac{N-1}{N} \end{equation} where $\tilde{\alpha}, \tilde{\omega}, \tilde{\gamma}$ are the average values of the parameters $\alpha_i$, $\omega_i$, $\gamma_i$, respectively, and $z_{1_{max}}, z_{2_{max}}$ are the bounds of the two virtual state variables, then global bounded synchronization is achieved by the network. \end{thm} \begin{proof} Let us consider an unweighted complete graph of $N$ HKB oscillators interconnected via full state coupling, that is \begin{equation} \dot{x}_i= \begin{bmatrix} x_{i_2} \\ - (\alpha_i x_{i_1}^2+\beta_i x_{i_2}^2-\gamma_i)x_{i_2} - \omega_i^2 x_{i_1} \end{bmatrix} -\hat{c} \sum_{j=1}^N a_{ij} \left(x_i-x_j \right), \quad \forall i \in[1,N] \end{equation} where $x_i \in \mathbb{R}^2$ is the state variable of node $i$ and $\hat{c} := \frac{c}{N-1}$, since each node in a connected complete graph has $N-1$ neighbors. The virtual system reads \begin{equation} \dot{z} = \begin{bmatrix} z_2-\hat{c}Nz_1+\hat{c}\sum_{j=1}^{N} x_{j_1} \\ -\left( \tilde{\alpha}z_1^2+\tilde{\beta}z_2^2-\tilde{\gamma} \right)z_2 - \tilde{\omega}^2 z_1 - \hat{c}Nz_2 + \hat{c} \sum_{j=1}^{N} x_{j_2} \end{bmatrix} \end{equation} where $z \in \mathbb{R}^2$ is the state variable of the virtual system and $\tilde{\alpha}:=\frac{1}{N}\sum_{i=1}^{N} \alpha_i$, $\tilde{\beta}:=\frac{1}{N}\sum_{i=1}^{N} \beta_i$, $\tilde{\gamma}:=\frac{1}{N}\sum_{i=1}^{N} \gamma_i$, $\tilde{\omega}:=\frac{1}{N}\sum_{i=1}^{N} \omega_i$. The Jacobian of the virtual system is: \begin{equation} \label{eqn:jacVS} J(t,z) = \begin{bmatrix} -\hat{c}N & 1 \\ -(2\tilde{\alpha}z_2z_1+\tilde{\omega}^2) & -\tilde{\alpha}z_1^2-3\tilde{\beta}z_2^2-\hat{c}N+\tilde{\gamma} \end{bmatrix} \end{equation} In order to prove global bounded synchronization of the network, we need the virtual system to be contracting. To this end, we apply the algorithm presented in \cite{RdB09} to Eq. \ref{eqn:jacVS}. When $i=1,j=2$, it is immediate to see that statement \emph{S1} is true, while \emph{S2} and \emph{S3} are false ($c$ might in general be time varying), leading to $|J(1,2)|>|J(1,1)|$ and $|J(2,1)|<|J(2,2)|$. When $i=2,j=1$ instead, the inequalities to satisfy and the CFS depend on the sign of $\tilde{\alpha}$ and $\tilde{\beta}$. Supposing without loss of generality that $\tilde{\alpha},\tilde{\beta}>0$, as usually done in the literature \cite{FJ08, KdGRT09}, it is immediate to see that an inequality corresponding to the fulfillment of \emph{S1} needs to be added to the list of CFS generated by the algorithm (a worst case scenario is $-\hat{c}N+\tilde{\gamma}<0$), and that both \emph{S2} and \emph{S3} are again false, leading to the same two conditions.
This means that the network achieves global bounded synchronization when the following system is satisfied: \begin{equation} \begin{cases} \hat{c}>\frac{\tilde{\gamma}}{N}\\ 1>\hat{c}N\\ | 2\tilde{\alpha}z_1z_2+\tilde{\omega}^2 | < | \tilde{\alpha}z_1^2+3\tilde{\beta}z_2^2-\tilde{\gamma}+\hat{c}N | \end{cases} \Leftrightarrow \begin{cases} \frac{\tilde{\gamma}}{N}<\hat{c}<\frac{1}{N}\\ | 2\tilde{\alpha}z_1z_2+\tilde{\omega}^2 | < | \tilde{\alpha}z_1^2+3\tilde{\beta}z_2^2-\tilde{\gamma}+\hat{c}N | \end{cases} \end{equation} Supposing that the dynamics of the virtual system is bounded, meaning that $|z_1(t)| \le z_{1_{max}}$ and $|z_2(t)| \le z_{2_{max}}$ for all $t \ge 0$, we can consider the following worst case scenario \begin{equation} \begin{cases} \frac{\tilde{\gamma}}{N}<\hat{c}<\frac{1}{N}\\ 2\tilde{\alpha}z_{1_{max}}z_{2_{max}}+\tilde{\omega}^2 < \hat{c}N - \tilde{\gamma} \end{cases} \end{equation} which leads to \begin{equation} \label{eqn:coupStrenCT} \frac{2\tilde{\alpha}z_{1_{max}}z_{2_{max}}+\tilde{\omega}^2+\tilde{\gamma}}{N}<\hat{c}<\frac{1}{N} \end{equation} and, as a consequence, to Eq. \ref{eqn:coupStrenCT2}. So we can conclude that if the coupling strength $c$ fulfills Eq. \ref{eqn:coupStrenCT2}, the heterogeneous network of HKB oscillators over a complete graph achieves bounded synchronization. \end{proof} \begin{rem} Note that when the number of nodes $N$ in the network is large, $\frac{N-1}{N} \rightarrow 1$. This means that global bounded synchronization is achieved when: \begin{equation} \label{eqn:coupStrenCT3} 2\tilde{\alpha}z_{1_{max}}z_{2_{max}}+\tilde{\omega}^2+\tilde{\gamma} < c < 1 \end{equation} \end{rem}
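As a numerical sanity check of this analysis, one can evaluate a matrix measure of the Jacobian in Eq. \ref{eqn:jacVS} over the assumed bounds of the virtual state. The Python sketch below does this for the 2-norm measure, for which $\mu_2(J)=\lambda_{\max}\big((J+J^T)/2\big)$; note that this is a different (and generally more conservative) criterion than the element-wise CFS conditions used in the proof, and the rounded averaged parameters and state bounds are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Rounded means of the parameters in Table `6nodesTable' and assumed
# bounds z1max, z2max on the virtual state variables.
a_t, b_t, g_t, w_t = 0.39, 1.00, 1.14, 0.51
z1max, z2max, N = 2.0, 1.0, 6

def mu2(J):
    # 2-norm matrix measure: largest eigenvalue of the symmetric part.
    return np.linalg.eigvalsh(0.5 * (J + J.T)).max()

def contracting(c_hat, n_grid=50):
    """True if mu_2(J(z)) < 0 on the whole box |z1| <= z1max,
    |z2| <= z2max, with J(z) the Jacobian of Eq. (jacVS)."""
    for z1 in np.linspace(-z1max, z1max, n_grid):
        for z2 in np.linspace(-z2max, z2max, n_grid):
            J = np.array([
                [-c_hat * N, 1.0],
                [-(2 * a_t * z1 * z2 + w_t**2),
                 -a_t * z1**2 - 3 * b_t * z2**2 - c_hat * N + g_t]])
            if mu2(J) >= 0.0:
                return False
    return True

# Scan candidate couplings c and report the smallest 2-norm-contracting one.
for c in np.arange(0.1, 5.0, 0.1):
    if contracting(c / (N - 1)):
        print("contracting (in 2-norm) for c =", round(c, 2))
        break
\end{verbatim}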
\subsection{Lyapunov theory} Let $\mathcal{D}$ be the set of diagonal matrices, $\mathcal{D}^+$ the set of positive definite diagonal matrices and $\mathcal{D}^-$ the set of negative definite diagonal matrices. Let us now define \emph{QUAD} and \emph{QUAD Affine} vector fields \cite{DLdBL14}. \begin{dfn} Given $n \times n$ matrices $P \in \mathcal{D}^+, W_i \in \mathcal{D}$, the vector field $f_i$ is said to be \emph{QUAD($P,W_i$)} iff \begin{equation} (z-w)^T P [f_i(t,z)-f_i(t,w)] \le (z-w)^T W_i (z-w) \end{equation} for any $z,w \in \mathbb{R}^n$ and for any $t \ge 0$. \end{dfn} \begin{dfn} Given $n \times n$ matrices $P \in \mathcal{D}^+, W_i \in \mathcal{D}$, the vector field $f_i$ is said to be \emph{QUAD($P,W_i$) Affine} iff $f_i(t,x_i)=h_i(t,x_i)+g_i(t,x_i)$ and \begin{itemize} \item $h_i$ is QUAD($P,W_i$); \item $\exists \ M<\infty : ||g_i(t,z)||_2<M, \ \forall z \in \mathbb{R}^n, \forall t \ge 0$. \end{itemize} \end{dfn} Let us consider a heterogeneous network of $N$ agents interconnected via a linear coupling: \begin{equation} \label{eqn:qvflc} \dot{x}_i(t) = f_i(t,x_i)-\frac{c}{\mathcal{N}_i}\sum_{j=1}^{N}a_{ij}\Gamma(x_i-x_j), \quad c>0 \end{equation} where $\Gamma \in \mathbb{R}^{n \times n}$. Note that this is a generalization of the full state coupling previously introduced, which can be recovered by setting $\Gamma=I_n$. As reported in detail in \cite{DLdBL14}, in order to prove global bounded synchronization of a network of $N$ nonidentical QUAD Affine systems coupled via a linear interaction protocol, we need $h_i(t,x_i)$ to be QUAD($P,W_i$) with $W_i \in \mathcal{D}^-$ for all the nodes in the network at any time instant. However, in a heterogeneous network of $N$ HKB oscillators with vector fields described by Eq. \ref{eqn:hkbInternalDynamics}, regardless of the way we define $h_i$ and $g_i$ it is never possible to satisfy the following condition \begin{equation} (z-w)^T P [h_i(t,z)-h_i(t,w)] \le (z-w)^T W_i (z-w) \end{equation} with negative definite matrices $W_i$. Indeed, the right-hand term is always negative, while the left-hand one can be positive for any value of $P>0$. On the other hand, in order to avoid conditions on the sign of the matrices $W_i$, it is necessary to write the dynamics of the nodes in the following way \begin{equation} f_i(t,x_i)=h_i(t,x_i)+g_i(t,x_i) \quad \forall i=1,2,...,N \end{equation} with $h_i(t,z)=h_j(t,z)=h(t,z) \ \forall i,j \in [1,N], \ \forall z \in \mathbb{R}^n$, and with all the terms $g_i$ being bounded at any time instant. In particular, in \cite{DLdBL14} the authors formalize the following theorem. \begin{thm} \label{thm:qvflcthm} Let us consider a heterogeneous network of $N$ agents interconnected via a linear coupling as described in Eq. \ref{eqn:qvflc}. Let us suppose that $f_i(t,x_i)=h(t,x_i)+g_i(t,x_i)$ and that: \begin{enumerate} \item the network is made up of $N$ QUAD($P,W$) Affine systems, with $P \in \mathcal{D}^+$ and $W \in \mathcal{D}$; \item $\Gamma$ is a positive semidefinite diagonal matrix; \item if $W$ is made up of $l \in [0,n]$ non-negative elements, which without loss of generality can be collected in its $l \times l$ upper-left block, then $\Gamma$ is made up of $\bar{l}$ positive elements, where $l \le \bar{l} \le n$, which again without loss of generality can be collected in its $\bar{l} \times \bar{l}$ upper-left block; \item $\exists \ 0<\bar{M}<\infty : ||g_i(t,x_i)||_2<\bar{M} \ \forall i=1,2,...,N, \forall t \ge 0$. \end{enumerate} Then, we can claim that global bounded synchronization is achieved by the network. In particular, if we define the matrix $L_{\mathcal{N}}=\{ l_{\mathcal{N}_{ij}} \}$ as \begin{equation} l_{\mathcal{N}_{ij}} := \begin{cases} \frac{1}{\mathcal{N}_i} \sum_{k=1}^N a_{ik}, & \mbox{if } i=j \\ -\frac{a_{ij}}{\mathcal{N}_i}, & \mbox{if } i \neq j \mbox{ and } (i,j) \mbox{ are neighbors}\\ 0, & \mbox{otherwise} \end{cases} \end{equation} we can state that $\exists \ 0<\bar{c}<\infty, \epsilon>0 : \lim_{t \to \infty} \eta(t) \le \epsilon \ \forall c > \bar{c}$, where \begin{equation} \label{QUADthMinc} \bar{c} = \min_{P,W} \ \max \left( \frac{\lambda_M\left( W_l \right)}{\lambda_2 \left(L_{\mathcal{N}} \otimes P_l \Gamma_l \right)},0 \right) \end{equation} with $W_l,P_l,\Gamma_l$ representing the $l \times l$ upper-left block of matrices $W,P,\Gamma$, respectively, and where for a given value of $c>\bar{c}$ \begin{equation} \label{QUADthErrBound} \epsilon = \min_{P,W} \ \frac{\sqrt{N} \bar{M} ||P||_2}{-\max \left( \lambda_M\left( W_l \right) -c \lambda_2 \left(L_{\mathcal{N}} \otimes P_l \Gamma_l \right), \lambda_M \left( W_{n-l} \right) \right)} \end{equation} with $W_{n-l}$ representing the $(n-l) \times (n-l)$ lower-right block of matrix $W$ and with the assumption that $c \lambda_2 \left(L_{\mathcal{N}} \otimes P_l \Gamma_l \right) > \lambda_M\left( W_l \right)$. \end{thm} \begin{proof} See \cite{DLdBL14}. \end{proof} We can thus derive the following corollary. \begin{cor} Let us consider a heterogeneous network of $N$ HKB oscillators interconnected via a linear coupling.
Supposing that the topology of the network is simple and undirected, and assuming that $\gamma_i=\tilde{\gamma} \ \forall i \in [1,N]$, if the coupling strength satisfies the inequality \begin{equation} \label{eqn:cbarQuad} c \ge \bar{c} = \min_{W(1,1),P(1,1),P(2,2)>0} \frac{\max \left( W(1,1),\tilde{\gamma} P(2,2) \right) }{\lambda_2\left( L_{\mathcal{N}} \right) \min_{j=1,2}\left( P(j,j)\Gamma(j,j) \right)} \end{equation} then global bounded synchronization is achieved by the network. In particular, we can claim that \begin{equation} \label{eqn:enormQuad} \lim_{t \to \infty} \eta(t) \le \epsilon = \min_{W(1,1),P(1,1),P(2,2),d_\epsilon>0} \frac{\sqrt{N}\bar{M} \max \left(P(1,1),P(2,2) \right) }{d_\epsilon} \end{equation} where \begin{equation} d_\epsilon:=c\lambda_2 \left( L_{\mathcal{N}} \right) \min_{j=1,2 } \left( P(j,j) \Gamma(j,j) \right) -\max \left( W(1,1),\tilde{\gamma} P(2,2) \right) \end{equation} \end{cor} \begin{proof} First of all, we need to write the dynamics of each node in the network as $f_i(t,x_i)=h(t,x_i)+g_i(t,x_i)$. This is possible if we suppose that $\gamma_i=\tilde{\gamma} \ \forall i \in [1,N]$ and define: \begin{equation*} h(t,x_i)=\begin{bmatrix} 0\\ \tilde{\gamma} x_{i_2} \end{bmatrix} \end{equation*} \begin{equation*} g_i(t,x_i)=\begin{bmatrix} x_{i_2}\\ -(\alpha_i x_{i_1}^2 + \beta_i x_{i_2}^2)x_{i_2}-\omega_i^2x_{i_1} \end{bmatrix} \end{equation*} Then we need to verify that the nodes in the network are QUAD($P,W$) Affine systems. In particular, this means that we need $h$ to be QUAD($P,W$), with $P \in \mathcal{D}^+$ and $W \in \mathcal{D}$. Therefore, if we define $P=diag \{P(1,1),P(2,2) \}$ with $P(1,1),P(2,2)>0$, $W=diag \{W(1,1),W(2,2) \}$ and $h(t,z)=[0 \ \tilde{\gamma} z_2]^T \ \forall z \in \mathbb{R}^2$, we have to satisfy: \begin{equation} \label{eqn:hquadineq} P(2,2) \tilde{\gamma} (z_2-w_2)^2 \le W(1,1)(z_1-w_1)^2+W(2,2)(z_2-w_2)^2 \end{equation} Hence, if we choose $W(2,2)=\tilde{\gamma} P(2,2)$, it is possible to reduce Eq. \ref{eqn:hquadineq} to \begin{equation} W(1,1)(z_1-w_1)^2 \ge 0 \end{equation} which is true for any $W(1,1)>0$. This means that the first hypothesis of Theorem \ref{thm:qvflcthm} simply reduces to choosing any $P \in \mathcal{D}^+$ and $W=diag \{ W(1,1),\tilde{\gamma} P(2,2) \}$ for any $W(1,1)>0$. Since $W \in \mathbb{R}^{2 \times 2}$ is made up of $2$ non-negative elements, we have that $l=\bar{l}=2$. Therefore, in order to satisfy the second and the third hypotheses of Theorem \ref{thm:qvflcthm}, we need $\Gamma$ to be a diagonal positive definite matrix, that is $\Gamma \in \mathcal{D}^+$ (note that this is true when the nodes are connected through full state coupling, since it corresponds to $\Gamma=I_2$). Finally, the last hypothesis to satisfy concerns the boundedness of the terms $g_i$ at any time instant. As already shown, we have chosen: \begin{equation} \label{eqn:qvfgf} g_i(t,x_i) = \begin{bmatrix} x_{i_2}\\ -(\alpha_i x_{i_1}^2 + \beta_i x_{i_2}^2)x_{i_2}-\omega_i^2x_{i_1} \end{bmatrix} \end{equation} Since the dynamics of each HKB oscillator is bounded \cite{ZATdB15}, we can define \begin{equation*} p_{i_{max}} := \sup_{t \ge 0} \left( |x_{i_1}(t)| \right), \ v_{i_{max}} := \sup_{t \ge 0} \left( |x_{i_2}(t)| \right) \end{equation*} and we also define $p_M := \max_{i} \left( p_{i_{max}} \right)$, $v_M := \max_{i} \left( v_{i_{max}} \right)$, $\alpha_M := \max_{i} \left( |\alpha_i| \right)$, $\beta_M := \max_{i} \left( |\beta_i| \right)$, $\omega_M := \max_{i} \left( |\omega_i| \right)$.
Therefore, from Eq. \ref{eqn:qvfgf} we get: \begin{equation*} ||g_i||_2 \le |x_{i_2}| + |(\alpha_i x_{i_1}^2 + \beta_i x_{i_2}^2)x_{i_2}+\omega_i^2x_{i_1}| \end{equation*} \begin{equation*} \le |x_{i_2}| + |\alpha_i x_{i_1}^2 + \beta_i x_{i_2}^2| |x_{i_2}| + \omega_i^2 |x_{i_1}| \end{equation*} \begin{equation} \le (1+|\alpha_i| p_{i_{max}}^2 + |\beta_i| v_{i_{max}}^2) v_{i_{max}} + \omega_i^2 p_{i_{max}} := M_i \end{equation} Besides, we have that \begin{equation} \label{eqn:boundM} M_i \le (1+\alpha_M p_M^2 + \beta_M v_M^2) v_M + \omega_M^2 p_M := \bar{M} \end{equation} This means that the fourth hypothesis of Theorem \ref{thm:qvflcthm} is always satisfied in the case of HKB oscillators, with the bound $\bar{M}$ defined in Eq. \ref{eqn:boundM}. In order to find a simpler expression for the minimum value required for the coupling strength and for the upper bound on the error norm, we can take advantage of the particular form of matrices $P$ and $W$: \begin{equation*} P = P_2 = \begin{bmatrix} P(1,1) & 0 \\ 0 & P(2,2) \end{bmatrix}, \qquad P(1,1),P(2,2)>0 \end{equation*} \begin{equation*} W = W_2 = \begin{bmatrix} W(1,1) & 0\\ 0 & \tilde{\gamma}P(2,2) \end{bmatrix}, \qquad W(1,1)>0 \end{equation*} Therefore, from Eq. \ref{QUADthMinc} we have that the minimum value $\bar{c}$ of the coupling strength that guarantees bounded synchronization of the network is given by Eq. \ref{eqn:cbarQuad}, while for a given $c>\bar{c}$, from Eq. \ref{QUADthErrBound} we have that the upper bound on the error norm is given by Eq. \ref{eqn:enormQuad}. So we can conclude that if $c>\bar{c}>0$, where $\bar{c}$ is defined in Eq. \ref{eqn:cbarQuad}, global bounded synchronization is achieved. \end{proof} \subsection{Numerical validation} As shown above, for a connected simple undirected heterogeneous network of $N$ HKB oscillators, contraction theory makes it possible to guarantee bounded synchronization when the underlying topology is an unweighted complete graph (all-to-all network). On the other hand, by making use of Lyapunov theory, bounded synchronization can be guaranteed regardless of the topology and the weights of the interconnections, although an assumption has to be made on one of the parameters of the nodes ($\gamma_i=\tilde{\gamma} \ \forall i \in [1,N]$). In order to study the most general possible case, we consider a weighted random graph of $N=5$ HKB nonlinear oscillators characterized by $\gamma_i=\tilde{\gamma} \ \forall i \in [1,N]$. In particular, in such a random graph the probability of an edge connecting two nodes is $60 \%$ and its weight is randomly picked in $[0,2]$ (see Fig. \ref{fig:topology}). The parameters and the initial conditions of the nodes are given in Table \ref{table:5nodesTable}. Moreover, we set $T=200s$.
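A topology of this kind can be generated as in the short Python sketch below. This is an illustrative construction in which we assume uniform edge weights on $[0,2]$, take the normalization $\mathcal{N}_i$ in $L_{\mathcal{N}}$ to be the number of neighbors of node $i$, and assume the sampled graph is connected.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 5
# Each edge is present with probability 0.6; weights assumed uniform on [0, 2].
U = np.triu((rng.random((N, N)) < 0.6) * rng.uniform(0.0, 2.0, (N, N)), k=1)
A = U + U.T                              # symmetric weighted adjacency matrix

# Matrix L_N of Theorem qvflcthm, with N_i = number of neighbors of node i.
deg = (A > 0).sum(axis=1)
L_N = np.diag(A.sum(axis=1) / deg) - A / deg[:, None]

# lambda_2(L_N): second smallest eigenvalue (L_N need not be symmetric,
# so we sort the real parts).
lam = np.sort(np.linalg.eigvals(L_N).real)
print("lambda_2(L_N) =", lam[1])
\end{verbatim}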
\begin{figure} \centering \includegraphics[width=.7\textwidth]{Fig9} \caption{Underlying topology - simple connected weighted graph} \label{fig:topology} \end{figure} \begin{table}[ht] \caption{Numerical simulations - parameters and initial values for a network of $N=5$ HKB oscillators} \centering \label{table:5nodesTable} \begin{tabular}{llllll} \hline\noalign{\smallskip} Nodes & $\alpha_i$ & $\beta_i$ & $\gamma_i$ & $\omega_i$ & $x_i(0)$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} Node 1 & $0.46$ & $1.16$ & $0.58$ & $0.16$ & $[-1.4, +0.3]$ \\ Node 2 & $0.37$ & $1.20$ & $0.58$ & $0.26$ & $[+1.0, +0.2]$ \\ Node 3 & $0.34$ & $1.73$ & $0.58$ & $0.18$ & $[-1.8, -0.3]$ \\ Node 4 & $0.17$ & $0.31$ & $0.58$ & $0.21$ & $[+0.2, -0.2]$ \\ Node 5 & $0.76$ & $0.76$ & $0.58$ & $0.27$ & $[+1.5, +0.1]$\\ \noalign{\smallskip}\hline \end{tabular} \end{table} This scenario leads to $p_M=2.6$, $v_M=0.96$, $\bar{M}=7.6$, $\lambda_2 \left(L_{\mathcal{N}} \right)=0.4112$, $P(1,1)=0.077$, $P(2,2)=0.077$, $W(1,1)=0.001$, $W(2,2)=0.045$ and $\bar{c} = 1.4211$. Fig. \ref{fig:FSC} shows $x_1(t)$ for all the nodes in the network (blue line: node 1, green line: node 2, red line: node 3, cyan line: node 4, magenta line: node 5) when connected through the full state coupling protocol with $c=1.45$: as we can see, our analytical results are confirmed and synchronization is achieved by the network. On the other hand, in Fig. \ref{fig:eta} we show that bounded synchronization can actually be achieved for smaller values of the coupling strength when considering full state coupling ($c=0.07$), and that it can be achieved also with the two other coupling protocols presented earlier in this paper ($c_1=c_2=0.1$ for the partial state coupling and $a=b=-1, c=0.1$ for the HKB coupling, respectively). Indeed, with this choice of coupling strength, the error norm $\eta(t)$ is roughly bounded by $\epsilon \simeq 2$. In Fig. \ref{fig:gsCompNum} we then show the trend of the group synchronization obtained in each of the three cases: as we can see, after an initial transient $\rho_g(t)$ reaches a much higher value, confirming what was observed in \cite{RGFGM12}. In particular, the trend obtained when considering the HKB coupling most closely resembles the one obtained in the real experiments involving human participants: indeed, in this case the group synchronization presents persistent oscillations with the highest amplitude, as observed in the rocking chairs experiments.
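The quantities $\bar{M}$ and $\bar{c}$ reported above can be reproduced directly from Eqs. \ref{eqn:boundM} and \ref{eqn:cbarQuad}; the following Python lines are a minimal check using the values of this example (with $\Gamma=I_2$, as for full state coupling, and the specific $P$ and $W$ reported above rather than the full minimization).
\begin{verbatim}
p_M, v_M = 2.6, 0.96                          # reported state bounds
alpha_M, beta_M, omega_M = 0.76, 1.73, 0.27   # maxima over Table `5nodesTable'
gamma_t, lam2 = 0.58, 0.4112                  # common gamma_i, lambda_2(L_N)
P11 = P22 = 0.077                             # chosen P in D^+
W11 = 0.001                                   # chosen W(1,1) > 0
G11 = G22 = 1.0                               # Gamma = I_2 (full state coupling)

# Bound on the affine terms g_i (Eq. boundM).
M_bar = (1 + alpha_M * p_M**2 + beta_M * v_M**2) * v_M + omega_M**2 * p_M

# Minimum coupling strength of Eq. cbarQuad for this fixed (P, W).
c_bar = max(W11, gamma_t * P22) / (lam2 * min(P11 * G11, P22 * G22))
print(M_bar, c_bar)                           # approximately 7.6 and 1.4
\end{verbatim}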
\begin{figure} \centering \subfloat[first state variable]{\includegraphics[width=.5\textwidth]{Fig10a}} \subfloat[first state variable - zoom]{\includegraphics[width=.5\textwidth]{Fig10b}} \caption{First state variable $x_{i_1}$ in a simple connected weighted heterogeneous network of $N=5$ HKB oscillators - full state coupling ($c=1.45$)} \label{fig:FSC} \end{figure} \begin{figure} \centering \includegraphics[width=.7\textwidth]{Fig11} \caption{Error norm in a simple connected weighted heterogeneous network of $N=5$ HKB oscillators - magenta solid line: full state coupling ($c=0.07$), blue solid line: partial state coupling ($c_1=c_2=0.1$), black dashed line: HKB coupling ($a=b=-1, c=0.1$)} \label{fig:eta} \end{figure} \begin{figure} \centering \subfloat[group synchronization]{\includegraphics[width=.5\textwidth]{Fig12a}} \subfloat[group synchronization - zoom]{\includegraphics[width=.5\textwidth]{Fig12b}} \caption{Group synchronization in a simple connected weighted heterogeneous network of $N=5$ HKB oscillators - magenta solid line: full state coupling ($c=0.07$), blue solid line: partial state coupling ($c_1=c_2=0.1$), black dashed line: HKB coupling ($a=b=-1, c=0.1$)} \label{fig:gsCompNum} \end{figure} \section{Conclusion} \label{sec:conclusion} We have proposed a mathematical model for movement synchronization of a group of three or more people. In particular, we have considered heterogeneous networks of HKB nonlinear oscillators, in which each equation is characterized by a different set of parameters to account for human-to-human variability. Three different coupling models, both linear and nonlinear, have been investigated, and the effects of adding an external entrainment signal have been analyzed. We have found analytical conditions for a connected simple undirected network to achieve bounded synchronization when considering full state coupling as the interaction protocol among the nodes, while we have numerically shown that bounded synchronization can be achieved also when considering partial state coupling or an HKB coupling. In particular, we have observed that it is possible to replicate some of the synchronization features obtained in the rocking chairs experiments with all three coupling protocols proposed in this paper, although the most realistic behavior is obtained when connecting the nodes through the nonlinear HKB coupling. Indeed, in this case the group synchronization presents persistent oscillations with the highest amplitude, as observed in \cite{RGFGM12}. Some viable extensions of this work include performing real experiments involving hand synchronization of more than two players, and then choosing the coupling protocol that best captures the observations coming from such experiments and explains the onset and features of movement coordination among the human players. Finally, we wish to prove global bounded synchronization with any kind of coupling protocol, both linear and nonlinear, also in the more general case of directed networks. \section*{Acknowledgements} The authors wish to acknowledge support from the European Project AlterEgo FP7 ICT 2.9 - Cognitive Sciences and Robotics, Grant Number 600610.
\section{Introduction} \label{sec:introduction} Given input data $X = (x_1, \dots, x_n)\in \mathrm{R}^{n\times p}$ and response data $Y = (y_1, \dots, y_n)\in \mathrm{R}^n$, the problem of linear regression with a \citet{tikhonov1943stability} regularization term and an explicit sparsity constraint is defined as \begin{equation} \label{eq:l0-regression:primal} \begin{array}{rl} \displaystyle \min_w & \frac{1}{2\gamma} \norm{w}^2_2 + \frac{1}{2} \norm{Y - X w}_2^2 \\[0.5em] \mathrm{s.t.} & \norm{w}_0 \leq k, \\ \end{array} \end{equation} where $\gamma>0$ is a given weight that controls the importance of the regularization term. The number of regression coefficients needed to explain the observations from the input data is limited to $k$ by the $\ell_0$-norm constraint on the regressor $w$. Tikhonov regularization helps to reduce the effect of noise in the input data. Regularization and robustness are indeed known to be intimately connected, as shown for instance by \citet{bertsimas2009equivalence, xu2009robustness}. Evidently, in practice both the sparsity parameter $k$ and the Tikhonov regularization term $\gamma$ must ultimately be determined from the data. Cross validation has in practice been empirically found to be an effective method to determine both hyperparameters. \subsection*{Background} Problem \eqref{eq:l0-regression:primal} is a discrete optimization problem, which belongs to the class of $NP$-hard problems. Motivated by the apparent difficulty of the sparse regression formulation \eqref{eq:l0-regression:primal}, much of the literature until recently has largely ignored the exact discrete formulation and rather focused on heuristic approaches. Historically, the first heuristic methods for sparse approximation seem to have arisen in the signal processing community (c.f.\ the work of \citet{mallat1993matching} and references therein) and are typically of an iterative thresholding type. More recently, one popular class of sparse regression heuristics solves convex surrogates to the sparse regression formulation \eqref{eq:l0-regression:primal}. There is an elegant theory for such schemes promising large improvements over the more myopic iterative thresholding methods. Indeed, a truly impressive amount of high-quality work \citep{buhlmann2011statistics, hastie2015statistical, wainwright2009sharp} has been written on characterizing when exact solutions can be recovered, albeit through making strong assumptions on the data. One such heuristic based on a convex proxy related to our formulation, and particularly worthy of mention, is the \texttt{Elastic Net} developed by \citet{zou2005regularization}. One particular canonical form of the \texttt{Elastic Net} heuristic solves the proxy convex optimization problem \begin{equation} \label{eq:l1-regression:primal} \begin{array}{rl} \displaystyle \min_w & \frac{1}{2\gamma} \norm{w}^2_2 + \frac{1}{2} \norm{Y - X w}_2^2 \\[0.5em] \mathrm{s.t.} & \norm{w}_1 \leq \lambda, \end{array} \end{equation} where the $\ell_1$-norm constraint shrinks the regressor coefficients towards zero, thus encouraging sparse regressors for $\lambda$ tending to zero. When disregarding the Tikhonov regularization term, the popular \texttt{Lasso} heuristic introduced by \citet{tibshirani} is recovered. An important factor in favor of heuristics such as \texttt{Lasso} and \texttt{Elastic Net} is their computational feasibility and scalability.
Indeed, problem \eqref{eq:l1-regression:primal} can be solved efficiently, and mature software implementations such as \texttt{GLMNet} by \citet{friedman2013glmnet} are available. Despite all of the aforementioned positive properties, proxy based methods such as \texttt{Lasso} and \texttt{Elastic Net} do have several innate shortcomings. These shortcomings are well known in the statistical community too. First and foremost, as argued in \citep{bertsimas2014statistics}, they do not recover the sparsity pattern well. Furthermore, the \texttt{Lasso} leads to biased regressors, since the $\ell_1$-norm penalizes both large and small coefficients uniformly. In sharp contrast, the $\ell_0$-norm sparsifies the regressor without conflating the effort with unwanted shrinking. For a few decades the exercise of trying to solve the sparse regression problem \eqref{eq:l0-regression:primal} at a practical scale was branded hopeless. \citet{bixby2012brief} noted however that in the last twenty-five years the computational power of \ac{mio} solvers has increased at an astonishing rate. Riding on the explosive improvement of \ac{mio} formulations, \citet{bertsimas2014statistics} succeeded in solving the sparse regression problem \eqref{eq:l0-regression:primal} for problem instances of dimensions $n$, $p$ in the 1000s. Using a big-$\mc M$~ formulation of the cardinality constraint, the sparse regression problem \eqref{eq:l0-regression:primal} can indeed be transformed into the \ac{mio} problem \begin{equation} \label{eq:bigm:primal} \begin{array}{rl} \min & \frac{1}{2\gamma} \norm{w}_2^2 + \frac{1}{2} \norm{Y - X w}_2^2 \\[0.5em] \mathrm{s.t.} & w \in \mathrm{R}^p, ~s \in \S^p_k \\[0.5em] & -\mathcal M s_j \leq w_j \leq \mathcal M s_j, \hspace{0.60cm} \forall j \in [p]. \end{array} \end{equation} With the help of the binary set $\S^p_k \defn \set{s\in\{0, 1\}^p}{\mb 1^\top s \leq k}$, the constraint in \eqref{eq:bigm:primal} ensures that the regression coefficient $w_j$ is nonzero only if the selection variable $s_j=1$ for a sufficiently large constant $\mathcal M$. The constant $\mathcal M$ must be estimated from data as outlined in \cite{bertsimas2014statistics} to ensure the equivalence between the sparse regression problem \eqref{eq:l0-regression:primal} and its \ac{mio} formulation \eqref{eq:bigm:primal}. This \ac{mio} approach is significantly more scalable than the leaps and bounds algorithm outlined in \cite{furnival2000regressions}, largely because of the advances in computer hardware, the improvements in \ac{mio} solvers, and the specific warm-start techniques developed by \cite{bertsimas2014statistics}. Even so, many problems of practical size are still far beyond the scale made tractable through this approach. \subsection*{A scalable perspective} Although a direct big-$\mc M$~ formulation of the sparse regression problem results in a well posed \ac{mio} problem, the constant $\mathcal M$ needs to be chosen with care so as not to impede its numerical solution. The choice of this data dependent constant $\mathcal M$ indeed affects the strength of the \ac{mio} formulation \eqref{eq:bigm:primal} and is critical for obtaining solutions quickly in practice. Furthermore, as the regression dimension $p$ grows, explicitly constructing the \ac{mio} problem \eqref{eq:bigm:primal}, let alone solving it, becomes burdensome.
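For illustration, the big-$\mc M$~ formulation \eqref{eq:bigm:primal} can be prototyped in a few lines with an off-the-shelf modeling layer. The Python sketch below uses \texttt{cvxpy}, an illustrative choice rather than the implementation of \citep{bertsimas2014statistics}; it requires a mixed-integer capable solver, and $\mathcal M$ must be a valid data-dependent bound for the equivalence with \eqref{eq:l0-regression:primal} to hold.
\begin{verbatim}
import cvxpy as cp

def sparse_regression_bigM(X, Y, k, gamma, M):
    """Prototype of the big-M MIO formulation (eq. bigm:primal)."""
    n, p = X.shape
    w = cp.Variable(p)                    # regression coefficients
    s = cp.Variable(p, boolean=True)      # support selection variables
    objective = cp.Minimize(cp.sum_squares(w) / (2 * gamma)
                            + cp.sum_squares(Y - X @ w) / 2)
    constraints = [cp.sum(s) <= k,        # sparsity: at most k nonzeros
                   w <= M * s,            # w_j = 0 whenever s_j = 0
                   w >= -M * s]
    cp.Problem(objective, constraints).solve()  # needs a MIP-capable solver
    return w.value, s.value
\end{verbatim}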
In order to develop an exact scalable method for the sparse regression problem \eqref{eq:l0-regression:primal} capable of solving problem instances with sample size $n$ and regressor dimension $p$ in the 100,000s, a different perspective on sparse regression is needed. The big-$\mc M$~ formulation \eqref{eq:bigm:primal} of the sparse linear regression problem \eqref{eq:l0-regression:primal} takes a primal perspective on regression. Like most exact as well as heuristic sparse regression formulations, the big-$\mc M$~ formulation \eqref{eq:bigm:primal} indeed tries to solve for the optimal regression coefficients $w_0^\star$ in \eqref{eq:l0-regression:primal} directly. However, it is well known in the kernel learning community that often far deeper results can be obtained if a dual perspective is taken. We show that this dual perspective can be translated to a sparse regression context as well, and offers a novel road to approach exact sparse regression. Taking this new perspective, the sparse regression problem \eqref{eq:l0-regression:primal} can be reduced to a pure integer convex optimization problem, avoiding the construction of any auxiliary constants. Crucially, a tailored cutting plane algorithm for the resulting \ac{cio} problem renders solving the sparse regression problem \eqref{eq:l0-regression:primal} to optimality tractable for problem instances with numbers of samples and regressors in the 100,000s. That is two orders of magnitude better than the current state of the art, and challenges the primary selling point of heuristic approaches such as \texttt{Elastic Net} or \texttt{Lasso}. As we will discuss subsequently, our cutting plane algorithm is often comparable to, or indeed even faster than, the aforementioned convex proxy heuristic approaches. \subsection*{Phase Transitions} Let the data come from $Y = X w_\mathrm{true} + E$, where $E$ is zero mean noise uncorrelated with the signal $X w_\mathrm{true}$; we then define the accuracy and false alarm rate of a certain solution $w^\star$ in recovering the correct support as: $$A \% \defn 100\times\frac{\abs{ \supp(w_\mathrm{true}) \cap \supp(w^\star) }}{k}$$ and $$F\% \defn 100\times\frac{\abs{ \supp(w^\star) \setminus \supp(w_\mathrm{true}) }}{\abs{\supp(w^\star)}}.$$ Perfect support recovery occurs only when $w^\star$ tells the whole truth ($A\%=100$) and nothing but the truth ($F\%=0$). The ability of the \texttt{Lasso} heuristic \eqref{eq:l1-regression:primal} to recover the support of the ground truth $w_{\mathrm{true}}$ for some value of $\lambda$ was shown by \citet{donoho2009observed} to experience a phase transition. The phase transition described by \citet{donoho2009observed} concerns the ability of the \texttt{Lasso} solution $w_1^\star$ to coincide in support with the ground truth $w_{\mathrm{true}}$. This accuracy phase transition for the \texttt{Lasso} has been extensively studied in \citep{buhlmann2011statistics, hastie2015statistical, wainwright2009sharp} and is considered well understood by now. That being said, the assumptions made on the data needed for a theoretical justification of such a phase transition are quite stringent and often of limited practical relevance. For instance, \citet{wainwright2009sharp} showed that for observations $Y$ and independent Gaussian input data $X$ a phase transition occurs at the phase transition curve \begin{equation} \label{eq:wainwright} n_1 = (2k+\sigma^2) \log (p-k), \end{equation} where $\sigma$ represents the noise level corrupting the observations.
In the regime $n > n_1$ exact recovery of the support occurs with high probability, while on the other side of the transition curve the probability of successful recovery drops to zero. Nonetheless, this phase transition from accurate discovery to statistical meaninglessness has been widely observed empirically \citep{donoho2009observed, donoho2006breakdown} even under conditions in which these assumptions are severely violated. For exact sparse regression \eqref{eq:l0-regression:primal} a similar phase transition has been observed by \citet{zheng2015does} and \citet{wang2011performance}, although this transition is far less studied from a theoretical perspective than the similar transition for its heuristic counterpart. It is however known that the accuracy phase transition for exact sparse regression must occur even sooner than that of any heuristic approach. That is, exact sparse regression \eqref{eq:l0-regression:primal} yields statistically more meaningful optima than, for instance, the convex \texttt{Lasso} heuristic \eqref{eq:l1-regression:primal} does. Recently \citet{gamarnik2017high}, motivated by the results of the present paper, showed that when the regression coefficients are binary, a phase transition occurs at \begin{equation} \label{eq:gamarnik} n_0 = 2 k \log{p} / \log\left(\frac{2k}{\sigma^2} + 1\right). \end{equation} Empirical verification of this phase transition was historically hindered by the lack of exact scalable algorithms. Our novel cutting plane algorithm lifts this hurdle and opens the way to demonstrating the benefits of exact sparse regression empirically. More importantly, we present strong empirical evidence that a computational phase transition occurs as well. Specifically, there is a phase transition concerning our ability to solve the sparse regression problem \eqref{eq:l0-regression:primal} efficiently. In other words, there is a phase transition in our ability to recover the true coefficients of the sparse regression problem and, most surprisingly, in our ability to find them fast. This complexity phase transition does not seem to have been reported before and sheds new light on the complexity of sparse linear regression. Contrary to traditional complexity theory, which suggests that the difficulty of a problem increases with size, the sparse regression problem \eqref{eq:l0-regression:primal} has the property that for a small number of samples $n < n_t$, our approach takes a large amount of time to solve the problem. However, for a large number of samples $n > n_t$, our approach solves the problem extremely fast and perfectly recovers the support of the true regressor $w_{\mathrm{true}}$. The complexity phase transition occurs after the theoretically minimal number of samples $n_0$ needed by exact sparse regression, i.e.\ $n_0 < n_t$, so some hardness remains to the problem after all, but crucially before the number of samples $n_1$ at which the \texttt{Lasso} heuristic provides statistically meaningful regressors, i.e.\ $n_t < n_1$. Lastly, recall that the accuracy phase transition \eqref{eq:wainwright} for the \texttt{Lasso} and its counterpart \eqref{eq:gamarnik} for exact sparse regression are applicable only when the true sparsity $k$ is known. Evidently, in practice the sparsity parameter $k$ must ultimately be determined from the data. Most commonly this is done using cross validation. Incorrect determination of this parameter most often leads to elevated false alarm rates.
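To make the recovery metrics concrete, the following minimal \texttt{Julia} helper computes $A\%$ and $F\%$ as defined above; the function names and the numerical tolerance used to decide which components count as nonzero are illustrative assumptions.
\begin{verbatim}
# Supports are compared up to a small tolerance; tol is an assumption.
support(x; tol = 1e-8) = Set(findall(v -> abs(v) > tol, x))

function recovery_metrics(w_true, w_star, k)
    S_true, S_star = support(w_true), support(w_star)
    A = 100 * length(intersect(S_true, S_star)) / k
    F = 100 * length(setdiff(S_star, S_true)) / max(length(S_star), 1)
    return A, F   # perfect recovery: A == 100 and F == 0
end
\end{verbatim}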
Crucially, we show that in this regard only exact sparse regression experiences a phase transition in its ability to select none but the relevant features. The \texttt{Lasso} always seems to favor adding irrelevant features in an attempt to improve its prediction performance. We will show that exact sparse regression is significantly better than the \texttt{Lasso} in discovering all true relevant features ($A\%= 100$), while far outperforming it in rejecting the obfuscating ones ($F\%= 0$). \subsection*{Contributions and structure} \begin{enumerate} \item In Section \ref{sec:sparse_regression}, we propose a novel binary convex reformulation of the sparse regression problem \eqref{eq:l0-regression:primal} that represents a new dual perspective on the problem. The reformulation does not use the big-$\mc M$~ constant present in the primal formulation \eqref{eq:bigm:primal}. In Section \ref{sec:cutting_plane}, we devise a novel cutting plane method and provide evidence that it can solve the sparse regression problem for sizes of $n$ and $p$ in the 100,000s. That is two orders of magnitude larger than what was achieved in \citep{bertsimas2014statistics}. The empirical computational results in this paper do away with the long-held belief that exact sparse regression for practical problem sizes is a lost cause. \item The ability to solve the sparse regression problem \eqref{eq:l0-regression:primal} for very high dimensional problems allows us to observe properties of the problem that demonstrate new phase transition phenomena. Specifically, we demonstrate experimentally in Section \ref{sec:empirical_results} that there is a threshold $n_t$ such that if $n \geq n_t$, then $w^\star_0$ recovers the true support ($A\%=100$ with $F\%=0$), the time to solve problem \eqref{eq:l0-regression:primal} is measured in seconds (for $n$ and $p$ in the 100,000s), and this time grows only linearly in $n$. Remarkably, these times are less than the time to solve \texttt{Lasso} for similar sizes. Moreover, if $n < n_t$, then the time to solve problem \eqref{eq:l0-regression:primal} grows proportionally to $\binom{p}{k}$. In other words, there is a phase transition in our ability to recover the true coefficients of the sparse regression problem and, most surprisingly, in our ability to solve it. Contrary to traditional complexity theory, which suggests that the difficulty of a problem increases with dimension, the sparse regression problem \eqref{eq:l0-regression:primal} has the property that for a small number of samples $n$, our approach takes a large amount of time to solve the problem, but most importantly the optimal solution does not recover the true signal. However, for a large number of samples $n$, our approach solves the problem extremely fast and recovers $A\%=100$ of the support of the true regressor $w_{\mathrm{true}}$. Significantly, the threshold $n_t$ for the phase transition to full recovery with exact sparse regression is significantly smaller than the corresponding threshold $n_1$ for the \texttt{Lasso}. Whereas the \texttt{Lasso} furthermore tends to include many irrelevant features as well, exact sparse regression achieves this full recovery at an almost $F\%=0$ false alarm rate. \item In Section \ref{sec:nonlinear}, we generalize our approach to sparse kernel regression. We believe that this nonlinear approach can become a fierce and more disciplined competitor to ``black box'' approaches such as neural networks. \end{enumerate} \subsection*{Notation} Denote with $[n]$ the set of integers ranging from one to $n$.
The set $\S^p_k$ denotes the set $$\S_k^p\defn\set{s \in \{0,1\}^p}{\mb 1^\top s\leq k},$$ which contains all binary vectors $s$ selecting at most $k$ components from $p$ possibilities. If $(y_1, \dots, y_p)$ is a collection of elements and $s$ an element of $\S^p_k$, then $y_s$ denotes the sub-collection of those $y_j$ with $s_j = 1$. We use $\norm{x}_0$ to denote the number of elements of a vector $x$ in $\mathrm{R}^p$ which are nonzero. Similarly, we use $\supp(x) = \set{s \in \{0, 1\}^p}{s_i = 1 \iff x_i \neq 0}$ to denote the support of a vector $x$, that is, the binary vector indicating its nonzero components; we freely identify this indicator vector with the corresponding index set. Finally, we denote by $\S_+^n$ ($\S_{++}^n$) the cone of $n\times n$ positive semidefinite (definite) matrices. \section{A convex binary reformulation of sparse linear regression} \label{sec:sparse_regression} Sparse regression taken at face value is recognized as a mixed continuous and discrete optimization problem. Indeed, the sparse regressor $w$ as an optimization variable in \eqref{eq:l0-regression:primal} takes values in a continuous subset of $\mathrm{R}^p$. The $\ell_0$-norm sparsity constraint, however, adds a discrete element to the problem. The support $s$ of the sparse regressor $w$ is discrete as it takes values in the binary set $\S^p_k=\set{s \in \{0,1\}^p}{\mb 1^\top s\leq k}$. It should not come as a surprise then that the reformulation \eqref{eq:bigm:primal} developed by \citet{bertsimas2014statistics} formulates the sparse regression problem as a \ac{mio} problem. For the reasons outlined in the introduction of this paper, we take an entirely different approach to the sparse regression problem \eqref{eq:l0-regression:primal}. To that end, we first briefly return to the ordinary regression problem, in which sparsity considerations are ignored and a linear relationship between input data $X$ and observations $Y$ is determined by solving the least squares regression problem \begin{equation} \label{eq:regression:primal} \begin{array}{rl} c \defn \min & \frac{1}{2\gamma} \norm{w}_2^2 + \frac{1}{2} \norm{Y - X w}^2_2 \\[0.5em] \mathrm{s.t.} & w \in \mathrm{R}^p. \end{array} \end{equation} We will refer to the quantity $c$ defined above as the regression loss. The quantity $c$ indeed agrees with the regularized empirical regression loss of the optimal linear regressor corresponding to the input data $X$ and response $Y$. We point out that the regression loss $c$ is convex as a function of the outer product $X X^\top$ and furthermore show that it admits an explicit characterization as a semidefinite representable function. \begin{lemma}[The regression loss function $c$] \label{lemm:convexity} The regression loss $c$ admits the following explicit characterizations \begin{align} c &= \frac12 Y^\top \left( \eye{n} - X \left(\eye{p}/\gamma + X^\top X\right)^{-1} X^\top \right) Y, \label{eq:char1} \\[0.5em] &= \frac12 Y^\top \left(\eye{n} + \gamma X X^\top \right)^{-1} Y. \label{eq:char2} \\ \intertext{Furthermore, the regression loss $c$ as a function of the kernel matrix $X X^\top$ is conic representable using the formulation} c(X X^\top) &=\min \set{\eta\in \mathrm{R}_+}{\begin{pmatrix} 2 \eta & Y^\top \\ Y & \eye{n} + \gamma X X^\top \end{pmatrix} \in \S^{n+1}_+}.
\label{eq:char3} \end{align} \end{lemma} \begin{proof} As the minimization problem \eqref{eq:regression:primal} over $w$ in $\mathrm{R}^p$ is an unconstrained \ac{qop}, the optimal solution $w^\star$ satisfies the linear relationship $(\eye{p}/\gamma + X^\top X)w^\star = X^\top Y.$ Substituting the expression for the optimal linear regressor $w^\star$ back into the optimization problem, we arrive at \[ c = \frac12 Y^\top Y - \frac12 Y^\top X \left(\eye{p}/\gamma + X^\top X\right)^{-1} X^\top Y, \] establishing the first explicit characterization \eqref{eq:char1} of the regression loss $c$. The second characterization \eqref{eq:char2} can be derived from the first with the help of the matrix inversion lemma found in \cite{hager1989updating}, stating the identity \[ \left(\eye{n} + \gamma X X^\top\right)^{-1} = \eye{n} - X \left( \eye{p}/\gamma + X^\top X\right)^{-1} X^\top. \] The Schur complement condition discussed at length in \cite{zhang2006schur} guarantees that, as $\eye{n} + \gamma X X^\top$ is strictly positive definite, we have the equivalence \[ 2 \eta \geq Y^\top \left(\eye{n} + \gamma X X^\top\right)^{-1} Y \iff \begin{pmatrix} 2 \eta & Y^\top \\ Y & \eye{n} + \gamma X X^\top \end{pmatrix} \in \S^{n+1}_+. \]% Representation \eqref{eq:char3} is thus an immediate consequence of expression \eqref{eq:char2}. \end{proof} We next establish that the sparse regression problem \eqref{eq:l0-regression:primal} can in fact be represented as a pure binary optimization problem. The following result provides a novel perspective on the sparse regression problem \eqref{eq:l0-regression:primal} and is of central importance in the paper. \begin{theorem}[Sparse linear regression] \label{thm:cio} The sparse regression problem \eqref{eq:l0-regression:primal} can be reformulated as the nonlinear optimization problem \begin{equation} \label{eq:opt:miop:kernel} \begin{array}{rl} \min & \displaystyle \frac12 Y^\top \left(\eye{n} + \gamma \textstyle\sum_{j\in[p]} s_j K_j \right)^{-1} Y \\[0.6em] \mathrm{s.t.} & s\in \S^p_k, \end{array} \end{equation} where the micro kernel matrices $K_j$ in $\S^n_+$ are defined as the dyadic products \begin{equation} \label{eq:kernel} \textstyle K_j \defn X_j X_j ^\top. \end{equation} \end{theorem} \begin{proof} We start the proof by separating the optimization variable $w$ in the sparse regression problem \eqref{eq:l0-regression:primal} into its support $s \defn \supp(w)$ and the corresponding non-zero entries $w_s$. Evidently, we can now write the sparse regression problem \eqref{eq:l0-regression:primal} as the bilevel minimization problem \begin{equation} \label{eq:reformulation} \min_{s\in\S^p_k}\left[\min_{w_s \in \mathrm{R}^k} ~ \frac{1}{2\gamma} \norm{w_s}^2_2 + \frac{1}{2} \norm{Y - X_s w_s}_2^2 \right]. \end{equation} It remains to be shown that the inner minimum can be found explicitly as the objective function of the optimization problem \eqref{eq:opt:miop:kernel}. Using Lemma \ref{lemm:convexity}, the inner minimum equals the regression loss $c(X_s X_s^\top)$, so that the bilevel problem \eqref{eq:reformulation} reduces to the binary minimization problem $\min_s \set{c(X_s X_s^\top)}{s\in\S^p_k}$. We finally remark that the outer product can be decomposed as the sum $$X_s X_s^\top = \textstyle\sum_{j\in[p]} s_j X_j X_j^\top,$$ thereby completing the proof.
\end{proof} An alternative to the sparse regression problem \eqref{eq:l0-regression:primal} is to consider the penalized form of the sparse regression problem: \begin{equation} \label{probl:penalized:l0} \begin{array}{rl} \displaystyle\min_{w\in \mathrm{R}^p} & \frac12 \norm{Y- Xw}_2^2 + \frac{1}{2\gamma} \norm{w}_2^2 + \lambda \norm{w}_0, \end{array} \end{equation} in which the $\ell_0$-norm constraint is migrated to the objective function. Analogously to Theorem \ref{thm:cio}, we can show that problem \eqref{probl:penalized:l0} can be reformulated as the nonlinear optimization problem \begin{equation*} \begin{array}{rl} \min & \displaystyle \frac12 Y^\top \left(\eye{n} + \gamma \textstyle\sum_{j\in[p]} s_j K_j \right)^{-1} Y + \lambda \cdot \mb 1^\top s \\[0.5em] \mathrm{s.t.} & s \in \{0, 1\}^p.\\[0.5em] \end{array} \end{equation*} While we do not need to pre-specify $k$ in problem \eqref{probl:penalized:l0}, we need to specify the penalty $\lambda$ instead. The optimization problem \eqref{eq:opt:miop:kernel} is a pure binary formulation of the sparse regression problem directly over the support $s$ instead of the regressor $w$ itself. As the objective function in \eqref{eq:opt:miop:kernel} is convex in the vector $s$, problem \eqref{eq:opt:miop:kernel} casts the sparse regression problem as a \ac{cio} problem. Nevertheless, we will never explicitly construct the \ac{cio} formulation as such and rather develop an efficient cutting plane algorithm in Section \ref{sec:cutting_plane}. We finally discuss here how the sparse regression formulation in Theorem \ref{thm:cio} is related to kernel regression and admits an interesting dual relaxation. \subsection{The kernel connection} In ordinary linear regression a linear relationship between input data $X$ and observations $Y$ is determined through solving the least squares regression problem \eqref{eq:regression:primal}. The latter optimization problem is also known as Ridge regression and balances the least-squares prediction error with a Tikhonov regularization term. One can solve the Ridge regression problem in the primal space -- the space of parameters $w$ -- directly. Ridge regression is indeed easily recognized to be a convex \ac{qop}. Ordinary linear regression problems can thus be formulated as \ac{qop}s of size linear in the number of regression coefficients $p$. Correspondingly, the big-$\mc M$~ formulation \eqref{eq:bigm:primal} can be regarded as a primal perspective on the sparse regression problem \eqref{eq:l0-regression:primal}. Formulation \eqref{eq:bigm:primal} indeed attempts to solve the sparse regression problem in the primal space of parameters $w$ directly. However, it is well known in the kernel learning community that far deeper results can be obtained if one approaches regression problems from their convex dual perspective due to \citet{vapnik1998support}. Indeed, in most of the linear regression literature the dual perspective is often preferred over its primal counterpart. We state here the central result in this context to make the exposition self-contained.
\begin{theorem}[{\citet{vapnik1998support}}] \label{thm:vapnik} The primal regression problem \eqref{eq:regression:primal} can equivalently be formulated as the unconstrained maximization problem \begin{equation} \label{eq:regression:dual} \begin{array}{rl} c = \max & -\frac{\gamma}{2} \alpha^\top K \alpha - \frac{1}{2} \alpha^\top \alpha + Y^\top \alpha \\[0.5em] \mathrm{s.t.} & \alpha \in \mathrm{R}^n, \\ \end{array} \end{equation} where the kernel matrix $K = X X^\top$ in $\S^n_+$ is a positive semidefinite matrix. \end{theorem} The dual optimization problem \eqref{eq:regression:dual} is a convex \ac{qop} as well and, surprisingly, scales only with the number of samples $n$ and is insensitive to the input dimension $p$. This last surprising observation is what gives the dual perspective its historical dominance over its primal counterpart in the context of kernelized regression discussed in \citep{scholkopf2002learning}. When working with high dimensional data for which the number of inputs $p$ is vastly bigger than the number of samples $n$, the dual optimization problem \eqref{eq:regression:dual} is smaller and often easier to solve. For any $i$ and $j$, the kernel matrix entry $K(i, j)$ corresponds to the inner product between the input samples $x_i$ and $x_j$ in $\mathrm{R}^p$. The matrix $K$ is usually referred to as the kernel matrix or Gram matrix and is always symmetric and positive semidefinite. Since the kernel specifies the inner products between all pairs of sample points in $X$, it completely determines the relative positions of those points in the embedding space. Our \ac{cio} formulation \eqref{eq:opt:miop:kernel} of the sparse optimization problem \eqref{eq:l0-regression:primal} can be seen to take a dual perspective on the sparse regression problem \eqref{eq:l0-regression:primal}. That is, our novel optimization formulation \eqref{eq:opt:miop:kernel} is recognized as a subset selection problem in the space of kernels instead of regressors. It can indeed be remarked that when the sparsity constraint is omitted, the kernel matrix reduces to the standard kernel matrix $$K = \textstyle\sum_{j\in[p]} X_j X_j^\top = X X^\top.$$ \subsection{A second-order cone relaxation} Many heuristics approach the sparse regression problem \eqref{eq:l0-regression:primal} through a continuous relaxation. Indeed, a continuous relaxation of the big-$\mc M$~ formulation \eqref{eq:bigm:primal} of the sparse regression problem is immediately recognized as the convex \ac{qop} \begin{equation} \label{eq:bigm:relaxation} \begin{array}{rl} \displaystyle \min_w & \frac{1}{2\gamma} \norm{w}_2^2 + \frac{1}{2} \norm{Y - X w}_2^2 \\[0.5em] \mathrm{s.t.} & \norm{w}_\infty \leq \mathcal M, ~\norm{w}_1 \leq \mathcal{M} k, \end{array} \end{equation} which \citet{bertsimas2014statistics} recognized as a slightly stronger relaxation than the \texttt{Elastic Net} \eqref{eq:l1-regression:primal}. It thus makes sense to look at the continuous relaxation of the sparse kernel optimization problem \eqref{eq:opt:miop:kernel} as well. Note that both the big-$\mc M$~ relaxation \eqref{eq:bigm:relaxation} and the \texttt{Elastic Net} \eqref{eq:l1-regression:primal} provide lower bounds to the exact sparse regression problem \eqref{eq:l0-regression:primal} in terms of a \ac{qop}. However, neither of these relaxations is very tight. In Theorem \ref{thm:cio:relaxation} we will show that a more intuitive and comprehensive lower bound based on our \ac{cio} formulation \eqref{eq:opt:miop:kernel} can be stated as a \ac{socp}.
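As an aside, the closed forms obtained so far admit a quick numerical sanity check. The following hedged \texttt{Julia} snippet verifies on random data that the characterizations \eqref{eq:char1} and \eqref{eq:char2} of the regression loss $c$ and the optimal value of the dual \eqref{eq:regression:dual} coincide; the instance sizes are arbitrary.
\begin{verbatim}
using LinearAlgebra, Random

rng = MersenneTwister(0)
n, p, gamma = 50, 10, 2.0
X, Y = randn(rng, n, p), randn(rng, n)
K = X * X'                                       # kernel (Gram) matrix

c1 = 0.5 * dot(Y, (I(n) - X * ((I(p) / gamma + X' * X) \ X')) * Y)  # (char1)
c2 = 0.5 * dot(Y, (I(n) + gamma * K) \ Y)                           # (char2)
alpha = (I(n) + gamma * K) \ Y                   # maximizer of the dual
c3 = -0.5 * gamma * dot(alpha, K * alpha) - 0.5 * dot(alpha, alpha) +
      dot(Y, alpha)                              # dual objective value
@assert isapprox(c1, c2) && isapprox(c2, c3)
\end{verbatim}
The closed form of the dual maximizer $\alpha^\star = (\eye{n}+\gamma K)^{-1} Y$ used here is established formally in Lemma \ref{lemm:derivatives} below.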
A naive attempt to state a continuous relaxation of the \ac{cio} formulation \eqref{eq:opt:miop:kernel}, in which we would replace the binary set $\S_k^p$ with its convex hull, would result in a large but convex \ac{sdp} problem. Indeed, the convex hull of the set $\S^p_k$ is the convex polytope $\{s\in[0,1]^p:\mb 1^\top s \leq k\}$. It is, however, folklore that large \ac{sdp}s are notoriously difficult to solve in practice. For this reason, we reformulate here the continuous relaxation of \eqref{eq:opt:miop:kernel} as a small \ac{socp} for which very efficient solvers do exist. This continuous relaxation furthermore provides additional insight into the binary formulation of the sparse regression problem \eqref{eq:l0-regression:primal}. Using Theorem \ref{thm:vapnik}, we can equate the continuous relaxation of problem \eqref{eq:opt:miop:kernel} to the following saddle point problem \begin{equation} \label{eq:saddle_point} \min_{ s \in \mathop{\operatorname{conv}}( \S^p_k)} \, \max_{\alpha \in \mathrm{R}^n} \, -\frac{\gamma}{2} \textstyle \sum_{j\in[p]} s_j \cdot \left[ \alpha^\top K_j \alpha \right] - \frac{1}{2} \alpha^\top \alpha + Y^\top \alpha . \end{equation} Note that the saddle point function is linear in $s$ for any fixed $\alpha$, and concave and continuous in $\alpha$ for any fixed $s$ in the compact set $\mathop{\operatorname{conv}}(\S^p_k)$. It then follows (see \citet{sion1958general}) that we can exchange the minimum and maximum operators. By doing so, the continuous relaxation of our \ac{cio} problem satisfies \begin{equation} \label{eq:continuous_relaxation} \begin{split} \min_{s\in\mathop{\operatorname{conv}}(\S^p_k)} \, c&(\textstyle\sum_{j\in[p]} s_j K_j) = \\ & \max_{\alpha \in \mathrm{R}^n} - \frac{1}{2} \alpha^\top \alpha + Y^\top \alpha - \frac{\gamma}{2} \max_{s \in \mathop{\operatorname{conv}}(\S^p_k)} \,\textstyle \sum_{j\in[p]} s_j \cdot \alpha^\top K_j \alpha. \end{split} \end{equation} The inner maximization problem admits an explicit representation as the sum of the $k$ largest components of the vector with components $\alpha^\top K_j \,\alpha$ ranging over $j$ in $[p]$. It is thus worth noting that this continuous relaxation has a discrete element to it. The continuous relaxation of the \ac{cio} problem \eqref{eq:opt:miop:kernel} can furthermore be written down as a tractable \ac{socp}. \begin{theorem} \label{thm:cio:relaxation} The continuous relaxation of the sparse kernel regression problem \eqref{eq:opt:miop:kernel} can be reduced to the following \ac{socp} \begin{equation} \label{eq:opt:miop:dual} \begin{array}{rl} \displaystyle\min_{s\in \mathop{\operatorname{conv}} (\S^p_k)}\, c(\textstyle\sum_{j\in[p]} s_j K_j) = \max & \displaystyle-\frac{1}{2} \ip{\alpha}{\alpha} + \ip{Y}{\alpha} - \ip{\mb 1}{u} - k t \\[0.6em] \mathrm{s.t.} & \alpha \in \mathrm{R}^n, ~ t\in\mathrm{R}, ~ u \in \mathrm{R}_+^p, \\[0.5em] & \displaystyle \frac2\gamma u_j \geq \alpha^\top K_j \alpha - \frac{2}{\gamma} t, \quad \forall j\in[p]. \end{array} \end{equation} \end{theorem} \begin{proof} The continuous relaxation of the optimization problem \eqref{eq:opt:miop:kernel} was already identified as the optimization problem \eqref{eq:continuous_relaxation}. We momentarily focus on the inner maximization problem in \eqref{eq:continuous_relaxation} and show that it admits a closed form expression. As the only constraint on the (continuous) selection vector $s$ is a knapsack constraint, the inner maximum is nothing but the sum of the $k$ largest terms in the objective.
Hence, we have \[ \max_{s \in \mathop{\operatorname{conv}}(\S^p_k)} \, \textstyle \sum_{j\in[p]} s_j \cdot \alpha^\top K_j \alpha = \max_{[k]}([\alpha^\top K_1 \alpha, \dots, \alpha^\top K_p \alpha]), \] where $\max_{[k]}$ is defined as the convex function mapping its argument to the sum of its $k$ largest components. Using standard linear optimization duality we have \[ \begin{array}{rlcrl} \max_{[k]}(x) = \max & x^\top s & = & \min & k t + \mb 1^\top u \\[0.5em] \mathrm{s.t.} & s \in \mathrm{R}^p_+ & & \mathrm{s.t.} & t \in \mathrm{R}, ~ u \in \mathrm{R}^p_+ \\[0.5em] & s \leq \mb 1, ~\mb 1^\top s=k & & & u_j \geq x_j -t, \quad \forall j \in [p], \end{array} \] where $t$ and $u$ are the dual variables corresponding to the constraints in the maximization characterization of the function $\max_{[k]}$. Making use of this dual characterization of $\max_{[k]}$ in expression \eqref{eq:continuous_relaxation} gives us the desired result. \end{proof} The continuous relaxation \eqref{eq:opt:miop:dual} of the sparse regression problem \eqref{eq:l0-regression:primal} discussed in this section is thus recognized as selecting the $k$ largest terms $\alpha^\top K_j \alpha$ to construct the optimal dual lower bound. We shall find that this dual offers an excellent warm start when attempting to solve the sparse linear regression problem exactly. \section{A cutting plane algorithm} \label{sec:cutting_plane} We have formulated the sparse regression problem \eqref{eq:l0-regression:primal} as a pure binary convex optimization problem in Theorem \ref{thm:cio}. Unfortunately, no commercial solvers are available that directly target \ac{cio} problems of the type \eqref{eq:opt:miop:kernel}. In this section, we discuss a tailored solver largely based on the algorithm described by \citet{duran1986outer}. The algorithm is a cutting plane approach which iteratively solves increasingly better \ac{mio} approximations to the \ac{cio} formulation \eqref{eq:opt:miop:kernel}. Furthermore, the cutting plane algorithm avoids constructing the \ac{cio} formulation \eqref{eq:opt:miop:kernel} explicitly, which can prove burdensome when working with high-dimensional data. We provide numerical evidence in Section \ref{sec:empirical_results} that the algorithm described here is indeed extremely efficient. \subsection{Outer approximation algorithm} \label{sec:outer-appr-algor} In order to solve the \ac{cio} problem \eqref{eq:opt:miop:kernel}, we follow the outer approximation approach introduced by \citet{duran1986outer}. The algorithm described by \citet{duran1986outer} proceeds to find a solution to the \ac{cio} problem \eqref{eq:opt:miop:kernel} by constructing a sequence of \ac{mio} approximations based on cutting planes. As the pseudocode in Algorithm \ref{alg:outer_approximation} makes apparent, it constructs a piece-wise affine lower bound to the convex regression loss function $c$ defined in equation \eqref{eq:char3}.
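As a complementary illustration, the following hedged \texttt{Julia} sketch renders this loop executable on tiny instances; the formal pseudocode is given in Algorithm \ref{alg:outer_approximation}. The \ac{mio} oracle is deliberately implemented by brute-force enumeration of the supports of size $k$, so the sketch only scales to very small $p$; a serious implementation replaces it with a \ac{mio} solver and the lazy constraint callbacks discussed below. The function names are ours, and the loss evaluation already uses the efficient characterization \eqref{eq:char1}, a detail we return to later in this section.
\begin{verbatim}
using LinearAlgebra
using Combinatorics   # provides combinations(); an assumed dependency

# Regression loss c(s) and subgradient, cf. (char1) and Lemma (derivatives)
function loss_and_grad(Y, X, s, gamma)
    idx = findall(==(1), s)
    Xs = X[:, idx]
    alpha = Y - Xs * ((I(length(idx)) / gamma + Xs' * Xs) \ (Xs' * Y))
    return 0.5 * dot(Y, alpha),
           [-0.5 * gamma * dot(X[:, j], alpha)^2 for j in 1:size(X, 2)]
end

function outer_approximation(Y, X, k, gamma; maxiter = 1_000, tol = 1e-8)
    p = size(X, 2)
    cuts = Tuple{Float64,Vector{Float64},Vector{Int}}[]
    s = vcat(ones(Int, k), zeros(Int, p - k))    # naive warm start
    for _ in 1:maxiter
        c, g = loss_and_grad(Y, X, s, gamma)
        push!(cuts, (c, g, copy(s)))
        # Brute-force MIO oracle minimizing the piecewise affine lower bound;
        # supports of size exactly k suffice as adding features never hurts.
        best_eta, best_s = Inf, s
        for idx in combinations(1:p, k)
            cand = zeros(Int, p); cand[idx] .= 1
            eta = maximum(ci + dot(gi, cand - si) for (ci, gi, si) in cuts)
            eta < best_eta && ((best_eta, best_s) = (eta, cand))
        end
        s = best_s
        best_eta >= loss_and_grad(Y, X, s, gamma)[1] - tol && return s
    end
    return s
end
\end{verbatim}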
\begin{algorithm} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \caption{The outer approximation process} \label{alg:outer_approximation} \Input{$Y \in \mathrm{R}^{n}$, $X \in \mathrm{R}^{n\times p}$ and $k \in [1, p]$} \Output{$s^\star \in \S^p_k$ and $w^\star \in \mathrm{R}^p$} $s_1 \gets$ warm start \\ $\eta_1 \gets 0$ \\ $t \gets 1$ \\ \While{$\eta_t < c(s_t)$}{ $ s_{t+1}, ~\eta_{t+1} \gets \arg \min_{s, \, \eta} \, \{ \, \eta \in \mathrm{R}_+ ~\mathrm{s.t.} ~s \in \S^p_k, ~~\eta \geq c(s_i) + \nabla c(s_i) (s-s_i), ~ \forall i \in [t] \}$ \\ $t \gets t+1$ } $s^\star \gets s_{t}$ \\ $w^\star \gets 0$, \quad $w^\star_{s^\star} \gets \left(\eye{k}/\gamma + X_{s^\star}^\top X_{s^\star} \right)^{-1} X_{s^\star}^\top Y$ \end{algorithm} At each iteration, the added cutting plane $\eta \geq c(s_t) + \nabla c (s_t) (s-s_t)$ cuts off the current binary solution $s_t$, unless it happened to be optimal in \eqref{eq:opt:miop:kernel}. As the algorithm progresses, the constructed outer approximation $$c_t(s) \defn \max_{i\in[t]} \, c(s_i) + \nabla c(s_i) (s-s_i)$$ becomes an increasingly better approximation to the regression loss function $c$ of interest. Unless the current binary solution $s_t$ is optimal, a new cutting plane will refine the feasible region of the problem by cutting off the current feasible binary solution. \begin{theorem}[Cutting Plane Method] The procedure described in Algorithm \ref{alg:outer_approximation} terminates after a finite number of cutting planes and returns the exact sparse regression solution $w_0^\star$ of \eqref{eq:l0-regression:primal}. \end{theorem} The previous theorem is an encouraging corollary of a result found in \citep{fletcher1994solving}. It nevertheless remains the case that, from a theoretical point of view, exponentially many cutting planes may need to be computed in the worst case, potentially rendering our approach impractical. Furthermore, at each iteration a \ac{mio} problem needs to be solved. This can be done by constructing a branch-and-bound tree, c.f.\ \citet{lawler1966branch}, which itself may require an exponential number of leaves to be explored. This complexity behavior is however to be expected, as exact sparse regression is known to be an $NP$-hard problem. Surprisingly, the empirical timing results presented in Section \ref{sec:empirical_results} suggest that the situation is much more interesting than what complexity theory might indicate. In what remains of this section, we briefly discuss three techniques to carry out the outer approximation algorithm more efficiently than a naive implementation would. In general, outer approximation methods are known as ``multi-tree'' methods because every time a cutting plane is added, a slightly different \ac{mio} problem must be solved anew by constructing a branch-and-bound tree. Consecutive \ac{mio}s in Algorithm \ref{alg:outer_approximation} differ in only one additional cutting plane. Over the course of our iterative cutting plane algorithm, a naive implementation would thus require that multiple branch-and-bound trees be built in order to solve the successive \ac{mio} problems. We implement a ``single tree'' version of Algorithm \ref{alg:outer_approximation} by using dynamic constraint generation, known in the optimization literature as either a lazy constraint or column generation method. Lazy constraint formulations described in \citep{barnhart1998branch} dynamically add cutting planes to the model whenever a binary feasible solution is found.
This saves the rework of rebuilding a new branch-and-bound tree every time a new binary solution is found in Algorithm \ref{alg:outer_approximation}. Lazy constraint callbacks are a relatively new type of callback. To date, solvers which provide lazy constraint callback functionality include \texttt{CPLEX}, \texttt{Gurobi} and \texttt{GLPK}. In what follows, we discuss two additional tailored adjustments to the general outer approximation method which render the overall method more efficient. The first concerns a way to evaluate both the regression loss function $c$ and its subgradient $\nabla c$ efficiently. The second discusses a heuristic to compute a warm start $s_1$ to ensure that the first cutting plane added is of high quality, causing the outer approximation algorithm to converge more quickly. \subsection{Efficient dynamic constraint generation} In the outer approximation method considered in this document to solve the \ac{cio} problem \eqref{eq:opt:miop:kernel}, linear constraints of the type \begin{equation} \label{eq:linearization} \eta \geq c(\bar{s}) + \nabla c(\bar{s}) (s - \bar{s}), \end{equation} at a given iterate $\bar s$, are added as cutting planes at every iteration. As such constraints need to be added dynamically, it is essential that we can evaluate both the regression loss function $c$ and its subgradient components efficiently. \begin{lemma}[Derivatives of the optimal regression loss $c$] \label{lemm:derivatives} Suppose the kernel matrix $K$ is a differentiable function of the parameter $s$. Then, the gradient of the regression loss function $c(K) = \frac12 Y^\top \alpha^\star(K)$ can be stated as \[ \nabla c(s) = -\alpha^\star(K) ^\top \cdot \frac{\gamma}{2} \frac{\mathrm{d} K}{\mathrm{d} s} \cdot \alpha^\star(K), \] where $\alpha^\star(K)$ maximizes \eqref{eq:regression:dual} and hence is the solution to the linear system $$\alpha^\star(K) = \left(\eye{n} + \gamma K \right)^{-1} Y.$$ \end{lemma} We note that the naive numerical evaluation of the convex loss function $c$ or any of its subgradients would require the inversion of the regularized kernel matrix $\eye{n}+\gamma \sum_{j\in[p]} \bar s_j K_j$. The regularized kernel matrix is dense in general and always of full rank. Unfortunately, matrix inversion of general matrices requires work in the order of $\mathcal O(n^3)$ floating point operations and quickly becomes excessive for sample sizes $n$ in the order of a few 1,000s. Bear in mind that such an inversion needs to take place for each cutting plane added in the outer approximation Algorithm \ref{alg:outer_approximation}. It would thus appear that computation of the regression loss $c$ based on its explicit characterization \eqref{eq:char2} is very demanding. Fortunately, the first explicit characterization \eqref{eq:char1} can be used to bring the necessary work down to $\mathcal O(k^3+n k)$ floating point operations, as we will show now. Comparing equalities \eqref{eq:char1} and \eqref{eq:char2} results immediately in the identity \begin{equation} \label{eq:woodbury} \alpha^\star(\textstyle\sum_{j\in[p]} s_j K_j) = \left( \eye{n} - X_s (\eye{k}/\gamma+ X_s^\top X_s)^{-1} X_s^\top \right) Y. \end{equation} The same result can also be obtained by applying the matrix inversion lemma stated in \citep{hager1989updating} to the regularized kernel matrix, noting that the micro kernels $K_j$ are rank-one dyadic products.
The main advantage of the previous formula is that it merely requires the inverse of the much smaller capacitance matrix $C \defn \eye{k}/\gamma+X_s^\top X_s$ in $\S_{++}^{k}$ instead of the dense full-rank regularized kernel matrix in $\S_{++}^{n}$. Using expression \eqref{eq:woodbury}, both the regression loss function $c$ and any of its subgradients can be evaluated using $\mathcal O(k^3+n k)$ instead of $\mathcal O(n^3)$ floating point operations. When the number of samples $n$ is significantly larger than $k$, the matrix inversion lemma provides a significant edge over a vanilla matrix inversion. We note that from a statistical perspective this must always be the case if there is any hope that sparse regression might yield statistically meaningful results. Pseudocode implementing the ideas discussed in this section is provided in Algorithm \ref{alg:regression_function}. \begin{algorithm} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \caption{Regression function and subgradients} \label{alg:regression_function} \Input{$Y \in \mathrm{R}^{n}$, $X \in \mathrm{R}^{n\times p}$, $s \in \S^p_k$ and $\gamma \in \mathrm{R}_{++}$} \Output{$c \in \mathrm{R}_+$ and $\nabla c \in \mathrm{R}^p$} $\alpha^\star \gets Y - X_s (\eye{k}/\gamma+ X_s^\top X_s)^{-1} X_s^\top Y$ \\ $c \gets \frac12 Y^\top \alpha^\star$ \\ \For {$j$ in $[p]$ } { $\nabla c_j \gets -\frac{\gamma}{2} (X_j^\top \alpha^\star)^2$ \\ } \end{algorithm} \subsection{Dual warm starts} Regardless of the initial selection $s_1$, the outer approximation Algorithm \ref{alg:outer_approximation} will eventually return the optimal subset solution $s^\star$ to the sparse regression formulation in Theorem \ref{thm:cio}. Nevertheless, to improve computational speed in practice it is often desirable to start with a high-quality warm start rather than an arbitrary feasible point in $\S^p_k$. As already briefly hinted at, a high-quality warm start can be obtained by solving the continuous relaxation \eqref{eq:opt:miop:dual}. More specifically, we take as warm start $s_1$ to the outer approximation algorithm the solution to \begin{equation} s_1 \in \arg \max_{s \in \S_k^p} ~\textstyle\sum_{j\in[p]} s_j \cdot \alpha^{\star\top} K_j \alpha^\star, \label{eq:tr1} \end{equation} where $\alpha^\star$ is optimal in \eqref{eq:opt:miop:dual}. Note that the solution to problem \eqref{eq:tr1} can be found explicitly as the vector indicating the $k$ largest components of $(\alpha^{\star\top} K_1 \alpha^\star, \dots, \alpha^{\star\top} K_p \alpha^\star)$. We finally remark that the \texttt{Lasso} or the solution found by the first-order heuristic developed in \citep{bertsimas2014statistics} could be used equally well. \section{Scalability and phase transitions} \label{sec:empirical_results} To evaluate the effectiveness of the cutting plane algorithm developed in Section \ref{sec:cutting_plane}, we report its ability to recover the correct regressors as well as its running time. In this section, we present empirical evidence for two critically important observations. The first observation is that our cutting plane algorithm scales to provable optimality in seconds for large regression problems with $n$ and $p$ in the 100,000s. That is two orders of magnitude larger than the known exact sparse regression methods in \citep{bertsimas2014statistics} and takes away the main propelling justification for heuristic approaches for many regression instances in practice.
The second observation relates to the fact that we observe phase transition phenomena in the three important properties which characterize our exact sparse regression formulation: its ability to find all relevant features ($A\%$), its rejection of irrelevant features from the obfuscating bulk ($F\%$), and the time ($T$) it takes to find an exact sparse regressor using our cutting plane Algorithm \ref{alg:outer_approximation}. All algorithms in this document are implemented in \texttt{Julia} and executed on a standard \texttt{Intel(R) Xeon(R) CPU E5-2690 @ 2.90GHz} running \texttt{CentOS release 6.7}. All optimization was done with the help of the commercial mathematical optimization solver \texttt{Gurobi version 6.5}. \subsection{Data description} \label{ssec:data_description} Before we present the empirical results, we first describe the properties of the synthetic data which shall be used throughout this section. The input and response data are generated synthetically, with the observations $Y$ and input data $X$ satisfying the linear relationship \[ Y = X w_{\mathrm{true}} + E. \] The unobserved true regressor $w_{\mathrm{true}}$ has exactly $k$ nonzero components at indices selected uniformly without replacement from $[p]$. Likewise, the nonzero coefficients in $w_{\mathrm{true}}$ are drawn uniformly at random from the set $\{-1, +1\}$. The observation $Y$ consists of the signal $S \defn X w_{\mathrm{true}}$ corrupted by the noise vector $E$. The noise components $E_i$ for $i$ in $[n]$ are drawn \ac{iid} from a normal distribution $N(0, \sigma^2)$ and scaled such that $$\sqrt{\mathrm{SNR}} = \norm{S}_2 / \norm{E}_2.$$ Evidently, as the \ac{snr} increases, recovery of the unobserved true regressor $w_{\mathrm{true}}$ from the noisy observations can be done with higher precision. We have yet to specify how the input matrix $X$ is chosen. We assume here that the input data samples $X = (x_1, \dots, x_n)$ are drawn from an \ac{iid} source with Gaussian distribution; that is, $$x_i \sim N(0, \Sigma), \quad \forall i\in [n].$$ The covariance matrix $\Sigma$ will be parametrized by the correlation coefficient $\rho \in [0, 1)$ as $\Sigma(i, j) \defn \rho^{\abs{i-j}}$ for all $i$ and $j$ in $[p]$. As $\rho$ tends to $1$, the columns of the data matrix $X$ become more alike, which should impede the discovery of nonzero components of the true regressor $w_{\mathrm{true}}$ by obfuscating them with highly correlated look-alikes. In the extreme case in which $\rho =1$, all columns of $X$ are identical, at which point there is no hope of discovering the true regressor $w_{\mathrm{true}}$ even in the noiseless case. \subsection{Scalability} We provide strong evidence that the cutting plane Algorithm \ref{alg:outer_approximation} represents a truly scalable algorithm for the exact sparse regression problem \eqref{eq:l0-regression:primal} with $n$ and $p$ in the 100,000s. As many practical regression problems are within reach of our exact cutting plane Algorithm \ref{alg:outer_approximation}, the need for convex surrogate regressors such as \texttt{Elastic Net} and \texttt{Lasso} is greatly diminished. We note that an effective regression must find all relevant features ($A\%=100$) while at the same time rejecting those that are irrelevant $(F\%=0)$. To separate both efforts, we assume in this and the following section that the true number $k$ of nonzero components of the ground truth $w_\mathrm{true}$ is known.
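Before proceeding, we make the data generation procedure of Section \ref{ssec:data_description} concrete with a hedged \texttt{Julia} sketch; the function name and the random seed are illustrative, not the exact experiment code.
\begin{verbatim}
using LinearAlgebra, Random

function generate_data(n, p, k, rho, SNR; rng = MersenneTwister(1))
    Sigma = [rho^abs(i - j) for i in 1:p, j in 1:p]
    X = randn(rng, n, p) * cholesky(Sigma).U     # rows x_i ~ N(0, Sigma)
    w_true = zeros(p)
    w_true[randperm(rng, p)[1:k]] .= rand(rng, [-1.0, 1.0], k)
    S = X * w_true                               # noiseless signal
    E = randn(rng, n)
    E .*= norm(S) / (sqrt(SNR) * norm(E))        # so that sqrt(SNR) = |S|/|E|
    return S + E, X, w_true                      # observations Y, inputs X
end
\end{verbatim}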
When the true sparsity $k$ is known, $A\%+F\%=100$, which allows us to focus entirely on the accuracy of the obtained regressors. Evidently, in most practical regression instances $k$ needs to be inferred from the data as well. Incorrect determination of this number can indeed lead to high false alarm rates. We will return to this important issue of variable selection and false alarm rates at the end of the subsequent section. For the sake of comparison, we will also discuss the time it takes to solve the \texttt{Lasso} heuristic \eqref{eq:l1-regression:primal} as implemented by the \texttt{GLMNet} implementation of \citet{friedman2013glmnet}. Contrary to exact sparse regression, no direct way exists to obtain a sparse regressor from solving the convex surrogate heuristic \eqref{eq:l1-regression:primal}. In order to facilitate a fair comparison, however, we take as a heuristic sparse solution that \texttt{Lasso} regressor along a path of optimal solutions in \eqref{eq:l1-regression:primal} for varying $\lambda$ which is the least regularized but has exactly $k$ nonzero coefficients; a sketch of this selection rule is given below. \begin{table} \begin{center} \subimport{}{scalability.tex} \end{center} \vspace{0.5em} \caption{A comparison between exact sparse regression using our cutting plane algorithm and the \texttt{Lasso} heuristic with respect to their solution time in seconds, applied to noisy ($\sqrt{\mathrm{SNR}}=20$) and lightly correlated data ($\rho=0.1$) explained by either $k=10$, $k=20$ or $k=30$ relevant features. These problem instances are truly large scale, as for the largest instance, counting $n=100,000$ samples and $p=200,000$ regressors, a memory exception was thrown when building the data matrices $Y$ and $X$. Remarkably, even on this scale the cutting plane algorithm can be significantly faster than the \texttt{Lasso} heuristic. } \label{table:scalability} \end{table} In Table \ref{table:scalability} we report the timing results for exact sparse linear regression as well as for the \texttt{Lasso} heuristic applied to noisy ($\sqrt{\mathrm{SNR}}=20$) and lightly correlated ($\rho=0.1$) synthetic data. We report neither the accuracy nor the false alarm rate of the obtained solutions, as this specific data is in the regime where exact discovery of the support occurs for both the \texttt{Lasso} heuristic and exact sparse regression. Remarkably, the timing results in Table \ref{table:scalability} suggest that using an exact method does not impede our ability to obtain the solution fast. The problem instances displayed are truly large scale, as indeed for the largest problem instance a memory exception was thrown when building the data matrices $X$ and $Y$. In fact, even in this large scale setting our cutting plane algorithm can be significantly faster than the \texttt{Lasso} heuristic. Admittedly though, the \texttt{GLMNet} implementation returns an entire solution path for varying $\lambda$ instead of a single regression model. Compared, though, to the performance reported for exact sparse regression approaches in \citep{furnival2000regressions} and \citep{bertsimas2014statistics}, our method presents a potentially game-changing speedup of at least two orders of magnitude. The results in Table \ref{table:scalability} thus refute the widely held belief that exact sparse regression is not feasible at practical scales. In fact, we consider pointing out that exact sparse regression is not hopeless in practice to be an important contribution of this paper.
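As an aside, the selection of the \texttt{Lasso} comparator used in Table \ref{table:scalability} can be sketched as follows. The snippet assumes the \texttt{GLMNet.jl} interface, namely a \texttt{glmnet} function returning a regularization path with fields \texttt{betas} and \texttt{lambda} ordered from most to least regularized; it is an illustrative sketch rather than the exact benchmarking code.
\begin{verbatim}
using GLMNet   # assumption: the Julia wrapper of glmnet is installed

function lasso_with_k_nonzeros(X, Y, k)
    path = glmnet(X, Y)       # lambda path, most regularized first
    nnz = [count(!iszero, path.betas[:, i]) for i in 1:length(path.lambda)]
    i = findlast(==(k), nnz)  # least regularized fit with exactly k nonzeros
    i === nothing && error("no lambda on the path yields exactly k nonzeros")
    return collect(path.betas[:, i])
end
\end{verbatim}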
Although a hard theoretical picture is not yet available as to why the cutting plane Algorithm \ref{alg:outer_approximation} proves so efficient, we hope that these encouraging results spur interest in exact approaches towards sparse regression. In the subsequent section, we will come to see that there is more to the scalability of exact sparse regression than meets the eye. \subsection{Phase transition phenomena} \label{ssec:phase_transition_phenomena} We have established that the cutting plane Algorithm \ref{alg:outer_approximation} scales to provable optimality for problems with the number of samples and regressor dimension in the 100,000s. Let us remark that for the results presented in Table \ref{table:scalability}, both the exact and heuristic algorithms returned a sparse regressor with correct support and otherwise were of similar precision. In cases where the data does not allow a statistically meaningful recovery of the ground truth $w_{\mathrm{true}}$, an interesting phenomenon occurs. We present and discuss in this part of the paper three remarkable phase transition phenomena. The first concerns the statistical power of sparse regression, whereas the second concerns our ability to find the optimal sparse regressor efficiently. We will refer to the former transition as the accuracy transition, while referring to the latter as the complexity transition. The false alarm phase transition is the third phase transition phenomenon and relates to the ability of exact sparse regression to reject irrelevant features from the obfuscating bulk. We will argue here, using strong empirical evidence, that these transitions are in fact intimately related. Of all three phase transitions discussed here, only the accuracy phase transition has previously received attention and is also understood theoretically. The accuracy phase transition describes the ability of the sparse regression formulation \eqref{eq:l0-regression:primal} to uncover the ground truth $w_{\mathrm{true}}$ from corrupted measurements alone. The corresponding phase transition for the \texttt{Lasso} has been extensively studied in the literature by, amongst many others, \citet{buhlmann2011statistics, hastie2015statistical} and \citet{wainwright2009sharp} and is considered well understood by now. As mentioned, with uncorrelated input data ($\rho=0$) a phase transition occurs at the curve \eqref{eq:wainwright}. In the regime $n > n_1$ exact recovery with the \texttt{Lasso} occurs with high probability for some $\lambda>0$, whereas otherwise the probability of successful recovery drops to zero. A similar phase transition has been observed by \citet{zheng2015does} and \citet{wang2011performance} for exact sparse regression as well, although this transition is far less understood from a theoretical perspective than the similar transition for its heuristic counterpart. Recently though, \citet{gamarnik2017high} have made some headway and shown that an all-or-nothing phase transition phenomenon occurs for exact sparse regression with binary coefficients as well. \begin{theorem}[\citet{gamarnik2017high}] \label{thm:gamarnik} Let the data ($\rho=0$) be generated as in Section \ref{ssec:data_description}. Let $\epsilon>0$. Suppose $k \log k \leq C n$ for some $C> 0$ and for all $k$ and $n$. Suppose furthermore that $k\to\infty$ and $\sigma^2/k\to 0$. If $n\geq (1-\epsilon) n_0$, then with high probability \[ \frac1k \norm{w_0^\star-w_{\mathrm{true}}}_0 \to 0.
\] Whereas when $n \leq (1-\epsilon) n_0$, with high probability \( \frac1k \norm{w_0^\star-w_{\mathrm{true}}}_0 \to 1. \) \end{theorem} Although the previous theorem holds for unregularized sparse regression ($\gamma \to \infty$), the same holds for other appropriately chosen values of the regularization parameter as well. Interestingly, the proof technique of \citet{gamarnik2017high} for Theorem \ref{thm:gamarnik} might give additional intuitive insight into the phase transition phenomena regarding the statistical accuracy and computational complexity of the exact sparse regression problem, on which we now report empirically. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{l1_cio_data.pdf} \end{center} \caption{A comparison between exact sparse regression using our cutting plane algorithm and the approximate \texttt{Lasso} heuristic on uncorrelated data ($\rho=0$) with noise ($\sqrt{\mathrm{SNR}}=20$) counting $p=2,000$ regressors of which only $k=10$ are relevant. In the top panel we depict the time in minutes necessary to solve the sparse regression problem using either method as a function of the number of samples. The panel below gives the corresponding accuracy $A\%$ of the regressors as a function of the number of samples. The red vertical line at $n_1=152$ samples depicts the accuracy phase transition concerning the ability of the \texttt{Lasso} heuristic to recover the support of the ground truth $w_{\mathrm{true}}$. The blue vertical line at $n_t=120$ does the same for exact sparse regression. The final panel indicates the ability of both methods to reject obfuscating features in terms of the false alarm rate $F\%$. It can thus be seen that exact sparse regression does yield more statistically meaningful regressors (higher accuracy $A\%$ for fewer false alarms $F\%$) than the \texttt{Lasso} heuristic. Furthermore, a complexity phase transition can be recognized as well around $n_t$. } \label{fig:cio_l1} \end{figure} In Figure \ref{fig:cio_l1}, we show empirical results for noisy ($\sqrt{\mathrm{SNR}}=20$) uncorrelated synthetically generated data with $p=2,000$ regressors of which only $k=10$ are relevant. The accuracy $A\%$ and false alarm rate $F\%$ using exact sparse regression as well as the \texttt{Lasso}, and the time $T$ in minutes to obtain either one, are taken as the average values over fifty independent synthetic datasets. When the optimal solution is not found in less than fifteen minutes, we take the best solution found up to that point. The error bars give an indication of one inter-sample standard deviation among these fifty independent experiments. The colored vertical lines indicate the number of samples $n$ after which either method returned a full recovery ($A\%=100$) of the support of the ground truth, when both are given the correct number $k$ of relevant sparse features. The \texttt{Lasso} heuristic is empirically found to require approximately $n=180$ samples to recover the true support, which corresponds rather well with the $n_1=152$ samples theoretically predicted by \citet{wainwright2009sharp}. Unsurprisingly, the related accuracy phase transition of exact sparse regression using Algorithm \ref{alg:outer_approximation} is found empirically to occur at $n_t=120$ samples. We now discuss the second transition, which indicates that the time it takes to solve the sparse regression problem \eqref{eq:l0-regression:primal} using the cutting plane Algorithm \ref{alg:outer_approximation} experiences a phase transition as well.
We seem to be the first to have observed this complexity phase transition, likely due to the fact that scalable algorithms for exact sparse regression have historically been lacking. Nevertheless, the fact that the complexity of exact sparse regression might experience a phase transition has been alluded to before. Contrary to traditional complexity theory, which suggests that the difficulty of a problem increases with problem size, the sparse regression problem has the property that as the number of samples $n > n_t$ increases the problem becomes easier, in that the solution recovers 100\% of the true signal and our approach solves the problem extremely fast (in fact faster than \texttt{Lasso}), while for a small number of samples $n < n_t$ exact sparse regression seems impractical. It should be remarked that, as $n_0 \approx 50 < n_t$, there still remains a region in which exact sparse regression is statistically relevant but computationally not feasible. In all the experiments conducted up to this point, we assumed that the number of non-zero regressor coefficients $k$ of the ground truth $w_\mathrm{true}$ underlying the data was given. Evidently, in most practical applications the sparsity parameter $k$ needs to be inferred from the data as well. In essence thus, any practical sparse regression procedure must pick the regressors contributing to the response out of the obfuscating bulk. To that end, we introduced the false alarm rate $F\%$ of a certain solution $w^\star$ as the percentage of selected regressors which are in fact irrelevant. The ideal method would of course find all contributing regressors ($A\%=100$) and not select any further ones ($F\%=0$). Clearly, in practice a trade-off must sometimes be made. The final phase transition deals with the ability of exact sparse regression to reject obfuscating irrelevant features using cross validation. Historically, cross validation has been empirically found to be an effective way to infer the sparsity parameter $k$ from data. Hence, for both exact sparse regression and the \texttt{Lasso} heuristic, we select the number of non-zero coefficients which generalizes best, with regard to prediction performance, to the validation sets constructed using cross validation. In the case of exact sparse regression, we let $k$ range between one and twenty, whereas the true unknown number of non-zero regressors was in fact ten. The third plot in Figure \ref{fig:cio_l1} gives the false alarm rate $F\%$ of both methods in terms of the number of samples $n$. As can be seen, the \texttt{Lasso} heuristic has difficulty keeping a low false alarm rate on noisy data. Even in the region where the \texttt{Lasso} heuristic is accurate ($A\%=100$), it is not as sparse as hoped for. Exact sparse regression does indeed yield sparser models, as it avoids including regressors that do not contribute to the observations. \subsection{Parametric Dependency} \label{ssec:param-depend} To investigate the effect of each of the data parameters even further, we use synthetic data with the properties presented in Table \ref{table:parameters}. In order to separate the effect of each parameter individually, we present the accuracy $A \%$, false alarm rate $F \%$ and solution time $T$ of our cutting plane algorithm as a function of the number of samples $n$ for each parameter value separately, while keeping all other parameters fixed to their nominal values. All results are obtained as the average values of twenty independent experiments.
The figures in the remainder of this section indicate that the accuracy, false alarm and complexity phase transitions shown in Figure \ref{fig:cio_l1} persist for a wide variety of properties of the synthetic data. \begin{table} \begin{center} \begin{tabular}{llr} \hline Sparsity & $k$ & $\{10^\star, 15, 20\}$ \\ Dimension & $p$ & $\{5000^\star, 10000, 15000\}$ \\ Signal-to-noise ratio & $\sqrt{\mathrm{SNR}}$ & $\{3, 7, 20^\star\}$ \\ \hline \end{tabular} \end{center} \caption{Parameters describing the synthetic data used in Section \ref{ssec:param-depend}. The starred values denote the nominal values of each parameter.} \label{table:parameters} \end{table} \subsubsection*{Feature dimension $p$} As both phase transition curves \eqref{eq:wainwright} and \eqref{eq:gamarnik} depend only logarithmically on $p$, we do not expect the reported phase transitions to be very sensitive to the regressor dimension either. Indeed, in Figure \ref{fig:cio_p} only a minor influence of the regressor dimension $p$ on the point of transition between statistically meaningful and efficient sparse regression on the one hand and unreliable and intractable regression on the other is observed. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{p.pdf} \end{center} \vspace{-10pt} \caption{The top panel shows the time it takes to solve the sparse regression problem using the cutting plane method for data with $p = 5,000$, $10,000$ or $15,000$ regressors as a function of $n$. When the optimal solution is not found in less than ten minutes, we take the best solution found up to that point. The bottom panels show the accuracy $A\%$ and false alarm rate $F\%$. Only a minor influence of the regression dimension $p$ on the point of transition between statistically meaningful and efficient sparse regression and unreliable and intractable regression is observed. } \label{fig:cio_p} \end{figure} \subsubsection*{Sparsity level $k$} Figure \ref{fig:cio_k} suggests that $k$ has an important influence on the phase transition curve. The experiments suggest that there is a threshold $f_t$ such that if $n/k \geq f_t$, then full support recovery $(A\%=100, \,F\%=0)$ occurs and the time to solve problem \eqref{eq:l0-regression:primal} is in the order of seconds and grows only linearly in $n$. Furthermore, if $n/k < f_t$, then support recovery $A\%$ drops to zero, false alarms $F\%$ surge, while the time to solve problem \eqref{eq:l0-regression:primal} grows combinatorially as $\binom{p}{k}$. This observation is in line with the theoretical result \eqref{eq:gamarnik}, which predicts that this threshold depends only logarithmically on the feature dimension $p$ and on the \ac{snr}, which we study subsequently. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{k.pdf} \end{center} \vspace{-10pt} \caption{The top panel shows the time it takes to solve the sparse regression problem as a function of $n$ using the cutting plane method for data with $p=5,000$ regressors of which only $k = 10$, $15$ or $20$ are relevant. When the optimal solution is not found in less than ten minutes, we take the best solution found up to that point. The bottom panels show the accuracy $A\%$ and false alarm rate $F\%$.
These results suggest that the quantity $n/k$ is a major factor in the phase transition curve of exact sparse regression.} \label{fig:cio_k} \end{figure} \subsubsection*{Signal-to-noise ratio ($\mathrm{SNR}$)} From an information theoretic point of view, the \ac{snr} must play an important role as well, as reflected by the theoretical curve \eqref{eq:gamarnik}. Indeed, the statistical power of any method is questionable when the noise exceeds the signal in the data. In Figure \ref{fig:cio_snr} this effect of noise is observed: for noisy data the phase transition occurs later than for more accurate data. \begin{figure} \begin{center} \includegraphics[width=0.85\textwidth]{snr.pdf} \end{center} \vspace{-10pt} \caption{The top panel shows the time it takes to solve the sparse regression problem as a function of $n$ using the cutting plane method for data with signal-to-noise level $\sqrt{\mathrm{SNR}} = 3$, $7$ and $20$. When the optimal solution is not found in less than one minute we take the best solution found up to that point. The bottom panel shows the accuracy $A\%$.} \label{fig:cio_snr} \end{figure} \subsection{A remark on complexity} The empirical results in this paper suggest that the traditional complexity point of view might be misleading towards a better understanding of the complexity of the sparse regression problem \eqref{eq:l0-regression:primal}. Indeed, contrary to traditional complexity theory, which suggests that the difficulty of a problem increases with dimension, the sparse regression problem \eqref{eq:l0-regression:primal} has the property that for a small number of samples $n$, our approach takes a large amount of time to solve the problem. However, for a large number of samples $n$, our approach solves the problem extremely fast and recovers 100\% of the support of the true regressor $w_{\mathrm{true}}$. \section{The road towards nonlinear feature discovery} \label{sec:nonlinear} In this section, we discuss an extension of sparse linear regression to nonlinear regression by augmenting the input data $X$ with auxiliary nonlinear transformations. In fact, the idea of nonlinear regression as linear regression applied to lifted data underpins kernel methods. Kernel methods can, from a primal perspective, be viewed as Tikhonov regularization between the observations $Y$ and transformed versions $\psi(x_i)$ of the original data samples. The feature map $\psi(\cdot)$ encodes which nonlinearities should be detected. To illustrate the idea we augment each of the $p$ original regressors with the following nonlinear transformations: \begin{equation} \label{eq:temp12} x, ~\sqrt{\abs{x}},~\log \abs{x}, ~x^2,~x^3,~\cos(10 \pi x),~ \sin(x),~\tanh(2 x). \end{equation} The method could be made more general by allowing for nonlinear products between variables, but we abstain from doing so for the sake of simplicity. To enforce a sparse regression model, we demand that the final regressor can only depend on $k$ different (potentially nonlinear) features. Instead of solving problem \eqref{eq:l0-regression:primal}, we then solve its nonlinear version \begin{equation} \label{eq:l0-regression:nonlinear} \begin{array}{rl} \min & \frac{1}{2\gamma} \norm{\tilde w}^2_2 + \frac{1}{2} \norm{Y - \psi(X) \tilde w}_2^2 \\[0.5em] \mathrm{s.t.} & \norm{\tilde w}_0 \leq k, \end{array} \end{equation} where the matrix $\psi(X)$ in $\mathbb{R}^{n \times f}$ consists of the application of the transformations in \eqref{eq:temp12} to the input matrix $X$.
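As a minimal illustration of how such a lifted data matrix might be assembled (our own sketch, not the implementation used in the experiments), each regressor is augmented with the eight transformations of \eqref{eq:temp12}, so that $f = 8p$:
\begin{verbatim}
import numpy as np

def lift(X):
    # Apply the eight transformations listed above columnwise to an
    # (n x p) data matrix X, yielding the (n x 8p) matrix psi(X).
    transforms = [
        lambda x: x,
        lambda x: np.sqrt(np.abs(x)),
        lambda x: np.log(np.abs(x)),   # assumes no exact zeros in X
        lambda x: x ** 2,
        lambda x: x ** 3,
        lambda x: np.cos(10 * np.pi * x),
        lambda x: np.sin(x),
        lambda x: np.tanh(2 * x),
    ]
    return np.hstack([f(X) for f in transforms])
\end{verbatim}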
The nonlinear sparse regression problem \eqref{eq:l0-regression:nonlinear} can be dealt with in an identical manner as its linear counterpart \eqref{eq:l0-regression:primal}. Notice that the dimension of the nonlinear regressor $\tilde w$ is potentially much larger than that of its linear counterpart $w$. \begin{corollary}[Sparse nonlinear regression] \label{cor:cio_nonlinear} The sparse regression problem \eqref{eq:l0-regression:nonlinear} can be reformulated as the nonlinear optimization problem \begin{equation*} \begin{array}{rl} \min_{s\in \S^f_k} & \displaystyle \frac12 Y^\top \left(\eye{n} + \gamma \textstyle\sum_{j\in[f]} s_j K_j \right)^{-1} Y \end{array} \end{equation*} where $ K_j \defn \psi_j(X) \psi_j(X) ^\top.$ \end{corollary} Note that the only material difference between Corollary \ref{cor:cio_nonlinear} and Theorem \ref{thm:cio} is the definition of the kernel matrices $K_j$. As an illustration of the nonlinear approach described above, consider observations and data coming from the following nonlinear model \begin{equation} \label{eq:nonlinear_model} Y = 3 \sqrt{\abs{X_4}} -2 X_2^2 + 4 \tanh(2 X_3) + 3 \cos(2 \pi X_2) -2 X_1 + a X_1 X_2 + E. \end{equation} We assume that the input data $X$ and noise $E$ are generated using the method outlined in Section \ref{ssec:data_description}. The signal-to-noise ratio was chosen to be $\sqrt{\mathrm{SNR}}=20$ to simulate the effect of noisy data. For simplicity we assume the original data $X$ to be uncorrelated ($\rho =0$). An additional 16 regressors are added to obfuscate the four relevant regressors in the nonlinear model \eqref{eq:nonlinear_model}. The input data after the nonlinear transformations in \eqref{eq:temp12} comprised a total of $f=160$ nonlinear features. We consider two distinct nonlinear models for corresponding parameter values $a=0$ and $a=1$. Notice that for the biased case $a=1$, the term $a X_1 X_2$ will prevent our nonlinear regression approach from finding the true underlying nonlinear model \eqref{eq:nonlinear_model} exactly. We state the results of our nonlinear regression approach applied to the nonlinear model \eqref{eq:nonlinear_model} for both $a=0$ and $a=1$ in Table \ref{table:nonlinear}. All reported results are the median values of five independent experiments. Cross validation on $k$ ranging between one and ten was used to determine the number of regressors considered. Determining the best regressor for each $k$ took around ten seconds, thus making a complete regression possible in a little under two minutes. As currently outlined though, our nonlinear regression approach is not sensitive to nonlinearities appearing as feature products, and consequently it will treat the term $a X_1 X_2$ as noise. Hence, the number of underlying regressors we can ever hope to discover is five. For $a=0$, 200 samples suffice to identify the correct nonlinearities and features. For $a=1$, Table \ref{table:nonlinear} reports an increased false alarm rate compared to $a=0$. \begin{table} \begin{center} \begin{tabular}{| l | l | c c c c c |} \cline{2-7} \multicolumn{1}{c|}{} & Quality $w^\star$ & $n = 100$ & $n = 200$ & $n = 300$ & $n = 400$ & $n = 500$\\ \hline $a = 0$ & $(A\%, F\%)$ & (100, 38) & (100, 0) & (100, 0) & (100, 0) & (100, 0)\\ \hline $a = 1$ & $(A\%, F\%)$ & (80, 50) & (100, 17) & (100, 17) & (100, 28) & (100, 17) \\ \hline \end{tabular} \end{center} \vspace{0.5em} \caption{For the nonlinear model \eqref{eq:nonlinear_model} and for $a=0$, $n=200$ samples suffice to identify the correct features.
For $a=1$, $A\%=100$ for $n\geq 200$, but $F\%>0$.} \label{table:nonlinear} \end{table} The method proposed here serves only as an illustration. Of course, no method can aspire to discover arbitrary nonlinearities without sacrificing its statistical power. We believe that this constitutes a promising new road towards nonlinear feature discovery in data. With additional research, we believe that it can become a fierce and more disciplined competitor to ``black box'' approaches such as neural networks. \section{Conclusions} \label{sec:conclusions} We presented a novel binary convex reformulation and a novel cutting plane algorithm that solves to provable optimality exact sparse regression problems for instances with sample sizes and regressor dimensions well in the 100,000s. This presents an improvement of two orders of magnitude compared to known exact sparse regression approaches and takes away the computational edge attributed to sparse regression heuristics such as the \texttt{Lasso} or \texttt{Elastic Net}. The ability to solve sparse regression problems for very high dimensions allows us to observe new phase transition phenomena. Contrary to complexity theory, which suggests that the difficulty of a problem increases with problem size, the sparse regression problem has the property that as $n$ increases, the problem becomes easier in that the solution perfectly recovers the support of the true signal, and our approach solves the problem extremely fast (in fact faster than \texttt{Lasso}), whereas for small $n$, our approach takes a large amount of time to solve the problem. We further provide preliminary evidence that our methods open a new road towards nonlinear feature discovery based on sparse selection from a potentially huge amount of desired nonlinearities. \section*{Acknowledgements} The second author is generously supported by the Early Post.Mobility fellowship No.\ 165226 of the Swiss National Science Foundation. \setlength\bibitemsep{10pt} \printbibliography \end{document}
\section{Introduction}\label{sec1} Quantum computers leverage quantum properties such as entanglement, and promise a potential speed advantage over classical algorithms when applied to specialized problems. Some algorithms such as Shor's algorithm for factorization \citep{shor} and Grover's search algorithm \citep{grover} have been shown theoretically to outperform all known classical algorithms applied to the same tasks. Quantum algorithm design is made difficult by the unintuitive nature of quantum entanglement, which must be used effectively to achieve an advantage over classical algorithms. Quantum machine learning seeks to apply quantum computation to machine learning tasks to achieve a quantum advantage over classical machine learning. Quantum machine learning and classical machine learning show promise for automating many practical tasks that would otherwise require human intelligence, including disease diagnosis \citep{disease_diagnosis_article}, natural language processing \citep{nlp_article}, and image classification \citep{image_recognition_overview}. Machine learning algorithms are designed to learn functions from data \citep{goodfellow2016deep}. These algorithms can be separated into three categories: supervised learning, unsupervised learning, and reinforcement learning. Supervised learning algorithms make use of prelabeled training data while unsupervised learning algorithms do not \citep{supervised_vs_unsupervised_distinction}. In reinforcement learning, agents learn behaviour by interacting with an environment that provides rewards and punishments to guide them. Kernel methods are an important class of techniques in both classical and quantum machine learning. The Support Vector Machine (SVM) \citep{initial_svm} is an important classical supervised kernel method, due to its theoretical relations to other learning models \citep{neural_tangent_kernel, SQMLMAKM} and results regarding its generalisation ability \citep{Vapnik1998Book, VCBound}. It is built around an optimization algorithm for finding an optimal linear hyperplane that separates data points into two classes. The hyperplane is selected to maximize the minimum margin of any point in the training dataset, where the margin of a point is defined as the distance of the point from the separating hyperplane. Larger margin sizes have been theoretically linked to improved generalisation performance \citep{Vapnik1998Book, VCBound}. Non-linear decision boundaries can be achieved by mapping non-linear data to a higher dimensional feature space. The mapping function used is called a feature map, and the range of a feature map is called a feature space. By use of a technique known as the \textit{``kernel trick''}, the decision boundary optimization problem can be reformulated in terms of a kernel function that computes the similarity of a pair of data points in the feature space \citep{initial_svm}. This obviates the need to explicitly compute feature map outputs for data points, so long as the corresponding kernel function can be computed. The feature map must be carefully selected for effective separation of the data. For any kernel function and labelled training set combination, a quantity known as the kernel-target alignment of the kernel can be calculated. This indicates the degree of agreement between the kernel function and a hypothetical oracle kernel, induced by the training labels, that is well suited to the training data \citep{on_kernel_target_alignment}.
A high kernel-target alignment has been shown in other works to correlate with improved classification performance \citep{on_kernel_target_alignment}, and it has been proposed for use as a metric for selecting suitable kernels for a dataset in classification problems \citep{on_kernel_target_alignment, training_kernel_target_alignment}. The QSVM algorithm enhances the SVM algorithm by implementing the feature map function as a quantum circuit (see section \ref{sec:QSVM_explanation}). While quantum feature map circuits are parameterized in the feature values of a single data point, they can also contain additional trainable parameters which can be optimized to improve the suitability of the kernel circuit for a specific dataset \citep{training_kernel_target_alignment}. A quantum circuit containing trainable parameters is called an \textit{ansatz}. Prior work has used classical optimizers on trainable parameterized quantum kernels to maximize their kernel-target alignment, which resulted in positive effects on the classification accuracy of the resulting SVM models \citep{training_kernel_target_alignment}. Quantum feature map circuits of fixed structure that make use of trainable parameter values are reported in \cite{training_kernel_target_alignment}. The trainable parameter values are optimized using stochastic gradient ascent to maximize the kernel-target alignment of the corresponding kernel functions. This was performed to test whether kernel-target alignment optimization could improve the performance of a fixed-structure quantum feature map on a given dataset. Increased kernel-target alignment had previously been shown in \cite{on_kernel_target_alignment} to correlate with improved classification ability. The technique described can be applied either to tailor an existing feature map to a dataset or to fully generate a feature map for a dataset from a feature map ansatz that has a predetermined circuit structure and parameter placement. The work made use of both classical noise-free simulations of quantum computers and real noisy intermediate-scale quantum (NISQ) computers to run quantum circuits, reporting improvements in classification accuracy after kernel-target alignment maximization \citep{training_kernel_target_alignment}. A second model training metric applicable to quantum kernel classifiers is a classifier's root mean squared error (RMSE). This is often optimized to train parameterized models for classification tasks. It can be better suited to training than direct evaluation of accuracy, since it accounts for the magnitudes of misclassification errors rather than simply the number of errors that occur. Accuracy can be too insensitive to circuit parameter changes to show when a circuit has slightly improved if the dataset is not sufficiently large. Another approach to optimizing the choice of kernel circuit for a dataset is to optimize the selection of circuit gates used in the feature map in addition to the values of trainable circuit parameters. This approach has been applied to optimizing circuits applied to other problems \citep{rotosolve_rotoselect_circuit_structure_optimization}. Genetically inspired algorithms have also been applied to circuit structure optimization, since they are capable of combinatorial optimization \citep{evolving_quantum_circuits_using_genetic_algorithm, quantum_circuit_design_genetic_programming, genetic_algorithm_quantum_circuit_compilation}.
\cite{Altares_L_pez_2021} detailed the implementation of a genetic algorithm for automated feature map circuit design for use with QSVM classifiers that both maximizes classification accuracy and minimizes circuit size. The optimization was performed using a variation of the genetic algorithm named NSGA-II \citep{nsga2}, which customizes the usual genetic selection and fitness evaluation operations of the genetic algorithm (as outlined in section \ref{sec:GA_explanation}) to evaluate multiple fitness functions and preferentially select non-dominated solutions for crossover. In a minimization problem with two fitness functions, a solution $s$ with fitness values $(a, b)$ is considered non-dominated with respect to another set of $n$ solutions with fitness values $\{(f_i, g_i) \vert i \in \{1, 2, ..., n\}\}$ if and only if $\forall i \in \{1, 2, ..., n\}, (f_i < a)\implies(g_i > b)$, i.e. if and only if there are no solutions in the set with all fitness values superior to the corresponding fitness values of $s$. NSGA-II also makes use of elitism to guarantee the preservation of the solutions that best optimize at least one of the individual fitness functions. In order to apply a genetic algorithm to quantum feature map design, the work also defined a binary string representation for encoding feature map circuits. The encoding strategy can be found in \cite{Altares_L_pez_2021} and is summarised here. \begin{table} \centering \begin{tabular}{|c|c|} \hline Bits & Gate \\ \hline 000 & H \\ 001 & CNOT \\ 010 & I \\ 011 & Rx \\ 100 & Rz \\ 101 & I \\ 110 & I \\ 111 & Ry \\ \hline \end{tabular} \begin{tabular}{|c|c|} \hline Bits & Parameter\\ \hline 00 & $\pi$ \\ 01 & $\pi/2$ \\ 10 & $\pi/4$ \\ 11 & $\pi/8$ \\ & \\ & \\ & \\ & \\ \hline \end{tabular} \caption{Mapping used on each consecutive 5-bit sequence of bits encoding a feature map gate to determine the gate type and a proportionality parameter value used in the case of parameterised gates. The available gates for the encoding to select from are Hadamard (H), CNOT, identity (I), and parameterised rotations around the X, Y, and Z axes of the Bloch sphere, which are used to encode data point feature values into the circuit. We define the Rx($\theta$) gate as $\cos(\theta/2)I - i\sin(\theta/2)X$, the Ry($\theta$) gate as $\cos(\theta/2)I - i\sin(\theta/2)Y$, and the Rz($\theta$) gate as $\cos(\theta/2)I - i\sin(\theta/2)Z$. If a parameterised rotation gate $R_a$ around axis $a$ is selected, the last two of the five gate bits will be used to select a proportionality parameter $p$. When the gate $R_a$ is applied to encode a feature value $x_i$, the gate applied will be $R_a(p x_i)$. In the case of a CNOT gate being selected and this gate being applied to qubit $i$, qubit $i$ will be used as the control qubit and qubit $(i+1) \bmod M$ will be used as the target qubit.} \label{table:gate_bits_mapping} \end{table} Each circuit gate is encoded in a sequence of 5 bits, the first three of which encode the type of gate applied and the last two of which encode a proportionality parameter for use in the case that a parameterised rotation gate was selected by the first three bits. The mapping of bits to gates and proportionality parameters is shown in Table \ref{table:gate_bits_mapping}. Hyperparameters $M$ and $N$, which designate the maximum number of qubits and maximum number of gate layers respectively, must be chosen before applying the genetic algorithm and are fixed for the duration of the genetic optimization.
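To make the encoding in Table \ref{table:gate_bits_mapping} concrete, a single 5-bit gate description can be decoded as in the following minimal sketch (illustrative Python pseudocode rather than the Julia implementation used in our experiments):
\begin{verbatim}
from math import pi

GATES = {'000': 'H', '001': 'CNOT', '010': 'I', '011': 'Rx',
         '100': 'Rz', '101': 'I', '110': 'I', '111': 'Ry'}
PARAMS = {'00': pi, '01': pi / 2, '10': pi / 4, '11': pi / 8}

def decode_gate(bits5):
    # Decode one 5-bit gate description: three bits select the gate
    # type and two bits select the proportionality parameter p, which
    # is only used when a rotation gate Rx/Ry/Rz is chosen (encoding
    # a feature value x_i then applies the rotation R_a(p * x_i)).
    return GATES[bits5[:3]], PARAMS[bits5[3:]]

# Example: '11101' decodes to an Ry rotation with p = pi/2.
\end{verbatim}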
A single solution is represented as a bit string of length $5MN$, which holds the concatenation of the encodings of the individual gates in the feature map. The gates in the encoding are applied successively, with the target qubit and the feature value to potentially encode being selected in a round-robin fashion. Stated explicitly, for each consecutive group of 5 gate bits, the $i$th gate description will be applied to qubit $i \bmod M$ and will encode feature $i \bmod N$ if it performs a parameterised rotation. This encoding strategy was selected for simplicity, although other strategies could also feasibly be attempted. Two fitness functions were optimized in the work: accuracy on a test set was maximized and a weighted size metric was simultaneously minimized. The unweighted size metric \textit{SM} was calculated in terms of the number of qubits $M$, the number of single qubit gates $N_{\text{local}}$, and the number of entangling gates $N_{\text{CNOT}}$, by the expression \[ \text{SM} = \frac{N_{\text{local}} + 2N_{\text{CNOT}}}{M}. \] The weighted size metric \textit{WS} supplied to the genetic algorithm was given by the expression \[ \text{WS} = \text{SM} + \text{SM} \cdot \text{accuracy}^2. \] The work was able to demonstrate the effectiveness of using NSGA-II with the devised feature map binary string encoding strategy, accuracy maximization, and weighted size minimization to automatically produce quantum feature map circuits for QSVM classification using only a dataset and a few hyperparameters as input. The generated circuits were also experimentally shown to generalise to unseen data. In addition to using few qubits and quantum gates, the circuits produced by the approach were observed to make little to no use of entanglement, meaning they could be efficiently simulated classically and the approach could constitute a quantum-inspired classical machine learning algorithm. Some attempts have been made to enhance the genetic optimization and compare the approach to others. The work done in \cite{single_objective_genetic_extension_work} is based on the algorithm put forward in \cite{Altares_L_pez_2021}. They used a modified encoding scheme which encoded the proportionality parameter values using three bits instead of only two, doubling the number of encodable parameter values. A restricted choice of parameter values was one of the potential limitations of the algorithm designed in \cite{Altares_L_pez_2021}. The algorithm was also further modified to optimize gate cost and classification accuracy in a single objective expression, using a single-objective genetic algorithm instead of a multi-objective genetic algorithm such as NSGA-II. Notably, this introduced a new hyperparameter used to weight the focus of the optimization between circuit size and accuracy, something that is not required with NSGA-II. The feature map generated by the genetic algorithm was compared with two other choices of ansatz: a hardware efficient ansatz proposed in \cite{he_ansatz}, and a unitary decomposition ansatz proposed in \cite{unitary_decomposition_ansatz}. Each ansatz was trained with COBYLA \citep{cobyla_book}, directly evaluating classification accuracy as a cost function. It was found that the feature map circuits generated by the genetic algorithm variation used in the work performed similarly to the hardware efficient ansatz, depending on the circuit depth hyperparameter selected when generating the hardware efficient ansatz.
However, both were beaten in accuracy by the unitary decomposition ansatz, which achieved the highest accuracy at the cost of having a large, fixed size. Our work investigates kernel-target alignment as a metric for automating quantum feature map design for the Quantum-Enhanced Support Vector Machine (QSVM) algorithm \citep{QMLFHS, QSVMBDC}. Our work has two goals: firstly, to investigate the suitability of using alternative cost functions to accuracy in the genetic optimization process described in \cite{Altares_L_pez_2021}; and secondly, to investigate whether the problem of limited circuit parameter choices in the genetic algorithm can be addressed by a hybrid process of genetic and circuit parameter training. We address the first goal by evaluating two alternative cost functions to the accuracy metric in \cite{Altares_L_pez_2021}: firstly, kernel-target alignment for the genetic optimization step and secondly, a heuristic estimation of the kernel-target alignment requiring only a fraction of the kernel evaluations. For the second goal, we evaluate a hybrid method that further optimises the final choice of trainable circuit parameter values after the genetic algorithm terminates, for each of the above approaches. This final optimization uses COBYLA \citep{cobyla_book} to maximize kernel-target alignment or minimize RMSE. The new approaches are compared to the original across several binary classification problems of varying difficulties. We show that even though the kernel-target alignment metric is less computationally expensive to compute in terms of quantum kernel evaluations and avoids the training of an SVM classifier, the performance of the constructed classifiers is comparable to the original approach, and often achieves a better margin distribution on training data. It has been demonstrated theoretically that increased margin sizes indicate better generalisation ability \citep{Vapnik1998Book, VCBound}. The kernel-target alignment approximation heuristic is shown to perform marginally worse than exact kernel-target alignment optimization, but at a fraction of the computational cost. The hybrid approaches are shown to improve margin sizes over the original. The original approach is also shown to sometimes overfit to the test data used to evaluate its accuracy metric, particularly on difficult problems. In the following section, we give a more detailed explanation of the background topics involved in understanding this work and related works, and explain our experimental setup. This is followed by a section covering our findings and interpretations. In the final section we give an overview of the contributions made and suggest ideas for further research. \section{Methods}\label{sec2} \subsection{Binary classifiers using quantum kernels} \subsubsection{Support Vector Machine (SVM)}\label{sec:SVM_explanation} The SVM algorithm is a classical supervised machine learning algorithm for binary classification problems that works by finding an optimal separating hyperplane between two classes of data points. The SVM algorithm is applicable when the data points can be represented by real-valued feature vectors. To simplify definitions, the class labels are usually replaced with positive and negative one. The \textit{margin} of a single data point is defined as the distance from the data point to the SVM's chosen hyperplane. The margin of an SVM classifier refers to the minimum of the margins of the data points. The data points with minimum margin are known as the \textit{support vectors}.
The hyperplane chosen by the SVM is optimal in the respect that it maximises the minimum of the margins of the training set data points by solving a quadratic programming optimization problem. \begin{figure} \centering \includegraphics{Fig1.pdf} \caption{An example illustrating how a feature map function could be used to make non-linearly separable points linearly separable in a higher dimensional space. In this case, the feature map could be implemented as a function that adds a third dimension to the points with decreasing value as distance from the central region of the points increases.} \label{fig:feature_map_diagram} \end{figure} The SVM algorithm is also capable of classifying datasets with classes that are not linearly separable. This can be achieved by first mapping the data points to a higher dimensional space in such a way that they become linearly separable in the higher dimensional space (see Figure \ref{fig:feature_map_diagram} for an illustration). A function used to perform this mapping is called a \textit{feature map} and the range of the function is called the \textit{feature space}. The choice of feature map must be suited to the dataset in order to classify it well, since it determines whether the data will become linearly separable after transformation. A feature map \(\phi(x)\) that maps a point into feature space has a corresponding \textit{kernel function} \(\kappa(x_i,x_j) = \langle \phi(x_i) , \phi(x_j) \rangle\) which computes the inner product of a pair of data points in the feature space. The margin optimization problem can be equivalently reformulated as a dual problem in terms of the kernel function \citep{initial_svm}, which can sometimes avoid the explicit computation of the feature map. This advantage is often referred to as the \textit{``kernel trick''}. Since there are many cases in which a feature map cannot be efficiently computed but the corresponding kernel function can be, the dual formulation of the problem increases the number of potential feature maps that can be applied to a dataset. In the dual form of the SVM, the classifier output for a given class is determined using a coefficient sequence \(\alpha = \{\alpha_1, \alpha_2, ..., \alpha_n\}\) and offset sequence \(b = \{b_1, b_2, ..., b_n\}\) chosen by the SVM algorithm during the hyperplane optimization. The decision function \textit{df} outputs an indication of the distance of its input point from the hyperplane after mapping into feature space. It is defined in terms of the kernel function $\kappa$, the training samples \(\{x_1, x_2, ..., x_n\}\), the $\alpha$ coefficients, and the $b$ offsets as follows \[ \text{df}(x) = \sum_{i=1}^n (\alpha_i \kappa(x, x_i) + b_i). \] The sign of $\text{df}(x)$ is used to determine the predicted class of the argument point $x$: \[ \text{Class}(x) = \text{sgn}(\text{df}(x)). \] \subsubsection{Quantum-enhanced Support Vector Machine}\label{sec:QSVM_explanation} The Quantum-enhanced Support Vector Machine (QSVM) algorithm extends the SVM algorithm by performing the kernel computation on a quantum computer \citep{QMLFHS, QSVMBDC}. A quantum circuit parameterised in the values of a single data point is used as a feature map to map the data points to a high-dimensional quantum state in a quantum Hilbert space. For a quantum feature map encoding data points into \(q\) qubits, the dimensionality of the feature Hilbert space is \(2^q\).
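As a toy illustration (our own sketch; a single-qubit feature map $R_y(x)\ket{0}$ chosen purely for readability, with the kernel taken as the squared overlap of the two feature states, one common convention for quantum kernels), the kernel value can be evaluated classically for very small $q$:
\begin{verbatim}
import numpy as np

def ry(theta):
    # Ry(theta) = cos(theta/2) I - i sin(theta/2) Y, as in the
    # caption of Table 1; this gate happens to be real-valued.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def feature_state(x):
    # Toy one-qubit feature map |psi(x)> = Ry(x)|0>.
    return ry(x) @ np.array([1.0, 0.0])

def kernel(xi, xj):
    # Squared overlap |<psi(xi)|psi(xj)>|^2 of the feature states.
    return abs(np.vdot(feature_state(xi), feature_state(xj))) ** 2
\end{verbatim}
Such explicit statevector evaluation is of course only feasible for small $q$.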
Although a quantum computer can efficiently compute the quantum state feature space representation of a data point, in general the quantum state cannot be efficiently represented classically due to the exponentially increasing dimensionality of the feature space. To work around this classical limitation, the kernel function is computed directly on the quantum computer and the kernel-based formulation of the SVM is used. The kernel computation for a pair of data points can be efficiently performed by measuring the overlap of their corresponding states in the quantum feature space. To train a QSVM model, the \textit{Gram matrix} $K_{n \times n}$ of the training points must be computed. For $n$ training points $\{x_1, x_2, ..., x_n\}$ and a quantum kernel function $\kappa$, the Gram matrix is defined by $K_{ij} = \kappa(x_i, x_j)$, where $i,j \in \{1, 2, ..., n\}$. In the case of a noise-free quantum computer or simulator being used to execute $\kappa$, the symmetric property $\kappa(x_i, x_j) = \kappa(x_j, x_i)$ and the property that $\kappa(x_i, x_i) = 1$ can be used to reduce the number of required evaluations. Assuming the stated properties, the $n$ main diagonal entries of $K$ ($K_{11}$, $K_{22}$, ..., $K_{nn}$) do not require kernel evaluations, since $K_{ii} = \kappa(x_i, x_i) = 1$. For the remaining $n^2-n$ entries $K_{ij}$ which are not on the main diagonal, there is a symmetric entry $K_{ji}$ with the same value, since $K_{ij} = \kappa(x_i, x_j) = \kappa(x_j, x_i) = K_{ji}$. This means that only half of these entries need to be explicitly computed by kernel evaluations. In effect, only $\frac{n^2-n}{2}$ kernel evaluations must be performed to construct $K$. In the case of a NISQ computer, this technique could potentially be applied if measures were taken to mitigate noise and correct the kernel matrix, as in \cite{training_kernel_target_alignment}. Since the only difference between the SVM and QSVM algorithms is how the kernel computation is performed, the potential advantage of the QSVM algorithm lies in enabling the computation of kernel functions that are hard to estimate classically \citep{liu2021rigorous}. While examples of such kernels have been discovered \citep{liu2021rigorous} for artificial datasets, it is an open question how best to design quantum feature maps to achieve a useful kernel with a quantum speed advantage. \subsection{Kernel quality metrics} \subsubsection{Kernel-target alignment} Kernel-target alignment is a heuristic for kernel quality that measures the degree of similarity between two kernels, or the degree of agreement between a kernel and a dataset \citep{on_kernel_target_alignment}. It is calculated using a matrix inner product between the Gram matrix constructed from the training samples and an oracle matrix constructed from the training labels, where the oracle matrix acts as a stand-in for the Gram matrix of a hypothetical kernel which is very well suited to the data. For a set of training points $\{x_1, x_2, ..., x_n\}$ with corresponding labels $\{y_1, y_2,$ $..., y_n\}$ with $\forall i, y_i \in \{-1, 1\}$, a kernel function $\kappa(x_i, x_j)$, and with the \textit{Frobenius} inner product for matrices defined as $\langle A, B \rangle_F = \sum_{i,j} A_{ij} B_{ij}$, the kernel-target alignment can be computed as follows \citep{on_kernel_target_alignment} \begin{enumerate} \item Compute the Gram matrix $K_{n \times n}$ using the kernel function and training points by the rule \[ K_{ij} = \kappa(x_i, x_j).
\] \item Compute the oracle matrix $O_{n \times n}$ using the training labels by the rule \[ O_{ij} = y_i y_j. \] \item Compute the kernel-target alignment \textit{KTA} using the Frobenius inner product as \[ \text{KTA} = \frac{\langle K, O \rangle_F}{\sqrt{\langle K, K \rangle_F \langle O, O \rangle_F}}. \] \end{enumerate} A high kernel-target alignment has been shown in other works to correlate with improved classification performance \citep{on_kernel_target_alignment}, and it has been proposed for use as a metric for selecting applicable kernels for a dataset in classification problems \citep{on_kernel_target_alignment, training_kernel_target_alignment}. \subsubsection{Root Mean Squared Error} Root Mean Squared Error (RMSE) is a common metric for measuring the error of a model. It is calculated as the square root of the mean of the squared errors of a classifier's predictions on each training set data point. In this work, the RMSE of a classifier is calculated using the errors of the decision function on training data, with an adjustment to the error calculation. The adjustment accounts for there not being a definitively correct output of the SVM decision function for a given sample and label pair. The error is measured relative to a positive target decision function output $m$, which we set to one in this work. We calculate the error for a decision function output $a$ and training label $b$ using the following rule: \[ \text{error}(a, b) = \begin{cases} (m - a) & \text{if } b = 1 \text{ and } a < m \\ (a + m) & \text{if } b = -1 \text{ and } a > -m \\ 0 & \text{otherwise} \end{cases} \] This choice of error function means that only points not classified to the desired degree of confidence $m$ contribute to the error calculation, and the errors of the considered points increase with distance from the target output. For a set of training points $\{x_1, x_2, ..., x_n\}$ with corresponding labels $\{y_1, y_2,$ $..., y_n\}$ with $\forall i, y_i \in \{-1, 1\}$, the RMSE is calculated in terms of this adjusted error function and the decision function \textit{df} by the following rule \[ \text{RMSE} = \sqrt{\frac{\sum_{i=1}^n \text{error}(\text{df}(x_i), y_i)^2}{n}}. \] \subsection{Overview of genetic algorithms}\label{sec:GA_explanation} Genetic algorithms are flexible metaheuristic algorithms inspired by the real-world evolutionary principles of natural selection, genetic inheritance, and random mutation. They are a popular choice of algorithm for optimizing complex objective functions in cases where algorithms known to produce a global optimum are unavailable or infeasible. Implementing a genetic algorithm first requires designing a solution representation on which genetic operations can be performed. The solution representation often (but not always) takes the form of a binary string, which is mapped by a problem-specific decoding function to a usable solution. A genetic algorithm manages a set of these solution representations, which is called a population. The initial population can be a set of randomly generated solutions or chosen according to a problem-specific heuristic. The optimization process is iterative and typically repeats until a suitable solution is found, a desired number of iterations has passed, or the rate of improvement of the solutions becomes low. Each iteration, genetically inspired operations are applied to the population to create a new replacement population. The population creation process involves fitness evaluation, selection, crossover, and mutation operations.
Fitness evaluation is performed by decoding a solution representation into a solution, then evaluating a numeric score of its suitability for the problem. This is performed for the entire population, after which point a selection operation is applied to select some solutions for crossover and mutation. It is important that better solutions are more likely to be selected for crossover, since this is the main mechanism driving improvement between generations. The solutions selected for crossover are referred to as \textit{``parents''}. The crossover operation is performed between two parent solutions to produce one or more new solutions, called child solutions. This is usually performed by taking a simple combination of the solution representations of the parents. In the case of a binary string solution representation, a simple crossover can be performed by combining two non-overlapping subsequences of the parents, taking a random number of bits from the first and the remainder from the corresponding positions in the second. A mutation operation can be applied to a child solution by randomly editing its representation by a small amount. This simulates the random mutation which occurs in real life and affects the diversity of available genetic material in the population. A strategy often employed when determining which individuals will make up the next generation is to preserve the best performing of the solutions among the current generation and the newly created children. This is known as \textit{elitism} and ensures that solutions can survive through multiple generations and potentially indefinitely, so long as they continue to outperform newer ones. This helps prevent regression of the achieved fitness due to chance as generations pass. The general idea of a genetic algorithm is flexible enough that many variations and extensions of the discussed components have also been studied \citep{genetic_algorithm_review}. \subsection{Experiments} The algorithm for automated feature map design described in \cite{Altares_L_pez_2021} was reimplemented using the Julia programming language \citep{Julia-2017}, the Yao quantum simulator framework \citep{yao}, and the pymoo \citep{pymoo} implementation of NSGA-II. All experiments were run with the maximum qubit count and feature map depth hyperparameters set to 6. The genetic algorithm population size was set to 100, with 15 new individuals being produced every generation. 30\% of the new individuals were produced by crossover; the rest were chosen randomly from the parents. In each generation, 70\% of the population underwent mutation. When mutation occurred, 20\% of the bits in the mutated solution were flipped. All experiments were run using a noise-free quantum simulator provided by Yao.
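For illustration, one generation of the genetic loop with the hyperparameters above can be sketched as follows (a simplified Python sketch; the actual NSGA-II procedure additionally performs non-dominated sorting and elitist survivor selection to trim the population back to 100):
\begin{verbatim}
import random

POP, CHILDREN = 100, 15
P_CROSS, P_MUT, FLIP_FRAC = 0.30, 0.70, 0.20

def crossover(a, b):
    # Single-point crossover of two equal-length bit strings.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(bits):
    # Flip FLIP_FRAC of the bits at random positions.
    bits = list(bits)
    for i in random.sample(range(len(bits)), int(FLIP_FRAC * len(bits))):
        bits[i] = '1' if bits[i] == '0' else '0'
    return ''.join(bits)

def next_generation(parents):
    # Produce CHILDREN new solutions: ~30% by crossover between two
    # randomly chosen parents, the rest copied from the parents;
    # then mutate 70% of the resulting population.  Survivor
    # selection (not shown) would reduce it back to POP solutions.
    kids = [crossover(*random.sample(parents, 2))
            if random.random() < P_CROSS else random.choice(parents)
            for _ in range(CHILDREN)]
    return [mutate(s) if random.random() < P_MUT else s
            for s in parents + kids]
\end{verbatim}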
\begin{table} \centering \begin{adjustbox}{center} \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Dataset & Class -1 & Class 1 & \begin{tabular}{@{}c@{}}Features \\ (PCA)\end{tabular} & Train & Test & Validation \\ \hline Moons & Top left & Bottom right & 2 (N/A) & 210 & 90 & 500 \\ Cancer & Benign & Malignant & 30 (10) & 210 & 90 & 124 \\ Iris & Versicolor & Virginica & 4 (N/A) & 42 & 18 & 40 \\ Digits & Eight & Nine & 64 (10) & 140 & 60 & 148 \\ Circles & Outer & Inner & 2 (N/A) & 210 & 90 & 500 \\ Random & Red & Blue & 2 (N/A) & 210 & 90 & N/A \\ Voice & Acceptable & Unacceptable & 309 (10) & 28 & 12 & 44 \\ SUSY & Background & Signal & 18 (10) & 210 & 90 & 500 \\ SUSY reduced & Background & Signal & 8 (N/A) & 210 & 90 & 500 \\ \hline \end{tabular} \end{adjustbox} \caption{Table showing the characteristics of the datasets and sample splits used. Not all points in the base datasets were used, to ensure the sample split remained balanced in each of the sample sets. Other considerations in determining the data splits were experiment runtime, maintaining a sufficiently large ratio of test points to training points, and keeping a sufficiently large number of validation points. All datasets with more than 10 feature values were reduced to 10 features using Principal Component Analysis (PCA). The Moons, Circles, and Random datasets are artificial, with the Moons and Circles datasets being generated with Scikit-learn \citep{sklearn}. The rest of the datasets are sourced from the UCI Machine Learning Repository \citep{UCIDATA}, either directly or indirectly through Scikit-learn \citep{sklearn}.} \label{table:datasets} \end{table} Three configurations of the original algorithm were run on nine different datasets of varying difficulty (see Table \ref{table:datasets}) to compare their effectiveness. The first configuration maximized accuracy on a test set and minimized weighted size, as in the original work \citep{Altares_L_pez_2021}. The second configuration maximized kernel-target alignment on the training data, ignoring the test data, and minimized the unweighted size metric also defined in \cite{Altares_L_pez_2021}. The third configuration maximized an approximation of kernel-target alignment on the training data, and minimized the same unweighted size metric. Before performing the genetic optimization, the datasets are split into three disjoint subsets, namely training data, testing data, and validation data. The training data is used to evaluate kernel-target alignment and its approximation, as well as to train the QSVM model for a given feature map circuit. The testing data is only used to evaluate the accuracy metric in the first approach. The validation data is used to determine the generalisation ability of the generated models, and must be separate from the testing data since the first approach can indirectly access the testing data through the accuracy metric and potentially overfit to it. In order to calculate the kernel-target alignment approximation for $n$ training points, the $n$ points are divided into $a$ disjoint complementary subsets of size roughly $n/a$. The number of subsets $a$ can be adjusted based on the number of training points to balance speed and precision. The kernel-target alignment is calculated on each of the subsets in turn, then averaged.
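A compact sketch of the exact alignment and its subset approximation is given below (illustrative Python rather than our Julia implementation; \texttt{y} is an array of $\pm 1$ labels, and the random assignment of points to subsets is one possible way of forming them):
\begin{verbatim}
import numpy as np

def kta(kernel, X, y):
    # Exact kernel-target alignment <K,O>_F / sqrt(<K,K>_F <O,O>_F),
    # with O the oracle matrix O_ij = y_i * y_j.
    n = len(X)
    K = np.array([[kernel(X[i], X[j]) for j in range(n)]
                  for i in range(n)])
    O = np.outer(y, y)
    return (K * O).sum() / np.sqrt((K * K).sum() * (O * O).sum())

def kta_approx(kernel, X, y, a=5):
    # Average the alignment over `a` disjoint subsets of roughly
    # n/a points each, cutting kernel evaluations by about a factor
    # of `a`.
    y = np.asarray(y)
    subsets = np.array_split(np.random.permutation(len(X)), a)
    return np.mean([kta(kernel, [X[i] for i in s], y[s])
                    for s in subsets])
\end{verbatim}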
Assuming the properties of the kernel function are not used to accelerate the Gram matrix computation, $n^2$ kernel evaluations are required to compute the exact kernel-target alignment, and only $a (n/a)^2 = n^2/a$ evaluations are required to compute the approximation, giving a factor $a$ speedup. If the kernel properties are used, then $(n^2-n)/2 = n^2/2 - n/2$ kernel evaluations are required to compute the exact kernel-target alignment. In that case, the number of kernel evaluations required when evaluating the kernel-target alignment approximation can be derived as \[ a \left( \frac{(\frac{n}{a})^2-\frac{n}{a}}{2} \right) = \frac{n^2}{2a} - \frac{n}{2}, \] meaning the factor speedup obtained by evaluating the approximation is larger than $a$, but approaches $a$ as $n$ increases. In this work, we use a value of $a=5$ in all experiments involving the kernel-target alignment approximation. \begin{figure} \centering \includegraphics[width=\columnwidth]{Fig2.pdf} \caption{A flow diagram outlining the algorithm followed to genetically train quantum feature map circuits. The diagram also shows how a hybrid method involving circuit parameter training can be performed after genetic optimization.} \label{fig:algorithm_flow_diagram} \end{figure} After the genetic optimization in each configuration completes, we attempt further improvement by optimizing just the proportionality parameters encoded in the last two bits of each gate representation, using an implementation of COBYLA \citep{cobyla_book} provided by the NLopt optimization library \citep{NLopt}. This allows the parameter values to no longer be restricted to one of only four possibilities. This optimization aims to either minimize RMSE or maximize kernel-target alignment, using the training set to evaluate the metrics. The COBYLA optimizer is allowed one hundred evaluations of the cost function to perform the optimization. A flow diagram outlining the algorithmic process can be seen in Figure \ref{fig:algorithm_flow_diagram}. The COBYLA cost functions for RMSE and kernel-target alignment both require computing a Gram matrix each evaluation, meaning the number of kernel evaluations performed is $\frac{100(n^2-n)}{2}$. This can be contrasted with the genetic optimization of accuracy or kernel-target alignment, where at least as many kernel evaluations are performed to evaluate the fitness of just the first generation of 100 solutions in the genetic algorithm. In the subsequent 1199 generations, $15 \times 1199 = 17{,}985$ more Gram matrix evaluations are performed, for a total of $18{,}085$, meaning the final parameter training for the entire output population requires roughly 55\% of the number of kernel evaluations performed in the genetic optimization in the cases of genetically optimizing accuracy or the exact kernel-target alignment. We name the three base approaches 1, 2, and 3, respectively. Each approach has two additional sub-approaches defined for further training of RMSE or kernel-target alignment, for a total of nine approaches. The RMSE and kernel-target alignment variations are named with a .1 and .2 suffix respectively. We graph the classification accuracies, average margins, ROC curves, feature map circuits, and confusion matrices of the best models produced by each approach, where the best model of a population is taken to be the one achieving the highest validation set accuracy. For two-dimensional datasets, decision boundaries are also graphed.
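Schematically, the hybrid refinement step amounts to the following sketch (shown with SciPy's COBYLA as a stand-in for the NLopt implementation we use; \texttt{build\_kernel} is a hypothetical factory that rebuilds the fixed circuit structure with new proportionality parameters, and \texttt{kta} is the alignment function from the sketch above):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def refine_parameters(p0, build_kernel, X, y):
    # Tune only the proportionality parameters of a circuit
    # structure found by the genetic algorithm.  Kernel-target
    # alignment is maximized by minimizing its negation; COBYLA is
    # allowed 100 cost-function evaluations, as in our setup.
    cost = lambda p: -kta(build_kernel(p), X, y)
    res = minimize(cost, np.asarray(p0), method='COBYLA',
                   options={'maxiter': 100})
    return res.x
\end{verbatim}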
The code implementing the experiments and result graphing can be found on GitHub \citep{RPellowJarman2022Software}. \section{Results}\label{sec3} \begin{figure} \centering \begin{subfigure}{\textwidth} \centering \begin{tikzpicture} \node[scale=0.7] { \begin{quantikz} \lstick{$\ket{0}$} & \gate{R_y(3.1416 * x_{0})} & \qw \\ \lstick{$\ket{0}$} & \gate{R_y(3.1416 * x_{0})} & \qw \\ \lstick{$\ket{0}$} & \gate{R_x(1.5708 * x_{1})} & \qw \\ \lstick{$\ket{0}$} & \gate{R_z(3.1416 * x_{0})} & \qw \\ \end{quantikz} }; \end{tikzpicture} \caption{Approach 1 - Accuracy, Weighted size} \end{subfigure} \begin{subfigure}{\textwidth} \centering \begin{tikzpicture} \node[scale=0.7] { \begin{quantikz} \lstick{$\ket{0}$} & \gate{R_z(0.3927 * x_{0})} & \qw & \qw & \qw & \qw \\ \lstick{$\ket{0}$} & \gate{R_z(1.5708 * x_{1})} & \qw & \qw & \qw & \qw \\ \lstick{$\ket{0}$} & \gate{R_y(0.3927 * x_{0})} & \gate{H} & \qw & \qw & \qw \\ \lstick{$\ket{0}$} & \gate{H} & \gate{H} & \gate{H} & \gate{R_x(3.1416 * x_{1})} & \qw \\ \lstick{$\ket{0}$} & \gate{R_y(3.1416 * x_{0})} & \gate{R_y(1.5708 * x_{0})} & \gate{R_z(3.1416 * x_{0})} & \gate{R_z(3.1416 * x_{0})} & \qw \\ \lstick{$\ket{0}$} & \gate{R_z(0.7854 * x_{1})} & \gate{R_y(0.3927 * x_{1})} & \gate{R_y(1.5708 * x_{1})} & \qw & \qw \\ \end{quantikz} }; \end{tikzpicture} \caption{Approach 2 - Kernel-target alignment, Unweighted size} \end{subfigure} \begin{subfigure}{\textwidth} \centering \begin{tikzpicture} \node[scale=0.7] { \begin{quantikz} \lstick{$\ket{0}$} & \gate{R_y(3.1416 * x_{0})} & \gate{R_y(3.1416 * x_{0})} & \gate{R_y(0.7854 * x_{0})} & \gate{R_z(1.5708 * x_{0})} & \gate{R_z(3.1416 * x_{0})} & \qw \\ \lstick{$\ket{0}$} & \qw & \gate{R_z(3.1416 * x_{1})} & \gate{H} & \qw & \gate{R_x(1.5708 * x_{1})} & \qw \\ \lstick{$\ket{0}$} & \gate{R_y(0.7854 * x_{0})} & \qw & \qw & \gate{R_y(1.5708 * x_{0})} & \qw & \qw \\ \lstick{$\ket{0}$} & \gate{H} & \gate{R_z(0.7854 * x_{1})} & \gate{R_x(0.7854 * x_{1})} & \qw & \gate{R_y(0.7854 * x_{1})} & \qw \\ \lstick{$\ket{0}$} & \qw & \qw & \gate{R_z(1.5708 * x_{0})} & \qw & \gate{R_x(1.5708 * x_{0})} & \qw \\ \lstick{$\ket{0}$} & \qw & \qw & \qw & \qw & \gate{H} & \qw \\ \end{quantikz} }; \end{tikzpicture} \caption{Approach 3 - Kernel-target alignment approximation, Unweighted size} \end{subfigure} \caption{The circuits with highest validation set accuracy produced by the three base genetic approaches when creating quantum feature maps for the Moons dataset. (a) shows the best produced circuit when training to maximize accuracy and minimize weighted size as in the original work, (b) shows the best circuit when training to maximize the exact kernel-target alignment and minimize unweighted size, and (c) shows the best circuit when training to maximize the approximation of the kernel-target alignment and minimize unweighted size. Circuits (b) and (c) are significantly larger. Unused gate layers and qubits are omitted from the diagrams.} \label{figure:moons_circuits} \end{figure} As in the original work by \cite{Altares_L_pez_2021}, the feature map circuits produced by each of the approaches tend to make little to no use of entangling gates (see Figure \ref{figure:moons_circuits}). However, the circuits produced by optimizing the kernel-target alignment based metrics tend to be significantly larger overall (see Figure \ref{figure:moons_circuits}). This could be explained by the fact that the weighted size metric in the genetic optimization was replaced with an unweighted size metric in those approaches, since the weighted size metric depends on the test set accuracy, which was not evaluated. The optimization of the circuit size did not converge in the allocated 1200 generations in approaches 2 and 3, which can be inferred from the presence of redundant gates. This could possibly be addressed by allowing more generations to pass or by using a size metric weighted by kernel-target alignment instead of accuracy, similarly to the original approach. Another possible explanation for the larger circuit size is that a circuit achieving perfect accuracy may still be able to improve its kernel-target alignment; in the accuracy maximization case, the genetic algorithm is able to shift focus to minimizing circuit size after achieving 100\% accuracy, but the same cannot be done as easily when maximizing kernel-target alignment, since its limiting value of one is more difficult to achieve. Additionally, the high mutation rate of 70\% could be reduced to attempt to reach convergence in the allocated 1200 generations.
\begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Fig4.pdf} \caption{A graph showing the classification accuracies of the best models produced by various approaches of quantum feature map design on the Moons dataset, compared with a classical RBF kernel for reference. All approaches can be seen to achieve comparable accuracy across the different subsets.} \label{figure:moons_accuracies} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Fig5.pdf} \caption{A graph showing the mean margin of the Moons training set points for the best classifiers produced by each approach, with error bars showing standard deviation. Circuit parameter training and genetic training of kernel-target alignment are both shown to increase the mean margin size. The approach numbering corresponds to the numbering used in Figure \ref{figure:moons_accuracies}.} \label{figure:moons_margins} \end{figure} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Fig6.pdf} \caption{A graph showing the ROC curves of the best models produced by various approaches of quantum feature map design on the Moons dataset. All of the produced models are shown to perform similarly on the dataset.} \label{figure:moons_roc_curves} \end{figure} Our experiments show that substituting kernel-target alignment or approximated kernel-target alignment for accuracy in the genetic optimization process produces feature map circuits with accuracy comparable to the original approach across all datasets (see Figures \ref{figure:moons_accuracies} and \ref{figure:moons_roc_curves} for example). Further optimizing the final population's trainable parameters using COBYLA was often able to improve the average margin sizes of the classifiers on training data (see Figure \ref{figure:moons_margins}) and sometimes able to improve validation set classification accuracy (see Figures \ref{figure:moons_accuracies} and \ref{figure:moons_decision_boundary_improvement}), showing that a hybrid approach performing further optimization of the final population's parameter values is worth attempting despite the computational cost if improving accuracy is important. Training the parameters of a single solution for 100 cost evaluations requires only about half a percent of the kernel evaluations performed by the genetic optimization process, so a smaller subset of the final solutions could be trained at a much lower cost. Additionally, the untrained parameters encoded in the solution binary strings are not lost if further training is performed, and can still be used if they happen to perform better than the trained ones. \begin{figure} \centering \includegraphics[width=0.85\columnwidth]{Fig7.pdf} \caption{A graph showing the classification accuracies of the best models produced by various approaches of quantum feature map design on the Voice dataset, compared with a classical RBF kernel for reference. Genetic accuracy maximization is shown to overfit to the testing data used to evaluate the accuracy metric, justifying the necessity of a separate validation set.} \label{figure:voice_accuracies} \end{figure} The results demonstrate that on difficult datasets such as the SUSY, SUSY reduced, Voice, and Random datasets, the original approach's models can overfit to the testing data used to evaluate the accuracy metric (see Figure \ref{figure:voice_accuracies}). This is likely due to the fact that test set accuracy is directly optimized in the genetic algorithm without regard to training set accuracy.
Since the kernel-target alignment approaches make use of only the training data during the genetic optimization, they do not suffer from the same drawback, although they do not show improvement on validation data for the difficult problems. This problem could possibly be avoided by shuffling the training and testing data each generation, although this would make the accuracy metric depend on the generation at which the accuracy was evaluated and could prevent caching of solution fitnesses in the genetic algorithm. A second possible solution is to average the accuracy over subsets of the data. Given a dataset of $n$ points, this can be performed while requiring at most $\frac{n^2-n}{2}$ kernel evaluations in the worst case, since the Gram matrix for the entire dataset can be computed once and used as a cache to look up the kernel output for any pair of points when creating models with arbitrary choices of training and testing subsets. \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{Fig8.pdf} \caption{A graph showing the mean margin of the Random training set points for the best classifiers produced by each approach, with error bars showing standard deviation. The approach numbering corresponds to that in Figure \ref{figure:moons_accuracies}.} \label{figure:adhoc_margins} \end{figure} \begin{figure} \centering \begin{adjustbox}{center} \centering \includegraphics[width=0.7\columnwidth]{Fig9_1.pdf} \includegraphics[width=0.7\columnwidth]{Fig9_2.pdf} \end{adjustbox} \caption{Decision boundaries of the best classifiers for approaches 2 and 2.1 on the Moons validation data. Further parameter training on training data to minimize RMSE after genetically designing the feature maps to maximize kernel-target alignment is shown to improve classification ability on validation data.} \label{figure:moons_decision_boundary_improvement} \end{figure} \begin{table} \centering \begin{adjustbox}{center} \centering \begin{tabular}{|c|c|c|c|} \hline Approach & Average margin & Absolute change & Percentage change \\ \hline 1-Accuracy (Original work) & 0.838 & N/A & N/A \\ Accuracy, RMSE training & 0.993 & +0.154 & +18.42\% \\ Accuracy, KTA training & 0.971 & +0.133 & +15.87\% \\ \hline 2-Alignment & 1.043 & +0.205 & +24.46\% \\ Alignment, RMSE training & 1.016 & -0.028 & -2.66\% \\ Alignment, KTA training & 1.090 & +0.047 & +4.49\% \\ \hline 3-Approximation & 1.065 & +0.226 & +26.99\% \\ Approximation, RMSE training & 1.145 & +0.080 & +7.54\% \\ Approximation, KTA training & 1.124 & +0.059 & +5.59\% \\ \hline \end{tabular} \end{adjustbox} \caption{Table showing the average margin size of the best classifier produced by each approach, averaged across the nine datasets with equal weighting given to each dataset. For this purpose, the best classifier is defined as the classifier achieving the highest validation set accuracy for the target dataset. The improvement columns for the base approaches (2 and 3) show how genetic optimization of kernel-target alignment and its approximation improves on the margins achieved in the original work. For the hybrid approaches (with additional RMSE and KTA parameter training), the columns show change relative to the base approaches.} \label{table:margins} \end{table} The margins of classifiers trained with the second and third approaches tended to be larger than those of classifiers trained with the first (see Figures \ref{figure:moons_margins} and \ref{figure:adhoc_margins}, as well as Table \ref{table:margins}).
This could be due to the fact that the kernel-target alignment metric and its approximation are evaluated on the training subset, as opposed to accuracy, which is evaluated on the testing subset, leading to the former two approaches having higher confidence on the training subset. In either case, increased margin size is an indicator of improved generalisation ability according to theoretical works \citep{Vapnik1998Book, VCBound} showing that margin size bounds VC dimension, and VC dimension bounds expected generalisation error. Further parameter training on the final generation also tended to show some improvements in margin sizes, even on easier datasets such as the Moons and Circles datasets where there was not much effect on overall classification accuracy. This improvement in margin can be seen visually in the decision boundary graphs of the classifiers (see Figure \ref{figure:moons_decision_boundary_improvement}). \section{Conclusion}\label{sec13} In this paper, we compared our implementation of the approach defined in \cite{Altares_L_pez_2021} with adjustments to the genetic algorithm cost functions. These adjustments were aimed at investigating the suitability of kernel-target alignment as an alternative metric to test set accuracy and at reducing the number of kernel evaluations required by the approach. The new approaches were shown to still be effective at designing accurate classifiers with fewer kernel evaluations, although at the cost of increased circuit size. They were also shown to often produce classifiers with better margins on training data. We also put forward a hybrid approach extending the original work by applying COBYLA \citep{cobyla_book} to further optimize the trainable parameters of the produced quantum feature map circuits after the termination of the genetic algorithm to attempt further improvement, at a lower additional computational cost than the genetic algorithm's base cost. This parameter training was also shown to be capable of improving margin sizes and sometimes accuracy without increasing the circuit gate cost. There is still more work to be done in accelerating the genetic algorithm while keeping gate costs low. A potential avenue to achieving this goal is the use of a multi-phase genetic algorithm in which the cost function is initially easy to evaluate but increases in precision after a set number of generations passes. For example, the $a$ parameter of the kernel-target alignment approximation could be made to decrease as generations pass so that the approximation becomes more accurate at the cost of more kernel evaluations, or the cost function could be switched from kernel-target alignment to classification accuracy to reduce gate cost once a predetermined kernel-target alignment has been achieved. The original approach could also be further extended to have the gate encoding for parameterised gates select a classical data encoding function such as those used in \cite{ASFMKBQC} to introduce classical nonlinearity to the encoding, potentially allowing for even lower gate cost or higher accuracy circuits to be produced. \backmatter
\section{Introduction} One of the most interesting aspects of instanton effects in QCD is related to the strong CP problem and the question why the vacuum angle is extremely small \cite{t'Hooft}. Various solutions have been proposed in the literature for this problem. In one scheme one can set $\theta =0$ at tree level and then assume that higher order effects in the magnitude of $\theta$ are negligible \cite{Di Vecchia}, while a second option is to introduce a massless quark so that $\theta$ becomes unobservable. This second possibility is ruled out phenomenologically. A third alternative is to introduce a dynamical variable $\theta (x)$ in such a way that an effective interaction \beq L_{int}=c\theta (x)F\tilde{F} \eeq ($c$ being a constant) dynamically sets the vacuum angle $\theta_{QCD}$ to be zero \cite{Peccei}. This third approach, generally known as the Peccei-Quinn solution, solves the CP problem and has cosmological implications. In this scenario $\theta$ is therefore made local, and nonperturbative QCD effects produce a mass for the axion and fix the background value of $\theta_{QCD}$ to be zero. The interaction of an axion field with electromagnetic fields was initially investigated several years ago, after it was shown \cite{Witten} that in a $\theta$ vacuum a 't Hooft-Polyakov magnetic monopole acquires an electric charge proportional to $\theta$ and to the magnetic charge $g$ of the monopole, \beq q_{\theta}=C\theta g \eeq with $C$ a constant. In particular the dynamics of monopoles traversing axionic domain walls has since been elucidated \cite{Sikivie}, \cite{Huang}. In its full generality, the interaction of the axion with the electromagnetic field is assumed to be given, at low energies, by the action density \cite{Sikivie} \beq L=-{1\over 4}F^2 +{1\over 2}\dmu\theta\partial^{\mu}\theta -{{m^2 v^2}\over {N^2}}[1 - \cos(N\theta/v)] +\alpha F\tilde{F} \eeq where $\tilde{F}$ denotes the dual of the field strength and \beq \alpha = {e^2\over {32\pi^2 \sin^2{\theta^{0}}_{w}}} \left(\theta_{qcd} +{{ T_{\theta}}\over 2\pi}{{N\theta(x)}\over v} \right) \eeq where $\theta(x)$ is the axion field, $m$ is the axion mass, $N$ is the number of axion vacua and $v$ is the vacuum expectation value that breaks the Peccei-Quinn symmetry. ${\theta^{0}}_{w}$ is the electroweak angle at the grand unification mass scale (GUM) and the constant $\theta_{qcd}$ is taken at GUM. $T_{\theta}$, the period of $\theta$, is equal to $2\pi$ for QCD. We have assumed that there is grand unification of the strong and electroweak interactions \cite{Georgi} and we have used $$g_s= e^2/\sin^2({\theta^{0}}_{w}) \,\, $$ at GUM, where $g_s$ is the QCD coupling constant at the same scale. In this letter we investigate the propagation of high frequency electromagnetic modes in the presence of an axion. We use the light cone expansion of the propagator for the electromagnetic field, and we calculate the local part of the one-loop effective action of the axion field. In the case of a slowly varying $\theta$ field, representing a massive axion, we show that the coefficient of the leading singularity of the asymptotic expansion can be explicitly determined in full nonlocal form. This last result allows us to define a causal gauge field propagator in the presence of a massive axion, a result which cannot be obtained from the dispersion relation because it contains spurious poles.
As noticed previously in the literature \cite{Jackiw}, this particular background configuration of the axion field (the massive axion) makes the propagation of the low frequency em modes unstable and a tachyonic pole arises. The investigation of ref.\cite{Jackiw} was carried out in the particular case of a coupled Maxwell-Chern-Simons theory, a system which shares the same properties as the massive axion model. We show that, for any background axion field, and in particular for a massive axion model, the em propagator behaves well at short distances. Our results are based on a direct application of the Hadamard theorem \cite{Hadamard} for equations of propagation which are diagonal in the highest derivatives. The short distance behaviour of the model is in fact controlled by the highest derivatives of the gaussian operator (which is diagonal), while the instability is generated by the presence of lower derivatives (coupled to an $\epsilon$ tensor) in the equations of motion. \newpage The partition function of the model can be written in the simplified form \beq \label{one} Z=\int [d\theta] [d A_{\mu}] e^{i \int ({\cal{L}}+ J\theta +J'_\mu A^\mu) d^{4}x} \eeq \beq \label{two} {\cal{L}}= -\quarter F^{2} + \half \dmu \theta \partial^{\mu} \theta + {c\over 4} \theta (x) F \tilde{F}\eeq $F$ denotes the electromagnetic field, $\tilde{F} = {1\over 2}\epsilon F$ denotes its dual. We will work in the Lorentz gauge. The effective action for the $\theta$-field, obtained by functionally integrating out the gauge fields, is given by \beq \label{three} {\cal{L}}_{eff} = \half \dmu \theta \partial^{\mu} \theta +\frac{i}{2} tr \ln(\hat{P}) \eeq where \beq \label{four} \hat{P} = g^{\nu\beta} \Box -c \dmu \theta \epsilon^{\mu \nu \alpha \beta} \partial_{\alpha} \eeq is the relevant operator induced by the gaussian approximation. The equations of motion for the gauge fields are: \beq \label{five} \dmu F^{\mu \nu} = c \dmu \theta \epsilon^{\mu \nu \alpha \beta} \partial_{\alpha} A_{\beta} \eeq where \beq \label{six} F_{\mu \nu} = \dmu A_{\nu} - \dnu A_{\mu} \eeq In the Lorentz gauge we find \beq \label{seven} \Box A_{\nu} - k_{\nu \alpha \beta} \partial^{\alpha} A^{\beta}=0 \eeq where we have defined \beq \label{eight} k_{\nu \alpha \beta} = c \dmu \theta \epsilon^{\mu \nu \alpha \beta} \eeq The Hadamard expansion \cite{Hadamard} for the propagator of the gauge fields in the axionic background can be set up directly by introducing the Green's function $\hat{\Delta}$ for eq.(11), since the gaussian operator is diagonal in the highest derivatives \beq \label{nine} \Box \hat{\Delta}_{\nu \rho} - k_{\nu \alpha \beta} \partial^{\alpha} \hat{{\Delta}^{\beta}}_{\rho} = \delta_{\nu \rho} \delta^{4}(z) \eeq where $z=x-y$. The support of the $\Delta$ distribution is therefore inside the light-cone and causality is respected. The standard ansatz is therefore \beqa \label{ten} \hat{\Delta}_{\nu \rho}(x,y) & = & G^{(0)}_{\nu \rho}(x,y) D_{F} \nonumber \\ & & \mbox{} - \frac{i}{16 \pi^{2} } \ln({z^{2}\over {\mu}^2}-i 0) \sum_{n=0}^{\infty} (\frac{z^2}{4})^{n}\frac{1}{n!} G_{\nu \rho}^{(n+1)}(x,y) \eeqa where \beq \label{eleven} D_{F} = \frac{1}{4 \pi^{2} i (z^{2}-i 0)} \eeq is the Feynman free propagator in coordinate space, while $\mu$ is a mass parameter introduced in order to keep the argument of the logarithm in the expansion dimensionless. The recursion relations for the coefficients of the expansion are easily obtained. By equating to zero the independent singularities of eq.
(13) applied to the expansion (14) we get a leading singularity equation for $z^{-4}$ \beqa \label{thirteen} 2 z^{\mu} \dmu G^{(0)}_{\nu \rho}(x,y) - z^{\alpha} k_{\nu\alpha\beta} G^{(0)}_{\beta\rho}(x,y) & = & 0 \eeqa a $D_{F}$ (i.e.\ $z^{-2}$) singularity equation: \beqa \label{fourteen} \Box G^{(0)}_{\nu\rho} + G^{(1)}_{\nu\rho}+z^{\mu}\dmu G^{(1)}_{\nu\rho} - k_{\nu\alpha\beta}\partial^{\alpha}G^{(0)}_{\beta\rho} & &\nonumber \\ -{z^\alpha \over 2} k_{\nu\alpha\beta}{{G^{(1)}}^{\beta}}_{\rho} & = & 0 \eeqa and a log-equation (for $z^{2n}\ln({z^2\over {\mu}^2})$) \beqa \label{fifteen} z^{\mu} \dmu G^{(n+2)}_{\nu\rho}+(n+2)G^{(n+1)}_{\nu\rho} + \Box G^{(n+1)}_{\nu\rho} & & \nonumber \\ \mbox{} - \frac{z^{\alpha}}{2} k_{\nu\alpha\beta} G^{(n+2)\beta}_{\rho} - k_{\nu\alpha\beta} \partial^{\alpha} G^{(n+1)}_{\beta\rho} & = & 0 \eeqa Defining $G^{(-1)}= 0$, $n=-1,0,1,...$, we summarize all the equations in the form: \beqa \label{sixteen} z^{\mu} \dmu G^{(n+1)}_{\nu\rho} + (n+1)G^{(n+1)}_{\nu\rho} + \Box G^{(n)}_{\nu\rho} & & \nonumber \\ \mbox{} - \frac{z^{\alpha}}{2} k_{\nu\alpha\beta} G^{(n+1)}_{\beta\rho} - k_{\nu\alpha\beta}\partial^{\alpha}G^{(n) \beta}_{\rho} & = & 0 \eeqa At this level the axion is a background field and we see, for instance from eq. (16), that it affects the propagation of the leading singularity in a non-trivial way (through $k_{\nu\alpha\beta}$). For a constant $\theta$-angle the equation for the propagator of the leading singularity is trivially given by \beq \label{seventeen} z^{\mu}\dmu G^{(0)}_{\nu\rho} = 0 \eeq valid on the light cone surface, with the initial condition \beq G^{(0)}_{\nu\rho} (x,x)=\delta_{\nu\rho} \eeq The solution is trivially given by the Kronecker delta, and the strength therefore is diagonal over the entire characteristic surface. In this simplified case the equation for the propagator of the gauge fields is given by \beq \label{eighteen} \Box \Box^{-1} = \delta_{\nu \rho} \delta^{4}(z) \eeq where, as usual, \beq \label{nineteen} \Box^{-1} = \frac{G^{(0)}_{\nu\rho}}{({z^{2}-i 0})4{\pi}^2 i} = \frac{\delta_{\nu\rho}}{({z^{2}-i 0})4{\pi}^2 i} \eeq At this point we focus our attention on eq (16), which describes the leading behaviour of the dynamics of the electromagnetic field in the presence of a local vacuum angle. We will preliminarily show that, in the case of a linearly growing axion field, in the timelike case, some components of the leading singularity tensor ${G^{(0)}}_{\nu\rho}$ are constant on the light cone surface. The reason appears to be quite simple and is valid both in the abelian and in the nonabelian case.
In fact, by contraction of both terms of (16) with $\partial_{\nu}\theta$ and using a symmetry argument (the antisymmetry of $k$ defined in eq (12)) we get the equation \beq \partial_\nu \theta z^{\mu}\dmu {G^{(0)}}_{\nu\rho}(x(s),0)=0 \eeq If we introduce the parametrization \beq x^{\mu}(s)=s x^{\mu}, 0<s<1; \,\,y=0 \eeq for a straight line inside the light cone surface we can rewrite eq (24) in the form \beq \partial_{\nu}\theta(x){d\over ds}{G^{(0)}}_{\nu\rho}(x(s),0)=0 \eeq Repeating the same procedure a second time one gets the equation \beq x^{\nu}{d\over ds}G^{(0)}_{\nu\rho}(x(s),0)=0 \eeq It is simple to show that if the variation of the $\theta$ field is linear in a timelike direction, a frame can be found in which it has only a time variation, say \beq \dmu\theta =(a_0,0,0,0) \eeq and in particular we get for the timelike components \beq {d \over ds}G^{(0)}_{0\rho} =0 \eeq From the initial condition eq (21) we therefore get \beq {G^{(0)}}_{0 i}=0 \,\, \,\,\,{G^{(0)}}_{0 0}=1 \eeq As we emphasized previously, this result remains true even in the more realistic non abelian case, for a linearly growing (timelike) axion field, and is a simple consequence of the antisymmetry involved in the coupling. A simple and physical way to look at this result is to view it as an anisotropy effect induced on the propagation of the gauge fields by the pseudo tensor coupling. We will now show that the other components of the leading singularity tensor can also be determined by following an approach analogous to the one we used above, in the case in which condition (28) is enforced. The equation for the leading singularity (eq. 16) can be cast in the form \beq {d\over ds}{G^{(0)}}_{ij}(x(s),0) -{c a_0\over 2}x^k\epsilon^{0ikl}{G^{(0)}}_{lj}(x(s),0)=0 \\ \,\,\,\,\,\,i,j,k,l=1,2,3 \eeq For a given timelike vector $x^\mu$, the previous equation has two first integrals. This can be easily seen through an elementary analogy with the motion of a classical charged particle under a Lorentz force in a constant magnetic field. We use this analogy to solve it. Define \beq G^{(0)}_{ij}(x(s),0)= \hat{v_j}; \,\,\,\,{\omega}^k={c a_0\over 2}x^k \eeq where $\hat{\omega}$ is constant at fixed $x^\mu$, and rewrite it as \beq {d\over ds}\hat{v_j}=\hat{\omega}\wedge\hat{v_j} \eeq Two first integrals of motion along the $s$-line are \beq \hat{\omega}.\hat{v_j}={c a_0\over 2}x^k G^{(0)}_{kj}(x(s),0)=d_j \eeq \beq \hat{{v^2}_j}=\sum_{i} G^{(0)}_{ij}(x(s),0)G^{(0)}_{ij}(x(s),0)=l_j \eeq Iterating eq.(33) we get \beq \left({d^2\over ds^2}+{\omega}^2 \right)\hat{v_j}= \hat{\omega}d_j \eeq By using the initial condition eq (21) one easily gets \beq d_j={c a_0\over 2}x^j \,\,\,\, l_j=1,\,\,j=1,2,3 \eeq and finally the solution \beq G^{(0)}_{ij}(x(s),0)=\left(\delta_{ij}-{ x^i x^j\over r^2} \right) \cos({c a_0\over 2}rs) +\epsilon_{ikj}{x^k\over r} \sin({c a_0\over 2}rs) + {x^i x^j\over r^2} \eeq This expression describes the behavior of the gauge field correlator in its nonlocal form, asymptotically around the light-cone.
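A direct check confirms this expression. At $s=0$ the trigonometric factors reduce to $1$ and $0$, so that $G^{(0)}_{ij}=\delta_{ij}$, in agreement with the initial condition (21). Moreover, contracting with $x^i$ annihilates both the transverse projector and the $\epsilon$ term, so that $$ x^i G^{(0)}_{ij}(x(s),0)=x^j, $$ which reproduces the first integral $d_j={c a_0\over 2}x^j$, and, since the transverse projector, the $\epsilon$ term and the longitudinal part $x^i x^j/r^2$ are mutually orthogonal in the index $i$, $$ \sum_i G^{(0)}_{ij}G^{(0)}_{ij}=\left(1-{(x^j)^2\over r^2}\right)\cos^2\left({c a_0\over 2}rs\right) +\left(1-{(x^j)^2\over r^2}\right)\sin^2\left({c a_0\over 2}rs\right) +{(x^j)^2\over r^2}=1=l_j, $$ which reproduces the second first integral.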
Our results can be summarized in the following expressions for the Feynman propagators of electromagnetic fields in the linearly growing timelike $\theta$ vacuum $${<A_{i}(x)A_{j}(0)>}_{\theta}=$$ \beq \left[ (\delta_{ij}- {x^i x^j\over r^2})\cos({{c a_0 r}\over 2}) +\epsilon^{ikj}{x^k\over r}\sin({{c a_0 r}\over 2})+ {x^ix^j\over r^2}\right] {1\over {4\pi^2 i(x^2-i0)}} +\,\, log.\,terms \eeq \beq {<A_{0}(x)A_{i}(0)>}_{\theta} ={{\delta_{0i}}\over {4\pi^2i(x^2-i0)}} + G^{(1)}_{0i}(x,0)\log(x^2-i0) \eeq modulo additional logarithmic corrections. This model provides an additional example of a solution for the Green function in an external field in full nonlocal form. This approach, as is well known, dates back to Schwinger \cite{Schwinger}, who dealt with the Dirac propagator in an abelian gauge field. Causality problems have been successfully investigated in this framework by Velo and Zwanziger \cite{Zwanziger}, and calculations of anomalies are also possible. The massive axion model suffers, however, from an instability at low frequencies; in other words, the retarded em propagator develops a tachyonic behaviour in its lower modes. This aspect has been considered in great detail in ref. \cite{Jackiw}, in which the 3+1 dimensional Maxwell-Chern-Simons (MCS) system is discussed. The equations of motion for the propagation of electromagnetic fields in the massive axion background are equivalent, in fact, to the MCS model. An expression for the retarded propagator was also given there, and it was shown that em frequencies smaller than $a_0$, the time component of the derivative of the axion field, become tachyonic \cite{Jackiw}. We refer to that paper for further details. In the MCS case the instability is an inevitable effect, since no background field is involved from the beginning, while in the massive axion model it is still unclear whether the instability can be removed by taking the axion dynamics into account in a better way. In its current formulation, in the case of either a spacelike or timelike variation of the local vacuum, the propagation of the gauge fields appears to be consistent in the ultraviolet regime and, therefore, the singular part of the effective action for the axion field, obtained by integrating out the gauge fields in the partition function, can be computed straightforwardly. In the following we are going to show that the structure of the singularities of the propagator of the gauge field in an arbitrary $\theta$ background has a very simple feature. \section{The effective action} In the general case of a variable vacuum angle we can easily calculate the singular (local) part of the effective action for the $\theta$ field by integrating out the gauge field. Let us define $[G^{(n)}_{\mu \nu}]$ to be the coincidence limit of the coefficients introduced before: \beq \label{twenty} [G^{(n)}_{\mu \nu}] \equiv G^{(n)}_{\mu\nu}(x,x) \eeq The same coefficients also appear in the divergent part of the one loop effective action \beq \label{twentyone} \Gamma^{(1)}(\theta) = \frac{i}{2} tr \ln(\hat{P}) \eeq Let us take a variation of (42) with respect to the background field $\theta$ \beq \delta \Gamma^{(1)}[\theta] = \frac{i}{2} tr\, \delta \hat{P} \hat{P}^{-1} = \frac{i}{2} <\delta\hat{P}_{\mu\nu}\hat{\Delta}^{\nu\mu}(x,y)\delta^{4}(z)> \eeq where the angular bracket denotes integration over $x,y$.
We find after some algebra \beq \label{twentythree} [\delta \hat{P}_{\nu\beta} \hat{\Delta}^{\beta\nu}]= - c <\dmu \delta\theta \epsilon^{\mu\rho\alpha\nu} \partial_{\alpha} \Delta_{\nu \rho}(x,y)\delta^{4}(z)> \eeq Ultraviolet divergences in the effective action come from the singular behaviour of the propagator at short distances. We need to regularize expressions such as $[\ln(z^{2}-i 0)]$, $[D_{F}]$ and $[\dmu D_F]$. This can be done in many ways. The simplest one consists in introducing a short distance cutoff $\Lambda$. All the singularities are regulated at the intermediate stage, while we first compute the coincidence limits of the various coefficients. The cutoff is removed by taking the limit $\Lambda \rightarrow \infty$ at the end. A simple calculation shows that the relevant contribution to (44) is given by \beq \label{twentyfour} [\partial_{\alpha}\hat{\Delta}_{\nu\rho}]= -\frac{i}{16 \pi^{2}} [\ln(z^{2}-i 0)][\partial_{\alpha}G^{(1)}_{\nu\rho}] + [\partial_{\alpha}G^{(0)}_{\nu\rho}][D_{F}]+ [{G^{(0)}}_{\nu\rho}] [\partial_{\alpha} D_F] \eeq For notational simplicity we have omitted writing explicitly in eq. (45) the cutoff dependence of the singular coincidence limits. It can be seen immediately that, in this regularization, the contribution from the $[\partial D_F]$ singularity is zero. By differentiating the singularity equations we find \beq \label{twentyfive} [\partial_{m} G^{(0)}_{\nu\rho}] = \half k_{\nu m \rho} \eeq \beqa \label{twentysix} [\partial_{m} \partial_{n} G^{(0)}_{\nu\rho}] & = & \quarter \partial_{n} k_{\nu m \rho} + \quarter \partial_{m} k_{\nu n \rho} \nonumber \\ & & \mbox{} + {1\over 8}{k_{\nu m}}^{\beta}k_{\beta n \rho}+ {1\over 8}{{k_{\nu n}}}^{\beta}k_{\beta m \rho} \eeqa The explicit expression for the counterterm is \beq \label{thirty} <[\delta\hat{P}_{\tau\nu}\hat{\Delta}^{\nu\tau}]>=\int d^4x \dmu\delta\theta \epsilon^{\mu\tau\alpha\nu}\left(\frac{-c} {32 \pi^{2} i}[\ln(z^{2}-i 0)][\partial_{\alpha}G^{(1)}_{\nu\tau}] + [\partial_{\alpha} {G^{(0)}}_{\nu\tau}] [D_F] \right) \eeq A straightforward but lengthy calculation gives \beq \Gamma^{(1)}(\theta)={i\over 2} \int d^4 x \left( {-c\over 32\pi^2 i} [\ln z^2] ({1\over 4} (Q^2)^2 c^2 -{1\over 2}\dmu\theta\Box \partial^{\mu} \theta)+{1\over 4\pi^2 i}[{1\over z^2}] {3 c\over 2} Q^2 \right) \eeq where we have defined \beq Q^2=\dmu \theta\partial^{\mu}\theta \eeq Introducing pole and logarithmic renormalization constants $Z_1$ and $Z_2$, we can express the counterterm Lagrangian in the form \beq \label{thirtyfive} \nabla {\cal{L}}^{count.} =c Z_1 Q^2 + c Z_2 \left(c^2 (Q^2)^2 - 2 \dmu\theta\Box\partial^\mu\theta \right) \eeq \section{\bf Conclusions} We have analyzed the dynamics of a coupled axion-gauge field model in the low energy limit and we have investigated in detail the problem of the propagation of high frequency em modes in an axion background. In particular we have seen that, in the case of a timelike varying axion background, the leading singularity of the Hadamard expansion of the gauge field correlator can be determined in nonlocal form. By applying Hadamard's theorem on the structure of the singularities of diagonal hyperbolic operators, we have shown that the instability of the Maxwell-axion system is confined to the propagation of low em frequencies, reobtaining a result first considered in ref. \cite{Jackiw}. Our results are valid for the propagation of em modes in any axionic background.
We have also shown that the local part of the effective action of the axion can be computed in a simple form, a result which gives the effective model physical consistency. \vspace{.5cm} \newpage \centerline{\bf Acknowledgments} \vspace{.5cm} I acknowledge Profs. A.S. Goldhaber, H. Yamagishi, B. W. Lindquist, J.J.M. Verbaarschot and G. Sterman. I warmly thank Prof. D. Zwanziger and Dr. F. Bastianelli for clarifying discussions and for helpful suggestions. I finally thank the Theory Group at the Univ. of Lecce, Italy, for their kind hospitality, and Susan Marie Trapasso and Rinat Kedem for their generous help.
\section{Introduction} Heusler compounds \cite{Heu03,HSH03,Heu04} have attracted scientific and technological interest for their potential use as materials for magneto-electronic devices. The reason is the exceptional electronic structure found in many of those compounds, in particular in those based on cobalt. They exhibit a complete spin polarisation at the Fermi energy ($\epsilon_F$), that is, they behave like a metal for electrons of one spin direction and like an insulator for the other. K{\"u}bler \etal \cite{KWS83} recognised that the minority-spin density at the Fermi energy nearly vanishes in the Heusler compounds Co$_2$MnAl and Co$_2$MnSn. The authors concluded that this should lead to peculiar transport properties in these compounds because only the majority density contributes. Materials with a complete spin polarisation at $\epsilon_F$ are called half-metallic ferromagnets \cite{GME83}, even though there do exist more complicated cases, as classified in Ref.~\cite{CVB02}. The Heusler compounds are usually ternary 2-1-1 compounds. They consist for the most part of two transition metals (X$_2$, Y) and one main group (Z) element, crystallising in the $L2_1$ structure (space group $F\:m\overline{3}m$). Besides ternary X$_2$YZ compounds, there also exist large assortments of substitutional quaternary alloys of the type X$_2$Y$_{1-x}$Y'$_x$Z or X$_2$YZ$_{1-x}$Z'$_x$. One of the early substitutional series that attracted interest as a potential material for magneto-electronics was Co$_2$Cr$_{1-x}$Fe$_x$Al \cite{EFV03,BFJ03,FKW05,WFK06b}. A drawback of this series is that it is hard to stabilise in the $L2_1$ structure. Mostly a mixture of atoms in Y and Z positions is observed, leading to $B2$-like disorder \cite{KUK04}. However, the disorder destroys the half-metallic properties \cite{WFK06b}. Recently, the series of Heusler alloys Co$_2$Mn$_{1-x}$Fe$_x$Si has attracted particular interest because it exhibits the $L2_1$ order over the whole range of $x$ \cite{BFK06}. The Curie temperatures of the end members are 985~K \cite{FSI90,BNW00} and 1100~K \cite{WFK05,WFK06a} for the Mn and Fe containing compounds, respectively. The end members of the series Co$_2$Mn$_{1-x}$Fe$_x$Si, that is, the purely Mn or Fe containing compounds, have been used for the fabrication of magnetic tunnel junctions \cite{IOM06,OSN06}. The tunnel magneto-resistance (TMR) ratios of 159\% in the Mn compound at low temperature and 41\% in the Fe compound at room temperature suggest that an improvement in the materials is still necessary for successful use in devices, in particular with respect to their temperature behaviour. Recently, Tezuka \etal \cite{TIM06} reported on tunnel junctions built from the iso-electronic compound Co$_2$FeAl$_{0.5}$Si$_{0.5}$. The junctions exhibited TMR ratios of 76\% at 300~K and 106\% at 5~K for the $B2$ structure, while those with $L2_1$ structure showed 51\% and 78\% at 300~K and 5~K, respectively \footnote{The TMR ratio is 175\% at 300~K for optimised junctions with $L2_1$ structure, private communication by K.~Inomata (Tsukuba, Japan).}. These values of the TMR ratio are larger than the ones found using pure Co$_2$FeAl or Co$_2$FeSi electrodes. The temperature stability of the minority gap is one of the main challenging questions for materials to be used in applications. From the viewpoint of the electronic structure, several different effects may destroy the half-metallicity at finite temperatures, depending on the situation of $\epsilon_F$.
At $T>0$, quasi-particle states - occurring close to the minority band edges - may be induced in the gap \cite{CAK06}, for example by magnon excitation. In particular, a spin-rotation \cite{SDo02} may destroy or at least reduce the size of the gap. This has the effect that $\epsilon_F$ - being initially situated at the top of the valence band - no longer falls inside the gap at elevated temperature. If $\epsilon_F$ is located at the bottom of the conduction band, then the half-metallicity will be immediately lost for $T>0$ due to the occupation of minority states through the Fermi-Dirac distribution. (Note: this effect cannot appear the same way for $\epsilon_F$ being situated at the top of the valence band, as there are no states to be occupied thermally inside the gap but only above it.) Finally, the lattice parameter and defect densities will change at elevated temperatures, also resulting in changes of the electronic structure. Even a small increase of the lattice parameter may be able to destroy the half-metallicity if the Fermi energy was initially, at $T=0$, located close to one of the edges of the minority gap. In the same situation, a smearing of the states close to $\epsilon_F$ by an increase of the defects with temperature above 0~K will also destroy the half-metallic character. Taking all those facts together, it is expected that a location of $\epsilon_F$ close to the middle of the minority gap will result in the most robust half-metallicity, provided the gap is not too small ($\approx 1$~eV). Low magnetic moment compounds like Co$_2$CrAl exhibit a variety of majority $d$-bands crossing the Fermi energy in nearly all directions of the Brillouin zone. The high magnetic moment Co$_2$YZ compounds ($m>4\mu_B$) exhibit only a few, strongly dispersing majority $d$-bands crossing $\epsilon_F$, mainly along high symmetry directions. These few bands may favour coherent tunnelling \cite{NTI06}. Finally, a mixture of the $3d$ elements on the Y position will cause different localised moments on different sites. This might lead to instabilities of the half-metallic character. All those aspects - fixing the $d$-state element on the Y position and thus the kind of localised moment, the location of the Fermi energy with respect to the gap in the minority states, and the simplicity of the majority bands crossing it - were of prime importance for the selection of the Co$_2$FeAl$_{1-x}$Si$_x$ series for the present study. \section{Computational details} \label{sec:CD} The electronic structure of the series of alloys was calculated by means of the full potential linearised augmented plane wave (FLAPW) method. For Co$_2$FeAl$_{1-x}$Si$_x$, the calculations were carried out using the FLAPW method as implemented in {\scshape Wien}2k provided by Blaha \etal \cite{BSS90,BSM01}. The exchange-correlation functional was taken within the generalised gradient approximation (GGA) in the parameterisation of Perdew \etal \cite{PBE96}. In addition, the LDA$+U$ method \cite{AAL97} was used to account for on-site correlation at the $3d$ transition metals. It should be mentioned that the $+U$ was used here together with the GGA rather than the LSDA parameterisation of the exchange-correlation functional. However, no significant differences were observed using either of these parameterisations. In {\scshape Wien}2k, the effective Coulomb-exchange parameter $U_{eff}=U-J$ is used, where $U$ is the Coulomb part and $J$ is the exchange part. The use of $U_{eff}$ suppresses multipole effects.
That means it neglects the non-spherical terms in the expansion of the Coulomb interaction. In particular, the values for $U_{eff}$ were set to $U_{Co}=0.14$~Ry and $U_{Fe}=0.132$~Ry, independent of the Si concentration. These values are able to explain the magnetic moment in Co$_2$Mn$_{1-x}$Fe$_x$Si over the whole range of Fe concentration, as was found in previous calculations \cite{BFK06}. $U_{Co}$ and $U_{Fe}$ are close to the values for the Coulomb interaction $U_{dd}$ for $d$ electrons in the elemental $3d$ transition metals reported in Ref.~\cite{BSa89}. Finally, a $25\times25\times25$ point mesh was used as the base for the integration in the cubic systems, resulting in 455 $k$-points in the irreducible wedge of the Brillouin zone. No noticeable changes in the precision of the magnetic moments or in the position of the Fermi energy were observed when comparing to a smaller $20\times20\times20$ mesh. The energy convergence criterion was set to $10^{-5}$~Ry and simultaneously the criterion for charge convergence to $10^{-3}$ electrons. This combination resulted in final values being about one order of magnitude lower for both criteria. The properties of the pure Al or Si containing compounds were calculated in $F\:m\overline{3}m$ symmetry using the lattice parameters ($a_{Al}=5.706$~\AA, $a_{Si}=5.633$~\AA) as found from a structural optimisation. The values are close to the experimental ones: $a_{Al}=5.727$~\AA\ and $a_{Si}=5.64$~\AA. The larger deviation for the Al compound may be caused by the fact that this compound frequently does not have an ordered $L2_1$ structure in experiments. Following Vegard's law, a linear variation of $a$ was assumed for the mixed compounds. All muffin tin radii were set to nearly touching spheres. The mixed compounds with $x=1/4$ and $3/4$ were calculated in $P\:m\overline{3}m$ and for $x=1/2$ in $P\:4/mmm$ symmetry, similarly to the mixed Cr-Fe compounds reported in \cite{FKW05}. \section{Results and Discussion} In the following, the electronic and magnetic structure of the series Co$_2$FeAl$_{1-x}$Si$_x$ will be discussed. The end members Co$_2$FeAl and Co$_2$FeSi were already presented in detail in previous work. In Refs.~\cite{WFK05,KFF06} it was shown that it is necessary to include the on-site correlation in the calculations for Co$_2$FeSi in order to explain the experimental data and to find the half-metallic ground state. Details of the band structure for that system are found in Refs.~\cite{BFK06,KFF06}. Already in pure LSDA-GGA calculations, Co$_2$FeAl became a half-metallic ferromagnet \cite{FKW05}. It can be expected, however, that on-site correlation also plays an important role in the Al containing compound if it does in the case of Si. Therefore, the electronic structure of Co$_2$FeAl was recalculated using the LDA$+U$ method as described in section \ref{sec:CD}. Figure~\ref{fig_1} compares the spin resolved band structure of Co$_2$FeAl calculated in the LSDA-GGA and the LDA$+U$ approaches. \begin{figure} \includegraphics[width=6cm]{fig1.eps} \caption{Spin resolved band structure of Co$_2$FeAl. \newline Compared are the band structures calculated in the LSDA-GGA (a,b) and the LDA$+U$ (c,d) approaches.} \label{fig_1} \end{figure} From Figure~\ref{fig_1}, it is seen that the inclusion of $U_{eff}$ in the calculation does not cause pronounced changes of the majority bands. Even the flat band at about 4~eV below the Fermi energy is shifted by only 200~meV to higher binding energies.
This is remarkable as this band is mainly responsible for the localised moment at the Fe atom. The major impact of the Coulomb parameter is on the minority bands, and in particular on their unoccupied part. The gap is clearly opened up and the flat, lowest conduction bands at the $\Gamma$-point are shifted by about 1~eV to higher energies. Figure \ref{fig_2} compares the spin resolved density of states for the complete series Co$_2$FeAl$_{1-x}$Si$_x$. The calculations were carried out using LDA$+U$. For $x=0$ the low lying $s$-states are found at energies below -6~eV. With increasing Si content a new group of $s$-states appears that is found for $x=1$ at energies below -9~eV. In the range of intermediate Si concentration both groups appear. Those low lying $s$-states are separated from the $p$- and $d$-states by the Heusler-typical hybridisation gap. This gap is considerably larger in the Si compound ($\approx 1.5$~eV) compared to the Al compound ($\approx 0.5$~eV), indicating the stronger hybridisation. This stronger hybridisation makes the Si rich alloys more stable compared to the Al rich part. The $p$-states are in all cases found at the bottom of the high lying valence bands above that gap. With increasing $x$, neither the majority nor the minority channel exhibits pronounced changes of the $d$-state derived densities. Their general shape stays rather unaffected. However, the $d$-band width increases from about 5.2~eV to 6.7~eV with increasing Si content. At the same time, the high majority density of the localised $d$-states found at -4~eV in the Al compound shifts to -5~eV in the Si compound. Fixing the Fermi energy in the minority gap at the position where it is found in the Al compound would thus lead to a rather unphysical enlargement of the exchange splitting. However, the shift of the majority density is compensated by a shift of the minority density with increasing Si content to lower energies. As a result, one observes a virtual movement of $\epsilon_F$ through the gap in the minority states. Throughout the whole series from $x=0$ to 1, the band gap about $\epsilon_F$ is clearly revealed in the minority density. The properties of the gap are further discussed after a brief discussion of the magnetic moments. \begin{figure}[ht] \includegraphics[width=6cm]{fig2.eps} \caption{Spin resolved density of states of Co$_2$FeAl$_{1-x}$Si$_x$. \newline The panels (a, ... , e) show - from top to bottom - the DOS with increasing amount of Si for $x=0, 0.25, 0.5, 0.75$, and 1. The DOS is calculated using LDA$+U$.} \label{fig_2} \end{figure} The magnetic properties are compared in Table~\ref{tab_1}. The nearly half-metallic state found for Co$_2$FeAl already in the LSDA-GGA calculation results in a nearly integer spin magnetic moment (deviation smaller than $10^{-3}$). This reflects the fact that a magnetic moment compatible with the Slater-Pauling rule ($m=(N_v-24)\mu_B$, with $N_v$ being the number of valence electrons in the primitive cell containing four atoms \cite{Kue00,FKW06}) does not necessarily imply a half-metallic state. The substitution of Al by Si results in a pronounced deviation from the Slater-Pauling-like behaviour in the LSDA-GGA approach. This is clear from the shift of the minority band-gap away from the Fermi energy in the LSDA-GGA calculations (see Fig.~\ref{fig_3}(a)). \begin{table}[ht] \centering \caption{Total magnetic moments of ordered Co$_2$FeAl$_{1-x}$Si$_x$. \\ All moments were calculated for the given super-cells.
Their values are given in $\mu_B$ and refer to 4 atoms in the cell for easier comparison.} \begin{tabular}{l|c|cc} compound & $x$ & GGA & LDA$+U$ \\ \noalign{\smallskip}\hline\noalign{\smallskip} Co$_2$FeAl & 0 & 5.0 & 5.0 \\ Co$_8$Fe$_4$Al$_3$Si & $1/4$ & 5.2 & 5.25 \\ Co$_4$Fe$_2$AlSi & $1/2$ & 5.37 & 5.5 \\ Co$_8$Fe$_4$AlSi$_3$ & $3/4$ & 5.49 & 5.75 \\ Co$_2$FeSi & 1 & 5.53 & 6.0 \\ \noalign{\smallskip}\hline \end{tabular} \label{tab_1} \end{table} In contrast to pure LSDA-GGA, the LDA$+U$ calculations clearly reveal a linear dependence of the spin magnetic moment $m(x)$ on the Si concentration $x$ and thus on the number of valence electrons. The small deviation from a half-metallic state for the $x=0$ and 1 compounds seen in Fig.~\ref{fig_3}(b) does not lead to a discernible deviation from the expected integer moments of 5~$\mu_B$ or 6~$\mu_B$, respectively. This small deviation is caused by the fact that only very few states contribute to the minority density close to $\epsilon_F$ (see Fig.~\ref{fig_2}). Finally, it has to be noted that the LSDA-GGA calculations do not reproduce the experimentally found magnetic moment of 6~$\mu_B$ for Co$_2$FeSi, either at the optimised or at the experimental lattice parameter. It was carefully checked that this effect is not caused by a missing orbital magnetic moment. Using LSDA together with spin-orbit coupling and Brooks' orbital polarisation resulted in $m_{s}=5.536\mu_B$ and $m_{l}=0.189\mu_B$. This means that $m_{tot}=5.725\mu_B$ is still far below the value determined by magnetometry (this magnitude of the total magnetic moment $m_{tot}$ was observed using {\scshape Wien}2k as well as the relativistic {\scshape Munich}-SPRKKR, not reported here). Only the LDA$+U$ scheme was able to reproduce the value in the correct order, independent of whether spin-orbit interaction was included or not. Figure \ref{fig_3} compares the behaviour of the gap in the minority states of Co$_2$FeAl$_{1-x}$Si$_x$. Shown are the extremal energies of the states enclosing the minority band gap, that is, the valence band maximum and the conduction band minimum. In the LSDA-GGA approach, the small gap in the minority states moves away from the Fermi energy with increasing Si content and the half-metallicity is destroyed. Using LDA$+U$, the gap has a nearly constant width of 760~meV over the complete series from $x=0$ to 1. From Figure \ref{fig_3}, it is seen that the end-members are just at the borderline to half-metallic ferromagnetism. Starting from $x=0$, the Fermi energy moves from the top of the valence band to the bottom of the conduction band at $x=1$. For Co$_2$FeAl$_{0.5}$Si$_{0.5}$, the Fermi energy is located close to the middle of the band gap in the minority states. It should be noted that the values for $U_{eff}$ used here are the borderline cases for half-metallic ferromagnetism over the complete series Co$_2$FeAl$_{1-x}$Si$_x$. Small variations of the Coulomb parameter $U_{eff}$ will change the behaviour of the density of states, as reported in Refs.~\cite{WFK05,KFF06} for Co$_2$FeSi. About 10\% higher or lower values will make one of the end members a clear half-metal and at the same time destroy the half-metallic character of the other one. From the theoretical point of view, the compounds Co$_2$FeAl and Co$_2$FeSi are thus very unstable half-metallic ferromagnets, if they are half-metallic at all. \begin{figure}[ht] \includegraphics[width=8cm]{fig3.eps} \caption{The gap in the minority states of Co$_2$FeAl$_{1-x}$Si$_x$.
\newline Compared are the positions of the valence band maximum (VBM) and the conduction band minimum (CBM) as calculated by means of LSDA-GGA in (a) and LDA$+U$ in (b).} \label{fig_3} \end{figure} \section{Summary and Conclusions} The electronic structure of the substitutional series of the quaternary Heusler compounds Co$_2$FeAl$_{1-x}$Si$_x$ was investigated by means of band structure calculations using the LDA and LDA$+U$ approximations. It was found that the Co$_2$FeAl$_{1-x}$Si$_x$ series of compounds exhibits half-metallic ferromagnetism when using the LDA$+U$ scheme. Moderate Coulomb-interaction parameters of less than 2~eV were used. For the two end-members, Co$_2$FeAl and Co$_2$FeSi, the Fermi energy is close to the band edges of the minority states. The high densities at those band edges make the half-metallic character of both compounds rather unstable at finite temperatures. This might be one reason explaining the low tunnelling magneto-resistance ratio found in those compounds at room temperature. For $x\approx0.5$, the calculations predict that the Fermi energy is located in the middle of the gap of the minority states. This behaviour will make Co$_2$FeAl$_{0.5}$Si$_{0.5}$ stable against temperature variations, as discussed in the introduction. Experiments were started to verify over what range of compositions the series Co$_2$FeAl$_{1-x}$Si$_x$ crystallises in the required $L2_1$ structure and to find the most stable half-metallic ferromagnet in this series. In summary, it was shown that the variation of the main group element in Heusler compounds is a powerful tool for tuning their physical properties. \bigskip \noindent{\bf Acknowledgment:}\newline We thank K.~Inomata (Tsukuba, Japan) for providing his data before publication. The authors are very grateful to P.~Blaha ({\scshape Wien}2k) and H.~Ebert ({\scshape Munich}-SPRKKR) and their groups for the development and provision of the computer codes. This work is financially supported by the Deutsche Forschungsgemeinschaft (project TP7 in research group FG 559). \newpage \bibliographystyle{unsrt}
\section{Introduction} The Tower of Hanoi problem was introduced by \'{E}douard Lucas in 1883 \cite{Lucas} for the case of 3 pegs and $n$ disks of different sizes. Initially, $n$ disks are placed on one of the 3 pegs, with the largest at the bottom. Then, at each step, one of the topmost disks is moved to another peg, in such a way that it is never placed on a smaller disk. The goal of the problem is to transfer all the disks from the initial peg to the peg of destination with the minimum number of moves. A simple recursive argument shows that $2^n-1$ moves are necessary and sufficient to carry out this task. This Tower of Hanoi problem was then extended to the case of 4 pegs by Dudeney in 1907 \cite{Dude} and to arbitrary $k \ge 3$ pegs by Stewart in 1939 \cite{Stewart1}. In 1941, Frame \cite{Frame} and Stewart \cite{Stewart2} independently proposed algorithms which achieve the same numbers of moves for the $k$-peg Tower of Hanoi problem with $k \ge 4$ pegs. Klav\v{z}ar et al.\ \cite{Klav1} showed that seven different approaches to the $k$-peg Tower of Hanoi problem, including those by Frame and Stewart, are all equivalent, that is, achieve the same numbers of moves. Thus, these numbers are called the {\it Frame-Stewart numbers} \cite{Klav2}. Somewhat surprisingly, the optimal solution for the multi-peg Tower of Hanoi problem with $k \ge 4$ pegs is not known yet. So far, the best upper bounds are achieved by the Frame-Stewart numbers and the best lower bounds are obtained by Chen et al.\ \cite{Chen}. Since the upper bounds are believed to be optimal, they are called the ``presumed optimal'' solution. Stewart's recursive algorithm for the $k$-peg Tower of Hanoi problem is summarized as follows. For an integer $t$ such that $1\leq t\leq n$, \begin{enumerate} \item recursively transfer a pile of $n-t$ smallest disks from the first peg to a temporary peg using $k$ pegs; \item transfer the remaining pile of $t$ largest disks from the first peg to the final peg using $k-1$ pegs, ignoring the peg occupied by the $n-t$ smallest disks; \item recursively transfer the pile of $n-t$ smallest disks from the temporary peg to the final peg using $k$ pegs. \end{enumerate} The algorithm chooses the integer $t$ such that the number of moves $2 \cdot \mathrm{S}_k(n-t) + \mathrm{S}_{k-1}(t)$ is minimized. Thus, the Frame-Stewart numbers $\mathrm{S}_k(n)$ satisfy the following recurrence relations: $$\mathrm{S}_k(n) = \min_{1 \le t \le n} \bigl\{2 \cdot \mathrm{S}_k(n-t) + \mathrm{S}_{k-1}(t)\bigr\}, \mbox{ for } n \ge 1, \ k \ge 4,$$ $$\mathrm{S}_3(n) = 2^n - 1, \mbox{ for } n \ge 1, \mbox{ and } \mathrm{S}_k(0) = 0, \mbox{ for } k \ge 3. $$ When $k=4$, for instance, $\mathrm{S}_4(n)$ is obtained by the following simple formula: $$ \mathrm{S}_4(n) - \mathrm{S}_4(n-1) = 2^{i-1}, \mbox{ for } \binom{i}{2} < n \le \binom{i+1}{2}, $$ where $\binom{i}{2}$ is the binomial coefficient equal to $i(i-1)/2$. In the general case $k \ge 4$, $\mathrm{S}_k(n)$ is obtained by several different approaches, e.g., \cite{Frame, Klav1, Klav2, Majumdar, Stewart2}.
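For illustration, the recurrence above can be evaluated directly by memoized recursion; the following short Python sketch (ours, added purely as an illustration of the definition) computes $\mathrm{S}_k(n)$ exactly as stated:

\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(k, n):
    # S_k(n): best split 1 <= t <= n of Stewart's algorithm.
    if n == 0:
        return 0
    if k == 3:
        return 2 ** n - 1  # classical 3-peg solution
    return min(2 * frame_stewart(k, n - t) + frame_stewart(k - 1, t)
               for t in range(1, n + 1))

# Example: frame_stewart(4, 10) == 49, the presumed optimal number
# of moves for 10 disks on 4 pegs.
\end{verbatim}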
In \protect\cite{Mats}, the following general recurrence relation was considered in order to clarify the combinatorial structure latent in the recurrence relation for $\mathrm{S}_k(n)$ and to cope with the recurrence relations for the Tower of Hanoi {\it on graphs}, in which pegs are placed on vertices of a given graph and disks are only moved along the edges: $$ \mathrm{T}(n) = \min_{1\le t \le n} \bigl\{\alpha \cdot \mathrm{T}(n-t) + \beta \cdot (2^t - 1) \bigr\}, \mbox{ for } n \ge 1, \mbox{ and } \mathrm{T}(0) = 0, $$ where $\alpha$ and $\beta$ are arbitrary positive integers. It was shown that the sequence of differences $(\mathrm{T}(n) - \mathrm{T}(n-1))_{n \ge 1}$ consists of numbers of the form $\beta \cdot 2^i\cdot\alpha^j$, with $i,j \ge 0$, lined in the increasing order. When $\alpha = 3$, $2^i \cdot \alpha^j$ increases as $1,2,3,2^2,2\cdot3,2^3,3^2,2^2\cdot3,2^4,2\cdot3^2, \cdots$. These numbers are called ``3-smooth numbers'' \cite{Sloane} and have been studied extensively in number theory, in relation to the distribution of prime numbers \cite{Hardy} and to new number representations \cite{Bleck,Erd}. The formulation and analysis of $\mathrm{T}(n)$, however, have some defects: (i) they are only focused on the 4-peg case, with no consideration for the general case $k \ge 3$; and (ii) even in the 4-peg case, the term $2^i \cdot\alpha^j$ consists of the constant 2 and the parameter $\alpha$, which might admit further generalization. In this paper, we fully generalize the recurrence relations for the previous $\mathrm{S}_k(n)$ and $\mathrm{T}(n)$ and obtain the exact formulas. Namely, we define the following recurrence relations for two sequences of arbitrary positive integers $\left(p_i\right)_{i \ge 3}$ and $\left(q_i\right)_{i \ge 3}$: $$ \mathrm{G}_k(n) = \min_{1 \le t \le n}\bigl\{ p_k\cdot \mathrm{G}_k(n-t) + q_k\cdot \mathrm{G}_{k-1}(t) \bigr\},\ \text{for}\ n\ge 1,\ k\ge 4, $$ $$ \mathrm{G}_3(n) = p_3\cdot \mathrm{G}_3(n-1)+q_3,\ \text{for}\ n\ge 1, \mbox{ and } \mathrm{G}_k(0) = 0,\ \text{for}\ k\ge 3. $$ Then, we show that the sequence of differences $(\mathrm{G}_k(n)- \mathrm{G}_k(n-1))_{n \ge 1}$ consists of numbers of the form $(\prod_{i=3}^{k}q_i) \cdot (\prod_{i=3}^{k}{p_i}^{\alpha_i})$, with $\alpha_i\ge 0$ for all $i$, lined in the increasing order. In other words, we show the following theorem. \begin{thm}\label{thm1} For every positive integer $n$ and for two sequences of arbitrary positive integers $\left(p_i\right)_{i \geq 3}$ and $\left(q_i\right)_{i \geq 3}$, we have $$ \mathrm{G}_k(n) = q\cdot\sum_{j=1}^{n}u^k_j $$ where $q=\prod_{i=3}^{k}q_i$ and $u^k_j$ is the $j$th term of the sequence $\left(u^k_j\right)_{j\geq1}$ of integers $\prod_{i=3}^{k}{p_i}^{\alpha_i}$, with $\alpha_i\geq0$ for all $i$, lined in the increasing order. \end{thm} We call $\mathrm{G}_k(n)$ the {\it generalized Frame-Stewart numbers}. Note that $\mathrm{G}_k(n)$ is equal to $\mathrm{S}_k(n)$ when $(p_i, q_i) = (2, 1)$ for all $i \ge 3$ and is equal to $\mathrm{T}(n)$ when $(p_3, q_3) = (2, 1)$ and $(p_4, q_4) = (\alpha, \beta)$. \par The remainder of the paper is organized as follows. In Section~2, we show some basic properties of the sequence $\left(u^k_j\right)_{j\geq 1}$ defined from $\left(p_i\right)_{i \geq 3}$. In Section~3, we prove Theorem~\ref{thm1}, the main result of this paper. In Section~4, an application of these numbers to obtaining upper bounds on the number of moves for the Tower of Hanoi problem on several graphs is provided.
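To make the statement concrete, the following Python sketch (ours; the function names are chosen for this illustration only) computes $\mathrm{G}_k(n)$ from the recurrence and, independently, generates the terms $u^k_j$ in increasing order with a heap, so that the two sides of Theorem~\ref{thm1} can be compared directly:

\begin{verbatim}
import heapq
from functools import lru_cache

def G(k, n, p, q):
    # p, q: dicts mapping each i in {3, ..., k} to p_i and q_i.
    @lru_cache(maxsize=None)
    def g(i, m):
        if m == 0:
            return 0
        if i == 3:
            return p[3] * g(3, m - 1) + q[3]
        return min(p[i] * g(i, m - t) + q[i] * g(i - 1, t)
                   for t in range(1, m + 1))
    return g(k, n)

def smooth_terms(ps, n):
    # First n values prod(ps[i]**a_i), a_i >= 0, in increasing order,
    # one value per exponent tuple (so equal values are repeated).
    start = (0,) * len(ps)
    seen, heap, out = {start}, [(1, start)], []
    while len(out) < n:
        v, exps = heapq.heappop(heap)
        out.append(v)
        for i, pi in enumerate(ps):
            e = exps[:i] + (exps[i] + 1,) + exps[i + 1:]
            if e not in seen:
                seen.add(e)
                heapq.heappush(heap, (pi * v, e))
    return out

# Theorem 1 asserts, e.g. for (p_3, p_4) = (2, 3), (q_3, q_4) = (1, 1):
# G(4, n, {3: 2, 4: 3}, {3: 1, 4: 1}) == sum(smooth_terms((2, 3), n)).
\end{verbatim}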
\section{Basic results on smooth numbers sequences} Let $\left(p_i\right)_{i\geq3}$ be a sequence of positive integers. We consider the sequence $\left(u^k_j\right)_{j\geq1}$ of all the integers of the form $\prod_{i=3}^{k}{p_i}^{\alpha_i}$, where $\alpha_i\geq0$ for all $i$, lined in the increasing order. For instance, for $(p_3,p_4)=(2,2)$ and $(p_3,p_4)=(2,3)$, the first few terms of $(u^4_j)_{j\geq1}$ are $(1,2,2,2^2,2^2,2^2,2^3,\cdots)$ and $(1,2,3,2^2,2\cdot3,2^3,3^2,\cdots)$, respectively. When there is some $i_0$ such that $p_{i_0}$ is equal to $1$, then by definition $\left(u^k_j\right)_{j\geq1}$ is the constant sequence of $1$'s, for every $k\geq i_0$. We note that $\left(u^k_j\right)_{j\geq1}$ is closely related to {\it smooth numbers}, which have been explored extensively in number theory. A positive integer is called {\it $B$-smooth} if none of its prime factors is greater than a positive integer $B$. The sequence $\left(u^k_j\right)_{j\geq1}$ then consists of $B$-smooth numbers for $B = \max_{3 \leq i \leq k}\left\{p_i\right\}$. In this section, we restrict ourselves to the case where all the $p_i$'s are greater than $1$ and prove a simple lemma on a certain ``recursive'' structure of the smooth numbers sequence $\left(u^k_j\right)_{j\geq1}$, which will be used to prove Theorem~\ref{thm1} in the next section. \begin{lem}\label{lem1} Let $k\geq4$ and let $\left(k_j\right)_{j\geq1}$ be the sequence of positive integers defined by $k_1=1$ and $k_j=\min\left\{l>k_{j-1} \ \middle| \ u^k_l=u^{k-1}_{j}\right\}$ for $j\geq2$. Then, for every integer $n$ such that $k_j<n<k_{j+1}$, we have $u^k_n = p_k\cdot u^k_{n-j}$. \end{lem} \begin{proof} If $k_{j+1}=k_j+1$, then the lemma is trivial. Suppose now that $k_{j+1}-k_j\geq2$ and let $n$ be a positive integer such that $k_j<n<k_{j+1}$. First, consider a term $\prod_{i=3}^{k}{{p_i}^{\alpha_i}}$ of the sequence $(u^k_l)_{l\ge1}$. If $\alpha_k=0$, then $\prod_{i=3}^{k}{{p_i}^{\alpha_i}}=\prod_{i=3}^{k-1}{{p_i}^{\alpha_i}}$ belongs to $(u^k_{k_l})_{l\geq1}$ by definition of $(k_l)_{l\geq1}$. Otherwise, if $\alpha_k\ge1$, then $\prod_{i=3}^{k}{{p_i}^{\alpha_i}}=p_k\cdot\left({p_k}^{\alpha_k-1}\cdot\prod_{i=3}^{k-1}{{p_i}^{\alpha_i}}\right)$ belongs to $(p_k\cdot u^k_l)_{l\geq1}$. Now, since $k_j<n<k_{j+1}$, it follows that $u^k_{k_j}\le u^k_n < u^k_{k_{j+1}}$ by the growth of the sequence $(u^k_l)_{l\ge1}$. So the first $n$ terms of $(u^k_l)_{l\geq1}$ contain exactly the first $j$ terms of $(u^k_{k_l})_{l\geq1}$. We already know that a term of $(u^k_l)_{l\ge1}$ belongs to $(u^k_{k_l})_{l\ge1}$ or to $(p_k\cdot u^k_l)_{l\ge1}$. This leads to the decomposition $$ \left\{ u^k_l\ \middle|\ 1\le l\le n\right\} = \left\{ u^k_{k_l}\ \middle|\ 1\le l\le j\right\} \bigcup \left\{ p_k\cdot u^k_l\ \middle|\ 1\le l\le n-j\right\} $$ and to the equality $u^k_n=p_k\cdot u^k_{n-j}$, by the maximality of $u^k_n$. \end{proof} Lemma~\ref{lem1} can also be used for computing $\left(u^k_j\right)_{j\geq1}$ explicitly for special sequences $\left(p_i\right)_{i \geq 3}$. Here, we compute $\left(u^k_j\right)_{j\geq1}$ in the simple case $p_i=p\geq2$ for all $i\geq 3$ (we note that when $p=2$, $\left(u^k_j\right)_{j\geq1}$ is the sequence for the original $k$-peg Tower of Hanoi problem). \begin{prop}\label{prop1} Let $p_i=p\geq 1$ for all $3\leq i\leq k$. Then, for all integers $j\geq0$ and $n\geq1$ such that $\displaystyle\binom{k+j-3}{k-2} < n \leq \binom{k+j-2}{k-2}$, we have $u^k_n=p^{j}$. \end{prop} \begin{proof} For $p=1$, the result is clear.
Suppose now that $p\geq2$ and that the result is verified for $i=k-1$ and all $n\geq1$, and for $i=k$ and $n\leq \binom{k+j_0-3}{k-2}$ for some $j_0\geq1$. By the induction hypothesis, we know that $$ p^{j_0} = u^{k-1}_{\binom{k+j_0-4}{k-3}+l_1},\quad \text{for}\ 1 \leq l_1\leq \binom{k+j_0-4}{k-4}, $$ and $$ p^{j_0-1} = u^{k}_{l_2},\quad \text{for}\ \binom{k+j_0-4}{k-2} < l_2 \leq \binom{k+j_0-3}{k-2}. $$ By definition of the sequence $(k_l)_{l\geq1}$, we have $$ k_{\binom{k+j_0-4}{k-3}+l_1}=\binom{k+j_0-3}{k-2}+l_1,\quad \text{for}\ 1\leq l_1\leq \binom{k+j_0-4}{k-4}. $$ Moreover, $$ k_{\binom{k+j_0-4}{k-3}+\binom{k+j_0-4}{k-4}+1}=k_{\binom{k+j_0-3}{k-3}+1}\quad \text{and}\quad u^k_{k_{\binom{k+j_0-3}{k-3}+1}}=u^{k-1}_{\binom{k+j_0-3}{k-3}+1}=p^{j_0+1}. $$ By Lemma~\ref{lem1}, we know that, for every positive integer $n$ such that $k_{\binom{k+j_0-3}{k-3}}<n<k_{\binom{k+j_0-3}{k-3}+1}$, the equality $$ p^{j_0}=u^k_n=p_k\cdot u^k_{n-\binom{k+j_0-3}{k-3}}=p\cdot u^k_{n-\binom{k+j_0-3}{k-3}} $$ holds. This leads to $$ u^k_{n-\binom{k+j_0-3}{k-3}}=p^{j_0-1},\quad \text{for}\quad k_{\binom{k+j_0-3}{k-3}}<n<k_{\binom{k+j_0-3}{k-3}+1}. $$ Since $u^k_{l_2}=p^{j_0-1}$ if and only if $\binom{k+j_0-4}{k-2} < l_2\leq \binom{k+j_0-3}{k-2}$ by the induction hypothesis, it follows that $$ k_{\binom{k+j_0-3}{k-3}+1} = \binom{k+j_0-3}{k-2} + \binom{k+j_0-3}{k-3} + 1 = \binom{k+j_0-2}{k-2} + 1. $$ Therefore, $$ u^k_n=p^{j_0},\quad \text{for}\ \binom{k+j_0-3}{k-2} < n\leq\binom{k+j_0-2}{k-2},\quad \text{and}\quad u^k_{\binom{k+j_0-2}{k-2}+1}=p^{j_0+1}. $$ This completes the proof. \end{proof} \section{Proof of Theorem~\ref{thm1}} Let $\mathrm{G}_k^1(n)$ denote the special case of $\mathrm{G}_k(n)$ associated with an arbitrary sequence $\left(p_i\right)_{i\geq3}$ and with the constant sequence $\left(q_i\right)_{i\geq3}$ with $q_i=1$ for $i \geq 3$. There exists a simple relationship between the numbers $\mathrm{G}_k(n)$ and $\mathrm{G}_k^1(n)$. \begin{prop}\label{prop2} For every nonnegative integer $n$ and for every sequence of integers $\left(q_i\right)_{i\geq3}$, we have $$ \mathrm{G}_k(n) = q\cdot\mathrm{G}_k^1(n), $$ where $q=\prod_{i=3}^{k}q_i$. \end{prop} \begin{proof} By induction on $k$ and $n$. For $k=3$, we can prove by simple induction on $n$ that $\mathrm{G}_3(n)=q_3\cdot\mathrm{G}_3^1(n)$ for all $n$. Suppose the result is true for $k-1$ and all $n\geq0$, and for $k$ and all $l$ such that $l\leq n-1$. By the recursive definition of $\mathrm{G}_k(n)$ and by the induction hypothesis, we obtain $$ \mathrm{G}_k(n) \begin{array}[t]{l} = \displaystyle\min_{1\leq t\leq n}\left\{ p_k\cdot \mathrm{G}_k(n-t) + q_k\cdot\mathrm{G}_{k-1}(t)\right\} \\[2ex] = \displaystyle\min_{1\leq t\leq n}\left\{ p_k\cdot \prod_{i=3}^{k}q_i\cdot\mathrm{G}_k^1(n-t) + q_k\cdot\prod_{i=3}^{k-1}q_i\cdot\mathrm{G}_{k-1}^1(t)\right\} \\[2ex] = \displaystyle\prod_{i=3}^{k}q_i\cdot\min_{1\leq t\leq n}\left\{ p_k\cdot \mathrm{G}_k^1(n-t) + \mathrm{G}_{k-1}^1(t)\right\} \\[3ex] = q\cdot\mathrm{G}_k^1(n). \end{array} $$ \end{proof} By Proposition~\ref{prop2}, it is sufficient to prove Theorem~\ref{thm1} for $\mathrm{G}_k^1(n)$ instead of $\mathrm{G}_k(n)$. Now, we show at which argument $ \mathrm{G}_k^1(n) = \displaystyle\min_{1\leq t\leq n}\left\{ p_k\cdot \mathrm{G}_k^1(n-t) + \mathrm{G}_{k-1}^1(t)\right\} $ attains its minimum. \begin{lem}\label{lem2} Let $n$ be a positive integer. Suppose that $p_i > 1$ for all $3 \leq i \leq k$.
Suppose also that $\Delta\mathrm{G}_{i}^1(l)=\mathrm{G}_{i}^1(l)-\mathrm{G}_{i}^1(l-1)=u^i_l$ for $3\leq i\leq k-1$ and $l\geq1$, and that $\Delta\mathrm{G}_k^1(l)=u^k_l$ for $1\leq l\leq n-1$. Let $j$ be the integer such that $k_j\leq n<k_{j+1}$. Then, for $1\leq t\leq n$, $\mathrm{G}_{k,n}^1(t)=p_k\cdot\mathrm{G}_k^1(n-t)+\mathrm{G}_{k-1}^1(t)$ takes its minimum at $t=j$. \end{lem} \begin{proof} Since $$ \mathrm{G}_{k,n}^1(t+1) - \mathrm{G}_{k,n}^1(t) \begin{array}[t]{l} = p_k\cdot\mathrm{G}_k^1(n-t-1)+\mathrm{G}_{k-1}^1(t+1) - p_k\cdot\mathrm{G}_k^1(n-t) - \mathrm{G}_{k-1}^1(t)\\[2ex] = -p_k\cdot(\mathrm{G}_k^1(n-t)-\mathrm{G}_k^1(n-t-1)) + (\mathrm{G}_{k-1}^1(t+1)-\mathrm{G}_{k-1}^1(t))\\[2ex] = -p_k\cdot\Delta\mathrm{G}_k^1(n-t) + \Delta\mathrm{G}_{k-1}^1(t+1) \end{array} $$ for every $1\leq t\leq n-1$, it follows by hypothesis that $$ \mathrm{G}_{k,n}^1(t+1) - \mathrm{G}_{k,n}^1(t) = -p_k\cdot u^k_{n-t} + u^{k-1}_{t+1}\quad \text{for}\quad 1\leq t\leq n-1. $$ First, when $1 \leq t\leq j-1$, since the sequences $\left(u^k_l\right)_{l\geq1}$ and $\left(u^{k-1}_l\right)_{l\geq1}$ are nondecreasing, we obtain the inequalities $$ u^k_{n-t} \geq u^k_{n-j+1} \geq u^k_{k_j-j+1}, \quad u^{k-1}_{t+1}\leq u^{k-1}_j=u^k_{k_j}. $$ Let $m=\min\left\{ l\geq0 \ \middle| \ k_{j+l+1}-k_{j+l}\geq2 \right\}$. Such an $m$ always exists: otherwise, every term $u^k_l$ with $l\geq k_j$ would belong to the subsequence $(u^k_{k_l})_{l\geq1}$, hence contain no factor $p_k$, which is impossible. By definition of $k_{j+l}$, we have $k_{j+l}=k_j+l$ for $0\leq l\leq m$ and $k_{j+m}<k_j+m+1<k_{j+m+1}$. So we deduce from Lemma~\ref{lem1} that $$ u^k_{k_j+m+1} = p_k\cdot u^k_{(k_j+m+1)-(j+m)} = p_k\cdot u^k_{k_j-j+1}. $$ Thus, $$ \mathrm{G}_{k,n}^1(t+1)-\mathrm{G}_{k,n}^1(t) = -p_k\cdot u^k_{n-t} + u^{k-1}_{t+1}\leq -u^k_{k_j+m+1} + u^k_{k_j} \leq 0 $$ for $1\leq t\leq j-1$. Therefore, $\mathrm{G}_{k,n}^1(t)\geq \mathrm{G}_{k,n}^1(j)$ for all $1\leq t\leq j$. Similarly, when $j\leq t\leq n-1$, we have $$ u^k_{n-t} \leq u^k_{n-j} \leq u^k_{k_{j+1}-j-1},\quad u^{k-1}_{t+1}\geq u^{k-1}_{j+1} = u^k_{k_{j+1}}. $$ Let $m=\min\left\{l\geq0 \ \middle| \ k_{j-l+1}-k_{j-l}\geq2 \right\}$. If no such $m$ exists, then $n=k_j=j$ and we already know that $\mathrm{G}_{k,n}^1(t)$ takes its minimum at $t=j$. Suppose now that the integer $m$ exists. By definition of $k_{j-l+1}$, we have $k_{j-l+1}=k_{j+1}-l$ for $0\leq l\leq m$ and $k_{j-m}<k_{j+1}-m-1<k_{j-m+1}$. So we deduce from Lemma~\ref{lem1} that $$ u^k_{k_{j+1}-m-1} = p_k\cdot u^k_{(k_{j+1}-m-1)-(j-m)} = p_k\cdot u^k_{k_{j+1}-j-1}. $$ Thus, $$ \mathrm{G}_{k,n}^1(t+1)-\mathrm{G}_{k,n}^1(t) = -p_k\cdot u^k_{n-t} + u^{k-1}_{t+1} \geq -u^k_{k_{j+1}-m-1} + u^k_{k_{j+1}} \geq 0 $$ for $j\leq t\leq n-1$. Therefore, $\mathrm{G}_{k,n}^1(t)\geq\mathrm{G}_{k,n}^1(j)$ for all $j\leq t\leq n$. Consequently, $\mathrm{G}_{k,n}^1(t)$ takes its minimum at $t = j$. \end{proof} We are now ready to prove the main result of this paper. \begin{proof}[Proof of Theorem~\ref{thm1}] By Proposition~\ref{prop2}, it is sufficient to prove that $$ \mathrm{G}_k^1(n) = \sum_{j=1}^{n}u^k_j $$ for every positive integer $n$. \par First, suppose that $p_i>1$ for all integers $3\leq i\leq k$. We proceed by induction on $k$ and $n$. It is clear that for all $k \geq 3$, $\mathrm{G}_k^1(1) = 1 = u^k_1$. It is also clear that $\Delta \mathrm{G}_3^1(n) = \mathrm{G}_3^1(n)-\mathrm{G}_3^1(n-1) = p^{n-1}_3=u^3_n$ for all $n \geq 1$. Now assume that $\Delta\mathrm{G}_{i}^1(l)=u^i_l$ for all $3\leq i\leq k-1$ and all $l\geq1$ and that $\Delta\mathrm{G}_k^1(l)=u^k_l$ for all $1\leq l\leq n-1$. We then show that $\Delta\mathrm{G}_k^1(n)=u^k_n$ holds.
For such $n$, there exists some $j\geq1$ such that $k_j\leq n<k_{j+1}$. We distinguish two cases: $n=k_j$ (Case 1) and $k_j<n<k_{j+1}$ (Case 2). \par \textit{Case 1}. When $n=k_j$, we obtain $$ \Delta\mathrm{G}_k^1(n)\begin{array}[t]{l} = \mathrm{G}_k^1(k_j)-\mathrm{G}_k^1(k_j-1)\\[1.5ex] = \mathrm{G}^1_{k,k_j}(j)-\mathrm{G}^1_{k,k_j-1}(j-1)\quad (\text{since}\ k_{j-1}\leq k_j-1<k_j \text{ and by Lemma~\ref{lem2}})\\[1.5ex] = p_k\cdot\left(\mathrm{G}_k^1(k_j-j)-\mathrm{G}_k^1((k_j-1)-(j-1))\right) + \left(\mathrm{G}_{k-1}^1(j)-\mathrm{G}_{k-1}^1(j-1)\right)\\[1.5ex] = \Delta\mathrm{G}_{k-1}^1(j)\\[1.5ex] = u^{k-1}_j\quad (\text{by the induction hypothesis})\\[1.5ex] = u^k_{k_j}\quad (\text{by definition of}\ k_j)\\[1.5ex] = u^k_n. \end{array} $$ This proves the claim in this case. \par \textit{Case 2}. When $k_j<n<k_{j+1}$, we obtain $$ \Delta\mathrm{G}_k^1(n)\begin{array}[t]{l} = \mathrm{G}_k^1(n)-\mathrm{G}_k^1(n-1)\\[1.5ex] = \mathrm{G}_{k,n}^1(j)-\mathrm{G}^1_{k,n-1}(j)\quad (\text{since}\ k_j\leq n-1<k_{j+1}\text{ and by Lemma~\ref{lem2}})\\[1.5ex] = p_k\cdot\left(\mathrm{G}_k^1(n-j)-\mathrm{G}_k^1(n-1-j)\right) + \left(\mathrm{G}_{k-1}^1(j)-\mathrm{G}_{k-1}^1(j)\right)\\[1.5ex] = p_k\cdot\Delta\mathrm{G}_k^1(n-j)\\[1.5ex] = p_k\cdot u^k_{n-j}\quad (\text{by the induction hypothesis})\\[1.5ex] = u^k_n\quad (\text{by Lemma~\ref{lem1}}). \end{array} $$ This proves the claim in this case, too. \par Next, suppose that $p_i=1$ for some integer $3\leq i\leq k$. When $p_3=1$, it is clear that $\mathrm{G}_3^1(n)=n$ for all $n\geq0$. Suppose now, without loss of generality, that $p_{i_0}=1$ for some $4\leq i_0\leq k$ and $p_i>1$ for all $3\leq i\leq i_0-1$. We proceed by induction on $n$. Assume that $\mathrm{G}^1_{i_0}(l)=l$ for $0\leq l\leq n-1$. For $n$, by definition, $$ \mathrm{G}^1_{i_0}(n) = \min_{1\leq t\leq n}\left\{\mathrm{G}^1_{i_0}(n-t)+\mathrm{G}^1_{i_0-1}(t)\right\} = \min_{1\leq t\leq n}\left\{(n-t)+\mathrm{G}^1_{i_0-1}(t)\right\}. $$ Since $p_i>1$ for all $3\leq i\leq i_0-1$, we know that $\mathrm{G}^1_{i_0-1}(l)=\sum_{m=1}^{l}u^{i_0-1}_m$ for $l\geq1$. It is clear that $u^{i_0-1}_m\geq1$ for all $1\leq m\leq l$. Therefore we have $\mathrm{G}^1_{i_0-1}(l)\geq l$ for $l\geq1$. So $\mathrm{G}^1_{i_0,n}(t)=(n-t)+\mathrm{G}^1_{i_0-1}(t)$ takes its minimum at $t=1$ and $\mathrm{G}^1_{i_0}(n)=(n-1)+1=n$, as claimed. Finally, suppose that, for some integer $i\geq3$, $\mathrm{G}_{i}^1(l)=l$ for all $l\geq0$ and $\mathrm{G}^1_{i+1}(l)=l$ for all $1\leq l\leq n-1$. For $n$, we obtain $$ \mathrm{G}^1_{i+1}(n) = \min_{1\leq t\leq n}\left\{\mathrm{G}^1_{i+1}(n-t)+\mathrm{G}_{i}^1(t)\right\} = \min_{1\leq t\leq n}\left\{(n-t)+t\right\} = n. $$ This concludes the proof of Theorem~\ref{thm1}. \end{proof} \begin{cor} Let $k\geq4$ and $j\geq1$. For every integer $n$ such that $k_j\leq n<k_{j+1}$, $$ \mathrm{G}_k(n) = p_k\cdot\mathrm{G}_k(n-j)+q_k\cdot\mathrm{G}_{k-1}(j). $$ \end{cor} \begin{proof} This follows from Proposition~\ref{prop2}, Theorem~\ref{thm1} and Lemma~\ref{lem2}. \end{proof} We end this section by considering the special case where $p_i=p\geq1$ for all $i$. \begin{prop} Let $p_i=p\geq1$ for all $3\leq i\leq k$. Then, for all integers $j\geq0$ and $n\geq1$ such that $$ \displaystyle\binom{k+j-3}{k-2} < n \leq \binom{k+j-2}{k-2}, $$ $\mathrm{G}_k^1(n)$ can be computed as follows: $$ \mathrm{G}_k^1(n) = \sum_{m=0}^{j-1}\binom{k+m-3}{k-3}p^m + \left(n-\binom{k+j-3}{k-2}\right)p^{j}. $$ \end{prop} \begin{proof} By induction on $n$.
First, we know by Proposition~\ref{prop1} that $u^k_n=p^{j}$. Moreover, $\mathrm{G}_k^1(n)=\mathrm{G}_k^1(n-1)+u^k_n$ from Theorem~\ref{thm1}. When $n = \binom{k+j-3}{k-2}+1$, we obtain, by the induction hypothesis, $$ \mathrm{G}_k^1(n) \begin{array}[t]{l} = \displaystyle\mathrm{G}_k^1(n-1) + p^j\\ = \displaystyle\sum_{m=0}^{j-2}\binom{k+m-3}{k-3}p^m + \left((n-1)-\binom{k+j-4}{k-2}\right)p^{j-1} + p^{j}\\[3ex] = \displaystyle\sum_{m=0}^{j-2}\binom{k+m-3}{k-3}p^m + \binom{k+j-4}{k-3}p^{j-1} + p^{j}\\[3ex] = \displaystyle\sum_{m=0}^{j-1}\binom{k+m-3}{k-3}p^m + \left(n-\binom{k+j-3}{k-2}\right)p^{j}. \end{array} $$ When $\binom{k+j-3}{k-2}+1 < n \leq \binom{k+j-2}{k-2}$, we obtain $$ \mathrm{G}_k^1(n) \begin{array}[t]{l} = \displaystyle\mathrm{G}_k^1(n-1) + p^j\\ = \displaystyle\sum_{m=0}^{j-1}\binom{k+m-3}{k-3}p^m + \left((n-1)-\binom{k+j-3}{k-2}\right)p^{j} + p^{j}\\[3ex] = \displaystyle\sum_{m=0}^{j-1}\binom{k+m-3}{k-3}p^m + \left(n-\binom{k+j-3}{k-2}\right)p^{j}. \end{array} $$ \end{proof} \section{Application: the Tower of Hanoi on graphs} Let $G=(V,E)$ be a simple graph with the set of vertices $V=\{v_1,\ldots,v_k\}$ and the set of edges $E$. A $k$-peg Tower of Hanoi problem can be considered on $G$: the $k$ pegs are placed on the vertices $v_1,\ldots,v_k$ and transfer of disks is allowed between the pegs $v_i$ and $v_j$ only if there is an edge between $v_i$ and $v_j$. The original $k$-peg Tower of Hanoi problem then corresponds to the Tower of Hanoi problem on the complete graph $\mathrm{K}_k$. The cases of $k=3$ and $k=4$ are illustrated in Figure~\ref{fig1}. \begin{figure}[!h] \begin{center} \begin{tabular}{cccccc} \begin{tikzpicture} \node (A) at (0,0) [circle,draw] {$1$}; \node (B) at (2,0) [circle,draw] {$2$}; \node (C) at (1,1.73205081) [circle,draw] {$3$}; \draw (A) -- (B) -- (C) -- (A); \end{tikzpicture} & & & & & \begin{tikzpicture} \node (A) at (0,0) [circle,draw] {$1$}; \node (B) at (2,0) [circle,draw] {$2$}; \node (C) at (2,2) [circle,draw] {$3$}; \node (D) at (0,2) [circle,draw] {$4$}; \draw (A) -- (B) -- (C) -- (D) -- (A) -- (C); \draw (D) -- (B); \end{tikzpicture} \end{tabular} \end{center} \caption{\label{fig1}The original Tower of Hanoi problem with $3$ pegs ($\mathrm{K}_3$) and $4$ pegs ($\mathrm{K}_4$).} \end{figure} The main application of the generalized Frame-Stewart numbers is to give upper bounds on the number of moves for the Tower of Hanoi problem on some simple graphs. For the Tower of Hanoi problem on the complete graph with $k\ge 3$ vertices and $n\ge 0$ disks, we recover the Frame-Stewart numbers $\mathrm{S}_k(n)$ stated in Section~1. In the remainder of this section, we consider other special cases where $G$ is the path graph $\mathrm{P}_3$ or the star graph $\mathrm{S}_k$. \subsection{On the path graph $\mathrm{P}_3$} The following theorem shows that the optimal number of moves for the Tower of Hanoi problem on the path graph $\mathrm{P}_3$ is given by the generalized Frame-Stewart numbers. \begin{figure}[!h] \begin{center} \begin{tikzpicture} \node (1) at (0,0) [circle,draw] {1}; \node (2) at (2,0) [circle,draw] {2}; \node (3) at (4,0) [circle,draw] {3}; \draw (1) -- (2) -- (3); \end{tikzpicture} \end{center} \caption{\label{fig2}The path graph $\mathrm{P}_3$.} \end{figure} \begin{thm}\label{thm2} Consider the Tower of Hanoi problem on $\mathrm{P}_3$, as depicted in Figure~\ref{fig2}.
The minimum number of moves to transfer $n\geq1$ disks \begin{itemize} \item from peg 1 to peg 3 is $\mathrm{G}_3(n)=2\cdot\sum_{i=0}^{n-1}3^i$, where $(p_3,q_3)=(3,2)$; \item from peg 1 to peg 2 is $\mathrm{G}_3^1(n)=\sum_{i=0}^{n-1}3^i$, where $(p_3,q_3)=(3,1)$. \end{itemize} \end{thm} Although this result is rather well known (see, e.g., \cite{Sapir}), we present a short proof in order to make the connection with the generalized Frame-Stewart numbers explicit. \begin{proof} We begin with the transfer between peg 1 and peg 3. In order to move the biggest disk from peg 1 to peg 3, we have to first move it from peg 1 to peg 2, and so the $n-1$ smallest disks must be on peg 3. The $n-1$ smallest disks are transferred from peg 1 to peg 3 in $\mathrm{G}_3(n-1)$ moves. Then, we move the biggest disk from peg 1 to peg 2. In order to move this disk to peg 3, we transfer the $n-1$ smallest disks from peg 3 to peg 1 in $\mathrm{G}_3(n-1)$ moves. Finally, we move the biggest disk from peg 2 to peg 3 in $1$ move and the $n-1$ smallest disks from peg 1 to peg 3 in $\mathrm{G}_3(n-1)$ moves. The total number of moves for $n$ disks is then $3\cdot\mathrm{G}_3(n-1)+2$, which equals $\mathrm{G}_3(n)$, as claimed. Since each of these intermediate transfers is forced, this procedure is best possible, and $\mathrm{G}_3(n)$ is the optimal number of moves. \par For the transfer between peg 1 and peg 2, as before, in order to move the biggest disk from peg 1 to peg 2, we have to first transfer the $n-1$ smallest disks from peg 1 to peg 3. As proved above, the minimum number of moves to do this is $\mathrm{G}_3(n-1)$. Moreover, we know that $\mathrm{G}_3(n-1)=2\cdot\mathrm{G}_3^1(n-1)$ by Proposition~\ref{prop2}. Then, after moving the biggest disk from peg 1 to peg 2, the $n-1$ smallest disks are transferred from peg 3 to peg 2. This is done in $\mathrm{G}_3^1(n-1)$ moves. Thus, we conclude that the minimum number of moves for transferring $n$ disks from peg 1 to peg 2 is $3\cdot\mathrm{G}_3^1(n-1)+1$, as claimed. \end{proof} \subsection{On the star graph $\mathrm{S}_k$} We end this section by considering the Tower of Hanoi problem on the star graph $\mathrm{S}_k$ with $k+1$ vertices and $k$ edges. For $k=2$, the graph $\mathrm{S}_2$ corresponds to the path graph $\mathrm{P}_3$. The star graphs for $k=3$ and $k=4$ are depicted in Figure~\ref{fig3}. \begin{figure}[h!] \begin{center} \begin{tabular}{ccccc} \begin{tikzpicture} \node (1) at (0,1.15470054) [circle,draw] {1}; \node (2) at (0,3.46410162) [circle,draw] {2}; \node (3) at (-2,0) [circle,draw] {3}; \node (4) at (2,0) [circle,draw] {4}; \draw (2) -- (1) -- (3); \draw (1) -- (4); \end{tikzpicture} & & & & \begin{tikzpicture} \node (1) at (0,2) [circle,draw] {1}; \node (2) at (-2,4) [circle,draw] {2}; \node (3) at (2,4) [circle,draw] {3}; \node (4) at (2,0) [circle,draw] {4}; \node (5) at (-2,0) [circle,draw] {5}; \draw (2) -- (1) -- (4); \draw (3) -- (1) -- (5); \end{tikzpicture} \\ \end{tabular} \end{center} \caption{\label{fig3}The star graphs $\mathrm{S}_3$ and $\mathrm{S}_4$.} \end{figure} Stockmeyer \cite{Stock} considered the Tower of Hanoi problem on the star graph $\mathrm{S}_3$, where all the $n$ disks are transferred from one leaf of the graph to another leaf (for instance, from peg 2 to peg 3 in Figure~\ref{fig3}). He described a recursive algorithm which achieves a good (seemingly optimal) upper bound, and he thus called it the ``presumed optimal'' algorithm.
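Such recurrences are straightforward to tabulate by machine. The following minimal Python sketch (the function name \texttt{frame\_stewart} and its calling convention are illustrative conventions of this exposition only) evaluates the recursive definition of $\mathrm{G}_k(n)$, with base cases $\mathrm{G}_k(0)=0$ and $\mathrm{G}_3(n)=p_3\cdot\mathrm{G}_3(n-1)+q_3$; for $(p_3,q_3)=(3,2)$ it reproduces the values $3^n-1$ of Theorem~\ref{thm2}.
\begin{verbatim}
from functools import lru_cache

def frame_stewart(k, n, p, q):
    # Generalized Frame-Stewart number G_k(n); p[i], q[i] hold the
    # values p_i, q_i for 3 <= i <= k (lower indices are padding).
    @lru_cache(maxsize=None)
    def G(i, m):
        if m == 0:
            return 0
        if i == 3:                      # G_3(m) = p_3*G_3(m-1) + q_3
            return p[3] * G(3, m - 1) + q[3]
        # G_i(m) = min over 1 <= t <= m of p_i*G_i(m-t) + q_i*G_{i-1}(t)
        return min(p[i] * G(i, m - t) + q[i] * G(i - 1, t)
                   for t in range(1, m + 1))
    return G(k, n)

# Path graph P_3, i.e. (p_3, q_3) = (3, 2): G_3(n) = 3^n - 1.
print([frame_stewart(3, n, (0, 0, 0, 3), (0, 0, 0, 2))
       for n in range(5)])             # [0, 2, 8, 26, 80]
\end{verbatim}
Thanks to the memoization, the doubly recursive evaluation remains polynomial in $k$ and $n$; the sketch merely tabulates the recurrence and is independent of Stockmeyer's move-by-move algorithm.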
Here, we generalize this algorithm to the star graph $\mathrm{S}_k$ for arbitrary $k \geq 2$ and show that the number of moves performed by this algorithm is given exactly by the generalized Frame-Stewart numbers. \begin{thm} Let $k\ge2$ be an integer. Consider the Tower of Hanoi problem on the star graph $\mathrm{S}_k$ in which $n\geq 1$ disks are transferred from one leaf of the graph to another leaf. Then, an upper bound on the number of moves to solve this problem is given by the generalized Frame-Stewart number $\mathrm{G}_{k+1}(n)$, where $(p_3, q_3) = (3,2)$ and $(p_i, q_i) = (2, 1)$ for $4 \leq i \leq k+1$. \end{thm} \begin{proof} By induction on $k$. When $k=2$, as noted before, the star graph $\mathrm{S}_2$ corresponds to the path graph $\mathrm{P}_3$. So by Theorem~\ref{thm2}, $\mathrm{G}_3(n)$, where $(p_3, q_3) = (3, 2)$, is the minimum number of moves to transfer $n$ disks from peg 2 to peg 3. Suppose now that the result holds for $\mathrm{S}_{k-1}$ with any number of disks, and for $\mathrm{S}_k$ with at most $n-1$ disks. The $n$ disks are then recursively transferred from peg 2 to peg 3 as follows. For some integer $t$ such that $1\leq t\leq n$, \begin{itemize} \item transfer the $n-t$ smallest disks from peg $2$ to peg $k+1$ in $\mathrm{G}_{k+1}(n-t)$ moves; \item consider the remaining $k$ pegs and the subgraph obtained by deleting the vertex of peg $k+1$, which is the star graph $\mathrm{S}_{k-1}$, and transfer the $t$ largest disks from peg $2$ to peg $3$ in $\mathrm{G}_k(t)$ moves; \item transfer the $n-t$ smallest disks from peg $k+1$ to peg 3 in $\mathrm{G}_{k+1}(n-t)$ moves. \end{itemize} We choose the integer $t$ such that the number of moves $2\cdot\mathrm{G}_{k+1}(n-t)+\mathrm{G}_{k}(t)$ is minimized. Thus, the algorithm satisfies the following recurrence relation: $$ \mathrm{G}_{k+1}(n) = \min_{1 \le t \le n}\bigl\{ 2\cdot \mathrm{G}_{k+1}(n-t) + \mathrm{G}_k(t) \bigr\}. $$ Combining this recurrence with the induction hypothesis, the number of moves of this algorithm is given by the generalized Frame-Stewart number with $(p_3, q_3) = (3,2)$ and $(p_i, q_i) = (2, 1)$ for $4 \leq i \leq k+1$. \end{proof}
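As a small sanity check, the sketch given above can be used to tabulate the upper bounds of this theorem. For instance, for the star graph $\mathrm{S}_3$ (transfer from one leaf to another), the parameters are $(p_3,q_3)=(3,2)$ and $(p_4,q_4)=(2,1)$, and the first few values follow directly from the recurrence.
\begin{verbatim}
# Upper bounds for S_3 (leaf to leaf): G_4(n) with (p_3, q_3) = (3, 2)
# and (p_4, q_4) = (2, 1); frame_stewart is the sketch given above.
p, q = (0, 0, 0, 3, 2), (0, 0, 0, 2, 1)
print([frame_stewart(4, n, p, q) for n in range(6)])
# [0, 2, 6, 12, 20, 32]
\end{verbatim}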
\section{Introduction}\label{sect:intro} Let $V$ be a complex finite-dimensional vector space and $\N(V) \subset \End(V)$ the nilpotent cone of $V$, i.e.\ the variety of nilpotent endomorphisms of $V$. The enhanced nilpotent cone is defined as $V\times \N(V)$; it admits a diagonal action of $\GL(V)$. This action has been examined in detail by several authors including Achar-Henderson \cite{AH}, Travkin \cite{Tr}, Mautner \cite{MautnerPaving} and Sun \cite{SunEnhancednilcone}. In particular, the $\GL(V)$-orbits in $V\times \N(V)$ are enumerated; it was shown by Achar-Henderson and Travkin that the orbits are naturally in bijection with bi-partitions of $\dim V$. The orbit closure relations are also described combinatorially.\\ This classification of $\GL(V)$-orbits in $V\times \N(V)$ was extended to the enhanced cyclic nilpotent cone by Johnson \cite{Joh}. Let $\Q(\ell)$ be the cyclic quiver with $\ell$ vertices, and $\Q_{\infty}(\ell)$ the framing $\infty \rightarrow 0$ of this quiver at the vertex $0$. The enhanced cyclic nilpotent cone $\N_{\infty}(\ell,x)$ is the space of representations of $\Q_{\infty}(\ell)$ with dimension one at the framing vertex $\infty$, such that the endomorphism obtained by going once around the cycle is nilpotent of order at most $x$. The group $G = \prod_{i = 0}^{\ell-1} \GL_n$ acts on $\N_{\infty}(\ell,x)$ with finitely many orbits.\\ In this article, we return to the question of parameterizing the $G$-orbits in $\N_{\infty}(\ell,x)$. Our motivation comes from two quite different sources. Firstly, this is a generalization of the problem of studying parabolic conjugacy classes in the space of nilpotent endomorphisms; see section \ref{sec:parabolicintro}. Secondly, via the Fourier transform, the enhanced cyclic nilpotent cone plays a key role in the theory of admissible $\mathscr{D}$-modules on the space of representations of $\Q_{\infty}(\ell)$. Applications of our parameterization of orbits to the representation theory of admissible $\mathscr{D}$-modules are explored in the sister article \cite{BeB2}. \subsection{A representation theoretic parameterization} There are (essentially) two different approaches to the classification of $G$-orbits in the (cyclic) enhanced nilpotent cone. The first is to consider it as a problem in linear algebra, that of classifying pairs $(v,X)$ of a nilpotent endomorphism $X$ and a vector $v \in V$, up to change of basis (``coloured endomorphisms and coloured vectors'' in the case of the enhanced cyclic nilpotent cone). This is the approach taken in \cite{AH}, \cite{Tr} and \cite{Joh}. Secondly, one can consider it as a problem in representation theory. Namely, it is clear that the enhanced cyclic nilpotent cone parameterizes representations of a particular algebra $\A_{\infty}(\ell,x)$ (realized as an admissible quotient of the corresponding path algebra), with the appropriate dimension vector. The usual enhanced nilpotent cone corresponds to $\ell = 1$. Then it is a matter of classifying the isomorphism classes of representations of this algebra. It is this latter approach that we take here. One natural way to try to classify the isomorphism classes of representations of these algebras is to consider their universal covering algebras. This works well only if the representation type of the algebras $\A_{\infty}(\ell,x)$ is finite. Unfortunately, we show: \begin{proposition}\label{prop:typel} The algebra $\A_{\infty}(\ell,x)$ is of finite representation type if and only if $\ell = 1$ and $x \le 3$.
\end{proposition} Moreover, we show that, when $\ell > 1$ or $x > 3$, the algebra $\A_{\infty}(\ell,x)$ is tame for $(\ell,x) = (4,1)$ and $(2,2)$, and wild in all other cases; see section \ref{sec:reptype}. Fortunately, the algebra $\A(\ell,x)$, whose representations correspond to the (non-enhanced) cyclic nilpotent cone, has finite representation type by Kempken \cite{Ke}. Hence we deduce its indecomposable representations from the universal covering algebra. From this we can read off the isomorphism classes of indecomposable representations $M$ of $\A_{\infty}(\ell,x)$ with $(\dim M)_{\infty} = 1$, even though $\A_{\infty}(\ell,x)$ does not have finite representation type in general. We introduce the set of \textit{Frobenius circle diagrams} $\Ca_F(\ell)$; from each Frobenius circle diagram one can easily reconstruct an indecomposable nilpotent representation of $\Q_{\infty}(\ell)$. Each Frobenius circle diagram $C$ has a weight $\mathrm{wt}_{\ell}(C)$. We also introduce the weight $\mathrm{wt}_{\ell}(\lambda)$ of a partition $\lambda$; see section \ref{sect:combi} for the definition of these combinatorial objects. \begin{theorem}\label{thm:paramainintro} Fix $\ell, x \ge 1$. \begin{enumerate} \item There are canonical bijections between: \begin{itemize} \item The set of isomorphism classes of indecomposable nilpotent representations $M$ of $\Q_{\infty}(\ell)$ with $(\dim M)_{\infty} = 1$. \item The set $\Ca_F(\ell)$ of Frobenius circle diagrams. \item The set of all partitions. \end{itemize} \item These bijections restrict to bijections between: \begin{itemize} \item The set of isomorphism classes of indecomposable representations $M$ of $\A_{\infty}(\ell,x)$ with $(\dim M)_{\infty} = 1$. \item The set $\{ C \in \Ca_F(\ell) \ | \ \mathrm{wt}_{\ell}(C) \le x \}$ of Frobenius circle diagrams of weight at most $x$. \item The set $\{ \lambda \in \mathcal{P} \ | \ \mathrm{wt}_{\ell}(\lambda) \le x \}$ of partitions of weight at most $x$. \end{itemize} \end{enumerate} \end{theorem} In the case of most interest to us, $$ \dim M = (1, n, \dots, n) = \varepsilon_{\infty} + n \delta $$ where $\delta$ is the minimal imaginary root for the cyclic quiver, the classification can be interpreted combinatorially. If $\mathcal{P}$ denotes the set of all partitions and $\mathcal{P}_{\ell}$ the set of all $\ell$-multipartitions, then we show that: \begin{corollary}\label{cor:paramcomb} The $G$-orbits in the enhanced cyclic nilpotent cone $\N_{\infty}(\ell,n)$ are naturally labeled by the set $$ \mathcal{Q}(n,\ell) := \left\{ (\lambda;\nu) \in \mathcal{P} \times \mathcal{P}_{\ell} \ | \ \mathrm{res}_{\ell}(\lambda) + \mathrm{sres}_{\ell}(\nu) = n \delta\right\}. $$ \end{corollary} Here $\mathrm{res}_{\ell}(\lambda)$ and $\mathrm{sres}_{\ell}(\nu)$ are the (shifted) $\ell$-residues of the corresponding partitions; see section \ref{sec:comborbits} for details. We note that our parameterization is clearly different from the parameterization given in \cite{AH}, \cite{Tr} and \cite{Joh}. In the case of the usual enhanced nilpotent cone ($\ell = 1$), we explain in subsection \ref{ssect:enc_transl} how to go between the two parameterizations (this is also explained in \cite[Lemma 2.4]{MautnerPaving}). When $\ell > 1$, we relate our parameterization to the one given by Johnson \cite{Joh} in subsection \ref{ssect:cenc_transl}. Corollary \ref{cor:paramcomb} also appeared recently in \cite[Remark 11.2.3]{DoGinT}.
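The defining condition of $\mathcal{Q}(n,\ell)$ is easy to test by machine, even before the residues $\mathrm{res}_{\ell}$ and $\mathrm{sres}_{\ell}$ are formally defined in section \ref{sec:comborbits}. The following minimal Python sketch (the function names and the encoding of elements of $\Z[\Z_{\ell}]$ as lists of coefficients are illustrative conventions only) implements both residues and the membership test; for $\ell=1$ the condition reduces to $|\lambda|+|\nu|=n$, recovering the bipartitions of \cite{AH}.
\begin{verbatim}
def res(la, l):
    # l-residue of a partition la = [la_1, la_2, ...]: the box in row i,
    # column j (0-indexed) has content j - i, counted modulo l.
    r = [0] * l
    for i, row in enumerate(la):
        for j in range(row):
            r[(j - i) % l] += 1
    return r

def sres(nu, l):
    # Shifted l-residue of an l-multipartition nu = [nu^(0),...,nu^(l-1)]:
    # the residue of the i-th component is shifted by sigma^i.
    r = [0] * l
    for i, la in enumerate(nu):
        for j, c in enumerate(res(la, l)):
            r[(i + j) % l] += c
    return r

def in_Q(la, nu, n, l):
    # Test the condition res_l(la) + sres_l(nu) = n * delta.
    return [a + b for a, b in zip(res(la, l), sres(nu, l))] == [n] * l

print(in_Q([2], [[], []], 1, 2))   # True: res_2((2)) = delta
\end{verbatim}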
\subsection{Parabolic conjugacy classes in the nilpotent cone}\label{sec:parabolicintro} In \cite{B2}, the second author considered the adjoint action of parabolic subgroups $P\subseteq\GL(V)$ on the varieties $\N^{(x)}$ of $x$-nilpotent endomorphisms of $V$. In particular, the question of which pairs $(P,x)$ have the property that there are finitely many $P$-orbits on $\N^{(x)}$ is addressed. The methods used in \textit{loc. cit.} are mainly representation-theoretic: the algebraic group action is translated, via an associated fibre bundle, to a setup of representations of a certain finite quiver with relations. In all those cases (excluding the enhanced nilpotent cone) where there are finitely many $P$-orbits, the orbits are enumerated and the orbit closures are described in detail. \\[1ex] In this article, we describe how $\GL(V)$-orbits in the enhanced nilpotent cone relate to the $P$-orbits of a particular parabolic (the ``mirabolic'') subgroup on the nilpotent cone $\N$. More generally, for any dimension vector $\mathbf{d}$ of the cyclic quiver, there is a certain parabolic $P \subset \GL_{\mathbf{d}}$ such that $\GL_{\mathbf{d}}$-orbits in the cyclic enhanced nilpotent cone $\N_{\infty}(\ell,x,\mathbf{d})$ are in bijection with $P$-orbits in the cyclic nilpotent cone $\N(\ell,x,\mathbf{d})$. This can be seen as a first step in the generalization of the above question to the case of parabolic conjugacy classes in the nilcone of Vinberg's $\theta$-representations. See remark \ref{rem:theta} for more details. \subsection{Admissible $\mathscr{D}$-modules}\label{sec:addintro} As mentioned previously, one motivation for developing a quiver-theoretic approach to the $G$-orbits in the enhanced cyclic nilpotent cone is that one gets in this way immediate results regarding the category of admissible $\mathscr{D}$-modules on the space $X = \Rep(\Q_{\infty}(\ell); \mathbf{v})$ of representations of the framed cyclic quiver. Fix a character $\chi$ of the Lie algebra $\mathfrak{g}$ of $G$. The category $\mathscr{C}_{\chi}$ of admissible $\mathscr{D}$-modules on $X$ is the category of all smooth $(G,\chi)$-monodromic $\mathscr{D}$-modules on $X$, whose singular support lies in a certain Lagrangian $\Lambda$. These are, essentially, the modules whose singular support is nilpotent in the conormal direction; see \cite{BeB2} for details. Admissible $\mathscr{D}$-modules are always regular holonomic, and it is easily shown (since $\N_{\infty}(\ell,n)$ has finitely many $G$-orbits) that there are only finitely many simple objects in $\mathscr{C}_{\chi}$. The behaviour of the category $\mathscr{C}_{\chi}$ depends heavily on the parameter $\chi$.\\ Using the results of this article, we are able to describe precisely, in \cite{BeB2}, the locus where $\mathscr{C}_{\chi}$ is semi-simple. It is shown that this is the complement of countably many (explicit) affine hyperplanes. In \cite{BeB2}, we list ten other properties of the category $\mathscr{C}_{\chi}$ that are equivalent to ``$\mathscr{C}_{\chi}$ is semi-simple''. The reason why our new parametrization of the $G$-orbits in the cyclic enhanced nilpotent cone is so useful in this context is that it allows us to easily compute the fundamental groups of the orbits. \subsection{Outline of the article} In section two the required background results in representation theory are given. Section three introduces the combinatorial notions that we will use, in particular the notion of Frobenius circle diagrams.
In section four we consider parabolic conjugacy classes in the nilpotent cone. Section five deals with the parameterization of orbits in the enhanced nilpotent cone (i.e. for $\ell = 1$). Then the enhanced cyclic nilpotent cone is considered in section six. In particular, we prove Proposition \ref{prop:typel}, Theorem \ref{thm:paramainintro} and Corollary \ref{cor:paramcomb}. \subsection{Outlook} Our representation-theoretic approach makes it possible to apply techniques from representation theory to better understand the geometry of the enhanced cyclic nilpotent cone. For example, there are several techniques available to calculate degenerations; that is, orbit closure relations. Namely, the results of Zwara \cite{Zw1,Zw2} and Bongartz \cite{Bo1,Bo2} are applicable. By making use of these, we hope to define combinatorially the closure ordering on the set $\mathcal{Q}(n,\ell)$ in the near future.\\ {\bf Acknowledgements:} The authors would like to thank C. Johnson for his very precise and valuable ideas regarding the translation between our parametrization and his original parametrization of orbits. We also thank K. Bongartz and M. Reineke for helpful remarks on the subject. The first author was partially supported by EPSRC grant EP/N005058/1. \section{Theoretical background}\label{sect:theory} Let $\mathbf{C}$ be the field of complex numbers and $\GL_n\coloneqq\GL_n(\mathbf{C})$ the general linear group, for a fixed integer $n\in\mathbf{N}$, regarded as an affine algebraic group. \subsection{Quiver representations} The concepts in this subsection are explained in detail in \cite{ASS}. A \textit{(finite) quiver} $\Q$ is a directed graph $\Q=(\Q_0,\Q_1,s,t)$, with $\Q_0$ a finite set of \textit{vertices} and $\Q_1$ a finite set of \textit{arrows}, where each arrow $\alpha$ has a source $s(\alpha)$ and a target $t(\alpha)$, written $\alpha\colon s(\alpha)\rightarrow t(\alpha)$. The \textit{path algebra} $\mathbf{C}\Q$ is the $\mathbf{C}$-vector space with basis consisting of all paths in $\Q$, that is, sequences of arrows $\omega=\alpha_s\cdots\alpha_1$, such that $t(\alpha_{k})=s(\alpha_{k+1})$ for all $k\in\{1,\dots,s-1\}$; we formally include a path $\varepsilon_i$ of length zero for each $i\in \Q_0$ starting and ending in $i$. The multiplication $\omega\cdot\omega'$ of two paths $\omega= \alpha_s\cdots\alpha_1$ and $\omega' = \beta_t\cdots\beta_1$ is given by concatenation if $t(\beta_t)=s(\alpha_1)$, and is zero otherwise. This way, $\mathbf{C}\Q$ becomes an associative $\mathbf{C}$-algebra. The \textit{path ideal} $I(\mathbf{C}\Q)$ of $\mathbf{C} \Q$ is the (two-sided) ideal generated by all paths of positive length; then an arbitrary ideal $I$ of $\mathbf{C} \Q$ is called \textit{admissible} if there exists an integer $s$ with $I(\mathbf{C} \Q)^s\subset I\subset I(\mathbf{C} \Q)^2$.\\[1ex] A finite-dimensional $\mathbf{C}$-representation of $\Q$ is a tuple \[((M_i)_{i\in \Q_0},(M_\alpha\colon M_i\rightarrow M_j)_{(\alpha\colon i\rightarrow j)\in \Q_1}),\] of $\mathbf{C}$-vector spaces $M_i$ and $\mathbf{C}$-linear maps $M_{\alpha}$. There is the natural notion of a \textit{morphism of representations} $M=((M_i)_{i\in \Q_0},(M_\alpha)_{\alpha\in \Q_1})$ and \mbox{$M'=((M'_i)_{i\in \Q_0},(M'_\alpha)_{\alpha\in \Q_1})$}, which is defined to be a tuple of $\mathbf{C}$-linear maps $(f_i\colon M_i\rightarrow M'_i)_{i\in \Q_0}$, such that $f_jM_\alpha=M'_\alpha f_i$ for every arrow $\alpha\colon i\rightarrow j$ in $\Q_1$. For a representation $M$ and a path $\omega$ in $\Q$ as above, we denote $M_\omega=M_{\alpha_s}\cdots M_{\alpha_1}$.
A representation $M$ is said to be \textit{bound by $I$} if $\sum_\omega\lambda_\omega M_\omega=0$ whenever $\sum_\omega\lambda_\omega\omega\in I$. Thus, we obtain certain categories: the abelian $\mathbf{C}$-linear category $\rep_{\mathbf{C}} \mathbf{C}\Q$ of all representations of $\Q$ and the category $\rep_{\mathbf{C}} \mathbf{C} \Q/I$ of representations of $\Q$ bound by $I$; the latter is equivalent to the category of finite-dimensional $\A$-representations, where $\A\coloneqq \mathbf{C} \Q/I$ is the quotient algebra.\\[1ex] Given a representation $M$ of $\Q$, its \textit{dimension vector} $\dim M\in\mathbf{N}\Q_0$ is defined by $(\dim M)_{i}=\dim_{\mathbf{C}} M_i$ for $i\in \Q_0$. Fixing a dimension vector $\mathbf{d}\in\mathbf{N}\Q_0$, we obtain a full subcategory $\rep_{\mathbf{C}} \A(\mathbf{d})$ of $\rep_{\mathbf{C}} \A$ which consists of representations of dimension vector $\mathbf{d}$. Let $\Rep(\Q,\mathbf{d}):= \bigoplus_{\alpha\colon i\rightarrow j}\Hom_{\mathbf{C}}(\mathbf{C}^{d_i},\mathbf{C}^{d_j})$; points of $\Rep(\Q,\mathbf{d})$ correspond to representations $M\in\rep_{\mathbf{C}} \mathbf{C} \Q(\mathbf{d})$ with $M_i=\mathbf{C}^{d_i}$ for $i\in \Q_0$. Via this correspondence, the set of such representations bound by $I$ corresponds to a closed subvariety $\Rep(\mathbf{C} \Q / I,\mathbf{d})\subset \Rep(\Q,\mathbf{d})$. The set $\mathbf{N} \Q_0$ of dimension vectors is partially ordered by $\alpha \ge \beta$ if $\alpha_i \ge \beta_i$ for all $i$ and we say that $\alpha > \beta$ if $\alpha \ge \beta$ with $\alpha \neq \beta$. A dimension vector $\alpha$ is called \emph{sincere} if $\alpha_i > 0$ for all $i$. The algebraic group $\GL_{\mathbf{d}}=\prod_{i\in \Q_0}\GL_{d_i}$ acts on $\Rep(\Q,\mathbf{d})$ and on $\Rep(\mathbf{C} \Q / I,\mathbf{d})$ via base change. The $\GL_{\mathbf{d}}$-orbits of this action are in bijection with the isomorphism classes of representations $M$ in $\rep_{\mathbf{C}} \A(\mathbf{d})$.\\[1ex] The Krull-Remak-Schmidt Theorem says that every finite-dimensional $\A$-representation decomposes into a direct sum of indecomposable representations. We denote by $\Gamma_{\A}=\Gamma(\Q,I)$ the \textit{Auslander-Reiten quiver} of $\rep_{\mathbf{C}}\A$. \subsection{Representation types}\label{ssect:repTypes} Consider a finite-dimensional, basic $\mathbf{C}$-algebra $\A:=\mathbf{C} \Q/I$. The algebra $\A$ is said to be of \textit{finite representation type} if there are only finitely many isomorphism classes of indecomposable representations. If it is not of finite representation type, the algebra is of \textit{infinite representation type}. The Dichotomy Theorem of Drozd \cite{Dr} says that if $\A$ is of infinite type, then $\A$ is of exactly one of the following two types: \begin{itemize} \item \textit{tame representation type} (or \textit{is tame}) if, for every integer $n$, there is an integer $k$ and finitely generated $\mathbf{C}[x]$-$\A$-bimodules $M_1,\dots,M_{k}$ which are free over $\mathbf{C} [x]$, such that for all but finitely many isomorphism classes of indecomposable right $\A$-modules $M$ of dimension $n$, there are elements $i\in\{1,\dots,k\}$ and $\lambda\in \mathbf{C}$, such that $M\cong \mathbf{C}[x]/(x-\lambda)\otimes_{\mathbf{C}[x]}M_i$.
\item \textit{wild representation type} (or \textit{is wild}) if there is a finitely generated $\mathbf{C} \langle X,Y\rangle$-$\A$-bimodule $M$ which is free over $\mathbf{C}\langle X,Y\rangle$ and sends non-isomorphic finite-dimensional indecomposable $\mathbf{C} \langle X,Y\rangle$-modules via the functor $\_\otimes_{\mathbf{C} \langle X,Y\rangle}M$ to non-isomorphic indecomposable $\A$-modules. \end{itemize} If $\A$ is a tame algebra, then there are at most one-parameter families of pairwise non-isomorphic indecomposable $\A$-modules; in the wild case there are families of representations of arbitrary dimension.\\[1ex] Several different criteria are available to determine the representation type of an algebra. We say that an algebra $\B = \mathbf{C} \Q'/I'$ is a \textit{full subcategory} of $\A = \mathbf{C} \Q/I$, if $\Q'$ is a \textit{convex subquiver} of $\Q$ (that is, a path closed full subquiver) and $I'$ is the restriction of $I$ to $\mathbf{C} \Q'$. \\[1ex] An indecomposable projective $P$ has \textit{separated radical} if, for any two non-isomorphic direct summands of its radical, their supports (as subsets of $\Q$) are disjoint. We say that $\A$ \textit{fulfills the separation condition} if every projective indecomposable has a separated radical. \\[1ex] In general, the definition of a \textit{strongly simply connected} algebra is quite involved. However, in the case of a triangular algebra $\A$ (meaning that the corresponding quiver $\Q$ has no oriented cycles) there is an equivalent description: $\A$ is \textit{strongly simply connected} if and only if every convex subcategory of $\A$ satisfies the separation condition \cite{Sko2}. For a triangular algebra $\A = \mathbf{C} \Q/I$, the \textit{Tits form} $q_{\A}:\mathbf{Z}^{\Q_0}\rightarrow \mathbf{Z}$ is the integral quadratic form defined by \[q_{\A}(v) = \sum_{i\in\Q_0} v_i^2 - \sum_{\alpha:i\rightarrow j\in\Q_1} v_iv_j + \sum_{i,j\in\Q_0} r(i,j)v_iv_j;\] for $v=(v_i)_i\in \mathbf{Z}^{\Q_0}$; here $r(i,j) := \dim \varepsilon_i R \varepsilon_j$, for any minimal generating subspace $R$ of $I$. \\[1ex] The quadratic form $q_{\A}$ is called \textit{weakly positive}, if $q_{\A}(v) > 0$ for every $v\in\mathbf{N}^{\Q_0}$; and \textit{(weakly) non-negative}, if $q_{\A}(v) \geq 0$ for every $v\in\mathbf{Z}^{\Q_0}$ (or $v\in\mathbf{N}^{\Q_0}$, respectively). These concepts are closely related to the representation type of $\A$ and many results are, for example, summarized by De la Pe\~na and Skowro\'{n}ski in \cite{DlPS}. There are many necessary and sufficient criteria for finite, tame and wild types available, for example by Bongartz \cite{Bo4} and Br\"ustle, De la Pe\~na and Skowro{\'n}ski \cite{BdlPS}. For our purposes, however, the following statement, which follows from these results, suffices.
\begin{lemma}\label{lem:wild_crit} Let $\A$ be strongly simply connected. Then $\A$ is of wild representation type if and only if there exists $v\in \mathbf{N}^{\Q_0}$ such that $q_{\A}(v)\leq -1$. \end{lemma} \subsection{Group actions} If the algebraic group $G$ acts on an affine variety $X$, then $X/G$ denotes the set of orbits and $X/\!/ G := \Spec \mathbf{C}[X]^G$ is the categorical quotient. The following is a well-known fact on associated fibre bundles \cite{Se}, which will help us translate certain group actions. \begin{lemma}\label{thm:basis_transl} Let $G$ be an algebraic group, let $X$ and $Y$ be $G$-varieties, and let $\pi : X \rar Y$ be a $G$-equivariant morphism. Assume that $Y$ is a single $G$-orbit, $Y = G \cdot y_0$. Let $H$ be the stabilizer of $y_0$ and set $F:= \pi^{-1} (y_0)$. Then $X$ is isomorphic to the associated fibre bundle $G\times^HF$, and the embedding $\phi: F \hookrightarrow X$ induces a bijection between $H$-orbits in $F$ and $G$-orbits in $X$ preserving orbit closures. \end{lemma} For an element $x \in \mathfrak{g} := \mathrm{Lie} \ G$, its centralizer in $G$ is denoted $Z_G(x)$, and its centralizer in $\mathfrak{g}$ is $Z_{\mathfrak{g}}(x)$. \section{Combinatorial objects}\label{sect:combi} In this section, we define the combinatorial objects that we use later. \subsection{(Frobenius) Partitions and Young diagrams}\label{ssect:part_YD} The set of all weakly decreasing partitions is denoted $\mathcal{P}$ and $\mathcal{P}_{\ell}$ denotes the set of all $\ell$-multipartitions. The subset of $\mathcal{P}$, resp. of $\mathcal{P}_{\ell}$, consisting of all partitions of $n\in\mathbf{N}$, resp. of all $\ell$-multipartitions of $n$, is denoted $\mathcal{P}(n)$, resp. $\mathcal{P}_{\ell}(n)$. Then $\Pa_2(n)$ is the set of \textit{bipartitions} of $n$, that is, of tuples of partitions $(\lambda,\mu)$, such that $\lambda\vdash m$ and $\mu\vdash n-m$ for some integer $m\leq n$. Given a partition $\lambda$, its \textit{Young diagram} is denoted by $Y(\lambda)$. \\[1ex] The transpose of the partition $\lambda$ is denoted $\lambda^t$ and we define $s(\lambda)$ to be the number of diagonal entries of $Y(\lambda)$, that is, $s(\lambda)=\sharp\{i\mid \lambda_i\geq i\}$. \begin{definition} We denote by $\Pa_F(n)$ the set of \textit{Frobenius partitions} of $n$.
That is, the set of pairs of tuples of strictly decreasing integers $(a_1 > \cdots > a_k \ge 0 )$ and $(b_1 > \cdots > b_k \ge 0)$ such that $\sum_{i=1}^k (a_i+b_i+1) = n$. We call $k$ the \textit{length} of $(\mathbf{a},\mathbf{b})$. \end{definition} It is a classical result that the set of Frobenius partitions $\Pa_F(n)$ can be naturally identified with the set $\mathcal{P}(n)$ of partitions of $n$. To be explicit, this is a bijection $\varphi: \Pa(n)\rightarrow \Pa_F(n)$ defined as follows:\\[1ex] For a partition $\lambda\vdash n$, let $\varphi(\lambda)\coloneqq (\mathbf{a}(\lambda),\mathbf{b}(\lambda))$, where $\mathbf{a}(\lambda)_i:=\lambda^t_i-i$ and $\mathbf{b}(\lambda)_i:=\lambda_i-i$ for $i\leq s(\lambda)$. Graphically speaking, the Frobenius partition can be read off the Young diagram $Y(\lambda)$: $a_i$ is the number of boxes below the $i$th diagonal box and $b_i$ the number of boxes to the right of the $i$th diagonal box. \\[1ex] We also associate to a Frobenius partition $(\mathbf{a},\mathbf{b})$ the strictly decreasing partition $$ P(\mathbf{a},\mathbf{b}) := (p_1,\dots,p_k), \quad \textrm{where} \quad p_i:=a_i+b_i+1. $$ One cannot recover $(\mathbf{a},\mathbf{b})$ from $P(\mathbf{a},\mathbf{b})$ in general. \begin{example} Let $(\mathbf{a},\mathbf{b})=((4,2,0),(6,3,0))$ be a Frobenius partition. Then $P(\mathbf{a},\mathbf{b})=(11,6,1)$. \begin{center} \begin{ytableau} &&&&&&&&&&\\ &&&&&\\ \\ \end{ytableau} \end{center} Moreover, $\varphi(7,5,3,2,1) = ((4,2,0),(6,3,0))$, which one reads off by splitting the Young diagram into its Frobenius hooks: \begin{center} \begin{ytableau} s_1&*(lightblue)1&*(lightblue)2& *(lightblue)3&*(lightblue)4&*(lightblue)5& *(lightblue)6\\ *(lightred)4&s_2&*(lightblue)1&*(lightblue)2 &*(lightblue)3\\ *(lightred)3&*(lightred)2& s_3\\ *(lightred)2&*(lightred)1\\ *(lightred)1\\ \end{ytableau} \end{center} \end{example} \subsubsection{Weights} If $\lambda = (\lambda_1 \ge \cdots \ge \lambda_k > 0)$ is a non-trivial partition, define $$ \mathrm{wt}_{\ell}(\lambda) := | \{ -k < i < \lambda_1 \ | \ i \equiv 0 \ \mathrm{mod} \ \ell \}| $$ to be the \textit{$\ell$-weight} of $\lambda$. If $\varphi(\lambda) = (a_1 > \cdots ; b_1 > \cdots )$ is its Frobenius form, then $\mathrm{wt}_{\ell}(\lambda)$ equals $| \{ -a_1 \le i \le b_1 \ | \ i \equiv 0 \ \mathrm{mod} \ \ell \}|$. Pictorially, one simply counts the number of boxes of content $0 \ \mathrm{mod} \ \ell$ in the first Frobenius hook of $\lambda$. \subsection{The affine root system of type $\mathsf{A}$}\label{sec:affineA} Throughout the article, $\Q(\ell)$ will denote the cyclic quiver with $\ell$ vertices, whose underlying graph is the Dynkin diagram of type $\widetilde{\mathsf{A}}_{\ell-1}$. Then, as explained in the introduction, $\Q_{\infty}(\ell)$ is the framed cyclic quiver. We denote by $R \subset \Z \Q(\ell)_0$ the set of \textit{roots} and $R^+ = R \cap \mathbf{N} \Q(\ell)_0$ the subset of \textit{positive roots}. If $\delta = (1, \dots, 1)$ denotes the minimal imaginary root and $\Phi := \{ \alpha \in R \ | \ \varepsilon_0 \cdot \alpha = 0 \}$ is the finite root system of type $\mathsf{A}_{\ell-1}$, then $$ R = \{ n \delta + \alpha \ | \ n \in \Z, \ \alpha \in \Phi \cup \{ 0 \} \} \smallsetminus \{ 0 \}. $$ Let $\lambda$ be a partition. Recall that the content $\mathrm{ct}(\Box)$ of the box $\Box \in Y(\lambda)$ in position $(i,j)$ is the integer $j - i$. We fix a generator $\sigma$ of the cyclic group $\Z_{\ell}$.
Given a partition $\lambda$, the $\ell$-residue of $\lambda$ is defined to be the element $\mathrm{res}_{\ell}(\lambda) := \sum_{\Box \in \lambda} \sigma^{\mathrm{ct}(\Box)}$ in the group algebra $\Z[\Z_{\ell}]$. Similarly, given an $\ell$-multipartition $\nu$, the shifted $\ell$-residue of $\nu$ is defined to be $$ \mathrm{sres}_{\ell}(\nu) = \sum_{i = 0}^{\ell-1} \sigma^i \mathrm{res}_{\ell}(\nu^{(i)}). $$ We identify the root lattice $\Z \Q(\ell)_0$ of $\Q(\ell)$ with $\Z[\Z_{\ell}]$ by $\varepsilon_i \mapsto \sigma^i$. \subsection{Matrices} We fix the nilpotent cone $\N$ of nilpotent matrices in $\mathfrak{gl}(V)$. The $\GL(V)$-conjugacy classes in $\N$ are labeled by their Jordan normal forms. In order to make use of these concepts later, we fix some notation here. We denote by $J_k$ the nilpotent Jordan block of size $k$; and by $J_{\lambda}$ the nilpotent matrix $J_{\lambda_1} \oplus \cdots \oplus J_{\lambda_k}$ in Jordan normal form of block sizes $\lambda_1 \ge \cdots \ge \lambda_k$. \subsection{Circle diagrams}\label{sec:circle} Given a positive integer $\ell > 0$, we define a \textit{circle diagram} of type $\ell$ to be a quiver $C$, whose set $C_0$ of vertices is partitioned into $\ell$ blocks $b_0,\dots,b_{\ell-1}$, such that each vertex has at most one incoming arrow and at most one outgoing arrow; an arrow can only be drawn from a vertex in block $b_i$ to a vertex in block $b_{i+1}$, or from a vertex in block $b_{\ell-1}$ to a vertex in block $b_{0}$; and there are no oriented cycles. We say that the vertices in block $b_i$ are \textit{in position $i$} and call the vector $\mathbf{d}= (d_0,\dots,d_{\ell-1})$, where $d_i$ is the number of vertices in position $i$, the \textit{dimension vector} of $C$. Given a circle diagram, each maximal connected path of arrows is called a \textit{circle}. The number of arrows in a circle is its \textit{length}. \begin{example} A circle diagram of dimension vector $(2,3,3,2)$ is given by \begin{center} \scalebox{0.6}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, thick, node/.style={}] \node (1) {$\bullet$}; \node (1') [left=0.25 of 1] {$\bullet$}; \node (1'') [left=0.15 of 1'] {\textbf{$b_0$}}; \node (2) [above right=0.5 and 0.5 of 1] {$\bullet$}; \node (2') [above=0.25 of 2] {$\bullet$}; \node (2'') [above=0.25 of 2'] {$\bullet$}; \node (2''') [above=0.15 of 2''] {\textbf{$b_1$}}; \node (3) [below right=0.5 and 0.5 of 2] {$\bullet$}; \node (3') [right=0.25 of 3] {$\bullet$}; \node (3'') [right=0.25 of 3'] {$\bullet$}; \node (3''') [right=0.15 of 3''] {\textbf{$b_2$}}; \node (4) [below left=0.5 and 0.5 of 3] {$\bullet$}; \node (4') [below=0.25 of 4] {$\bullet$}; \node (4'') [below=0.15 of 4'] {\textbf{$b_3$}}; \path[->] (1) edge [bend left=15] (2') (2) edge [bend left=15] (3) (3) edge [bend left=15] (4) (4) edge [bend left=15] (1) (3') edge [bend left=15] (4') (4') edge [bend left=15] (1') (2'') edge [bend left=15] (3''); \end{tikzpicture}} \end{center} This circle diagram has one circle of length $1$, one circle of length $2$ and one circle of length $4$. \end{example} The set of all circle diagrams of type $\ell$, modulo permutation of vertices in the same position, is denoted $\Ca(\ell)$. The subset consisting of all circle diagrams, whose circles have length at most $x$, is denoted $\Ca^{(x)}(\ell)$.
Furthermore, we denote by $\ell(C)$ the length of a circle diagram $C$, that is, the number of circles in the diagram.\\[1ex] A \textit{Frobenius circle diagram} is a circle diagram $C$ of type $\ell$, with $t$ circles, such that: \begin{enumerate} \item Each circle $C(i)$ contains a distinguished (or \textit{marked}) vertex $s_i$ in position $0$. \item If $a_i$ is the number of vertices preceding $s_i$ in the circle, and $b_i$ the number of vertices following $s_i$ in the circle, then, after possibly relabelling circles, $$ \mathbf{a}=(a_{1},\dots,a_{t}), \quad \mathbf{b}=(b_1,\dots,b_{t}) $$ determine a Frobenius partition. \end{enumerate} The set of Frobenius circle diagrams is denoted $\Ca_F(\ell)$. \begin{example} A Frobenius circle diagram of dimension vector $(4,5,4,3)$ is given by \begin{center} \scalebox{0.6}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, thick, node/.style={}] \node (1) {\fbox{$s_1$}}; \node (1') [left=0.25 of 1] {$\bullet$}; \node (1'') [left=0.25 of 1'] {\fbox{$s_2$}}; \node (1''') [left=0.25 of 1''] {\fbox{$s_3$}}; \node (2) [above right=0.5 and 0.5 of 1] {$\bullet_1$}; \node (2') [above=0.25 of 2] {$\bullet$}; \node (2'') [above=0.25 of 2'] {$\bullet$}; \node (2''') [above=0.25 of 2''] {$\bullet$}; \node (2'''') [above=0.25 of 2'''] {$\bullet$}; \node (3) [below right=0.5 and 0.5 of 2] {$\bullet_2$}; \node (3') [right=0.25 of 3] {$\bullet$}; \node (3'') [right=0.25 of 3'] {$\bullet$}; \node (3''') [right=0.25 of 3''] {$\bullet$}; \node (4) [below left=0.5 and 0.5 of 3] {$\bullet_{3}$}; \node (4') [below=0.25 of 4] {$\bullet$}; \node (4'') [below=0.25 of 4'] {$\bullet$}; \path[->] (1) edge [bend left=15] (2') (2) edge [bend left=15] (3) (3) edge [bend left=15] (4) (4) edge [bend left=15] (1) (2') edge [bend left=15] (3') (3') edge [bend left=15] (4') (3'') edge [bend left=15] (4'') (4') edge [bend left=15] (1') (1') edge [bend left=15] (2'') (1''') edge [bend left=15] (2'''') (1'') edge [bend left=15] (2''') (2''') edge [bend left=15] (3''') (4'') edge [bend left=15] (1''); \end{tikzpicture}} \end{center} There are three circles: one circle of length $8$, whose mark $s_1$ is its fourth vertex; one circle of length $4$, whose mark $s_2$ is its third vertex; and one circle of length $1$, whose mark $s_3$ is its first vertex.
The associated Frobenius partition is $((3,2,0),(5,2,1))$, which corresponds to the partition $(6,4,4,2)$, as visualized by the diagrams \begin{center}\small\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.8em, column sep=0.8em, text height=0.6ex, text depth=0.2ex] { \bullet & \bullet & \bullet & \bullet &\bullet & \bullet\\ \bullet & \bullet & \bullet & \bullet & & \\ \bullet & \bullet & \bullet & \bullet && \\ \bullet & \bullet & & & & \\ }; \path[->] (m-1-1) edge (m-1-2) (m-1-2) edge (m-1-3) (m-1-3) edge (m-1-4) (m-1-4) edge (m-1-5) (m-1-5) edge (m-1-6) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) (m-3-3) edge (m-3-4) (m-4-1) edge (m-3-1) (m-3-1) edge (m-2-1) (m-2-1) edge (m-1-1) (m-4-2) edge (m-3-2) (m-3-2) edge (m-2-2) ; \draw [blue!10!white,fill=blue!10!white] (3,-1) -- (3,1) -- (6,1) -- (6,0.5) -- (3.5,0.5) -- (3.5,-1) -- (3,-1); \draw [blue!10!white,fill=blue!10!white] (4,-0.5) -- (4,0) -- (5,0) -- (5,-0.5) -- (4,-0.5); \draw (3,0) --(3,-0.5) -- (3,-1) -- (3.5,-1) -- (4,-1) --(4,-0.5) -- (5,-0.5) -- (5,0.5) -- (6,0.5) -- (6,1) -- (3,1) -- (3,0); \draw (3,0) --(5,0); \draw (3,-0.5) -- (4,-0.5); \draw (3,0.5) -- (5,0.5); \draw (3.5,1) -- (3.5,-1); \draw (4,1) --(4,-0.5); \draw (4.5,1) -- (4.5,-0.5); \draw (5,1) --(5,0.5); \draw (5.5,1) -- (5.5,0.5); \node at (2,0) {$=$}; \node at (3.25,0.75) {$s_1$}; \node at (3.75,0.25) {$s_2$}; \node at (4.25,-0.25) {$s_3$}; \end{tikzpicture} \end{center} \end{example} Clearly, not every circle diagram with arbitrary marks yields a Frobenius circle diagram, as the following counterexample shows. \begin{counterexample} Consider the circle diagram with marks $s_1, s_2$ and $s_3$: \begin{center} \scalebox{0.6}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, thick, node/.style={}] \node (1) {$\bullet$}; \node (1') [left=0.25 of 1] {\fbox{$s_1$}}; \node (1'') [left=0.25 of 1'] {\fbox{$s_2$}}; \node (1''') [left=0.25 of 1''] {\fbox{$s_3$}}; \node (2) [above right=0.5 and 0.5 of 1] {$\bullet_1$}; \node (2') [above=0.25 of 2] {$\bullet$}; \node (2'') [above=0.25 of 2'] {$\bullet$}; \node (2''') [above=0.25 of 2''] {$\bullet$}; \node (3) [below right=0.5 and 0.5 of 2] {$\bullet_2$}; \node (3') [right=0.25 of 3] {$\bullet$}; \node (3'') [right=0.25 of 3'] {$\bullet$}; \node (3''') [right=0.25 of 3''] {$\bullet$}; \node (4) [below left=0.5 and 0.5 of 3] {$\bullet_{3}$}; \node (4') [below=0.25 of 4] {$\bullet$}; \node (4'') [below=0.25 of 4'] {$\bullet$}; \path[->] (1) edge [bend left=15] (2) (2) edge [bend left=15] (3) (3) edge [bend left=15] (4) (4) edge [bend left=15] (1') (1') edge [bend left=15] (2') (3') edge [bend left=15] (4') (4') edge [bend left=15] (1'') (1'') edge [bend left=15] (2'') (3'') edge [bend left=15] (4'') (4'') edge [bend left=15] (1''') (2''') edge [bend left=15] (3''') (1''') edge [bend left=15] (2''') ;\end{tikzpicture}} \end{center} This does not correspond to any Frobenius partition: the numbers of vertices preceding the marked vertices are $4$, $2$ and $2$, which do not form a strictly decreasing sequence. \end{counterexample} By definition, each Frobenius circle diagram gives rise to a partition. Conversely, if a partition $(\mathbf{a},\mathbf{b})$ is given in Frobenius form, then for each Frobenius hook $(a_i,b_i)$, we construct a circle $C(i)$ whose vertices are in bijection with the boxes of the hook, a vertex $u$ being in position $i$ if the content of the corresponding box equals $i$ modulo $\ell$. Then there is an arrow from vertex $u$ to vertex $v$ if the box of $v$ is above, or to the right of $u$, in the hook. Finally, the vertex $s_i$ corresponding to the hinge of the hook will be in position $0$. We mark this vertex.
In this way, we get a Frobenius circle diagram. It is straightforward to check that this defines a bijection between the set of all Frobenius circle diagrams and the set of all partitions. \\[1ex] The weight of a circle is simply the number of vertices in block zero (or the number of times the circle passes through position zero). The \textit{weight} $\mathrm{wt}_{\ell}(C)$ of a Frobenius circle diagram is the weight of the longest circle. This notion is defined so that the weight of a Frobenius circle diagram equals the weight of the corresponding partition. \section{The enhanced nilpotent cone}\label{sect:enc} Let $V$ be an $n$-dimensional complex vector space, and $\N(V) \subset \End(V)$ the nilpotent cone. We denote by $\N(V)^{(x)}$ the closed subvariety $\{ \varphi \ | \ \varphi^x=0 \}$ of $x$-nilpotent endomorphisms. Each parabolic subgroup $P\subseteq \GL(V)$ acts by conjugation on $\N(V)^{(x)}$. This action has been studied in \cite{B2}. In particular, the main result of \textit{loc. cit.}, together with Theorem \ref{thm:enc_bijection} below, implies that: \begin{theorem} There are only finitely many $P$-orbits in $\N(V)^{(x)}$ if and only if \begin{enumerate} \item $x\leq 2$; \item $P$ is maximal and $x=3$; or \item there exists a basis of $V$ for which $P$ has upper-block shape $(1,n-1)$ or $(n-1,1)$. \end{enumerate} \end{theorem} Cases 1. and 2. are described in detail in \cite{B2}. Let $\Q_{\infty}$ be the framed Jordan quiver: \begin{center}\small\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.01em, column sep=1.5em, text height=0.5ex, text depth=0.1ex] { & \bullet & \bullet \\ & \infty & 0 \\ }; \path[->] (m-1-2) edge node[above=0.05cm] {$v$} (m-1-3) (m-1-3) edge [loop right] node{$\varphi$} (m-1-3);\end{tikzpicture} \end{center} For each $x \ge 2$, define the algebra $\A_{\infty}(x) \coloneqq \mathbf{C} \Q_{\infty}/I_x$, where $I_x = (\varphi^x)$ is admissible. We fix the dimension vector $\mathbf{d} = (d_{\infty},d_0)\coloneqq(1,n)$. The group $\GL_{\mathbf{d}} = \GL_1 \times \GL(V)$ acts on $\Rep(\A_{\infty}(x),\mathbf{d})$ and its orbits are the same as the orbits of $\GL(V)$. Therefore we consider $\Rep(\A_{\infty}(x),\mathbf{d})$ as a $\GL(V)$-variety. As such, we have a canonical identification $\Rep(\A_{\infty}(x),\mathbf{d}) = V \times \N(V)^{(x)}$, where $\GL(V)$ acts on the latter by $g.(v,N) = (g\cdot v,g\cdot N\cdot g^{-1})$ for all $g\in \GL(V), v\in V$ and $N\in \N(V)^{(x)}$. This setup is known as the \textit{enhanced nilpotent cone}. Let $\Rep(\A_{\infty}(x),\mathbf{d})^{\circ}$ be the $\GL(V)$-stable open subset consisting of all representations where $v \neq 0$ and $V^{\circ} := V \smallsetminus \{ 0 \}$ so that $\Rep(\A_{\infty}(x),\mathbf{d})^{\circ} = V^{\circ} \times \N(V)^{(x)}$. Let $P\subseteq\GL(V)$ be the parabolic subgroup of upper-block shape $(1,n-1)$ with respect to a fixed basis, and let $P' \subset P$ be the subgroup where the entry in the $1 \times 1$ block is $1$. \begin{theorem} \label{thm:enc_bijection} There is an isomorphism of $\GL(V)$-varieties (resp. $\GL_{\mathbf{d}}$-varieties) \begin{enumerate} \item[(1)] $V^{\circ} \times \N(V)^{(x)} \simeq \GL(V) \times^{P'} \N(V)^{(x)}$. \item[(2)] $\Rep(\A_{\infty}(x),\mathbf{d})^{\circ} \simeq \GL_{\mathbf{d}} \times^{P} \N(V)^{(x)}$. \end{enumerate} \end{theorem} \begin{proof} Part (1). Choose $v \in V^{\circ}$ such that $P' = \mathrm{Stab}_{\GL(V)}(v)$. Since $V^{\circ} = \GL(V) \cdot v = \GL(V) / P'$, the isomorphism follows from Lemma \ref{thm:basis_transl} applied to the $\GL(V)$-equivariant projection map $V^{\circ} \times \N(V)^{(x)} \rightarrow V^{\circ}$. \\ Part (2).
There is a well-defined group homomorphism $\eta : P \rightarrow \mathbf{C}^{\times}$ given by projection onto the $1 \times 1$ block. Embed $P \hookrightarrow \GL_{\mathbf{d}} = \GL_1 \times \GL(V)$ by $p \mapsto (\eta(p^{-1}),p)$. Then, again, we choose $v \in V^{\circ} \subset \Rep(\A_{\infty}(x),\mathbf{d})^{\circ}$ such that $\mathrm{Stab}_{\GL_{\mathbf{d}}}(v) = P$. Since $V^{\circ} = \GL_{\mathbf{d}} \cdot v \simeq \GL_{\mathbf{d}}/ P$, the isomorphism again follows from Lemma \ref{thm:basis_transl} applied to the $\GL_{\mathbf{d}}$-equivariant projection map $\Rep(\A_{\infty}(x),\mathbf{d})^{\circ} \rightarrow V^{\circ}$. \end{proof} Thus, there are bijections between the sets of orbits \[ \left(V\times \N(V)^{(x)}\right)/\GL(V) ~~\xleftarrow{\alpha}~~\N^{(x)}/P'~~ \xrightarrow{\beta}~~ \Rep(\A_{\infty}(x),\mathbf{d})^{\circ}/\GL_{\mathbf{d}}. \] These bijections preserve orbit closure relations, dimensions of stabilizers (of single points) and codimensions of orbits. Therefore the closure order, orbit dimensions and singularity types of the orbits in $\left(V\times \N(V)^{(x)}\right)/\GL(V)$, that were obtained in \cite{AH}, can be translated into the corresponding information for orbits in $\N^{(x)}/P'$. \subsection{Representation types} We begin our examination of the representation theory of the algebra $\A_{\infty}(x)$ by determining for which dimension vectors there are infinitely many isomorphism classes of representations, before discussing the representation type of $\A_{\infty}(x)$. \begin{lemma}\label{lem:enc_dimv} There are only finitely many isomorphism classes of $\A_{\infty}(x)$-representations of dimension vector $\mathbf{d} = (d_{\infty},d_0)$ if and only if $d_{\infty}\leq1$ or $x\leq 3$. \end{lemma} \begin{proof} Finiteness for $d_{\infty}=1$ follows from \cite{AH} (for $d_{\infty}=0$ it is immediate from the Jordan normal form), and finiteness for $x\leq 3$ follows from \cite{B2}. The fact that there are infinitely many isomorphism classes in all other cases was shown in \cite{B2}, where explicit one-parameter families were constructed. \end{proof} In order to decide whether the algebra $\A_{\infty}(x)$ is of finite representation type, tame or wild, we look at the universal covering quiver $\Gamma_{\infty}$ of $\Q_{\infty}$ \cite{Ga3}: \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.65em, column sep=1.5em, text height=1.1ex, text depth=0.2ex] {\vdots & \vdots\\ \bullet & \bullet\\ \bullet & \bullet\\ \bullet & \bullet\\ \vdots & \vdots \\}; \path[->] (m-2-1) edge node[above=0.05cm] {$v^{(1)}$} (m-2-2) (m-3-1) edge node[above=0.05cm] {$v^{(0)}$} (m-3-2) (m-4-1) edge node[above=0.05cm] {$v^{(-1)}$} (m-4-2) (m-2-2) edge node[right]{$\varphi^{(1)}$} (m-3-2) (m-3-2) edge node[right]{$\varphi^{(0)}$} (m-4-2);\end{tikzpicture}\] where $v^{(i)} : \varepsilon_{\infty}^{(i)} \rightarrow \varepsilon_{0}^{(i)}$ and $\varphi^{(i)} : \varepsilon_{0}^{(i)} \rightarrow \varepsilon_{0}^{(i-1)}$. Together with the admissible ideal $(\varphi^x)$ (this notation means that the ideal is generated by every vertical path of length $x$), we obtain the covering algebra $\Gamma_{\infty}(x):= \mathbf{C} \Gamma_{\infty} /(\varphi^x)$. If $\Gamma_{\infty}(x)$ is of wild representation type, then via the covering functor \cite{Ga3}, the algebra $\A_{\infty}(x)$ is of wild representation type, as well. \begin{lemma}\label{lem:enc_reptype} The algebra $\A_{\infty}(x)$ is of finite representation type if and only if $x\leq 3$, and of wild representation type otherwise. \end{lemma} \begin{proof} Finiteness for $x\leq 3$ follows from Lemma \ref{lem:enc_dimv}.
The algebra $\Gamma_{\infty}(x)$ is strongly simply connected, since every convex subcategory is triangular and fulfills the separation condition: the radicals of all projective indecomposables are indecomposable. Thus, Lemma \ref{lem:wild_crit} implies that $\Gamma_{\infty}(x)$ has wild representation type if and only if there is a dimension vector $\mathbf{d}\in\mathbf{N}^{(\Gamma_{\infty})_0}$, such that $q_{\Gamma_{\infty}(x)}(\mathbf{d})\leq -1$. If $x\geq 4$, one such dimension vector is: \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.1em, column sep=-0.5em, text height=0.1ex, text depth=0.1ex] { 1 & 2\\ 2 & 3\\ 2 & 3\\ 1 & 2\\ }; \end{tikzpicture} \end{center} Hence $\A_{\infty}(x)$ is wild, too. \end{proof} \subsection{The indecomposable representations}\label{ssect:enc_indec} In this section, we classify all indecomposable representations $M$ of $\A_{\infty}(x)$ that have the property that $(\dim M)_{\infty} \le 1$. \subsubsection{Classification of indecomposables of dimension vector $(0,n)$}\label{sssect:enc_indecs0} The Jordan normal form implies that there is (up to isomorphism) exactly one indecomposable representation with dimension vector $(0,n)$, which is given by the Jordan block of size $n$. We denote the natural indecomposable representative by \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.05em, column sep=2em, text height=1.5ex, text depth=0.2ex] {U(0,n) : &0 & \mathbf{C}^{n}\\ }; \path[->] (m-1-2) edge node[above=0.05cm] {} (m-1-3) (m-1-3) edge [loop right] node{$J_n$} (m-1-3);\end{tikzpicture}\] \subsubsection{Classification of indecomposables of dimension vector $(1,n)$}\label{sssect:enc_indecs1} Some additional work is required to understand the indecomposable representations with dimension vector $(1,n)$. We recall some notions from \cite{AH}. First, given a nilpotent matrix $X$ of type $\lambda \vdash n$, a \textit{Jordan basis} $\{ v_{i,j} \ | \ 1 \le i \le \ell(\lambda), \ 1 \le j \le \lambda_i \}$ is a basis of $V$ such that $X(v_{i,j}) = v_{i,j-1}$ if $j > 1$ and $0$ otherwise. Similarly, if $(v,X)$ is a representation of $\mathcal{A}_{\infty}(n)$ of dimension vector $(1,n)$, then a \textit{normal basis} for $(v,X)$ is a Jordan basis such that $$ v = \sum_{i = 1}^{\ell(\mu)} v_{i,\mu_i}, $$ where $\mu \subset \lambda$ is a partition such that $\nu_i = \lambda_i - \mu_i$ also defines a partition. Thus, given a bipartition $(\mu;\nu)$ of $n$, by choosing a normal basis, one gets an element $(v,X) \in \N_{\infty}(n)$. By \cite[Proposition 2.3]{AH}, the orbit of $(v,X)$ does not depend on the choice of normal basis and the rule $(\mu;\nu) \mapsto G \cdot (v,X) =: \Xi(\mu;\nu)$ is a bijection $\Xi : \mathcal{P}_2(n) \rightarrow \N_{\infty}(n)/G$. \begin{lemma}\label{lem:enc_indec1n} There is a bijection $\Upsilon'$ from $\Pa_F(n)$ to the set of indecomposable representations (up to isomorphism) of $\A_{\infty}(n)$ with dimension vector $(1,n)$. Given a Frobenius partition $(\mathbf{a},\mathbf{b})$ of length $k$, the latter are represented by \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.05em, column sep=2em, text height=1.5ex, text depth=0.2ex] {\Upsilon'(\mathbf{a},\mathbf{b}) : & \mathbf{C} & \mathbf{C}^{n}\\ }; \path[->] (m-1-2) edge node[above=0.05cm] {v} (m-1-3) (m-1-3) edge [loop right] node{$J_{P(\mathbf{a},\mathbf{b})}$} (m-1-3);\end{tikzpicture} \] where $v=\sum_{i=1}^{k} v_{i,a_i +1}$, with respect to a Jordan basis $\{ v_{i,j} \}$ for $J_{P(\mathbf{a},\mathbf{b})}$.
\end{lemma} \begin{remark} Lemma \ref{lem:enc_indec1n} also appeared in the recent preprint \cite{DoGinT}, as Lemma 11.2.1. \end{remark} \begin{example} Consider the Frobenius partition $(\mathbf{a},\mathbf{b})=((1,0),(3,0))$. Then $P(\mathbf{a},\mathbf{b})=(5,1)$. Thus, we can find a basis $\{ e_i \}$ of $\mathbf{C}^6$, such that \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.05em, column sep=2em, text height=1.5ex, text depth=0.2ex] {\Upsilon'(\mathbf{a},\mathbf{b}) : & \mathbf{C} & & \mathbf{C}^{6}\\ }; \path[->] (m-1-2) edge node[above=0.05cm] {$e_2 + e_6$} (m-1-4) (m-1-4) edge [loop right] node{$J_{(5,1)}$} (m-1-4);\end{tikzpicture} \] \end{example} \begin{proof}[Proof of Lemma \ref{lem:enc_indec1n}] Given a Frobenius partition $(\mathbf{a},\mathbf{b})$ of length $k$, let $\mu = (a_1 + 1, \dots,a_k +1)$ and $\nu = (b_1, \dots, b_k)$ so that $(\mu,\nu)$ is a bipartition of $n$ with $\mu + \nu = P(\mathbf{a},\mathbf{b})$. This identifies $\mathcal{P}_F(n)$ with the subset $$ \mathcal{P}_{2,F}(n) := \{ (\mu,\nu) \in \mathcal{P}_2(n) \ | \ \mu_1 > \cdots > \mu_k > 0, \textrm{ and } \nu_1 > \cdots > \nu_k \ge 0 \}. $$ Then we need to show that a) if $(v,X) \in \mathcal{O}_{(\mu,\nu)}$ with $(\mu,\nu) \notin \mathcal{P}_{2,F}(n)$, then $(v,X)$ is decomposable; and b) if $(\mu,\nu) \in \mathcal{P}_{2,F}(n)$, then $(v,X)$ is indecomposable. \begin{enumerate} \item[(a)] We assume that $(\mu,\nu) \notin \mathcal{P}_{2,F}(n)$. Let $\lambda = \mu + \nu$. If $\ell(\lambda) = 1$, then $(\mu;\nu) = ((n-k),(k))$ for some $k \le n$. These all belong to $\mathcal{P}_{2,F}(n)$, except when $k = n$. This corresponds to $X$ being a single Jordan block and $v = 0$, which is clearly decomposable. Also, we note that if $\mu_k = 0$ and $\nu_k \neq 0$, then let $V_1$ be the span of $\{ v_{i,j} \ | \ i < k \}$ and $V_2$ the span of the $\{ v_{k,j} \}$. Then $v \in V_1$ and $X(V_i) \subset V_i$ (with $X |_{V_2}$ a Jordan block of length $\nu_k$). Thus, $(v,X)$ is decomposable. Therefore, we may assume that $k > 1$ and $\mu_k \neq 0$. There exists some $i < k$ such that $\mu_i = \mu_{i+1}$ or $\nu_{i} = \nu_{i+1}$. We begin by assuming that $\mu_i = \mu_{i+1}$. It is enough to assume that $i = 1$ and $\lambda = (\lambda_1, \lambda_2)$. Let $v_{1,1}, \dots, v_{1,\lambda_1}, v_{2,1}, \dots, v_{2,\lambda_{2}}$ be a normal basis of $V$ with $v = v_{1,\mu_1} + v_{2,\mu_2}$. There are two subcases: \begin{itemize} \item[(i)] $\lambda_{1} > \lambda_{2}$. We take a new basis $v_{1,\lambda_1}' = v_{1,\lambda_1} + v_{2,\lambda_2}, v_{1,\lambda_1 -1}' = v_{1,\lambda_1 -1} + v_{2,\lambda_2 -1}, \dots$ and $v_{2,\lambda_2}' = v_{1,\lambda_2} + v_{2,\lambda_2}, v_{2,\lambda_2 -1}' = v_{1,\lambda_2 -1} + v_{2,\lambda_2 -1}, \dots$. Then $v$ belongs to the subspace spanned by the $v_{2,j}'$ and the representation is decomposable. We note that with respect to the new basis $(X,v)$ has type $\mu' = (0,\mu_2), \nu' = (\mu_1 + \nu_1,\nu_{2})$, which is \textit{not} a normal form in the sense of \cite{AH}. \item[(ii)] $\lambda_1 = \lambda_2$. We take a new basis $v_{1,\lambda_1}' = v_{1,\lambda_1} - v_{2,\lambda_1}, v_{1,\lambda_1 -1}' = v_{1,\lambda_1 -1} - v_{2,\lambda_1 -1}, \dots$ and $v_{2,\lambda_2}' = v_{1,\lambda_1} + v_{2,\lambda_1}, v_{2,\lambda_2 -1}' = v_{1,\lambda_1 -1} + v_{2,\lambda_1 -1}, \dots$. Again $v$ is in the subspace spanned by the $v_{2,j}'$ and $(X,v)$ has type $\mu' = (0,\mu_2), \nu' = (\mu_1 + \nu_1,\nu_{2})$, which is \textit{not} a normal form.
\end{itemize} When $\nu_{i} = \nu_{i+1}$, we can take $i = 1$ again. Let $v_{1,1}, \dots, v_{1,\lambda_1}, v_{2,1}, \dots, v_{2,\lambda_{2}}$ be a normal basis of $V$ with $$ v = v_{1,\mu_1} + v_{2,\mu_2} = v_{1,\lambda_1 - \nu_1} + v_{2,\lambda_2 - \nu_2}. $$ Again, one considers the two subcases (i) $\lambda_{1} > \lambda_{2}$ and (ii) $\lambda_1 = \lambda_2$. Repeating the above argument shows that these representations are decomposable. We note that, when one takes the new basis as above, $(v,X)$ has type $\mu' = (\mu_1,0)$, $\nu' = (\nu_1,\mu_2 + \nu_{2})$ in both subcases. \item[(b)] Take $(\mu;\nu) \in \mathcal{P}_{2,F}(n)$, and assume that the corresponding representation is decomposable, i.e.\ $V = V_1 \oplus V_2$ with $v \in V_1$ and $X(V_i) \subset V_i$. By \cite[Corollary 2.9]{AH}, the Jordan type of $X |_{Z_{\mathfrak{g}}(X) \cdot v}$ is $\mu$ and the type of $X |_{V / Z_{\mathfrak{g}}(X) \cdot v}$ is $\nu$. If the Jordan type of $X |_{V_1}$ is $\eta$ and the type of $X |_{V_2}$ is $\zeta$, then the fact that $Z_{\mathfrak{g}}(X) \cdot v \subset V_1$ implies that $\mu \subseteq \eta$ and $\zeta \subseteq \nu$. Here $\lambda = \eta \sqcup \zeta$. The fact that $V_2 \neq 0$ implies that there exists some $i$ such that $\mu_i = 0$ but $\nu_i \neq 0$. But this contradicts the fact that $(\mu;\nu) \in \mathcal{P}_{2,F}(n)$. \end{enumerate} \end{proof} The bijection $\varphi: \Pa(n)\rightarrow \Pa_F(n)$ from Subsection \ref{ssect:part_YD} immediately yields the following corollary. \begin{corollary} There is a bijection $\Upsilon$ between $\Pa(n)$ and the set of indecomposable representations (up to isomorphism) of dimension vector $(1,n)$, given by $\Upsilon\coloneqq\Upsilon'\circ \varphi$. \end{corollary} \begin{example} The indecomposable representations of dimension vector $(1,6)$ are (up to isomorphism) given by $$\begin{tabular}{|l|l|l|l|}\hline Jordan blocks & Representation & Frobenius partition & Partition\\ \hline (6) & $(e_1,J_6)$ & $((0),(5))$ & $(6)$\\ \hline (6) & $(e_2,J_6)$ & $((1),(4))$ & $(5,1)$\\ \hline (6) & $(e_3,J_6)$ & $((2),(3))$ & $(4,1,1)$\\ \hline (6) & $(e_4,J_6)$ & $((3),(2))$ & $(3,1,1,1)$\\ \hline (6) & $(e_5,J_6)$ & $((4),(1))$ & $(2,1,1,1,1)$\\ \hline (6) & $(e_6,J_6)$ & $((5),(0))$ & $(1,1,1,1,1,1)$\\ \hline (5,1) & $(e_2+e_6,J_{5,1})$ & $((1,0),(3,0))$ & $(4,2)$ \\ \hline (5,1) & $(e_3+e_6,J_{5,1})$ & $((2,0),(2,0))$ & $(3,2,1)$ \\ \hline (5,1) & $(e_4+e_6,J_{5,1})$ & $((3,0),(1,0))$ & $(2,2,1,1)$ \\ \hline (4,2) & $(e_2+e_5,J_{4,2})$ & $((1,0),(2,1))$ & $(3,3)$ \\ \hline (4,2) & $(e_3+e_6,J_{4,2})$ & $((2,1),(1,0))$ & $(2,2,2)$ \\\hline \end{tabular}$$ \end{example} Given a partition $\lambda$, we write $M_{\lambda} := \Upsilon(\lambda)$. Note that the Jordan block sizes of $M_\lambda$ are not given by $\lambda$, but by $P(\varphi({\lambda}))$.
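The passage from a partition to its Frobenius partition and to the Jordan block sizes of $M_{\lambda}$ is easy to mechanize. The following minimal Python sketch is our own illustration (all function names are ours); the convention for $\varphi$ is the one read off from the table above, namely that $\varphi$ swaps the arm and leg coordinates of the usual Frobenius form of $\lambda$.
\begin{verbatim}
def frobenius(la):
    # Standard Frobenius coordinates (arms a, legs b) of a partition la.
    conj = [sum(1 for p in la if p > j) for j in range(la[0])]  # conjugate partition
    r = sum(1 for i, p in enumerate(la) if p > i)               # Durfee square size
    return [la[i] - i - 1 for i in range(r)], [conj[j] - j - 1 for j in range(r)]

def phi(la):
    # phi: Pa(n) -> Pa_F(n); in the convention of the table above,
    # phi(la) swaps the arm and leg coordinates.
    a, b = frobenius(la)
    return b, a

def jordan_type(la):
    # Jordan block sizes P(phi(la)), i.e. the hook lengths a_i + b_i + 1.
    a, b = phi(la)
    return sorted((x + y + 1 for x, y in zip(a, b)), reverse=True)

# Spot checks against the table of indecomposables for (1,6):
assert phi([4, 2]) == ([1, 0], [3, 0]) and jordan_type([4, 2]) == [5, 1]
assert phi([3, 3]) == ([1, 0], [2, 1]) and jordan_type([3, 3]) == [4, 2]
assert jordan_type([3, 2, 1]) == [5, 1] and jordan_type([6]) == [6]
\end{verbatim}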
We deduce that: \begin{theorem}\label{thm:enc_class} Every representation $M$ in $\N_{\infty}(n) := \Rep(\A_{\infty}(n),(1,n))$ decomposes as a direct sum \begin{equation}\label{eq:Mbipart} M\simeq M_{\lambda}\oplus\bigoplus_{i=1}^{\ell(\mu)} U(0,\mu_i) \end{equation} for some bipartition $(\lambda;\mu) \in \mathcal{P}_2(n)$. \end{theorem} We denote the $G$-orbit of the representation (\ref{eq:Mbipart}) by $\mathcal{O}_{(\lambda;\mu)}$. \begin{corollary} There is a bijection $\Phi : (\lambda;\mu) \mapsto \mathcal{O}_{(\lambda;\mu)}$ from $\Pa_2(n)$ to the orbits in the enhanced nilpotent cone. \end{corollary} \subsection{Translating between the different parametrizations}\label{ssect:enc_transl} The $\GL(V)$-orbits in $V \times \N(V)$ were first studied by Bernstein \cite{Ber}. In particular, it was noted there that there are only finitely many orbits. Explicit representatives of these orbits were independently given by Achar-Henderson \cite[Proposition 2.3]{AH} and Travkin \cite[Theorem 1]{Tr}. Recall that the parametrization in \textit{loc. cit.} is given by $\Xi : \mathcal{P}_2(n) \rightarrow \N_{\infty}(n) / G$. \\ We define $\Psi : \mathcal{P}_2(n) \rightarrow \mathcal{P}_2(n)$ as follows. Take $(\mu;\nu) \in \mathcal{P}_2(n)$ and let $k = \ell(\lambda)$, where $\lambda = \mu + \nu$. Then $i$ belongs to the set $\mathrm{Re}(\mu;\nu) \subset \{ 1, \dots, k \}$, the set of removable rows, if and only if one of the following holds: \begin{itemize} \item[(a)] $\mu_i = \mu_{i+1}$, \item[(b)] $\nu_{i-1} = \nu_i$; or \item[(c)] $i = k $ and $\mu_k = 0$. \end{itemize} If $\mathrm{Re}(\mu;\nu) = \{ i_1, \dots, i_r \}$, then set $\zeta := (\lambda_{i_1} \ge \cdots \ge \lambda_{i_r})$. (When several rows are removable, the removal is understood to be performed iteratively, one row at a time, recomputing $\mathrm{Re}$ after each step, exactly as in the proof of Theorem \ref{thm:travkin} below.) Let $\mu' = (\mu_1 - 1, \dots, \mu_k - 1)$ and \begin{align*} \widehat{\mu} & = \mu' \textrm{ with $\mu_{i_1}', \dots, \mu_{i_r}'$ removed,} \\ \widehat{\nu} & = \nu \textrm{ with $\nu_{i_1}, \dots, \nu_{i_r}$ removed.} \end{align*} Then $(\widehat{\mu},\widehat{\nu})$ is a Frobenius partition; writing $(\widehat{\mu},\widehat{\nu}) = \varphi(\eta)$ for a unique partition $\eta$, we set $$ \Psi(\mu;\nu) := (\eta;\zeta) \in \mathcal{P}_2(n). $$ \begin{example} If $(\mu;\nu) = ((4,4,3,1),(3,2,2))$, then $(\eta;\zeta) = ((3,2,1,1);(7,5))$. \end{example} \begin{theorem}\label{thm:travkin} The map $\Psi$ is a bijection such that $$ \Phi \circ \Psi (\mu;\nu) = \mathcal{O}_{\Psi(\mu;\nu)} = \Xi (\mu;\nu). $$ \end{theorem} \begin{proof} We check that $\Phi \circ \Psi(\mu;\nu) = \Xi(\mu;\nu)$ for all $(\mu;\nu)$. This will imply that $\Psi$ is a bijection. This is essentially already contained in the proof of Lemma \ref{lem:enc_indec1n}. Let $M = (v,X) \in \Xi(\mu;\nu)$. This means that there is a normal basis $\{ v_{i,j} \}$ of $V$ such that $X$ has Jordan type $\lambda = \mu + \nu$ and $v = \sum_{i = 1}^{\ell(\mu)} v_{i,\mu_i}$. We wish to show that $$ M\simeq M' \oplus\bigoplus_{i=1}^{\ell(\zeta)} U(0,\zeta_i) $$ where $M'$ is isomorphic to the representation $\Upsilon'(\widehat{\mu},\widehat{\nu})$ of Lemma \ref{lem:enc_indec1n}. The proof is by induction on $n = |\lambda|$. Assume that there exists $i \in \{1 ,\dots, k \}$ such that $\mu_i = \mu_{i+1}$. Then, the proof of Lemma \ref{lem:enc_indec1n} shows that $M \simeq M_1 \oplus U(0,\lambda_i)$, where $M_1$ belongs to $\Xi(\mu'';\nu'')$, where $\mu''$ is $\mu$ with $\mu_i$ removed and $\nu''$ is $\nu$ with $\nu_i$ removed. By induction, $\Xi(\mu'';\nu'') = \Phi \circ \Psi (\mu'';\nu'')$ and hence $\Xi(\mu;\nu) = \Phi \circ \Psi (\mu;\nu)$.
In exactly the same way, if $\nu_{i-1} = \nu_i$, or if $i = k$ and $\mu_k = 0$, then $M$ has a summand isomorphic to $U(0,\lambda_i)$. This reduces us to the situation where $\mu_1 > \cdots > \mu_k > 0$ and $\nu_1 > \cdots > \nu_k \ge 0$, i.e.\ we may assume that $(\mu;\nu)$ belongs to the set $\mathcal{P}_{2,F}(n)$ defined in the proof of Lemma \ref{lem:enc_indec1n}. If we set $ \widehat{\mu} = (\mu_1 - 1, \dots, \mu_k - 1)$ and $\widehat{\nu} = \nu$, then Lemma \ref{lem:enc_indec1n} says that $(\widehat{\mu},\widehat{\nu})$ is a Frobenius partition and $M \simeq \Upsilon'(\widehat{\mu},\widehat{\nu})$. This completes the proof of the theorem. \end{proof} \begin{example} We consider the case $n = 3$. Then \begin{displaymath} \begin{array}{c|c} (\mu;\nu) & \Psi(\mu;\nu) \\ \hline ((3);\emptyset) & ((1,1,1);\emptyset) \\ ((2,1);\emptyset) & ((1,1);(1)) \\ ((1,1,1);\emptyset) & ((1);(1,1)) \\ ((2);(1)) & ((2,1);\emptyset) \\ ((1,1);(1)) & ((1);(2)) \\ ((1);(2)) & ((3);\emptyset) \\ ((1);(1,1)) & ((2);(1)) \\ (\emptyset;(3)) & (\emptyset;(3)) \\ (\emptyset;(2,1)) & (\emptyset;(2,1)) \\ (\emptyset;(1,1,1)) & (\emptyset;(1,1,1)) \end{array} \end{displaymath} \end{example} We have shown that there is an explicit bijection from the set of bipartitions of $n$ to itself that intertwines our parametrization of $G$-orbits on the enhanced nilpotent cone with the parametrization given in \cite{AH} and \cite{Tr}. This bijection is very non-trivial, and we hope to develop a better combinatorial understanding of its properties in the near future. \section{The enhanced cyclic nilpotent cone}\label{sec:cyclicenh} The results described above for the enhanced nilpotent cone all have analogues for the enhanced cyclic nilpotent cone. As one might expect, this situation is combinatorially more involved, but the approach is similar. \\[1ex] Let $\Q_{\infty}(\ell)$ be the enhanced cyclic quiver with $\ell+1$ vertices. \begin{center} \scalebox{0.6}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, thick, node/.style={}] \node (99) {$\bullet_{\infty}$}; \node (1)[right of=99] {$\bullet_0$}; \node (2) [above right=1 and 0.5 of 1] {$\bullet_1$}; \node (7) [below right=1 and 0.5 of 1] {$\bullet_{\ell-1}$}; \node (3) [above right=0.25 and 1 of 2] {$\bullet_2$}; \node (6) [below right=0.25 and 1 of 7] {$\bullet_{\ell-2}$}; \node (4) [below right=0.25 and 1 of 3] {$\bullet_3$}; \node (5) [above right=0.25 and 1 of 6] {$\bullet_{\ell-3}$}; \path[->] (99) edge node {$v$} (1) (1) edge [bend left=15] node {$\varphi_0$} (2) (2) edge [bend left=15] node {$\varphi_1$} (3) (3) edge [bend left=15] node {$\varphi_2$} (4) (5) edge [bend left=15] node {$\varphi_{\ell-3}$} (6) (6) edge [bend left=15] node {$\varphi_{\ell-2}$} (7) (7) edge [bend left=15] node {$\varphi_{\ell-1}$} (1); \path[-,dotted] (4) edge [bend left=35] (5); \end{tikzpicture}} \end{center} We define the cyclic enhanced algebra to be $$ \A_{\infty}(\ell,x):=\mathbf{C} \Q_{\infty}(\ell)/\langle(\varphi_{\ell-1}\circ \cdots \circ\varphi_0)^x\rangle. $$ Let us fix a dimension vector $\mathbf{d}_{\infty} :=(1,d_0,\dots,d_{\ell-1})$ of $\Q_{\infty}(\ell)$. The group $\GL_{\mathbf{d}_{\infty}}:=\GL_1\times \prod_{i=0}^{\ell-1}\GL_{d_i}$ acts on the representation variety $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})$. Recall that $\Q(\ell)$ denotes the unframed cyclic quiver. We define $\A(\ell,x):=\mathbf{C} \Q(\ell)/\langle(\varphi_{\ell-1}\circ\cdots\circ\varphi_0)^x\rangle$. This algebra has been studied previously by Kempken \cite{Ke}.
Let us fix the dimension vector $\mathbf{d} :=(d_0,\dots,d_{\ell-1})$. The group $\GL_{\mathbf{d}}:=\prod_{i=0}^{\ell-1}\GL_{d_i}$ acts on the representation variety $\Rep(\A(\ell,x),\mathbf{d})$. Just as in section \ref{sect:enc}, one can relate orbits in the cyclic nilpotent cone $\Rep(\A(\ell,x),\mathbf{d})$ and in the enhanced cyclic nilpotent cone $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})$. If $V = \mathbf{C}^{d_0}$, identified in the obvious way with a subspace of $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})$, then we take $V^{\circ} = V \smallsetminus \{ 0 \}$ and let $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})^{\circ}$ denote its preimage under the projection $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty}) \rightarrow V$. Choose $v \in V^{\circ} \subset \Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})$ and let $$ P = \mathrm{Stab}_{\GL_{\mathbf{d}_{\infty}}}(v), \quad P' = \mathrm{Stab}_{\GL_{\mathbf{d}}}(v). $$ Analogously to Theorem \ref{thm:enc_bijection}, we have \begin{theorem} \label{thm:cenc_bijection} There is an isomorphism of $\GL_{\mathbf{d}}$-varieties (resp. of $\GL_{\mathbf{d}_{\infty}}$-varieties): \begin{enumerate} \item $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})^{\circ} \cong \GL_{\mathbf{d}} \times^{P'} \Rep(\A(\ell,x),\mathbf{d})$. \item $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})^{\circ} \cong \GL_{\mathbf{d}_{\infty}} \times^{P} \Rep(\A(\ell,x),\mathbf{d})$. \end{enumerate} \end{theorem} \begin{remark}\label{rem:theta} Let $N = d_0 + \cdots + d_{\ell-1}$. There is an automorphism $\theta$ of $\mathfrak{g} := \mathfrak{gl}_N$ such that $$ \mathfrak{g}_1 := \left\{ X \in \mathfrak{g} \ \Big| \ \theta(X) = \exp \left( \frac{2 \pi \sqrt{-1}}{\ell} \right) X \right\} $$ is canonically identified with $\Rep(\Q(\ell),\mathbf{d})$. The space $\mathfrak{g}_1$ is a representation of $\GL_N^{\theta} = \GL_{\mathbf{d}}$, and is an example of a $\theta$-representation as introduced and studied by Vinberg \cite{Vinberg}. Under the above identification, the cyclic nilpotent cone $\Rep(\A(\ell,x),\mathbf{d})$ is precisely the nilcone in the $\theta$-representation $\mathfrak{g}_1$. Therefore one can view Theorem \ref{thm:cenc_bijection} as a first step in a programme to study parabolic conjugacy classes in the nilcone of $\theta$-representations. In particular, it raises the following problem:\\ Classify all triples $(G,\theta,P)$, where $G$ is a reductive group over $K$, $\theta$ is a finite automorphism of $\mathfrak{g} = \mathrm{Lie} \ G$ and $P \subset G^{\theta}$ is a parabolic subgroup such that the number of $P$-orbits in the nilcone $\N(\mathfrak{g}_1)$ is finite. \end{remark} \subsection{Representation types}\label{sec:reptype} We begin by classifying the representation type of the algebra $\A_{\infty}(\ell,x)$. The universal covering quiver $\Gamma_{\infty}(\ell)$ is given by \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.95em, column sep=1.5em, text height=1.5ex, text depth=0.2ex] {\vdots & \vdots\\ \bullet & \bullet\\ \bullet & \bullet\\ \bullet & \bullet\\ \vdots & \vdots \\}; \path[->] (m-2-1) edge node[above] {$v^{(1)}$} (m-2-2) (m-3-1) edge node[above] {$v^{(0)}$} (m-3-2) (m-4-1) edge node[above] {$v^{(-1)}$} (m-4-2) (m-2-2) edge[dashed] node[right]{$\underline{\varphi}^{(i)}$} (m-3-2) (m-3-2) edge[dashed] node[right]{$\underline{\varphi}^{(i-1)}$} (m-4-2);\end{tikzpicture}\] Here, $\underline{\varphi}^{(i)}= \varphi_{\ell-1}^{(i)} \circ \cdots \circ \varphi_0^{(i)}$ is a path of length $\ell$.
The quotient of this path algebra by the relations $\langle \underline{\varphi}^{(i)} \circ \underline{\varphi}^{(i+1)} \circ \cdots \circ \underline{\varphi}^{(i+x-1)} \ | \ i \in \Z \rangle$ gives the covering algebra $\Gamma_{\infty}(\ell,x):=\mathbf{C} \Gamma_{\infty}(\ell)/(\underline{\varphi}^x)$. If $\Gamma_{\infty}(\ell,x)$ is of wild representation type, then via the covering functor \cite{Ga3}, the algebra $\A_{\infty}(\ell,x)$ is of wild representation type as well. Since the covering algebra is strongly simply connected (as every projective indecomposable admits at every vertex a vector space of dimension at most $1$), we can make use of the results of subsection \ref{ssect:repTypes}; in particular of Lemma \ref{lem:wild_crit}. Furthermore, since $\Gamma_{\infty}(\ell,x)$ is locally bounded (our ideal cancels infinite paths) and $\mathbf{Z}$ acts freely by shifts, we know by \cite{Ga3}: If $\Gamma_{\infty}(\ell,x)$ is locally of finite representation type, then $\A_{\infty}(\ell,x)$ is of finite representation type and every indecomposable representation is obtained from an indecomposable $\Gamma_{\infty}(\ell,x)$-representation (via the obvious functor which builds direct sums of vector spaces in the same ``column'' and linear maps accordingly). \begin{lemma}\label{lem:cenc_reptype} The algebra $\A_{\infty}(\ell,x)$ has \begin{enumerate} \item[(a)] finite representation type iff $(\ell,x)\in\{(1,1),(1,2),(1,3),(2,1),(3,1)\}$, \item[(b)] tame representation type iff $(\ell,x)\in\{(2,2),(4,1)\}$. \end{enumerate} In every remaining case, the algebra $\A_{\infty}(\ell,x)$ is of wild representation type. \end{lemma} \begin{proof} This is a case-by-case analysis. \begin{itemize} \item Firstly, let $\ell=1$, that is, we are in the situation of the enhanced nilpotent cone. Then every case follows from Lemma \ref{lem:enc_reptype}. \item Let $\ell=2$. \begin{itemize} \item For $x=1$, by knitting, we compute the Auslander-Reiten quiver of $\Gamma_{\infty}(2,1)$, which is finite. It is cyclic by means of a shift of the $\mathbf{Z}$-action and is depicted in Appendix \ref{ssect:arq21}. There, given a representation $M$ of a finite slice of $\Gamma_{\infty}(2,1)$, we denote by $M^{(i)}$ for $i\in\mathbf{Z}$ the shifted representation $M$, such that the support of $M^{(i)}$ is non-zero in the $i$-th row (numbered from bottom to top) of $\Gamma_{\infty}(2,1)$, but zero below. By Covering Theory \cite{Ga3}, the algebra $\A_{\infty}(2,1)$ is, thus, of finite representation type. \item If $x=2$, then the algebra is tame: The covering algebra contains a Euclidean subquiver of type $\widetilde{\mathsf{D}}_6$ and hence $\A_{\infty}(2,2)$ has infinite representation type. It is indeed tame by \cite[Theorem 2.4]{Sko3}: The Galois covering is strongly simply connected and locally bounded, and the algebra does not contain a convex subcategory which is hypercritical (see the list in \cite{Un}) or pg-critical (see the list in \cite{NoeSk}). \item For $x\geq 3$, there is a dimension vector with negative quadratic form, and wildness follows from Lemma \ref{lem:wild_crit}: \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.1em, column sep=-0.5em, text height=0.1ex, text depth=0.1ex] { & 1\\ 1 & 2\\ & 2\\ 1 & 3\\ & 2\\ 1 & 2\\ & 1\\ }; \end{tikzpicture} \end{center} \end{itemize} \item Let $\ell=3$. \begin{itemize} \item If $x=1$, as above, we compute the Auslander-Reiten quiver of $\Gamma_{\infty}(3,1)$, which is finite.
It is also cyclic by means of a shift of the $\mathbf{Z}$-action and is depicted in Appendix \ref{ssect:arq31}. There, given a representation $M$ of a finite slice of $\Gamma_{\infty}(3,1)$, we denote by $M^{(i)}$ for $i\in\mathbf{Z}$ the shifted representation $M$, such that the support of $M^{(i)}$ is non-zero in the $i$-th row (numbered from bottom to top) of $\Gamma_{\infty}(3,1)$, but zero below. By Covering Theory \cite{Ga3}, the algebra $\A_{\infty}(3,1)$ is, thus, of finite representation type. \item Let $x\geq 2$. The following dimension vector, for which $q_{\Gamma_{\infty}(3,2)}(\mathbf{d}) = -1$, proves wildness of $\A_{\infty}(3,2)$ by Lemma \ref{lem:wild_crit} and induces a $2$-parameter family of non-isomorphic representations: \begin{center} \begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.1em, column sep=-0.5em, text height=0.1ex, text depth=0.1ex] { & 1\\ & 2\\ 1 & 3\\ & 3\\ & 3\\ 1 & 3\\ & 2\\ & 1\\ }; \end{tikzpicture} \end{center} Wildness of $\A_{\infty}(3,x)$ for $x\geq 2$ follows. \end{itemize} \item Let $\ell = 4$. The covering quiver contains a Euclidean (and therefore tame) subquiver of type $\widetilde{\mathsf{E}}_7$; thus, we always have infinite representation type. \begin{itemize} \item If $x=1$, then the algebra $\A_{\infty}(4,1)$ is tame by \cite[Theorem 2.4]{Sko3}: The Galois covering is strongly simply connected and locally bounded, and the algebra does not contain a convex subcategory which is hypercritical (see the list in \cite{Un}) or pg-critical (see the list in \cite{NoeSk}). \item For $x\geq 2$, the algebra $\A_{\infty}(4,x)$ is of wild representation type, since the covering quiver contains the wild subquiver (see \cite{Un}): \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.95em, column sep=1.5em, text height=1.5ex, text depth=0.2ex] { &\bullet &&&&\bullet&&\\ \bullet & \bullet& \bullet &\bullet& \bullet& \bullet & \bullet & \bullet\\}; \path[-] (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) (m-2-4) edge (m-2-5) (m-2-5) edge (m-2-6) (m-2-6) edge (m-2-7) (m-2-7) edge (m-2-8) (m-1-2) edge (m-2-2) (m-1-6) edge (m-2-6);\end{tikzpicture}\] \end{itemize} \item If $\ell \ge 5$, then the algebra $\A_{\infty}(\ell,x)$ has wild representation type. In this case, the covering quiver $\Gamma_{\infty}(\ell,x)$ contains the wild subquiver \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.95em, column sep=1.5em, text height=1.5ex, text depth=0.2ex] { & &&&\bullet&&&&\\ \bullet & \bullet& \bullet & \bullet& \bullet & \bullet& \bullet& \bullet & \bullet\\}; \path[-] (m-2-1) edge (m-2-2) (m-2-2) edge (m-2-3) (m-2-3) edge (m-2-4) (m-2-4) edge (m-2-5) (m-2-5) edge (m-2-6) (m-2-6) edge (m-2-7) (m-2-7) edge (m-2-8) (m-2-8) edge (m-2-9) (m-1-5) edge (m-2-5);\end{tikzpicture}\] Thus, $\A_{\infty}(\ell,x)$ is of wild representation type for all $\ell\geq 5$ and all $x$. \end{itemize} \end{proof} \subsection{The indecomposable representations}\label{ssect:ecnc_indec11} As for the usual enhanced nilpotent cone, we consider separately the two cases: indecomposable representations $M$ of $\A_{\infty}(\ell,x)$ with $(\dim M)_{\infty} = 1$, and indecomposable representations $M$ with $(\dim M)_{\infty} = 0$. \subsubsection{Classification of indecomposables of dimension vector $(0,*)$}\label{sssect:cenc_indecs0} In this case, we are basically studying indecomposable representations of $\A(\ell,x)$.
For $i\in\Z_{\ell}$ and $N\in\mathbf{N}\backslash \{0\}$, let $U(i,N)$ be the $N$-dimensional indecomposable module defined as follows: as a $\mathbf{C}$-vector space, it has a basis $v_0,\dots,v_{N-1}$, with $v_k$ being a basis vector of the vector space at vertex $i+k\in\mathbf{Z}_{\ell}$ of $U(i,N)$. The linear maps of the representation send $v_k$ to $v_{k+1}$ if possible, and to $0$ otherwise. We can draw a picture of the $\Q(4)$-representation $U(2,10)$ as follows, which makes clear the structure of the indecomposables: \begin{center} \scalebox{0.6}{ \begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3cm, thick, node/.style={}] \node (1) {$\bullet_0$}; \node (1') [left=0.25 of 1] {$\bullet$}; \node (2) [above right=0.5 and 0.5 of 1] {$\bullet_1$}; \node (2') [above=0.25 of 2] {$\bullet$}; \node (2'') [above=0.25 of 2'] {$\bullet$}; \node (3) [below right=0.5 and 0.5 of 2] {$\bullet_2$}; \node (3') [right=0.25 of 3] {$\bullet$}; \node (3'') [right=0.25 of 3'] {$\bullet$}; \node (4) [below left=0.5 and 0.5 of 3] {$\bullet_{3}$}; \node (4') [below=0.25 of 4] {$\bullet$}; \path[->] (1) edge [bend left=15] (2') (2) edge [bend left=15] (3) (3) edge [bend left=15] (4) (4) edge [bend left=15] (1) (2') edge [bend left=15] (3') (3') edge [bend left=15] (4') (4') edge [bend left=15] (1') (1') edge [bend left=15] (2'') (2'') edge [bend left=15] (3''); \end{tikzpicture}} \end{center} Define $\mathcal{U}(\ell,x) := \{ (i,N) \in \Z_{\ell} \times \mathbf{N} \textrm{ such that } | \{ 0 \le j \le N-1 \ | \ j \equiv -i \ \mathrm{mod} \ \ell \} | \le x \}$. \begin{theorem}\label{thm:cenc_indecs_circle} The isomorphism classes of indecomposable representations of $\A(\ell,x)$ are parametrized by the representations $$ \{ U(i,N) \ | \ (i,N) \in \mathcal{U}(\ell,x) \}. $$ \end{theorem} \begin{proof} The universal covering quiver $\Gamma$ is an infinite quiver of type $\mathsf{A}$. However, the corresponding relations are a bit tricky. They are that $$ \varphi_{\ell-1}^{(i)} \circ \cdots \circ \varphi_0^{(i)}\circ \varphi_{\ell-1}^{(i+1)} \circ \cdots \circ \varphi_{\ell-1}^{(i+x-1)} \circ \cdots \circ \varphi_0^{(i+x-1)} = 0 $$ for all $i \in \Z$ (and note that the composition of $x\cdot \ell$ maps always equals $0$, as one might expect). But we can essentially ignore this and note that every indecomposable representation $U_{\Gamma}(i,N)$ of $\Gamma$ is nilpotent (here $i \in \Z$ and $N \ge 1$, and the representation is defined just as for the cyclic quiver). So it suffices to check which of these factor through $\A(\ell,x)$ after applying the covering functor $F : \rep_{\mathbf{C}} \mathbf{C} \Gamma \rightarrow \rep_{\mathbf{C}} \mathbf{C} \Q(\ell)$. We have $F(U_{\Gamma}(i,N)) = U(\overline{i},N)$, where $\overline{i}$ is the image of $i$ in $\Z_{\ell}$. Now we note first that the nilpotent endomorphism $\varphi_{\ell-1} \circ \cdots \circ \varphi_0$ of $U(\overline{i},N)_0$ is a single Jordan block (of size $\dim U(\overline{i},N)_0$). Therefore $U(\overline{i},N)$ factors through $\A(\ell,x)$ if and only if $\dim U(\overline{i},N)_0 \le x$. But \begin{align*} \dim U(\overline{i},N)_0 & = \dim \bigoplus_{j \in \Z} U_{\Gamma}(i,N)_{j \ell} \\ &= | \{ j \in \Z \ | \ i \le \ell j \le i + N - 1 \} | \\ & = | \{ 0 \le k \le N-1 \ | \ k \equiv -i \ \mathrm{mod} \ \ell \} | \end{align*} Thus, the indecomposable representations of $\A(\ell,x)$ that are obtained from $\Gamma(\ell,x)$ via the covering functor are precisely those $U(i,N)$ such that $(i,N) \in \mathcal{U}(\ell,x)$.
Since we know by \cite{Ke} that the algebra $\A(\ell,x)$ is representation-finite, Covering Theory \cite{Ga3} implies that these are, in fact, all isomorphism classes of indecomposables. \end{proof} \subsubsection{Classification of indecomposables of dimension vector $(1,*)$}\label{sssect:cenc_indecs1} The second case deals with indecomposable $\A_{\infty}(\ell,x)$-representations $M$ with $(\dim M)_{\infty} = 1$. The classification is given by Frobenius circle diagrams. \begin{theorem}\label{thm:cenc_indecs1} Fix $x, \ell \ge 1$. \begin{enumerate} \item There are canonical bijections between: \begin{itemize} \item The set of isomorphism classes of indecomposable nilpotent representations $M$ of $\Q_{\infty}(\ell)$ with $(\dim M)_{\infty} = 1$. \item The set $\Ca_F(\ell)$ of Frobenius circle diagrams. \item The set of all partitions. \end{itemize} \item These bijections restrict to bijections between: \begin{itemize} \item The set of isomorphism classes of indecomposable representations $M$ of $\A_{\infty}(\ell,x)$ with $(\dim M)_{\infty} = 1$. \item The set $\{ C \in \Ca_F(\ell) \ | \ \mathrm{wt}_{\ell}(C) \le x \}$ of Frobenius circle diagrams of weight at most $x$. \item The set of all partitions $\{ \lambda \in \mathcal{P} \ | \ \mathrm{wt}_{\ell}(\lambda) \le x \}$ of weight at most $x$. \end{itemize} \end{enumerate} \end{theorem} \begin{proof} It is clear that statement (2) implies statement (1). We concentrate on statement (2). It has already been explained in section \ref{sec:circle} that the set $\{ C \in \Ca_F(\ell) \ | \ \mathrm{wt}_{\ell}(C) = x \}$ of Frobenius circle diagrams of weight $x$ is in bijection with the set of all partitions of weight $x$. Therefore it suffices to show that the set of isomorphism classes of indecomposable representations $M$ of $\A_{\infty}(\ell,x)$ with $(\dim M)_{\infty} = 1$ is in bijection with the set $\{ C \in \Ca_F(\ell) \ | \ \mathrm{wt}_{\ell}(C) \le x \}$. Let $M$ be an indecomposable representation of $\A_{\infty}(\ell,x)$ with $(\dim M)_{\infty} = 1$. We denote by $U$ the restriction of $M$ to $\Q(\ell)$. By Theorem \ref{thm:cenc_indecs_circle}, we may assume, without loss of generality, that $U = U(i_1,N_1) \oplus \cdots \oplus U(i_k,N_k)$ for some $(i_j,N_j) \in \mathcal{U}(\ell,x)$. That is, $U$ is described by a certain circle diagram $C$. The embedding of $\mathbf{C} = M_{\infty}$ into $U_0$ defines a marking of the circle diagram: let $v$ be the image of $1 \in \mathbf{C}$ in $U_0$, and recall that the vertices in $b_0$ are a basis of $U_0$. Then a vertex $i$ of $b_0$ is marked if and only if the coefficient of the corresponding basis vector in the expansion of $v$ is non-zero. The fact that the indecomposable representations correspond precisely to Frobenius circle diagrams can then be shown by repeating the arguments given in the proof of Lemma \ref{lem:enc_indec1n}. We do not repeat them here. \end{proof} Given a Frobenius circle diagram $C$, we denote by $M_{C}$ the corresponding canonical indecomposable nilpotent representation. \subsection{A combinatorial parametrization}\label{sec:comborbits} Given a fixed dimension vector $\mathbf{d}_{\infty}=(1,d_0,\dots,d_{\ell-1})$ of $\Q_{\infty}(\ell)$, we denote $\mathbf{d}:=(d_{0},\dots,d_{\ell-1})$ and we can parametrize the $G$-orbits in the cyclic enhanced nilpotent cone by making use of section \ref{ssect:ecnc_indec11}. 
Denote by $\mathcal{C}_{2,F}(\mathbf{d})$ the set of tuples $(\mathcal{C}_F,\mathcal{C})$ of a Frobenius circle diagram $\mathcal{C}_F$ with dimension vector $\mathbf{d}_1$ and a circle diagram $\mathcal{C}$ with dimension vector $\mathbf{d}_2$, such that $\mathbf{d}_1+\mathbf{d}_2=\mathbf{d}$. Then the results of section \ref{ssect:ecnc_indec11} imply that: \begin{theorem}\label{thm:cenc_class} Every representation $M$ in $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})$ decomposes as a direct sum \begin{equation}\label{eq:Mbipart2} M\simeq M_{C'}\oplus\bigoplus_{j=1}^{k} U(i_j,N_j) \end{equation} of indecomposable representations, where $(i_1,N_1),\dots,(i_k,N_k)$ are the circles of $C$, for some unique tuple $(C',C) \in\mathcal{C}_{2,F}(\mathbf{d})$. \end{theorem} Recall that the $\GL_{\mathbf{d}_{\infty}}$-orbits in $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})$ are the same as the $\GL_{\mathbf{d}}$-orbits. We denote the $\GL_{\mathbf{d}}$-orbit of the representation (\ref{eq:Mbipart2}) by $\mathcal{O}_{C',C}$. We deduce from Theorem \ref{thm:cenc_class} that: \begin{proposition} There is a bijection $\Phi_{\ell}: (C',C)\mapsto \mathcal{O}_{C',C}$ from $\mathcal{C}_{2,F}(\mathbf{d})$ to the set of $\GL_{\mathbf{d}}$-orbits in $\Rep(\A_{\infty}(\ell,x),\mathbf{d}_{\infty})$. \end{proposition} In applications to admissible $\mathscr{D}$-modules \cite{BeB2}, we are interested in representations of a very particular dimension. Namely, if $\delta = (1, \dots, 1)$ is the minimal imaginary root for $\Q(\ell)$ as in section \ref{sec:affineA}, then we fix $$ \mbf{v} := \varepsilon_{\infty} + n \delta = (1, n, \dots, n). $$ We use Theorem \ref{thm:cenc_indecs_circle} and Theorem \ref{thm:cenc_indecs1} to derive a combinatorial enumeration of the $G$-orbits in the enhanced cyclic nilpotent cone $\N_{\infty}(\ell,n) := \Rep(\A_{\infty}(\ell),\mbf{v})$. Recall from Theorem \ref{thm:cenc_indecs1} that the indecomposable nilpotent representations $M$ of $\Q_{\infty}(\ell)$ with $(\dim M)_{\infty} = 1$ are parametrized by the set of partitions. Given a partition $\lambda$, we write $M_{\lambda}$ for the corresponding indecomposable representation and $\mathbf{d}_{\lambda}$ for its dimension vector. Recall from section \ref{sec:affineA} the definition of $\ell$-residue, $\mathrm{res}_{\ell}(\lambda)$, of a partition $\lambda$ and the shifted $\ell$-residue, $\mathrm{sres}_{\ell}(\nu)$, of an $\ell$-multipartition $\nu$. \begin{proposition}\label{prop:cenc_param_orbits} The $\GL_{\mathbf{d}}$-orbits in the enhanced cyclic nilpotent cone $\N_{\infty}(\ell,n)$ are naturally labelled by the set $$ \mathcal{Q}(n,\ell) := \left\{ (\lambda;\nu) \in \mathcal{P} \times \mathcal{P}_{\ell} \ | \ \mathrm{res}_{\ell}(\lambda) + \mathrm{sres}_{\ell}(\nu) = n \delta\right\}. $$ \end{proposition} \begin{proof} Let $\mathcal{O}$ be an orbit and $M \in \mathcal{O}$. Then $M = M_{\lambda} \oplus Y$, where $M_{\lambda}$ is an indecomposable nilpotent representation of $\Q_{\infty}(\ell)$ with $(\dim M_{\lambda})_{\infty} = 1$ and $Y = U(i_1, N_1) \oplus \cdots \oplus U(i_k,N_k)$ is a direct sum of indecomposable nilpotent representations of $\Q(\ell)$ such that $$ \dim M_{\lambda} + \sum_{j = 1}^k \dim U(i_j,N_j) = \varepsilon_{\infty} + n \delta. $$ We associate to $Y_{\nu} := Y$ the multipartition $\nu$ with $\nu^{(i)} = (N_{j_1} \ge N_{j_2} \ge \cdots)$, where the $j_r$ run over all $1 \le j_r \le k$ such that $i_{j_r} = i$.
Thus, the question is simply to find all $(\lambda;\nu) \in \mathcal{P} \times \mathcal{P}_{\ell}$ such that $\dim (M_{\lambda} \oplus Y_{\nu}) = \varepsilon_{\infty} + n \delta$. Under the identification of $\Z \Q(\ell)_0$ with $\Z[\Z_{\ell}]$, we have $\dim U(i,N) = \sigma^i \mathrm{res}_{\ell}(N)$ and hence $$ \dim Y = \mathrm{sres}_{\ell}(\nu). $$ Therefore it suffices to show that $\mathbf{d}_{\lambda}= \dim M_{\lambda}$ equals $\varepsilon_{\infty} + \mathrm{res}_{\ell}(\lambda)$. Recall that $\Gamma_{\infty}$ is the covering quiver of $\Q_{\infty}(\ell)$ and $\Gamma$ the covering quiver of $\Q(\ell)$. As in the proof of Theorem \ref{thm:cenc_indecs_circle}, let $F : \rep_{\mathbf{C}} \mathbf{C} \Gamma \rightarrow \rep_{\mathbf{C}} \mathbf{C} \Q(\ell)$ denote the covering functor. If $M_{\lambda}'$ is the unique lift of $M_{\lambda}$ to $\Gamma_{\infty}$ such that $v^{(i)} = 0$ for all $i \neq 0$ (see section \ref{sec:reptype}), then $M_{\lambda}' |_{\Gamma}$ equals $U_{\Gamma}(-b_1,p_1) \oplus \cdots \oplus U_{\Gamma}(-b_r,p_r)$, where $(a_1 > \cdots > a_r ; b_1 > \cdots > b_r)$ is $\lambda$ written in Frobenius form and $p_i := a_i + b_i + 1$. Therefore, $$ \dim M_{\lambda} = \sum_{i =1}^r \sigma^{-b_i} \mathrm{res}_{\ell}(p_i) = \mathrm{res}_{\ell}(\lambda) $$ as required. \end{proof} In the proof of Proposition \ref{prop:cenc_param_orbits} we have shown that \begin{equation}\label{eq:dimension} \dim U(i,N) = \sigma^i \mathrm{res}_{\ell}(N), \quad \mathbf{d}_{\lambda} = \varepsilon_{\infty} + \mathrm{res}_{\ell}(\lambda). \end{equation} If $\nu = \emptyset$, then $(\lambda;\nu) \in \mathcal{Q}(n,\ell)$ if and only if $\mathrm{res}_{\ell}(\lambda) = n \delta$. The set of all such $\lambda$ is precisely the set of partitions of $n \ell$ that have trivial $\ell$-core. This set, in turn, is in bijection with the set $\mathcal{P}_{\ell}(n)$ of $\ell$-multipartitions of $n$, the bijection given by taking the $\ell$-quotient of $\lambda$, i.e.\ if $\lambda$ has trivial $\ell$-core, then it is uniquely determined by its $\ell$-quotient. \subsection{Translating between the different parametrizations}\label{ssect:cenc_transl} The goal of this final section is to describe how to pass directly between the combinatorial parametrization of the $\GL_{\mathbf{d}}$-orbits in the enhanced cyclic nilpotent cone given by Johnson \cite{Joh}, and our parametrization given in Proposition \ref{prop:cenc_param_orbits}. In order to do this, we first recall the former.\\[1ex] A tuple $(\lambda,\epsilon)$ is called an \textit{$\ell$-coloured partition} if $\lambda=(\lambda_1,\dots,\lambda_k)\in\Pa$ and $\epsilon=(\epsilon_1,\dots,\epsilon_k)\in(\mathbf{Z}/\ell\mathbf{Z})^k$. This coloured partition gives rise to a \textit{coloured Young diagram} $Y(\lambda,\epsilon)$ by defining the \textit{colour of the box} $(i,j)$ of $Y(\lambda)$ to be $\chi(i,j):=\epsilon_i+[\lambda_i-j]$, where $1\leq i\leq \ell(\lambda)$, $1\leq j\leq \lambda_i$, and $[x]$ denotes the residue class of $x$ modulo $\ell$. Its \textit{signature} is defined to be $\xi(\lambda,\epsilon)=(\xi(\lambda,\epsilon)_m)_{0 \leq m\leq \ell-1}$ with $$ \xi(\lambda,\epsilon)_m:= |\{(i,j) \in Y(\lambda) \ | \ \epsilon_i+[\lambda_i-j]=m \}|. $$ In this language, the well-known classification of orbits in the cyclic nilpotent cone can be stated as: \begin{lemma} Let $\mathbf{d}$ be a dimension vector of $\A(\ell)$.
There is a bijection from the set of $\ell$-coloured partitions of signature $\mathbf{d}$ to the isomorphism classes of nilpotent $\A(\ell)$-representations of dimension vector $\mathbf{d}$. \end{lemma} Given an $\ell$-coloured partition $(\lambda,\epsilon)$ of signature $\mathbf{d}$, it is mapped to the $\A(\ell)$-representation $(V,N)$ where $V=V_0 \oplus \cdots \oplus V_{\ell-1}$ has a coloured Jordan basis $\{v_{i,j}\}_{1\leq i\leq \ell(\lambda), 1\leq j\leq \lambda_i}$, that is, $v_{i,j}$ is a basis vector of $V_{\chi(i,j)}$. Furthermore, $Nv_{i,j}=0$ if $j=1$ and $Nv_{i,j}=v_{i,j-1}$ if $j>1$; the basis is thus best depicted by the coloured Young diagram $Y(\lambda,\epsilon)$.\\[1ex] Note that this parametrization can be directly translated to our circle diagrams of Theorem \ref{thm:cenc_indecs_circle}: The circle diagram consists of $\ell(\lambda)$ circles, of which the $i$-th starts in vertex $\epsilon_i$ and is of length $\lambda_i$. We denote this circle diagram by $C(\lambda,\epsilon)$; it corresponds to the representation \[\bigoplus_{1\leq i\leq \ell(\lambda)} U(\epsilon_i,\lambda_i). \] Let us call a tuple $(\lambda,\epsilon,\nu)$ a \textit{marked coloured partition} if $(\lambda,\epsilon)$ is a coloured partition and $\nu:\mathbf{N}\rightarrow \mathbf{Z}$ is a \textit{marking function}, which satisfies $\nu_i\leq \lambda_i$ for all $i$. We define $\mu:=(\mu_i)_{1\leq i\leq \ell(\lambda)}=(\lambda_i-\nu_i)_{1\leq i\leq \ell(\lambda)}$. Note that we have switched the roles of $\mu$ and $\nu$ in comparison to \cite{Joh}; this is consistent with our conventions in subsection \ref{ssect:enc_transl}. A marked coloured partition $(\lambda,\epsilon,\nu)$ is called a \textit{striped $\ell$-bipartition}, if \begin{enumerate} \item $\epsilon+[\lambda-\nu] = 0$ in $\mathbb{Z} / \ell \mathbb{Z}$, \item $-\ell<\nu_i$ for all $i$, \item $\nu_j<\nu_i+\ell$ and $\mu_j<\mu_i+\ell$ for each $i<j$. \end{enumerate} In the case $\ell=1$, this yields the set of double partitions $\Pa_2(n)$, where $n$ is the dimension, since $(\mu,\nu)$ is a bi-partition of $n$. The set of all striped $\ell$-bipartitions is denoted $\Pa_{st}(\ell)$; the subset with fixed signature $\xi$ is denoted $\Pa_{st}(\ell,\xi)$. The classification of orbits in the enhanced cyclic nilpotent cone, as in \cite{Joh}, is then given by: \begin{proposition} There is a bijection $\Xi_{\ell}$ from the set $\Pa_{st}(\ell)$ to the set of orbits in the enhanced cyclic nilpotent cone. This bijection restricts to fixed dimension vectors, i.e. $\Pa_{st}(\ell, \mathbf{d})$ is in bijection with the $\GL_{\mathbf{d}}$-orbits in $\Rep(\A_{\infty}(\ell),\mathbf{d}_{\infty})$. \end{proposition} We can write down $\Xi_{\ell}$ explicitly. Let $(\lambda,\epsilon,\nu)\in\Pa_{st}(\ell,\mathbf{d})$; then there is a coloured Jordan basis $B:=\{v_{i,j}\}_{1\leq i\leq \ell(\lambda), 1\leq j\leq \lambda_i}$ and a nilpotent $\A(\ell)$-representation $N$ in normal form adapted to the basis $B$, as described above. Set $v_{i,j}=0$ if $j\leq 0$. Then $\Xi_{\ell}(\lambda,\epsilon,\nu)$ is defined to be the orbit of the nilpotent representation $(v,N)$ of the cyclic enhanced nilpotent cone, where $v = \sum_{i=1}^{\ell(\lambda)} v_{i,\nu_i}$. Pictorially, this means that the $i$-th circle is marked at position $\mu_i$ (not $\nu_i$, as one might expect).
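The colouring $\chi$ and the signature $\xi$ are straightforward to compute mechanically. The following minimal Python sketch is our own illustration (all function names are ours); it is run on the data of the example at the end of this subsection and confirms that every highlighted box there has colour $0$.
\begin{verbatim}
def colour(la, eps, ell):
    # chi(i,j) = eps_i + [la_i - j] mod ell, for each box (i,j) of Y(la).
    return [[(eps[i] + la[i] - j) % ell for j in range(1, la[i] + 1)]
            for i in range(len(la))]

def signature(la, eps, ell):
    # xi(la,eps)_m = number of boxes of Y(la,eps) of colour m.
    xi = [0] * ell
    for row in colour(la, eps, ell):
        for m in row:
            xi[m] += 1
    return xi

# Data of the example at the end of this subsection (ell = 4):
la  = [16, 14, 13, 11, 9, 6, 5, 5, 2]
eps = [0, 2, 0, 1, 3, 0, 2, 2, 0]
mu  = [8, 4, 5, 4, 0, 2, 3, 3, -2]

chi = colour(la, eps, 4)
print(signature(la, eps, 4))                      # the dimension vector d
print(all(chi[i][mu[i] - 1] == 0                  # marked boxes have colour 0
          for i in range(len(la)) if mu[i] >= 1))
\end{verbatim}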
Interpreting $\Xi_{\ell}(\lambda,\epsilon,\nu)$ as an $\A_{\infty}(\ell)$-representation, we obtain \[\begin{tikzpicture} \matrix (m) [matrix of math nodes, row sep=0.05em, column sep=2em, text height=1.5ex, text depth=0.2ex] {\Xi_{\ell}(\lambda,\epsilon,\nu)\cong & \mathbf{C} & \mathbf{C}^n\\ }; \path[->] (m-1-2) edge node[above=0.05cm] {$\iota$} (m-1-3) (m-1-3) edge [loop right] node{$C(\lambda,\epsilon)$} (m-1-3);\end{tikzpicture}, \] where $n=\sum_{i=0}^{\ell-1}\mathbf{d}_i$ and the right-hand part is given by a circle diagram and, technically speaking, a graded vector space with a cycle of maps. Furthermore, $\iota=\sum_{i=1}^{\ell(\lambda)} e_{i,\nu_i}$, where $e_{i,j}$ is the standard embedding into the $(i,j)$-th basis vector $v_{i,j}$ of $B$. \\[1ex] Our aim is to define a map which translates this parametrization to our classification $\Phi_{\ell}$ from Proposition \ref{prop:cenc_param_orbits}. That is, we have to define a bijection $\Psi_{\ell}:\Pa_{st}(\ell,\mathbf{d})\rightarrow\mathcal{C}_{2,F}(\mathbf{d})$, such that $$ \Phi_{\ell} \circ \Psi_{\ell} (\lambda,\epsilon,\nu) = \mathcal{O}_{\Psi_{\ell}(\lambda,\epsilon,\nu)} = \Xi_{\ell} (\lambda,\epsilon,\nu). $$ We define $\Psi_{\ell}$ on $\Pa_{st}(\ell)$, since the restriction to a fixed signature will then determine the desired bijection.\\[1ex] Let $(\lambda,\epsilon,\nu) \in \Pa_{st}(\ell)$ and $\mu_i=\lambda_i-\nu_i$. Then $i$ belongs to the set $\mathrm{Re}(\lambda,\epsilon,\nu) \subset \{ 1, \dots, \ell(\lambda) \}$ of removable rows if and only if one of the following holds: \begin{itemize} \item[(a)] $\nu_i \leq 0$, \item[(b)] there is some $j>i$ such that $\mu_j \geq \mu_{i}$ and $(\lambda_i,\epsilon_i)\neq (\lambda_j,\epsilon_j)$; or \item[(c)] there is some $j<i$ such that $\nu_{j} \leq \nu_i$. \end{itemize} In case $\ell=1$, this reduces to our definition of removable rows in section \ref{ssect:enc_transl}. The set $\mathrm{Re}(\lambda,\epsilon,\nu)$ leads to a coloured partition as follows. Define the partition $\hat{\lambda}=(\lambda_i)_{i\in \mathrm{Re}(\lambda,\epsilon,\nu)}$ and the colouring $\hat{\epsilon}=(\epsilon_i)_{i\in \mathrm{Re}(\lambda,\epsilon,\nu)}$. Then $(\hat{\lambda},\hat{\epsilon})$ is a coloured partition and $C(\hat{\lambda},\hat{\epsilon})$ is a circle diagram. The remaining part of $(\lambda,\epsilon,\nu)$ determines a Frobenius circle diagram: the circles are parametrized by $I:=\{1\leq i\leq \ell(\lambda)\mid i\notin \mathrm{Re}(\lambda,\epsilon,\nu)\}$. The $i$-th circle is of length $\lambda_i$, starts in position $\epsilon_i$ and is marked at the $\mu_i$-th vertex (clockwise). By construction, the mark is always in position $0$ and the marked circles form a Frobenius circle diagram. Let us denote this Frobenius circle diagram by $C_F(\lambda,\epsilon,\nu)$. We define $\Psi_{\ell}(\lambda,\epsilon,\nu):=(C(\hat{\lambda},\hat{\epsilon}), C_F(\lambda,\epsilon,\nu))$. \begin{theorem}\label{thm:johnson} The map $\Psi_{\ell}$ is a bijection such that $$ \Phi_{\ell} \circ \Psi_{\ell} (\lambda,\epsilon,\nu) = \mathcal{O}_{\Psi_{\ell}(\lambda,\epsilon,\nu)} = \Xi_{\ell} (\lambda,\epsilon,\nu). $$ \end{theorem} \begin{proof} We check that $\Phi_{\ell} \circ \Psi_{\ell}(\lambda,\epsilon,\nu) = \Xi_{\ell}(\lambda,\epsilon,\nu)$ for all $(\lambda,\epsilon,\nu)$. This implies that $\Psi_{\ell}$ is a bijection. Let $M = (v,X) \in \Xi_{\ell}(\lambda,\epsilon,\nu)$.
This means that there is a coloured normal basis $B=\{ v_{i,j} \}$ of $V$ such that $X$ is of cyclic normal form $(\lambda,\epsilon)$ and $v = \sum_{i = 1}^{\ell(\lambda)} v_{i,\nu_i}$. We wish to show that $M$ decomposes into $$ M\simeq M' \oplus\bigoplus_{i\in \mathrm{Re}(\lambda,\epsilon,\nu)} U(\epsilon_i,\lambda_i) $$ where $M'$ is the representation corresponding to $C_F(\lambda,\epsilon,\nu)$ via Theorem \ref{thm:cenc_indecs1}.\\[1ex] The proof is by induction on $\ell(\lambda)$. Assume that there exists $i \in \{1 ,\dots, \ell(\lambda) \}$ such that there is $j>i$ with $\mu_j \geq \mu_{i}$ and $(\lambda_i,\epsilon_i)\neq (\lambda_j,\epsilon_j)$. Then, repeating the argument in the proof of Lemma \ref{lem:enc_indec1n}, we deduce that $M \simeq M_1 \oplus U(\epsilon_i,\lambda_i)$, where $M_1\in\Xi_{\ell}(\lambda',\epsilon',\nu')$, and $(\lambda',\epsilon',\nu')$ is obtained from $(\lambda,\epsilon,\nu)$ by removing the $i$-th components of $\lambda$, $\epsilon$ and $\nu$. By induction, $\Xi_{\ell}(\lambda',\epsilon',\nu') = \Phi_{\ell} \circ \Psi_{\ell} (\lambda',\epsilon',\nu')$ and hence $\Xi_{\ell}(\lambda,\epsilon,\nu) = \Phi_{\ell} \circ \Psi_{\ell}(\lambda,\epsilon,\nu)$. We proceed in exactly the same way if $\nu_{j} \leq \nu_i$ for some $j<i$, or if $\nu_i\leq 0$: then $M$ has a direct summand isomorphic to $U(\epsilon_i,\lambda_i)$. We have shown that $$ M\simeq M' \oplus\bigoplus_{i\in \mathrm{Re}(\lambda,\epsilon,\nu)} U(\epsilon_i,\lambda_i). $$ The representation $M'$ is the indecomposable representation belonging to the Frobenius circle diagram $C_F(\lambda,\epsilon,\nu)$. Thus, it is indecomposable and we have decomposed $\Xi_{\ell} (\lambda,\epsilon,\nu)$ into a direct sum of indecomposables: $$ \Phi_{\ell} \circ\Psi_{\ell} (\lambda,\epsilon,\nu)= \Phi_{\ell}(C(\hat{\lambda},\hat{\epsilon}), C_F(\lambda,\epsilon,\nu))=\Xi_{\ell} (\lambda,\epsilon,\nu). $$ This completes the proof of the theorem. \end{proof} We end this section by giving an example. \begin{example} Consider the $4$-striped bi-partition $(\lambda,\epsilon,\mu)$, where $\lambda=(16,14,13,11,9,6,5,5,2)$, $\epsilon=(0,2,0,1,3,0,2,2,0)$ and $\mu=\lambda-\nu=(8,4,5,4,0,2,3,3,-2)$. It can be depicted as follows; the position of $\mu_i$ is highlighted: \begin{center} $ \scalebox{0.7}{ \begin{ytableau} 3&2&1&0&3&2&1&*(lightblue)0&3&2&1&0&3&2&1&0\\ \none&\none&\none&\none&3&2&1&*(lightblue)0&3&2&1&0& 3&2&1&0&3&2\\ \none&\none&\none&0&3&2&1&*(lightblue)0& 3&2&1&0&3&2&1&0\\ \none&\none&\none&\none&3&2&1&*(lightblue)0& 3&2&1&0&3&2&1\\ \none&\none&\none&\none&\none&\none&\none&\none&3&2&1&0& 3&2&1&0&3 \\ \none&\none&\none&\none&\none&\none&1&*(lightblue)0& 3&2&1&0\\ \none&\none&\none&\none&\none&2&1&*(lightblue)0& 3&2\\ \none&\none&\none&\none&\none&2&1&*(lightblue)0& 3&2\\ \none&\none&\none&\none&\none&\none&\none&\none&\none&\none&1 &0\\ \end{ytableau}}$\end{center} Then $\mathrm{Re}(\lambda,\epsilon,\mu)=\{2,3,5,6,8,9\}$.
The remaining rows yield a Frobenius circle diagram in the obvious way: \begin{center} $ \scalebox{0.7}{ \begin{ytableau} 3&2&1&0&3&2&1&*(lightblue)0&3&2&1&0&3&2&1&0\\ \none&\none&\none&\none&3&2&1&*(lightblue)0& 3&2&1&0&3&2&1\\ \none&\none&\none&\none&\none&2&1&*(lightblue)0& 3&2\\ \end{ytableau}}$\end{center} The rows which correspond to $\mathrm{Re}(\lambda,\epsilon,\mu)$ are removed and yield a circle diagram, where the circles start in (highlighted) positions $\epsilon_i$ and have lengths $\lambda_i$: \begin{center} $ \scalebox{0.7}{ \begin{ytableau} \none&\none&\none&\none&3&2&1&0&3&2&1&0& 3&2&1&0&3&*(lightred)2\\ \none&\none&\none&\none&\none&0&3&2&1&0& 3&2&1&0&3&2&1&*(lightred)0\\ \none&\none&\none&\none&\none&\none&\none&\none&\none&3&2&1&0& 3&2&1&0&*(lightred)3 \\ \none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&1&0& 3&2&1&*(lightred)0\\ \none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&2&1&0& 3&*(lightred)2\\ \none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&\none&1 &*(lightred)0\\ \end{ytableau}}$\end{center} The direct sum of indecomposable representations of the cyclic enhanced nilpotent cone can thus be read off directly. \end{example} \bibliographystyle{plain}
\subsection*{1. Introduction} In recent years, both quantum complexity and quantum information theory have had a substantial presence in the study of quantum black holes. In gauge/gravity duality, a deep connection between holographic quantum complexity and the horizon geometry has been proposed [1-4]. In the framework of AdS/CFT, and assuming ER=EPR [5], quantum complexity has been argued to ensure safe travels for infalling observers. The question of whether or not high energy quanta are present behind the horizon reduces to the question of whether Alice can decode a subfactor of the Hilbert space of the Hawking radiation before the complexity bound is saturated. So in the case of a two-sided AdS black hole, firewalls are present if either Alice acts with a maximally complex (\emph{i.e.} exponential in the entropy) unitary operator or if she waits for a classical recurrence time. Two important holographic duals have been proposed [4,5], namely the ``complexity=action'' and ``complexity=volume'' conjectures. The former relates the complexity of a boundary CFT to the action of the dual Wheeler-DeWitt patch in AdS, while the latter relates the complexity of a holographic state to the volume of a maximally extended spacelike hypersurface behind the horizon. On the other hand, the delocalization of quantum information between random subsystems with respect to the Hilbert space factorization, and the subsequent growth of entanglement, is a central point for studying the interior black hole region. Black holes with interior dynamics described as a quantum circuit [6] have been proved to be fast scramblers. In this framework they have been shown to scramble quantum information in time logarithmic in the number of degrees of freedom. That is, the dynamics takes an initially localized perturbation and makes it undetectable to any observer who fails to examine a significant fraction of the initial degrees of freedom. In turn, there is a growing consensus that the scrambling time is the appropriate time scale associated with the release of quantum information. In light of these advancements, we argue that calculating retention time scales is actually a question of relative state complexity. We claim that in such scenarios Alice cannot calculate the relative state complexity before the complexity bound is saturated. Alice has two options: she could either act with a maximally complex unitary operator, or act with a future precursor operator to the perturbed (late time) state and rely on extreme fine-tuning. Both options are shown to be computationally unrealizable for evaporating black holes. \subsection*{2. Black holes as random quantum circuits} In the current Section we describe black holes as random quantum circuits [6]. These are systems composed of $K$ degrees of freedom, with discrete time-step evolution $\varDelta\tau$ dictated by a universal gate set of 2-local gates. A gate set is a collection of gates (simple unitary transformations) which act on the qubit system at each time-step. For simplicity we choose our gate set to consist of 2-local gates, where each gate can act on no more than 2 qubits per time-step. \subsection*{2.1 Fast scramblers} Black holes have been proven to be the fastest scramblers in Nature [6,10]. Scrambling, a form of strong thermalization, is a process which stores information in highly nontrivial correlations between subsystems.
When chaotic thermal quantum systems with a large number of degrees of freedom undergo scrambling, their initial state, although not lost, becomes computationally very costly to recover. In this paper we assume that although the modes are scrambled they are still localized in a certain way across the horizon, Fig. 1. However, because of the strong thermalization, they remain \emph{indistinguishable} from the rest of the black hole degrees of freedom as far as Alice is concerned. Suppose Alice is outside the black hole, and throws a few qubits inside. From her perspective, those extra modes will be effectively diffused, \emph{i.e.} smeared across the horizon, in a scrambling time \begin{equation} t_{*}\sim\frac{\beta}{2\pi}\log N^{2} \end{equation} where $\beta$ is the inverse temperature, and $N$ is the number of degrees of freedom. As a result, a scrambling time after perturbing the black hole, Alice will no longer be able to distinguish those extra qubits. This statement is similar to the upper chaos bound for general thermal systems in the large $N$ limit {[}22{]}. In particular, the large $N$ factor is what initially keeps the commutators small. However, for $t>t_{*}$, scrambling yields rapid commutator growth, and so the distance between the initial and perturbed states in complexity geometry increases non-trivially, see Eqs. (12), (13). \begin{figure} \includegraphics[scale=0.6]{fig}\caption{Depiction of a black hole as a $(K+n)$-qubit system. Imagine the region inside the circle is the black hole interior, and the one outside of the circle is the exterior region. The red dots are the scrambled extra $n$ qubits embedded into the horizon. The present figure was inspired by Fig. 10 from {[}2{]}.} \end{figure} In strongly coupled thermal quantum systems chaos and scrambling are intimately related, which is why (1) is of particular interest to both quantum cloning and retention time scales, \emph{i.e.} the minimum time for information to begin leaking via Hawking radiation. Imagine Bob crosses the horizon carrying a qubit, and Alice hovers outside the black hole. It was shown {[}6{]} that by the time Alice recovers the perturbed qubit by collecting the outgoing modes and enters the black hole, Bob will have already hit the singularity. So a retention time scale of order the logarithm of the entropy, Eq. (2), is just enough to save black hole complementarity from paradoxes \begin{equation} t_{ret}\geq\frac{\beta}{2\pi}\log N^{2} \end{equation} Thus quantum cloning cannot be verified as long as the above bound is respected. Recent studies of quantum information and quantum gravity {[}6,7,8,12,13,14,15,16,17{]} support the scrambling time as the appropriate time scale at which black holes begin releasing information via Hawking radiation. Note that in such generic early evaporation scenarios, where quantum information begins leaking out on the order of the scrambling time, not every Hawking particle carries information, as this would make the retention time scale $t_{ret}\sim\log r_{S}$, which would violate the no-cloning bound (2). \subsection*{2.2 Qubit description of black holes} A quantum circuit is composed of gates, and describes the evolution of a quantum system of qubits. The gates may be defined to act on an arbitrary number of qubits, and to couple any given pair of them. The gates may act in succession or in parallel, where series-type quantum circuits are not good scramblers.
Here, we present a random quantum circuit with a time-dependent Hamiltonian which has been proven to scramble in time logarithmic in the entropy {[}11{]}. Consider a $K$-qubit analog of a Schwarzschild black hole in 3+1 dimensions, where \begin{equation} K\sim S_{BH}\sim\frac{A}{4G_{N}} \end{equation} where $A$ is the horizon area, and $G_{N}$ is Newton's constant. Let the $K$ qubits be in some initial pure state of the form \begin{equation} \left|\psi\right\rangle =\sum_{i}\alpha_{i}\left|i\right\rangle \end{equation} where the $\alpha_{i}$ are the amplitudes, and the $\left|i\right\rangle $ are the Hilbert space basis states. The state lives in a Hilbert space of $2^{K}$ dimensions. In this framework the Hamiltonian is given as {[}11{]} \begin{equation} H_{i}=\sum_{l<m}\sum_{\alpha_{l},\alpha_{m}=0}^{3}\sigma_{l}^{\alpha_{l}}\otimes\sigma_{m}^{\alpha_{m}}\varDelta B_{i,l,m,\alpha_{l},\alpha_{m}} \end{equation} where the $\varDelta B_{i,l,m,\alpha_{l},\alpha_{m}}$ denote independent Gaussians, the $\sigma^{\alpha}$ are Pauli matrices, and the eigenenergies live in a $2^{K}$ dimensional state space. Thus the evolution between 2 successive time-steps is \begin{equation} e^{-iH_{i}\Delta\tau}e^{-iH_{i+1}\Delta\tau} \end{equation} Here, there is an inverse relation between the time-step and the strength of the interactions. It was thus demonstrated by Hayden et al. in {[}11{]} that the time required to scramble $k$ degrees of freedom scales like $\log k$. The evolution of the $K$-qubit system is controlled by a random quantum circuit, composed of a universal gate set of 2-local gates, where we assume the gate set approximates Hamiltonian time evolution \begin{equation} U=\left\{ g_{i}\right\} \end{equation} Suppose $k$-local gates with $k>2$ are strictly penalized. In addition to the $k$-local restriction, we assume the gates have non-zero couplings only between nearest-neighbor qubits, similar to ordinary lattice Hamiltonians. Of course, in principle, nothing demands this particular locality constraint, and we could have easily allowed any arbitrary pair of qubits to couple. The evolution is divided into time steps $\varDelta\tau$, where at each \emph{time-step} a random gate set is chosen. The choice is random because at each time-step the gate set is picked via a time-dependent Hamiltonian governed by a stochastic probability distribution. Furthermore, the random choice must also determine which $k$ qubits the gate set will act on. Note that the random quantum circuit that we use allows at most $K/2$ gates to act in parallel at every time-step. We suggest that a natural time scale to associate with the intervals between successive time steps is the Schwarzschild radius, $\varDelta\tau\sim r_{S}$ (i.e. the light crossing time). One does not have to look further than elementary black hole mechanics to see why this is the case. For instance, in a freely evaporating black hole, $r_{S}$ is adiabatically decreasing. Consequently, the time intervals between successive time steps become shorter, and thus the black hole evaporates at a faster rate. This fits well with classical black hole thermodynamics \begin{equation} r_{S}\sim T^{-1}\sim\beta \end{equation} where $\beta$ denotes the inverse temperature. Since this random quantum circuit scrambles information logarithmically, throughout the paper we consider it to be an effective analog of general early evaporation models.
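For concreteness, a single time-step of this stochastic evolution can be written down explicitly for a toy number of qubits. The following Python sketch is our illustration, not the construction of {[}11{]} itself: it draws the Gaussian couplings $\varDelta B$ of Eq. (5) for all qubit pairs, i.e. without the nearest-neighbor restriction adopted above, builds the dense $2^{K}\times2^{K}$ Hamiltonian, and applies a few factors of Eq. (6).

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Pauli matrices sigma^0..sigma^3 (the identity is included, since the
# labels alpha_l, alpha_m in Eq. (5) run from 0 to 3)
PAULI = [np.eye(2),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]

def two_site_op(K, l, m, a, b):
    """Embed sigma_l^a tensor sigma_m^b into the full 2^K-dim space."""
    ops = [np.eye(2)] * K
    ops[l], ops[m] = PAULI[a], PAULI[b]
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def random_step_hamiltonian(K, rng):
    """One draw H_i of the stochastic 2-local Hamiltonian, Eq. (5)."""
    H = np.zeros((2**K, 2**K), dtype=complex)
    for l in range(K):
        for m in range(l + 1, K):
            for a in range(4):
                for b in range(4):
                    H += rng.normal() * two_site_op(K, l, m, a, b)
    return H

rng = np.random.default_rng(0)
K, dtau = 4, 0.1            # toy sizes only; the text has K ~ S_BH
psi = np.zeros(2**K, dtype=complex); psi[0] = 1.0
for _ in range(5):          # five time-steps of the product in Eq. (6)
    U = expm(-1j * dtau * random_step_hamiltonian(K, rng))
    psi = U @ psi
\end{verbatim}

Dense matrices restrict this sketch to very small $K$; it is only meant to make the time-step structure of the circuit explicit.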
\subsection*{2.3 Relative state complexity} In this Section we argue that calculating retention time scales for systems controlled by random quantum circuits is a question of relative state complexity. We show that calculating relative state complexity is extremely difficult, and that in order for Alice to carry out the computation she would need either to act with an exponentially complex unitary operator or to apply a future precursor operator to the perturbed state and rely on extreme fine-tuning. We can simply define circuit (gate) complexity as the minimum number of gates required to implement a particular state. The evolution of a quantum state via a time-dependent Hamiltonian resembles the motion of a non-relativistic particle through the homogeneous $SU(2^{K})$ space {[}18{]}. That is, the particle defines a trajectory between a pair of points, where one point corresponds to some initial state $\left|\psi\right\rangle $, while the second point corresponds to a perturbed state $\left|\psi'\right\rangle $, where $\left|\psi\right\rangle ,\left|\psi'\right\rangle $ $\in SU(2^{K})$. The particle thus moves on a $2^{K}$ dimensional state space. Without loss of generality the state evolution is given by the Schr\"odinger equation \begin{equation} i\frac{\partial\left|\psi\right\rangle }{\partial t}=H\left|\psi\right\rangle \end{equation} Given that two states are different to begin with, their relative state complexity naturally increases with time. A compelling argument was made in {[}18{]} that the naive way of defining the distance between two states does not capture the whole story. The classical Fubini-Study metric bounds the state distance as $d\in[0,\pi/2]$. Obviously, the upper bound can be easily saturated, and we need a different measure to quantify the relative state complexity between two states\emph{. }Due to the exponential upper bound of complexity, $C_{max}=e^{K}$, quantifying relative state complexity necessitates the use of a \emph{complexity metric}, that is, a notion of distance between states on $SU(2^{K})$ equipped with a non-standard metric; see Refs. {[}19,20{]}. Intuitively, the farther apart two states are on $SU(2^{K})$, the higher their relative state complexity is. Geometrically, we can think of relative state complexity as \emph{the minimum geodesic length in $SU(2^{K})$ which connects two states.} In light of the proposed random quantum circuit, and following the definition of circuit complexity, we define relative state complexity as \emph{the minimum number of time steps required to go from one quantum state to another.} Keep in mind that at every time step a random set of 2-local gates is chosen following a time-dependent Hamiltonian controlled by a stochastic probability distribution. So essentially, using Nielsen's approach {[}19{]}, we are interested in assigning a notion of distance (geodesic length) to the gate sets. Here the minimum length increase in complexity geometry sets a lower bound for the minimum complexity growth, which corresponds to acting with a single 2-local gate. More precisely, suppose Alice perturbs the $K$-qubit system immediately after its formation by $n$ qubits, where $n\ll K$, Fig. 1. We ask: what is the relative state complexity of $\left|\psi\right\rangle $ and $\left|\psi'\right\rangle $?
In other words, what is the minimum number of time steps $N(\varDelta\tau)$ in which Alice could time-reverse the perturbation \begin{equation} \left|\psi\right\rangle =U_{1}U_{2}U_{3}...U_{N(\Delta\tau)}\left|\psi'\right\rangle \end{equation} where $\left|\psi\right\rangle $ belongs to the $K$-qubit system and $\left|\psi'\right\rangle $ to the $(K+n)$-qubit system. Let's now turn to the main objective of this paper, which is to address the question: $\vphantom{}$ \emph{In a young black hole, can Alice calculate the relative state complexity of $\left|\psi\right\rangle $ and $\left|\psi'\right\rangle $ in time less than $2^{K}$? } $\vphantom{}$ In our case calculating the relative state complexity means counting the number of time steps in which the extra $n$ qubits are radiated to infinity. Our claim is that in implementing $U$, Alice cannot beat $2^{K}$ because of the causal structure of the black hole spacetime. Alice does not have access to all the relevant degrees of freedom, which dramatically increases the computational complexity. Therefore, the inability to implement $U$ faster than $2^{K}$ not only renders the computation unrealizable for astrophysical black holes, but also takes away Alice's predictive power. We will now look at the two ways Alice could hope to estimate (10). Namely, she could either apply gate sets to the radiated qubits, or act with an extremely fine-tuned precursor. \subsubsection*{2.3.1 How fast can Alice calculate the relative state complexity?} In this subsection we consider the Harlow-Hayden reasoning {[}16{]} but for the case of a young black hole. Recall that in {[}16{]} Harlow and Hayden argued that AMPS' conjectured violation of the equivalence principle after the Page time is computationally unrealizable for black holes formed by sudden collapse, since it requires complicated measurements on the emitted Hawking cloud. We now study the limit of the proposed $2^{k+m+r}$ complexity bound, and demonstrate that it is strong enough to hold even in the case where (i) the entanglement entropy is still low, and (ii) $\mathcal{H}_{R}\ll\mathcal{H}_{BH}$. Here we employ a standard Hilbert space decomposition where $k$ are the black hole qubits in $\mathcal{H}_{BH}$, $m$ are the qubits of the black hole atmosphere in $\mathcal{H}_{B}$, and $r$ are the emitted qubits in $\mathcal{H}_{R}$, whose dimensionality grows as the black hole evaporates. We assume Alice can only manipulate the $r$ qubits, and that all outside observers must agree on $\mathcal{H}_{B}\otimes\mathcal{H}_{R}$. For her this is essentially a decoding problem, where she acts with the unitary transformation $U$ on $\mathcal{H}_{R}$. Alice's goal is to decode $\mathcal{H}_{R}$ in search of the extra $n$ qubits, and count the number of time steps in which they were radiated away. To demonstrate more clearly the robustness of the $2^{k+m+r}$ complexity bound, suppose we violate the fast scrambling conjecture {[}11{]}. The violation is in the sense that Alice can recognize the perturbed $n$ qubits more easily, and doesn't need to decode a significant part of all the system's degrees of freedom. Even in this case, however, we argue there is an overwhelming probability that Alice \emph{cannot} beat $2^{k+m+r}$. Since the scrambling time is the shortest time-scale compatible with the no-cloning bound (2), one might naively expect that Alice can time-reverse the perturbation in time comparable to the scrambling time. Considering the exponentially high upper bound of complexity, however, we can easily see that this is not the case {[}2{]}.
Even though the scrambling time is negligible compared to the time-scale associated with reaching maximum complexity, in a scrambling time the complexity of the system already scales as \begin{equation} C_{*}=S\log S \end{equation} Although nowhere near the upper bound of $C_{max}=e^{K}$, the scrambling complexity is high enough to make the computation extremely difficult. From a geometric perspective, by the scrambling time, due to quantum chaos, the trajectories of the 2 points on $SU(2^{K})$ diverge exponentially, Fig. 2. \begin{figure} \includegraphics[scale=0.53]{fig1} \caption{A pair of points and their trajectories on complexity geometry $SU(2^{K})$. They are initially arbitrarily close, i.e. low relative state complexity. At the scrambling time, however, their trajectories diverge, and the distance between them grows exponentially. } \end{figure} Despite being initially arbitrarily close, given they are separated to begin with, in just a scrambling time the distance (i.e. relative state complexity) between them grows exponentially. Let's further illustrate the point with the use of an out-of-time-order correlator (OTOC) $C(t)$. OTOCs are used for measuring quantum chaos {[}24,25{]}. In particular, OTOCs describe how initially commuting operators develop into non-commuting ones. Suppose $A(0)$ and $B(t)$ are simple Hermitian operators, where $B$ is just a time-evolved $A$. The relevant quantity is the thermal average of the squared commutator {[}22{]} \begin{equation} C(t)=-\left\langle [B(t),A(0)]^{2}\right\rangle _{\beta} \end{equation} Initially, for $t\ll t_{*}$, the correlator is approximately constant; due to quantum chaos it then grows rapidly around the scrambling time. After the scrambling time, regardless of $A$ and $B$, the correlator takes the form \begin{equation} C(t)=2\left\langle BB\right\rangle \left\langle AA\right\rangle \end{equation} The decay of the out-of-time-order part is associated with the rapid growth of the commutator, which becomes highly non-trivial at the scrambling time. Note that for small $t$, the large number of black hole degrees of freedom suppresses the commutator. Therefore, the scrambling time is enough to make the operators very complicated, and thus the distinguishability between them non-trivial. So what can Alice do? The obvious thing to do would be to brute-force the computation. In this case Alice will have to first artificially group the $r$ qubits in $\mathcal{H}_{R}$ into different sets, and then apply a complex unitary transformation to those sets in search of the extra $n$ qubits. The difficulty here is to construct a unitary transformation which acts on a particular set of qubits {[}16{]}. Unlike the Harlow-Hayden argument, where Alice tries to decode a subfactor of $\mathcal{H}_{R}$ in order to verify entanglement with $\mathcal{H}_{B}$, here the decoding task is especially complicated given that Alice will have to engineer multiple such unitary transformations because of the monotonically growing dimensionality of $\mathcal{H}_{R}$. It seems that even with the assumption we made that Alice need not probe a significant part of all the initial degrees of freedom to recognize the extra qubits, the computation remains extremely non-trivial. Of course, Alice could try to use some clever tricks to carry out the computation faster than $2^{k+m+r}$. For instance, she could try to impose some special structure on the unitary transformation and, in particular, on how it evolves with time. She could engineer $U$ to allow specific sequences of gate sets to act every time-step on preferred qubit sets. (The commutator growth underlying these difficulties is illustrated in the toy sketch below.)
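The qualitative behavior of (12) and (13) can be reproduced in a small chaotic toy model. The Python sketch below is our illustration: a mixed-field Ising chain stands in for the black hole dynamics, and the infinite-temperature average replaces $\langle\cdot\rangle_{\beta}$. It shows $C(t)$ starting at zero for initially commuting operators and growing once the operators spread.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def site_op(K, j, op):
    mats = [I2] * K; mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Mixed-field Ising chain: a standard chaotic toy model.
K = 5
H = sum(site_op(K, j, Z) @ site_op(K, j + 1, Z) for j in range(K - 1)) \
  + sum(1.05 * site_op(K, j, X) + 0.5 * site_op(K, j, Z) for j in range(K))

A = site_op(K, 0, Z)           # A(0), localized on the first site
B0 = site_op(K, K - 1, Z)      # B sits on the far end and is evolved
for t in np.linspace(0.0, 6.0, 7):
    U = expm(-1j * H * t)
    Bt = U.conj().T @ B0 @ U                 # Heisenberg picture B(t)
    comm = Bt @ A - A @ Bt
    C = -np.trace(comm @ comm).real / 2**K   # infinite-T version of Eq. (12)
    print(f"t = {t:3.1f}   C(t) = {C:.4f}")
\end{verbatim}

At $t=0$ the operators commute exactly ($C=0$); the growth of $C(t)$ with $t$ is the toy analog of the commutator growth described above.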
However, such modifications of $U$ can only account for very small changes, which are not enough to speed up the computation to reasonable time scales. Another possibility is for Alice to manipulate the adiabatically growing number of $r$ qubits and make them interact in a preferred manner. By making use of the smaller dimensionality of $\mathcal{H}_{R}$, she could form sets of qubits, establish certain connections between them, and choose which sets interact at every time step. However, despite $r\ll k$, engineering such a connection between the $r$ qubits would obviously require introducing additional degrees of freedom, which would scale exponentially in $r$. Thus the computation becomes very complex even for relatively small $r$. In fact, by trying to fine-tune the qubits in this way, Alice makes her decoding task harder. Clearly, even in the case of a young black hole, and making the unphysical assumption that Alice need not decode a large part of the entropy, the computation remains very hard. Even though there is nothing, in principle, that prevents the first $n$ emitted qubits from being the perturbed ones, this is exponentially unlikely. So even with weak violations of the fast scrambling conjecture the $2^{k+m+r}$ bound holds with overwhelming probability. What about black holes which have evaporated more than half of their initial degrees of freedom? One could hope to speed up the computation by letting the black hole evaporate past its Page time, and applying certain gate sets on $\mathcal{H}_{R}$. For example, once $\mathcal{H}_{R}>\mathcal{H}_{BH}$, ancillary qubits could be introduced and entangled with subfactors of $\mathcal{H}_{R}$. Then one could apply gates to those subfactors in an effort to implement $U$ faster than $2^{k+m+r}$. Unfortunately, as long as the extra qubits scale like $r$, the computation does not get any faster. Therefore, unless Alice finds a way to calculate the relative state complexity which does not involve an exponential number of gates, the $2^{k+m+r}$ time-scale remains solid. \subsubsection*{2.3.2 Precursors and extreme fine-tuning} Here we examine Alice's second attempt at calculating the relative state complexity, which now involves applying a future precursor operator to the late-time state. For simplicity, we study the case using a generic time-independent Hamiltonian but expect the main conclusions to hold for time-dependent ones, too. Alice's task here is to adjust the late-time state and time-reverse it. Effectively, this means running the chaotic black hole dynamics backwards. We show that this process of time-reversing (10) by applying a future precursor operator to the perturbed state is notoriously difficult, and Alice still cannot beat $2^{k+m+r}$. The particular argument should be considered in the context of complexity geometry on $SU(2^{K})$. The laws of physics are time-reversible, so any state perturbation that we introduce could be reversed. Naturally, however, for $t>t_{*}$ complexity tends to increase linearly, at a rate set by $K$, until it saturates the bound of $C_{max}=e^{K}$. Therefore, after the scrambling time we expect a linear increase of the relative state complexity between $\left|\psi\right\rangle $ and $\left|\psi'\right\rangle $ as $t$ grows. Geometrically, this corresponds to a linear growth of the minimum geodesic length connecting the two states in $SU(2^{K})$. Let's analyze the same example of Alice perturbing the $K$-qubit system immediately after its formation with $n$ qubits.
As we already saw, whatever Alice does, she cannot carry out the computation faster than $2^{k+m+r}$. Determined to calculate the relative state complexity before the complexity bound is saturated, however, imagine she now acts with a specific operator, namely a future precursor operator {[}21{]}. A future precursor operator $P_{p}^{+}$ is a highly non-local operator which, when applied at a certain time, simulates acting with a local operator $P$ at an earlier time \begin{equation} P_{p}^{+}=U(t)PU^{\dagger}(t) \end{equation} where $U(t)=e^{-iHt}$. Generally, calculating a precursor operator for $\Delta t\geq t_{*}$ is extremely difficult, as one has to keep track of all the interactions of the degrees of freedom of the system. The computational costs grow immensely in cases involving black holes because they not only have a large number of degrees of freedom but also saturate the chaos bound {[}22{]}. For the first scrambling time after perturbing a black hole, the complexity growth is governed by the Lyapunov exponent, and hence is exponential {[}23{]}. Black holes are the fastest scramblers in Nature, and due to their chaotic dynamics, only a scrambling time after the perturbation all of the degrees of freedom ($K\sim10^{77}$ for a solar mass black hole) will have indirectly interacted. Evidently, the precursor operator quickly becomes extremely difficult to calculate. Whatever Alice does to implement the precursor, she must time-reverse all of the interactions between the degrees of freedom of the black hole, which requires an extreme degree of fine-tuning. Regardless of the exponential complexity, however, individual interactions remain well defined. In our case acting with the precursor operator takes the general form \begin{equation} \left|\psi\right\rangle =e^{-iHt}Pe^{iHt}\left|\psi'\right\rangle \end{equation} Similar to the evolution of a quantum state via a time-dependent Hamiltonian, the action of a future precursor resembles a backward motion of a particle through complexity geometry. The high complexity of (14) corresponds to the complexity associated with constructing a thermofield-double state using only $t<0$ degrees of freedom, see Ref. {[}21{]}. In both cases, due to the large number of degrees of freedom, extreme fine-tuning is required. Even a mistake of order a single qubit will accumulate, and result in a completely different end-state. The system only becomes more sensitive to errors as the time separation increases. Therefore, unlike regular unitary operators, which need not always be complex, precursors are typically extremely complex and unstable to perturbations (the butterfly effect) whenever the time separation is at least of order the scrambling time. Expanding (15) for $\Delta t\sim t_{*}$ yields \begin{equation} \left|\psi\right\rangle =e^{-iH(t_{*}-t_{i})}Pe^{iH(t_{*}-t_{i})}\left|\psi'\right\rangle \end{equation} where $t_{i}$ is the initial time when the $K$-qubit system was perturbed. Notice we have restricted our analysis to exclude the cases $\Delta t\ll t_{*}$ and $\Delta t\gg t_{*}$. The former case was discussed in Ref. {[}23{]}, where it was argued that before the scrambling time the distance in complexity geometry between the initial and perturbed states remains approximately constant. Initially, for $t<t_{*}$, the large $N$ terms keep the commutators relatively small. So scrambling is what drives the rapid distance growth in $SU(2^{K})$.
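To make the required fine-tuning concrete, the instability of (14)-(15) can be illustrated numerically. In the toy Python sketch below (the same chaotic spin chain as in the OTOC example; all numbers are illustrative), Alice implements Eq. (15) with a slightly mis-modelled Hamiltonian, and the overlap with the exactly reconstructed state tends to degrade as the time separation grows.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]]); Z = np.array([[1, 0], [0, -1]])
I2 = np.eye(2)

def site_op(K, j, op):
    mats = [I2] * K; mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

K = 5
H = sum(site_op(K, j, Z) @ site_op(K, j + 1, Z) for j in range(K - 1)) \
  + sum(1.05 * site_op(K, j, X) + 0.5 * site_op(K, j, Z) for j in range(K))

rng = np.random.default_rng(1)
psi_p = rng.normal(size=2**K) + 1j * rng.normal(size=2**K)
psi_p /= np.linalg.norm(psi_p)   # stand-in for the late-time state |psi'>

P = site_op(K, 2, X)             # the local operator P of Eq. (14)
dV = 0.02 * site_op(K, 1, Z)     # a tiny error in Alice's model of H

for t in (1.0, 2.0, 4.0, 8.0, 16.0):
    exact = expm(-1j * H * t) @ P @ expm(1j * H * t) @ psi_p     # Eq. (15)
    sloppy = expm(-1j * (H + dV) * t) @ P @ expm(1j * (H + dV) * t) @ psi_p
    print(f"t = {t:5.1f}   overlap = {abs(np.vdot(exact, sloppy))**2:.4f}")
\end{verbatim}

Already for this 5-qubit toy, a percent-level mis-modelling of the dynamics spoils the reconstruction at larger time separations; for $K\sim10^{77}$ chaotic degrees of freedom the required fine-tuning is hopeless.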
On the other hand, the latter case is unnecessary since, as we showed, the computation already becomes unmanageable in just $t_{*}$. Therefore, due to the chaotic black hole dynamics and the great deal of fine-tuning required, the probability of Alice implementing the precursor (and thus calculating the relative state complexity) without making a mistake of even a single qubit is exponentially small. In conclusion, we see that due to the causal structure of the black hole geometry and the chaotic dynamics, there is nothing Alice can do that would allow her to calculate the relative state complexity faster than $2^{k+m+r}$. This exponential time scale, however, is only applicable to AdS black holes, since astrophysical black holes evaporate long before the complexity bound is saturated. So in the case of a black hole formed by sudden collapse there are two very general scenarios, associated with minimum and maximum retention time scales, $t_{min}$ and $t_{max}$, respectively. Obviously, the fastest retention time possible which obeys the no-cloning bound (2) is \begin{equation} t_{min}\sim\mathcal{O}\left(\frac{\beta}{2\pi}\log N^{2}\right) \end{equation} up to some constant. This is similar to the Hayden-Preskill result {[}6{]} concerning the mirror-like dynamics of an old black hole. On the other hand, the longest retention time for astrophysical black holes would be of order the evaporation time $t_{ev}$ \begin{equation} t_{max}\sim\mathcal{O}(M^{3}) \end{equation} Usually, such retention time scales are associated with remnants, which have been seriously questioned due to the apparent violation of the Bekenstein entropy bound. \subsection*{3. Conclusions} The goal of this paper was to argue that calculating retention time scales is a decoding task, and a problem of relative state complexity. Our claim was that Alice cannot calculate the relative state complexity between the initial and perturbed states before the complexity bound is saturated. We considered a quantum system of $K$ qubits whose interactions are dictated by a random quantum circuit, and assumed the gate sets approximate a Hamiltonian evolution. In this framework, at every time-step the quantum circuit implements a random set of 2-local gates according to a time-dependent Hamiltonian with a stochastic probability distribution. In this setting we perturbed the $K$-qubit system with $n$ qubits, and assumed that (i) $\mathcal{H}_{R}\ll\mathcal{H}_{BH}$, (ii) the black hole begins releasing information a scrambling time after its formation, and (iii) nothing in principle prevents the first $n$ emitted qubits from being the perturbed ones. We examined several techniques Alice could use to decode $\mathcal{H}_{R}$, and showed she cannot beat $2^{k+m+r}$. We demonstrated there is an overwhelming probability that Alice cannot decode $\mathcal{H}_{R}$ in time less than $2^{k+m+r}$, unless she acts with an exponentially complex unitary operator or applies an extremely fine-tuned future precursor operator to the perturbed state in $SU(2^{K})$, which renders the computation unrealizable for evaporating black holes. In summary, we made the case that the $2^{k+m+r}$ bound proposed by Harlow and Hayden {[}16{]} holds strong even for young black holes.
\section{Introduction} Disclosing the phase structure of strongly interacting matter is one of the major motivations in the study of finite-temperature quantum chromodynamics (QCD). The volume independence of susceptibilities from lattice QCD data confirms a crossover chiral and deconfinement transition at small values of the baryochemical potential $\mu_{\rm B}$ \cite{Aoki:2006we,Borsanyi:2010bp}. In the regime of large densities, studies of effective models like the linear sigma or Nambu-Jona-Lasinio (NJL) model suggest a first-order phase transition and a critical end point (CEP) \cite{Scavenius:2000qd,Schaefer:2007pw,Fukushima:2008wg,Herbst:2010rf}. This is further supported by investigations within the approach of Dyson-Schwinger equations \cite{Fischer:2014ata}. Due to various approximations and limitations of all these methods there is, however, no agreement about the location of the CEP and the transition line. During the RHIC beam energy scan, STAR has recently reported measurements of the directed flow \cite{Adamczyk:2014ipa}, which might be interpreted as experimental evidence for a first-order phase transition. In order to determine the transition temperature and chemical potential experimentally, quantities which may signal a chiral phase transition are required. Of special interest in this context are susceptibilities of conserved charges like the net-baryon number or electric charge, which have been shown to display a peak at a crossover and first-order phase transition and a divergence at a CEP \cite{Redlich:2006rf,Schaefer:2006ds}. Such fluctuations have been proposed as an experimental observable for the detection of a CEP in a heavy-ion collision \cite{Stephanov:1998dy,Stephanov:1999zu}. However, measurements in the NA49 experiment could hardly find any non-monotonic behavior \cite{Alt:2008ab,Anticic:2008aa}. It has later been shown that higher moments or cumulants and their ratios are even more sensitive to a critical structure \cite{Stephanov:2008qz}. Of particular interest here is the kurtosis, a volume-independent quantity which becomes negative on the crossover side of the CEP \cite{Skokov:2010uh,Stephanov:2011pb}. The beam-energy scan carried out by the STAR collaboration was able to find deviations of the kurtosis from hadron resonance gas and UrQMD transport model calculations \cite{Aggarwal:2010wy,Adamczyk:2013dal}. As an alternative to the measurement of fluctuations, it has been shown in \cite{Asakawa:2008ti} that the ratio of antiprotons to protons is sensitive to the presence of a CEP due to a focusing of the isentropic trajectories. It is important to note that all these predictions have been made under the assumption that the phase transition takes place in equilibrium, resulting both in divergent fluctuations at a CEP and finite ones at the first-order transition. However, the system produced in a heavy-ion collision is rapidly expanding and cooling, which makes it necessary to consider dynamical effects. Besides the finite system size, critical slowing down is expected to influence the dynamics near a CEP. This has been demonstrated phenomenologically in \cite{Berdnikov:1999ph} and within a nonequilibrium fluid dynamical model in \cite{Nahrgang:2011mv,Herold:2013bi}. At a dynamical first-order phase transition, spinodal instabilities play a crucial role. Including them in the NJL model, the authors of \cite{Sasaki:2007db} succeeded in demonstrating that the quark number susceptibility diverges all along the isothermal spinodals.
These divergences result from the convex structure of the pressure and the presence of a mechanically unstable region. Consequently, one would expect large fluctuations not only at a CEP but also, and possibly stronger, at a first-order phase transition. Here, the fast collective expansion of matter produced after the collision of two nuclei should lead to the formation of a supercooled phase \cite{Csernai:1995zn,Zabrodin:1998dk,Keranen:2002sw,Nahrgang:2011vn}. If nucleation times are large, this phase will spinodally decompose \cite{Mishustin:1998eq,Randrup:2009gp,Randrup:2010ax}, leading to domain formation in the order parameter fields \cite{Herold:2013bi} and non-uniform structures like droplets in the baryon density, driven by pressure gradients. The subsequent hadronization of such droplets would result in non-statistical multiplicity fluctuations and an enhancement of higher flow harmonics \cite{Steinheimer:2012gc,Herold201414}. In order to draw final conclusions from the experimental data, models for a dynamical phase transition including critical behavior as well as finite size and time effects are required. This would also allow predictions for future experiments at FAIR \cite{Friman:2011zz} and NICA \cite{nica:whitepaper}, which will cover the region of high densities in the QCD phase diagram. In this article we present a study of event-by-event fluctuations from a fully dynamical model of heavy-ion collisions. Starting from a linear sigma model with dilatons \cite{Sasaki:2011sd}, we couple a fluid of quarks and gluons to the explicit propagation of the sigma field as the chiral order parameter and the dilaton representing a gluon condensate. Such an ansatz was first pursued in \cite{Mishustin:1998yc}, where the production and collapse of vacuum bubbles was observed during the expansion of the chiral fluid. We go beyond this study by augmenting the classical Euler-Lagrange equations for the sigma field with terms for dissipation and noise, considering the proper and full nonequilibrium dynamics arising from the interaction of the locally thermalized fluid with the out-of-equilibrium evolution of the field. The corresponding Langevin equation has been derived self-consistently in \cite{Nahrgang:2011mg}. In former dynamical studies the gluons were included on the basis of the Polyakov loop \cite{Herold:2013bi}, a static quantity defined in Euclidean space-time. In contrast to this, the dilaton field has two advantages: First, it comes with a kinetic term in the Lagrangian, making its dynamics straightforward to derive. Second, the problem of negative pressures at a first-order phase transition in the Polyakov loop model \cite{Steinheimer:2012gc} can be avoided. We begin with a description of the model and the equations of motion in Sec.~\ref{sec:model}, followed by the calculation of the quark number susceptibility and kurtosis at a nonequilibrium first-order phase transition in the regime of low temperatures in Sec.~\ref{sec:suscep}. In Sec.~\ref{sec:trajectories}, we focus on the impact of a nonequilibrium evolution on fluctuation observables by determining the variance and kurtosis of the net-baryon number distribution in an event-by-event study. We conclude with a summary and outlook in Sec.~\ref{sec:summary}.
\section{Nonequilibrium chiral fluid dynamics} \label{sec:model} We provide a dynamical nonequilibrium model based on a linear sigma model with a dilaton field \cite{Sasaki:2011sd}, for which the Lagrangian density reads \begin{eqnarray} \label{eq:Lagrangian} {\cal L}&=&\overline{q}\left(i \gamma^\mu \partial_\mu-g_{\rm q} \sigma\right)q + \frac{1}{2}\left(\partial_\mu\sigma\right)^2 + \frac{1}{2}\left(\partial_\mu\chi\right)^2 + {\cal L}_A- U_{\sigma}-U_{\chi}~, \\ \label{eq:LagrangianA} {\cal L}_A&=&-\frac{1}{4}A_{\mu\nu}A^{\mu\nu}+\frac{1}{2}g_A^2\left(\frac{\chi}{\chi_0}\right)^2 A_\mu A^\mu~, \\ U_{\sigma}&=&\frac{\lambda^2}{4}\left[\sigma^2-f_{\pi}^2\left(\frac{\chi}{\chi_0}\right)^2\right]^2-h\left(\frac{\chi}{\chi_0}\right)^2\sigma~, \\ U_{\chi}&=&\frac{1}{4}B\left(\frac{\chi}{\chi_0}\right)^4\left[\ln\left(\frac{\chi}{\chi_0}\right)^4-1\right]~. \end{eqnarray} In addition to the usual linear sigma model, which describes the melting of the chiral condensate $\sigma\sim\langle\bar q q\rangle$ at high temperatures or net-baryon densities, it includes a dilaton or glueball field which we may identify with the gluon condensate $\langle A_{\mu\nu}A^{\mu\nu}\rangle$. The term ${\cal L}_A$ in the Lagrangian stands for a constituent gluon field $A_\mu$ which acquires mass from the nonvanishing expectation value of the gluon condensate $\langle\chi\rangle$. Its field strength tensor is defined as $A_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$. We consider only the light quarks $q=(u,d)$ in our present study. The model captures essential features of QCD in the strong coupling regime: the spontaneous breakdown of chiral symmetry and the trace anomaly. An alternative approach including the Polyakov loop as the thermal Wilson line over the color-electric field $A_0$ has been pursued in \cite{Herold:2013bi}. However, this Polyakov-quark-meson (PQM) model \cite{Schaefer:2004en} yields negative values of the pressure at a first-order phase transition with spinodal instabilities \cite{Steinheimer:2013xxa}. The masses of both the constituent quarks and gluons are dynamically generated via $m_{\rm q}=g_{\rm q}\sigma$ and $m_A=g_A\chi/\chi_0$. From this the coupling constants are fixed by reproducing the vacuum nucleon and glueball masses $3m_{\rm q}=m_N=940$~MeV and $2m_A=m_G=1.7$~GeV. The term $h=f_\pi m_\pi^2$ with the pion decay constant $f_\pi=93$~MeV and the pion mass $m_\pi=138$~MeV explicitly breaks chiral symmetry. In vacuum, the energy density equals $B/4=0.76~\mbox{GeV}/\mbox{fm}^3$, which determines the bag constant $B$. The dimensionful parameter $\chi_0$ is obtained from setting the squared vacuum glueball mass $m_G^2$ equal to the second derivative of the potential $U_{\chi}$. The self-coupling of the chiral field can be evaluated as $\lambda^2=\frac{m_\sigma^2-m_\pi^2}{2f_\pi^2}$ and thus depends on the vacuum sigma mass. As shown in \cite{Sasaki:2011sd}, a value of $m_\sigma=900$~MeV yields a reasonable behavior of the gluon condensate around the chiral transition in comparison with lattice QCD data. At nonzero baryochemical potential, lattice calculations have to rely on sophisticated methods to circumvent the infamous sign problem via reweighting \cite{Fodor:2001pe} or an imaginary chemical potential \cite{deForcrand:2002ci}. The results obtained are, however, still inconclusive about both the existence and position of a CEP.
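For reference, the vacuum parameter fixing described above can be collected in a few lines. The Python sketch below assumes the standard vacuum values $\sigma_{\rm vac}=f_\pi$ and $\chi_{\rm vac}=\chi_0$, which the potentials above imply but which are not stated explicitly; all values are in GeV units.

\begin{verbatim}
# Vacuum parameter fixing (GeV units); sigma_vac = f_pi and
# chi_vac = chi_0 are assumptions consistent with the potentials above.
fpi, mpi = 0.093, 0.138
mN, mG, msigma = 0.940, 1.700, 0.900

g_q = (mN / 3.0) / fpi      # m_q = g_q * sigma, with 3 m_q = m_N
g_A = mG / 2.0              # m_A = g_A * chi/chi_0, with 2 m_A = m_G
h = fpi * mpi**2            # explicit symmetry breaking term, GeV^3
lam2 = (msigma**2 - mpi**2) / (2 * fpi**2)

print(f"g_q = {g_q:.2f}, g_A = {g_A:.2f} GeV, "
      f"h = {h:.5f} GeV^3, lambda^2 = {lam2:.1f}")
\end{verbatim}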
The corresponding phase diagram of our model has a chiral critical point at temperature $T_{\rm CEP}=89$~MeV and quark chemical potential $\mu_{\rm CEP}=329$~MeV with an adjacent first-order phase transition line. The critical chemical potential here is rather large compared with recent results from Dyson-Schwinger equations \cite{Fischer:2014ata} or a chiral quark-hadron model \cite{Dexheimer:2009hi}, which predict values of $\mu_{\rm CEP}$ below $200$~MeV. On the other hand, a similarly large $\mu_{\rm CEP}$ and correspondingly small $T_{\rm CEP}$ are predicted by a PQM model supplemented with fluctuations \cite{Herbst:2010rf} or the mean-field NJL model \cite{Scavenius:2000qd}. The linear sigma model with dilatons and also the PQM model approach the Stefan-Boltzmann limit at large temperatures, which the standard linear sigma model fails to reproduce \cite{Sasaki:2011sd}. At large chemical potentials, however, the equation of state for the PQM model predicts negative values of the pressure in the region with spinodal instabilities. In contrast to that, the equation of state is well-behaved for the model with dilatons. Above a temperature of $250$~MeV, scale symmetry is restored in the dilaton model; there, the model predicts a strong first-order phase transition which is not seen in lattice QCD. This defect is nevertheless negligible for our studies, as we are not going to probe the regime of deconfined gluons but focus on the chiral transition only. Within the mean-field approximation, the effective thermodynamic potential is obtained by a path integration over the quark and gluon fields \begin{equation} V_{\rm eff}=\Omega_{q\bar q}+\Omega_{A}+U_{\sigma}+U_{\chi}+\Omega_0~. \end{equation} The quark and gluon contributions can be evaluated as \begin{eqnarray} \Omega_{\rm q\bar q}&=&-2 N_f N_c T\int\frac{\mathrm d^3 p}{(2\pi)^3} \left\{\ln\left[1+\mathrm e^{-\frac{E_{\rm q}-\mu}{T}}\right]+\ln\left[1+\mathrm e^{-\frac{E_{\rm q}+\mu}{T}}\right]\right\}~, \\ \Omega_{A}&=&2 (N_c^2-1) T\int\frac{\mathrm d^3 p}{(2\pi)^3} \left\{\ln\left[1-\mathrm e^{-\frac{E_A}{T}}\right]\right\}~, \end{eqnarray} with the quasiparticle energies $E_{\rm q}=\sqrt{p^2+m_{\rm q}^2}$ and $E_A=\sqrt{p^2+m_A^2}$, respectively. Here and in the following, $\mu=\mu_{\rm B}$ denotes the quark chemical potential. A constant term $\Omega_0$ is added to ensure zero potential and pressure in vacuum. Having integrated out the quark and gluon degrees of freedom, we treat them as an ideal fluid to mimic the quark-gluon plasma at high energy densities. The fluid is described by the energy-momentum tensor $T^{\mu\nu}=(e+p)u^\mu u^\nu-p g^{\mu\nu}$ and the quark number current $N_{\rm q}^\mu=n_{\rm q} u^\mu$. The pressure is given by \begin{equation} \label{eq:pressure} p = -\Omega_{q\bar q}-\Omega_{A}~, \end{equation} from which the energy and quark densities are obtained via the standard thermodynamic relations $e=T\partial p/\partial T +\mu n_{\rm q}-p$ and $n_{\rm q}=\partial p/\partial \mu$. A full nonequilibrium dynamics for the coupled system of the sigma field and quarks has been derived in \cite{Nahrgang:2011mg} from the two-particle irreducible effective action. We adopt the result for the extended model with gluons and dilatons, the Langevin equation of motion for the sigma field reading \begin{equation} \label{eq:eomsigma} \partial_\mu\partial^\mu\sigma+\eta_{\sigma}\partial_t \sigma+\frac{\delta V_{\rm eff}}{\delta\sigma}=\xi_{\sigma}~.
\end{equation} In addition to the classical Euler-Lagrange equation it contains a damping coefficient $\eta_\sigma$ and a stochastic noise field $\xi_{\sigma}$, describing dissipation and noise in the thermalized heat bath of quarks and gluons. Physically, the damping occurs due to the decay of a sigma into a quark-antiquark pair. As the sigma meson becomes light around the phase transition, $\eta_\sigma$ also decreases and eventually vanishes at the critical point. Its explicit form is given by \begin{equation} \label{eq:dampingcoeff} \eta_{\sigma}=\frac{12 g^2}{\pi}\left[1-2n_{\rm F}\left(\frac{m_\sigma}{2}\right)\right]\frac{1}{m_\sigma^2}\left(\frac{m_\sigma^2}{4}-m_{\rm q}^2\right)^{3/2}~. \end{equation} We work in the approximation of Gaussian white noise, with the noise field correlator \begin{equation} \label{eq:dissfluctsigma} \langle\xi_{\sigma}(t,\vec x)\xi_{\sigma}(t',\vec x')\rangle_\xi=\delta(\vec x-\vec x')\delta(t-t')m_\sigma\eta_{\sigma}\coth\left(\frac{m_\sigma}{2T}\right)~. \end{equation} Similar to the sigma in the quark fluid, one might expect the dilaton in the gluonic medium to be damped, too. The corresponding process would be the emission of two gluons $\chi\rightarrow\chi+\rm g+g$, according to Eq. (\ref{eq:LagrangianA}). However, as the dilaton mass is of the order of twice the in-medium mass of the gluons, this process is kinematically forbidden. We therefore apply the classical equation of motion to the dilaton field \begin{equation} \label{eq:eomchi} \partial_\mu\partial^\mu\chi+\frac{\delta V_{\rm eff}}{\delta\chi}=0~. \end{equation} Conservation of the overall energy-momentum as well as of the baryon number is ensured by the fluid dynamical equations \begin{eqnarray} \label{eq:fluidT} \partial_\mu T^{\mu\nu}&=&-\partial_\mu\left(T_\sigma^{\mu\nu}+T_\chi^{\mu\nu}\right)~,\\ \label{eq:fluidN} \partial_\mu N_{\rm q}^{\mu}&=&0~. \end{eqnarray} Mainly due to the aforementioned dissipation, the fields lose energy, which is transferred to the fluid via the source terms in Eq. (\ref{eq:fluidT}). This effect is especially significant at a first-order phase transition, during the formation of a supercooled phase and its subsequent decay \cite{Herold:2013bi,Nahrgang:2011vn}. Since the evolution of the sigma field is stochastic in nature, the fluid dynamical equations (\ref{eq:fluidT}), (\ref{eq:fluidN}) also become stochastic via the coupling to the source term. Fluctuating fluid dynamics has recently attracted attention in the context of heavy-ion collisions \cite{Kapusta:2011gt}. Finally, our set of equations is closed by a nonequilibrium equation of state, where the pressure explicitly depends on the local values of the fields $\sigma$ and $\chi$, cf. Eq. (\ref{eq:pressure}). \section{Susceptibilities in the spinodal region} \label{sec:suscep} Of particular interest for the detection and localization of the chiral phase transition are fluctuations of conserved charges like the net-baryon number. From effective models, one can calculate susceptibilities which describe fluctuations in the quark number density as a response to changes in the chemical potential. They are in general defined as \begin{equation} c_n = \frac{\partial^n(p/T^4)}{\partial(\mu/T)^n}~. \end{equation} Here we focus on two coefficients, namely $c_2$, which is proportional to the quark number susceptibility $\chi_{\rm q}$ and the variance of fluctuations $\sigma^2$, and the kurtosis $\kappa$, given by the ratio of the fourth to the second coefficient.
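Before relating these coefficients to moments of the event-by-event distribution, we note that in the present model they can be evaluated directly from the pressure. The following minimal Python sketch (GeV units, fields held fixed at their local values, $\Omega_0$ dropped, and $\chi_0=0.15$~GeV chosen purely for illustration since its fitted value is not quoted above) evaluates the pressure, Eq. (\ref{eq:pressure}), by numerical quadrature and estimates $c_2$ by finite differences in $\mu/T$:

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Nf, Nc = 2, 3
g_q, g_A, chi0 = (0.940 / 3) / 0.093, 0.850, 0.150   # chi0 illustrative

def omega_qq(T, mu, mq, pmax=4.0):
    def f(p):
        E = np.sqrt(p * p + mq * mq)
        return p * p * (np.log1p(np.exp(-(E - mu) / T))
                        + np.log1p(np.exp(-(E + mu) / T)))
    return -2 * Nf * Nc * T * quad(f, 0, pmax)[0] / (2 * np.pi**2)

def omega_A(T, mA, pmax=4.0):
    def f(p):
        E = np.sqrt(p * p + mA * mA)
        return p * p * np.log1p(-np.exp(-E / T))
    return 2 * (Nc**2 - 1) * T * quad(f, 0, pmax)[0] / (2 * np.pi**2)

def pressure(T, mu, sigma, chi):
    # p = -Omega_{q qbar} - Omega_A at fixed local field values
    return -omega_qq(T, mu, g_q * abs(sigma)) \
           - omega_A(T, g_A * abs(chi) / chi0)

def c2(T, mu, sigma, chi, h=1e-2):
    # c_2 = d^2 (p/T^4) / d(mu/T)^2 via central differences
    P = lambda m: pressure(T, m, sigma, chi) / T**4
    return (P(mu + h * T) - 2 * P(mu) + P(mu - h * T)) / h**2

print(c2(0.100, 0.300, 0.093, 0.150))
\end{verbatim}

In the full calculation the fields would of course be set to the minimum of $V_{\rm eff}$ at each $(T,\mu)$; the sketch only makes the quadrature and differentiation explicit.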
They are related to fluctuations in the quark number $N_{\rm q}$ as \begin{eqnarray} c_2&=&\frac{\chi_{\rm q}}{T^2}=\frac{\sigma^2}{V T^3}=\frac{1}{V T^3}\langle\delta N_{\rm q}^2\rangle~, \\ \frac{c_4}{c_2}&=&\kappa=\frac{\langle\delta N_{\rm q}^4\rangle}{\langle\delta N_{\rm q}^2\rangle}-3\langle\delta N_{\rm q}^2\rangle~, \end{eqnarray} with $\delta N_{\rm q}=N_{\rm q}-\langle N_{\rm q}\rangle$, the deviation from the ensemble average of the quark number distribution. Note that for $\kappa$ the volume and temperature dependence cancel to leading order. In \cite{Stephanov:1999zu} it has been shown that event-by-event fluctuations like $\langle\delta N_{\rm q}^2\rangle$ diverge at a critical point. The non-monotonic behavior of fluctuations in heavy-ion collisions as a function of beam energy was proposed as an experimental signal. It was later shown in \cite{Sasaki:2007db} within the NJL model that susceptibilities also diverge along the spinodal lines of the first-order phase transition. This requires that nonequilibrium effects, i.e.\ spinodal instabilities, be taken into account. The presence of a mechanically unstable region with $\partial p/\partial n_{\rm q} <0$ then leads to diverging susceptibilities. For the sigma model with dilatons, we calculate both the quark number susceptibility and the kurtosis as a function of density for fixed temperature $T=40$~MeV, where the model exhibits a first-order phase transition, see Fig. \ref{fig:susceptibilities}. In the left plot, the susceptibility is shown as a function of the quark density. Similar to the result from the NJL model, we see divergences at the isothermal spinodal points. Inside the spinodal region, the susceptibility becomes negative due to the instabilities. On the right-hand side we show the kurtosis in the same density range. Here we also find strong divergences when crossing the spinodal lines. Interestingly, $\kappa$ remains positive even inside the coexistence region. On the other hand, it is known that the kurtosis becomes negative when approaching the critical end point from the crossover side \cite{Skokov:2010uh}. \begin{figure}[t] \centering \subfloat[\label{fig:suscep}]{ \centering \includegraphics[scale=0.61,angle=270]{nsuscep.pdf} } \hfill \subfloat[\label{fig:kurtosis}]{ \centering \includegraphics[scale=0.61,angle=270]{nkurt.pdf} } \caption[Quark number susceptibility and kurtosis]{Quark number susceptibility \subref{fig:suscep} and kurtosis \subref{fig:kurtosis} for a nonequilibrium first-order phase transition at $T=40$~MeV.} \label{fig:susceptibilities} \end{figure} Further insight into the nature of these fluctuations can be gained by determining the critical exponents which control the strength of the divergences at the CEP or the spinodals, respectively. In the vicinity of the singularity, the behavior of the susceptibility and kurtosis may be described by a power law of the form \begin{eqnarray} \chi_{\rm q} &\sim & (\mu-\mu_0)^{-\gamma}~, \\ \kappa &\sim & (\mu-\mu_0)^{-\zeta}~. \label{eq:critexp} \end{eqnarray} We calculate the critical exponents $\gamma$ and $\zeta$ both analytically and numerically. The chiral transition may be described by a Ginzburg-Landau effective theory around the CEP and the spinodal lines \cite{Sasaki:2007qh}. At both points, the first and second derivatives of the effective potential vanish.
Around a zero of the second derivative, it can be expanded in terms of $\delta\sigma=\sigma-\sigma_0$ \begin{equation} V_{\rm eff}=a_0 +a_1\delta\sigma+a_2\delta\sigma^2+a_3\delta\sigma^3+a_4\delta\sigma^4~. \end{equation} At $\delta\sigma=0$ we have $a_1=a_2=0$, so near the transition these coefficients behave as $a_1=b_1(\mu-\mu_0)$ and $a_2=b_2(\mu-\mu_0)$. From $\partial V_{\rm eff} /\partial \sigma=0$ we obtain $a_1+a_2\delta\sigma+a_3\delta\sigma^2+a_4\delta\sigma^3 =0$ (absorbing numerical factors into the coefficients), and we can assume that $\delta\sigma\sim (\mu-\mu_0)^\alpha$ with $0<\alpha<1$. At the spinodal point, the leading term reads $a_1+a_3\delta\sigma^2=0$, giving $\alpha=1/2$ and consequently $\gamma=1/2$. For a CEP, we also have $a_3=0$, so from $a_1+a_4\delta\sigma^3=0$ we end up with $\alpha=1/3$ and $\gamma=2/3$. For the fourth generalized susceptibility $c_4$, we can immediately state the exponents to be $5/2$ for the spinodal and $8/3$ for the CEP. As the kurtosis is proportional to the ratio of the fourth to the second derivative of the effective potential with respect to $\mu$, we get $\zeta=2$ for both cases (indeed, $5/2-1/2=8/3-2/3=2$). The same values for $\gamma$ and $\zeta$ have been found within a numerical analysis fitting the forms in Eqs. (\ref{eq:critexp}) to the numerically determined susceptibility and kurtosis. Fig. \ref{fig:critical} shows the analytical and the numerical results versus the reduced chemical potential $\mu_{\rm r}=(\mu-\mu_0)/\mu_0$ for illustration. The critical properties at a CEP and at an isothermal spinodal point are different due to a change in the universality class, indicating different critical behavior and strengths of the divergences. The same result and exponents have been found for a chiral NJL model with finite current quark masses \cite{Sasaki:2007qh}. The critical exponents of the kurtosis are naturally in agreement for both types of transition and are found to be equal to $2$. \begin{figure}[t] \centering \subfloat[\label{fig:expsusc}]{ \centering \includegraphics[scale=0.62,angle=270]{expsusc.pdf} } \hfill \subfloat[\label{fig:expkurtosis}]{ \centering \includegraphics[scale=0.62,angle=270]{expkurtosis.pdf} } \caption[Quark number susceptibility and kurtosis]{Quark number susceptibility \subref{fig:expsusc} and kurtosis \subref{fig:expkurtosis} as functions of the reduced chemical potential near the critical point (circles) and first-order phase transition (triangles) for fixed temperature $T=40$~MeV.} \label{fig:critical} \end{figure} \section{Nonequilibrium enhancement of fluctuation signals} \label{sec:trajectories} The results from the previous section indicate that in dynamical systems such as those created in heavy-ion collisions, fluctuation signals may be enhanced not only in the vicinity of a critical point, but also in the spinodal region. This might provide us with a more applicable means to investigate the QCD phase structure, as the region around the CEP with enhanced susceptibility is small \cite{Schaefer:2006ds,Kunihiro:1991qu,Hatta:2002sj} and subject to finite size and time effects that limit the growth of fluctuations. We may therefore expect larger fluctuations at a first-order phase transition than at a CEP. We test this assumption within the nonequilibrium chiral fluid dynamics model introduced in Sec.~\ref{sec:model}. We initialize a spherical droplet of quark-gluon plasma by defining an initial temperature and quark chemical potential, with a Woods-Saxon distribution to ensure a smooth transition to the vacuum at the edges.
Then the fields are initialized with their respective equilibrium distributions, assuming Gaussian fluctuations around the thermal expectation values $\langle\sigma\rangle$ and $\langle\chi\rangle$, and finally we calculate the fluid dynamical quantities from the values of $T$, $\mu$, $\sigma$ and $\chi$. Fluctuations in the initial conditions have only a minor influence on the evolution, as these are quickly washed out by the damping and superposed by the stochastic noise. By choosing appropriate initial values for $T$ and $\mu$, we are able to observe the expansion and cooling through the crossover, critical and spinodal regions. The total quark number is in each case fixed to $N_{\rm q}=67$. In Fig.~\ref{fig:traj}, we show event-averaged trajectories for the three scenarios in the $T$-$n_{\rm q}$-plane. The values of the density and temperature in a single event are obtained by averaging over a central volume. Each single cell of course follows its individual path through the phase diagram, and plotting all of them would yield a blob moving from higher to lower densities. The volume-averaged trajectories differ from event to event due to different noise configurations. The curves start on the right side and proceed to lower density on the left. Interestingly, we see that the first-order curve shows a slightly increasing temperature at intermediate densities between $1.6/\mbox{fm}^3$ and $0.6/\mbox{fm}^3$. This is a result of the reheating effect that occurs after the decomposition of a supercooled phase and is typical for a first-order phase transition. It has already been found in earlier works on nonequilibrium fluid dynamical models \cite{Herold:2013bi,Nahrgang:2011vn}. Note that this is a purely dynamical effect, which causes the trajectory to even cross the CEP curve, where the temperature decreases monotonically. It also implies that for this curve there is a significant deviation from the equilibrium trajectory along the corresponding isentrope. This crossing was also observed for the previously used PQM model \cite{Herold201414} if the curves were plotted in the $T$-$n_{\rm q}$-plane and the initial conditions were chosen close enough to each other. In the present model the initial conditions for a CEP and a first-order transition are inevitably closer, as $T_{\rm CEP}$ is comparatively low. For the first-order phase transition we also find bubbles created through spinodal decomposition, as has been reported in earlier fluid dynamical studies \cite{Herold:2013bi,Herold201414}. However, in the present model these high-density droplets are not stable but start to decay after traversing the spinodal region. This is a direct consequence of the now strictly positive pressure, as was pointed out in the introduction. \begin{figure}[t] \centering \includegraphics[scale=0.7]{traj.pdf} \caption[Trajectories]{Trajectories for a crossover, critical and spinodal transition. The gray area depicts the spinodal region and the black circle indicates the position of the CEP. Arrows show the direction of the evolution. } \label{fig:traj} \end{figure} We show the evolution of event-by-event fluctuations in the net-baryon number $N_{\rm B}=3 N_{\rm q}$ corresponding to these trajectories as a function of time in Fig.~\ref{fig:ebe}. The baryon number is extracted directly from the fluid dynamical density. As a conserved quantity, it will fluctuate only slightly when including a freeze-out and hadronic interactions in the final state \cite{Koch:1987}.
We use two different methods to determine $N_{\rm B}$: First, within a fixed volume in the center of the collision, with an extension of $10$~fm in the $x$-direction and $1$~fm each in the $y$- and $z$-directions. Second, by limiting the region of acceptance via rapidity to $|y|<0.5$ and transverse momentum density to $100 \mbox{ MeV/fm}^3<p_T<500 \mbox{ MeV/fm}^3$, as was done in recent measurements at STAR \cite{Adamczyk:2014ipa}. Both methods yield qualitatively similar results. The variance of fluctuations is enhanced at a CEP in comparison with a crossover transition, and even more, by a factor of $5$ to $6$, at a first-order phase transition. As the variance depends on the volume, we observe clear differences in the scales of the two plots. Furthermore, the volume varies strongly when applying a constant rapidity and momentum cut; therefore a more irregular structure in the time dependence is found in that case. For the kurtosis in Fig.~\ref{fig:ebe2}, we also find that the crossover transition produces values close to zero, while a clear enhancement can be found at the CEP. Again, the largest fluctuations occur for a first-order phase transition. Remarkably, at the CEP and the first-order transition, both positive and negative values of $\kappa$ occur during the evolution, confirming the assumption of critical behavior in the spinodal region. Although the kurtosis does not depend on the volume, we find its values to be an order of magnitude higher when using a rapidity and momentum cut than in the case of a fixed test volume. This can be explained by considering the overall conservation of quark or baryon number in the system under consideration. In contrast to that, the baryon number is only conserved on average in a grand canonical ensemble, which is used for effective model or lattice QCD calculations. As shown in \cite{Bleicher:2000ek,urqmdkurtosis,Bzdak:2012an}, this global conservation significantly affects ratios of cumulants, making them dependent on the fraction of measured to total baryons. At this point we should note that it is nontrivial to draw the connection between the time evolution of the variance and kurtosis and event-by-event fluctuations from experiment, which are measured for particles emitted over a hypersurface of constant energy density or temperature. The latter can only be measured after freeze-out, and therefore a signal from the phase transition can only be extracted if the chemical freeze-out temperature is close to the temperature of hadronization. Otherwise, the fluctuations may have been washed out after passing the phase transition. Finally, one needs to relate the baryon number fluctuations to quantities that are actually measured, like fluctuations in the proton number \cite{Kitazawa:2012at}, and consider the evolution of these fluctuations in the hadronic phase \cite{Kitazawa:2013bta}.
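For completeness, the moment-based estimators used in this analysis are straightforward to write down. The following minimal Python sketch implements the relations between $(\sigma^2,\kappa)$ and the moments of $\delta N$ quoted at the beginning of Sec.~\ref{sec:suscep}; the synthetic Gaussian ensemble serves only as a sanity check, for which $\kappa$ should vanish, as in the crossover case.

\begin{verbatim}
import numpy as np

def variance_and_kurtosis(N_events):
    """sigma^2 = <dN^2> and kappa = <dN^4>/<dN^2> - 3 <dN^2>,
    with dN = N - <N> taken over the event sample."""
    dN = N_events - N_events.mean()
    m2 = np.mean(dN**2)
    m4 = np.mean(dN**4)
    return m2, m4 / m2 - 3.0 * m2

# Sanity check on a synthetic Gaussian ensemble (kappa ~ 0):
rng = np.random.default_rng(0)
sample = rng.normal(loc=3 * 67, scale=5.0, size=200_000)  # N_B = 3 N_q
print(variance_and_kurtosis(sample))
\end{verbatim}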
\begin{figure}[t] \centering \subfloat[\label{fig:ebesuscep}]{ \centering \includegraphics[scale=0.7,angle=270]{sigma_vol.pdf} } \hfill \subfloat[\label{fig:ebekurtosis}]{ \centering \includegraphics[scale=0.7,angle=270]{sigma_pt.pdf} } \caption[Event-by-event fluctuations]{Variance of the net-baryon number from the fluid dynamical evolution in a fixed volume \subref{fig:ebesuscep} and with rapidity and transverse momentum cut \subref{fig:ebekurtosis}.} \label{fig:ebe} \end{figure} \begin{figure}[t] \centering \subfloat[\label{fig:ebesuscep2}]{ \centering \includegraphics[scale=0.7,angle=270]{kurt_vol.pdf} } \hfill \subfloat[\label{fig:ebekurtosis2}]{ \centering \includegraphics[scale=0.7,angle=270]{kurt_pt.pdf} } \caption[Event-by-event fluctuations]{Kurtosis of the net-baryon number from the fluid dynamical evolution in a fixed volume \subref{fig:ebesuscep2} and with rapidity and transverse momentum cut \subref{fig:ebekurtosis2}.} \label{fig:ebe2} \end{figure} \section{Summary and Outlook} \label{sec:summary} We have investigated baryon number fluctuations within a chiral model with dilatons from two different approaches: First, through the calculation of susceptibilities, where we went beyond standard thermodynamics by including spinodal instabilities. We were able to show that both the quark number susceptibility and kurtosis diverge at the CEP and spinodal lines. The singularity in the susceptibility at the spinodals becomes suddenly stronger at the CEP, indicated by a larger critical exponent. The implications of such a behavior for experiment are strong enhancements of event-by-event fluctuations in the net-baryon number, which we investigated in a second dynamical approach. Propagating the chiral field and the dilaton explicitly on a locally thermalized background of quarks and gluons, we simulated the expansion of the hot and dense plasma created in a heavy-ion collision. We extracted the variance and kurtosis of the net-baryon number. Both are stronger at a CEP in comparison with a crossover scenario, and even more enhanced when the system evolves through the spinodal region of the first-order phase transition. In the future we are going to include hadronic degrees of freedom for a more realistic description of the chirally broken and confined phase. It is furthermore necessary to study particle distributions from a freeze-out or hadronic afterburner. This would also allow us to study the momentum anisotropy and the effect of the phase transition on flow. For the determination of susceptibilities, it would be interesting to include quantum or thermal fluctuations and study their effect on the critical properties near the CEP and first-order phase transition. \section*{Acknowledgements} This work is funded by Suranaree University of Technology (SUT) and CHE-NRU (NV.12/2557) project. The authors thank Igor Mishustin and Chihiro Sasaki for fruitful discussions and Dirk Rischke for providing the SHASTA code that was used for the fluid dynamical simulation. M. N. acknowledges support from the U.S. Department of Energy under grant DE-FG02-05ER41367 and a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD). The computing resources have been provided by the National e-Science Infrastructure Consortium of Thailand, the Center for Computer Services at SUT and the Frankfurt Center for Scientific Computing. \section*{References} \bibliographystyle{unsrt}
\section{Introduction} \label{sec:back} We start by recalling what the Hopfield models are. These models are well known in mathematical physics. However, we will be purely interested in their mathematical properties, and the definitions that we give below in this short introduction are almost completely mathematical, without much physics-type insight. Given the relatively simple and, above all, well-known structure of the Hopfield models, we do believe that it will be fairly easy for readers from both the mathematics and physics communities to connect to the parts they find important. Before proceeding with the detailed introduction of the models we also mention that we will define the mathematical objects of interest but will occasionally refer to them using the names typically known in physics. The model that we will study was popularized in \cite{Hop82} (or if viewed in a different context one could say in \cite{PasFig78,Hebb49}). It essentially looks at what is called a Hamiltonian of the following type \begin{equation} \cH(H,\x)=\sum_{i\neq j}^{n} A_{ij}\x_i\x_j,\label{eq:ham} \end{equation} where \begin{equation} A_{ij}(H)=\sum_{l=1}^{m} H_{li}H_{lj},\label{eq:hamAij} \end{equation} is the so-called quenched interaction and $H$ is an $m\times n$ matrix that can also be viewed as the matrix of the so-called stored patterns (we will typically consider the scenario where $m$ and $n$ are large and $\frac{m}{n}=\alpha$ where $\alpha$ is a constant independent of $n$; however, many of our results will hold even for fixed $m$ and $n$). Each pattern is essentially a row of matrix $H$, while $\x$ is a vector from $R^n$ that emulates spins (or in a different context one may say neuron states). Typically, one assumes that the patterns are binary and that each neuron can have two states (spins), and hence the elements of matrix $H$ as well as the elements of vector $\x$ are typically assumed to be from the set $\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}$. In the physics literature one usually follows convention and introduces a minus sign in front of the Hamiltonian given in (\ref{eq:ham}). Since our main concern is not really the physical interpretation of the given Hamiltonian but rather the mathematical properties of such forms, we will avoid the minus sign and keep the form as in (\ref{eq:ham}). To characterize the behavior of physical interpretations that can be described through the above Hamiltonian one then looks at the partition function \begin{equation} Z(\beta,H)=\sum_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}e^{\beta\cH(H,\x)},\label{eq:partfun} \end{equation} where $\beta>0$ is what is typically called the inverse temperature. Depending on what one is interested in studying, one can also look at an appropriately scaled $\log$ version of $Z(\beta,H)$ (typically called the free energy) \begin{equation} f_p(n,\beta,H)=\frac{\log{(Z(\beta,H)})}{\beta n}=\frac{\log{(\sum_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}e^{\beta\cH(H,\x)})}}{\beta n}.\label{eq:logpartfun} \end{equation} Studying the behavior of the partition function or the free energy of the Hopfield model of course has a long history. Since we will not focus on the entire free energy in this paper, we just briefly mention that a long line of results can be found in, e.g., the excellent references \cite{PasShchTir94,ShchTir93,BarGenGueTan10,BarGenGueTan12,Tal98}. In this paper we will focus on studying optimization/algorithmic aspects of $\frac{\log{(Z(\beta,H)})}{\beta n}$.
More specifically, we will look at a particular regime $\beta,n\rightarrow\infty$ (which is typically called the zero-temperature thermodynamic limit regime or, as we will occasionally call it, the ground state regime). In such a regime one has \begin{equation} \hspace{-.3in}\lim_{\beta,n\rightarrow\infty}f_p(n,\beta,H)= \lim_{\beta,n\rightarrow\infty}\frac{\log{(Z(\beta,H)})}{\beta n}=\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\cH(H,\x)}{n} =\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n},\label{eq:limlogpartfun} \end{equation} which essentially renders the following form (often called the ground state energy) \begin{equation} \lim_{\beta,n\rightarrow\infty}f_p(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n},\label{eq:posham} \end{equation} which will be one of the main subjects that we study in this paper. We will refer to the optimization part of (\ref{eq:posham}) as the positive Hopfield form. In addition to this form we will also study its negative counterpart. Namely, instead of the partition function given in (\ref{eq:partfun}) one can look at the corresponding partition function of the negative Hamiltonian from (\ref{eq:ham}) (alternatively, one can say that instead of looking at the partition function defined for positive temperatures/inverse temperatures one can also look at the corresponding partition function defined for negative temperatures/inverse temperatures). In that case (\ref{eq:partfun}) becomes \begin{equation} Z(\beta,H)=\sum_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}e^{-\beta\cH(H,\x)},\label{eq:partfunneg} \end{equation} and if one then looks at the analogue of (\ref{eq:limlogpartfun}) one obtains \begin{equation} \hspace{-.3in}\lim_{\beta,n\rightarrow\infty}f_n(n,\beta,H)=\lim_{\beta,n\rightarrow\infty}\frac{\log{(Z(\beta,H)})}{\beta n}=\lim_{n\rightarrow\infty}\frac{\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}-\cH(H,\x)}{n} =\lim_{n\rightarrow\infty}\frac{\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}.\label{eq:limlogpartfunneg} \end{equation} This then ultimately renders the following form, which is in a way a negative counterpart to (\ref{eq:posham}) \begin{equation} \lim_{\beta,n\rightarrow\infty}f_n(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}.\label{eq:negham} \end{equation} We will then correspondingly refer to the optimization part of (\ref{eq:negham}) as the negative Hopfield form. In the following sections we will present a collection of results that relate to the behavior of the forms given in (\ref{eq:posham}) and (\ref{eq:negham}) when they are viewed in a statistical scenario. The results that we will present will essentially correspond to what are called the ground state energies of these models.
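To make the two forms in (\ref{eq:posham}) and (\ref{eq:negham}) completely concrete, we note that for a small instance they can be evaluated exactly by brute force. The following Python snippet is purely illustrative (the enumeration is of course exponential in $n$ and infeasible beyond toy sizes):
\begin{verbatim}
import itertools
import numpy as np

def ground_state_energies(H):
    """Brute-force evaluation of max/min ||Hx||_2^2 / n over
    x in {-1/sqrt(n), +1/sqrt(n)}^n (toy sizes only)."""
    m, n = H.shape
    best_max, best_min = -np.inf, np.inf
    for signs in itertools.product([-1.0, 1.0], repeat=n):
        x = np.array(signs) / np.sqrt(n)
        val = np.linalg.norm(H @ x) ** 2
        best_max = max(best_max, val)
        best_min = min(best_min, val)
    return best_max / n, best_min / n

rng = np.random.default_rng(1)
n = 12
H = rng.standard_normal((n, n))  # i.i.d. standard normal, alpha = m/n = 1
print(ground_state_energies(H))
\end{verbatim}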
As it will turn out, in the statistical scenario that we will consider, (\ref{eq:posham}) and (\ref{eq:negham}) will be almost completely characterized by their corresponding average values \begin{equation} \lim_{\beta,n\rightarrow\infty}Ef_p(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{E\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}\label{eq:poshamavg} \end{equation} and \begin{equation} \lim_{\beta,n\rightarrow\infty}Ef_n(n,\beta,H)=\lim_{n\rightarrow\infty}\frac{E\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2}{n}.\label{eq:neghamavg} \end{equation} Before proceeding further with our presentation we will be a little bit more specific about the organization of the paper. In Section \ref{sec:poshop} we will present a powerful mechanism that can be used to create bounds on the ground state energies of the positive Hopfield form in a statistical scenario. We will then in Section \ref{sec:neghop} present the corresponding results for the negative Hopfield form. In Section \ref{sec:conc} we will present a brief discussion and several concluding remarks. \section{Positive Hopfield form} \label{sec:poshop} In this section we will look at the following optimization problem (which clearly is the key component in estimating the ground state energy in the thermodynamic limit) \begin{equation} \max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2.\label{eq:posham1} \end{equation} For a deterministic (given, fixed) $H$ this problem is of course known to be NP-hard (it essentially falls under the class of binary quadratic optimization problems). Instead of looking at the problem in (\ref{eq:posham1}) in a deterministic way, i.e.\ in a way that assumes that the matrix $H$ is deterministic, we will look at it in a statistical scenario (this is of course a typical scenario in statistical physics). Within the framework of statistical physics and neural networks the problem in (\ref{eq:posham1}) is studied assuming that the stored patterns (essentially the rows of matrix $H$) are comprised of i.i.d.\ Bernoulli $\{-1,1\}$ random variables; see, e.g., \cite{Tal98,PasShchTir94,ShchTir93}. While our results will turn out to hold in such a scenario as well, we will present them in a different scenario: namely, we will assume that the elements of matrix $H$ are i.i.d. standard normals. We will then call the form (\ref{eq:posham1}) with Gaussian $H$, the Gaussian positive Hopfield form. On the other hand, we will call the form (\ref{eq:posham1}) with Bernoulli $H$, the Bernoulli positive Hopfield form. In the remainder of this section we will look at possible ways to estimate the optimal value of the optimization problem in (\ref{eq:posham1}). Below we will introduce a strategy that can be used to obtain an upper bound on the optimal value. \subsection{Upper-bounding ground state energy of the positive Hopfield form} \label{sec:poshopub} In this section we will look at the problem from (\ref{eq:posham1}). In fact, to be a bit more precise, in order to make the exposition as simple as possible, we will look at a slight variant of it, given below \begin{equation} \xi_p=\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2.\label{eq:sqrtposham1} \end{equation} As mentioned above, we will assume that the elements of $H$ are i.i.d. standard normal random variables. Before proceeding further with the analysis of (\ref{eq:sqrtposham1}) we will recall several well-known results that relate to Gaussian random variables and the processes they create.
First we recall the following result from \cite{Gordon85}, which relates to statistical properties of certain Gaussian processes. \begin{theorem}(\cite{Gordon85}) \label{thm:Gordonpos1} Let $X_{i}$ and $Y_{i}$, $1\leq i\leq n$, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices \begin{enumerate} \item $E(X_{i}^2)=E(Y_{i}^2)$ \item $E(X_{i}X_{l})\leq E(Y_{i}Y_{l}), i\neq l$. \end{enumerate} Let $\psi()$ be an increasing function on the real axis. Then \begin{equation*} E(\min_{i}\psi(X_{i}))\leq E(\min_i \psi(Y_{i})) \Leftrightarrow E(\max_{i}\psi(X_{i}))\geq E(\max_i\psi(Y_{i})). \end{equation*} \end{theorem} In our recent work \cite{StojnicHopBnds10} we relied on the above theorem to create an upper bound on the ground state energy of the positive Hopfield model. However, the strategy employed in \cite{StojnicHopBnds10} used only a basic version of the above theorem, namely the one with $\psi(x)=x$. Here we will substantially upgrade that strategy by looking at a very simple (but much more effective) choice of $\psi()$. We start by reformulating the problem in (\ref{eq:sqrtposham1}) in the following way \begin{equation} \xi_p=\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}\y^TH\x.\label{eq:sqrtposham2} \end{equation} We do mention without going into details that the ground state energies will concentrate in the thermodynamic limit and hence we will mostly focus on the expected value of $\xi_p$ (one can then easily adapt our results to describe more general probabilistic concentration properties of the ground state energies). The following is then a direct application of Theorem \ref{thm:Gordonpos1}. \begin{lemma} Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\g$ and $\h$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable and let $c_3$ be a positive constant. Then \begin{equation} E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}e^{c_3(\y^T H\x + g)})\leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n,\|\y\|_2=1}e^{c_3(\g^T\y+\h^T\x)}).\label{eq:posexplemma} \end{equation}\label{lemma:posexplemma} \end{lemma} \begin{proof} As mentioned above, the proof is a standard/direct application of Theorem \ref{thm:Gordonpos1}. We will sketch it for completeness. Namely, one starts by defining the processes $X_i$ and $Y_i$ in the following way \begin{equation} Y_i=(\y^{(i)})^T H\x^{(i)} + g\quad X_i=\g^T\y^{(i)}+\h^T\x^{(i)}.\label{eq:posexplemmaproof1} \end{equation} Then clearly \begin{equation} EY_i^2=EX_i^2=\|\y^{(i)}\|_2^2+\|\x^{(i)}\|_2^2=2.\label{eq:posexplemmaproof2} \end{equation} One then further has \begin{eqnarray} EY_iY_l & = & (\y^{(i)})^T\y^{(l)}(\x^{(l)})^T\x^{(i)}+1\nonumber \\ EX_iX_l & = & (\y^{(i)})^T\y^{(l)}+(\x^{(l)})^T\x^{(i)}.\label{eq:posexplemmaproof3} \end{eqnarray} And after a small algebraic transformation \begin{eqnarray} EY_iY_l-EX_iX_l & = & (1-(\y^{(i)})^T\y^{(l)})-(\x^{(l)})^T\x^{(i)}(1-(\y^{(i)})^T\y^{(l)}) \nonumber \\ & = & (1-(\x^{(l)})^T\x^{(i)})(1-(\y^{(i)})^T\y^{(l)})\nonumber \\ & \geq & 0.\label{eq:posexplemmaproof4} \end{eqnarray} Combining (\ref{eq:posexplemmaproof2}) and (\ref{eq:posexplemmaproof4}) and using the results of Theorem \ref{thm:Gordonpos1} one then easily obtains (\ref{eq:posexplemma}).
\end{proof} One then easily has \begin{multline} E(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2+g)})= E(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}\y^TH\x+g)})\\ =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(e^{c_3(\y^TH\x+g)})).\label{eq:chpos1} \end{multline} Connecting (\ref{eq:chpos1}) and the results of Lemma \ref{lemma:posexplemma} we have \begin{multline} E(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2+g)})= E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(e^{c_3(\y^TH\x+g)}))\\ \leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}(e^{c_3(\g^T\y+\h^T\x)})) =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{c_3\h^T\x})\max_{\|\y\|_2=1}(e^{c_3\g^T\y}))\\ =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{c_3\h^T\x}))E(\max_{\|\y\|_2=1}(e^{c_3\g^T\y})),\label{eq:chpos2} \end{multline} where the last equality follows because of the independence of $\g$ and $\h$. Connecting the beginning and end of (\ref{eq:chpos2}) one has \begin{equation} E(e^{c_3g})E(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)})\leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{c_3\h^T\x}))E(\max_{\|\y\|_2=1}(e^{c_3\g^T\y})).\label{eq:chpos3} \end{equation} Applying $\log$ on both sides of (\ref{eq:chpos3}) we further have \begin{equation} \log(E(e^{c_3g}))+\log(E(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}))\leq \log(E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{c_3\h^T\x})))+\log(E(\max_{\|\y\|_2=1}(e^{c_3\g^T\y}))),\label{eq:chpos4} \end{equation} or in a slightly more convenient form \begin{equation} \log(E(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}))\leq -\log(E(e^{c_3g}))+\log(E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{c_3\h^T\x})))+\log(E(\max_{\|\y\|_2=1}(e^{c_3\g^T\y}))).\label{eq:chpos5} \end{equation} It is also relatively easy to see that \begin{equation} \log(E(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}))\geq E\log(e^{c_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)})=Ec_3(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2), \label{eq:chpos6} \end{equation} and \begin{equation} -\log(E(e^{c_3g}))=-\log(e^{\frac{c_3^2}{2}})=-\frac{c_3^2}{2}.\label{eq:chpos7} \end{equation} Connecting (\ref{eq:chpos5}), (\ref{eq:chpos6}), and (\ref{eq:chpos7}) one can finally establish an upper bound on the expected value of the ground state energy of the positive Hopfield model \begin{equation} E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)\leq -\frac{c_3}{2}+\frac{1}{c_3}\log(E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{c_3\h^T\x}))) +\frac{1}{c_3}\log(E(\max_{\|\y\|_2=1}(e^{c_3\g^T\y}))).\label{eq:chpos8} \end{equation} Let $c_3=c_3^{(s)}\sqrt{n}$ where $c_3^{(s)}$ is a constant independent of $n$.
Then (\ref{eq:chpos8}) becomes \begin{eqnarray} \hspace{-.5in}\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}{\sqrt{n}} & \leq & -\frac{c_3^{(s)}}{2}+\frac{1}{nc_3^{(s)}}\log(E(\max_{\x\in\{-1,1\}^n}(e^{c_3^{(s)}\h^T\x}))) +\frac{1}{nc_3^{(s)}}\log(E(\max_{\|\y\|_2=1}(e^{c_3^{(s)}\sqrt{n}\g^T\y})))\nonumber \\ & = & -\frac{c_3^{(s)}}{2}+\frac{1}{c_3^{(s)}}\log(E(e^{c_3^{(s)}|\h_1|})) +\frac{1}{nc_3^{(s)}}\log(E(\max_{\|\y\|_2=1}(e^{c_3^{(s)}\sqrt{n}\g^T\y})))\nonumber \\ & = & -\frac{c_3^{(s)}}{2}+\frac{c_3^{(s)}}{2}+\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\frac{1}{nc_3^{(s)}}\log(E(\max_{\|\y\|_2=1}(e^{c_3^{(s)}\sqrt{n}\g^T\y})))\nonumber \\ & = & \frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\frac{1}{nc_3^{(s)}}\log(E(\max_{\|\y\|_2=1}(e^{c_3^{(s)}\sqrt{n}\g^T\y}))).\label{eq:chpos9} \end{eqnarray} One should now note that the above bound is effectively correct for any positive constant $c_3^{(s)}$. The only thing that is then left to be done so that the above bound becomes operational is to estimate $E(\max_{\|\y\|_2=1}(e^{c_3^{(s)}\sqrt{n}\g^T\y}))=Ee^{c_3^{(s)}\sqrt{n}\|\g\|_2}$. Pretty good estimates for this quantity can be obtained for any $n$. However, to facilitate the exposition we will focus only on the large $n$ scenario. In that case one can use the saddle point concept applied in \cite{SPH}. However, here we will try to avoid the entire presentation from there and instead present the core neat idea that has much wider applications. Namely, we start with the following identity \begin{equation} \|\g\|_2=\min_{\gamma\geq 0}(\frac{\|\g\|_2^2}{4\gamma}+\gamma).\label{eq:gamaiden} \end{equation} Then \begin{multline} \frac{1}{nc_3^{(s)}}\log(Ee^{c_3^{(s)}\sqrt{n}\|\g\|_2})=\frac{1}{nc_3^{(s)}}\log(Ee^{c_3^{(s)}\sqrt{n}\min_{\gamma\geq 0}(\frac{\|\g\|_2^2}{4\gamma}+\gamma)}) \doteq \frac{1}{nc_3^{(s)}}\min_{\gamma\geq 0}\log(Ee^{c_3^{(s)}\sqrt{n}(\frac{\|\g\|_2^2}{4\gamma}+\gamma)})\\ =\min_{\gamma\geq 0}(\frac{\gamma}{\sqrt{n}}+\frac{1}{c_3^{(s)}}\log(Ee^{c_3^{(s)}\sqrt{n}(\frac{\g_i^2}{4\gamma})})),\label{eq:gamaiden1} \end{multline} where $\doteq$ stands for equality when $n\rightarrow \infty$. $\doteq$ is exactly what was shown in \cite{SPH}. In fact, a bit more is shown in \cite{SPH} and a few corrective terms were estimated for finite $n$ (for our needs here though, even just replacing $\doteq$ with $\leq$ inequality suffices). Now if one sets $\gamma=\gamma^{(s)}\sqrt{n}$ then (\ref{eq:gamaiden1}) gives \begin{equation} \frac{1}{nc_3^{(s)}}\log(Ee^{c_3^{(s)}\sqrt{n}\|\g\|_2}) =\min_{\gamma^{(s)}\geq 0}(\gamma^{(s)}+\frac{1}{c_3^{(s)}}\log(Ee^{c_3^{(s)}(\frac{\g_i^2}{4\gamma^{(s)}})})) =\min_{\gamma^{(s)}\geq 0}(\gamma^{(s)}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\gamma^{(s)}})).\label{eq:gamaiden2} \end{equation} After solving the last minimization one obtains \begin{equation} \widehat{\gamma^{(s)}}=\frac{2c_3^{(s)}+\sqrt{4(c_3^{(s)})^2+16\alpha}}{8}.\label{eq:gamaiden3} \end{equation} Connecting (\ref{eq:chpos9}), (\ref{eq:gamaiden1}), (\ref{eq:gamaiden2}), and (\ref{eq:gamaiden3}) one finally has \begin{equation} \hspace{-.5in}\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}{\sqrt{n}} \leq \frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\widehat{\gamma^{(s)}}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\widehat{\gamma^{(s)}}}),\label{eq:ubmorsoph} \end{equation} where clearly $\widehat{\gamma^{(s)}}$ is as in (\ref{eq:gamaiden3}). 
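Before carrying out the minimization over $c_3^{(s)}$ (done below), we note that the right-hand side of (\ref{eq:ubmorsoph}) is elementary to evaluate numerically. The following short Python sketch is our own illustration (a crude grid search rather than a proper one-dimensional solver):
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def xi_p_upper(alpha, c3=np.linspace(0.05, 3.0, 3000)):
    """Evaluate the RHS of the upper bound over a grid of c3^(s)
    values and return the smallest one."""
    gamma = (2*c3 + np.sqrt(4*c3**2 + 16*alpha)) / 8
    vals = (np.log(erfc(-c3/np.sqrt(2)))/c3 + gamma
            - (alpha/(2*c3))*np.log(1 - c3/(2*gamma)))
    return vals.min()

print(xi_p_upper(1.0))  # approx. 1.7832 for alpha = 1
\end{verbatim}
For $\alpha=1$ this grid search returns a value of roughly $1.7832$, matching the constant quoted in the lemma below.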
As mentioned earlier, the inequality in (\ref{eq:ubmorsoph}) holds for any $c_3^{(s)}$. Of course, to make it as tight as possible, one then has \begin{equation} \hspace{-.5in}\frac{E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}{\sqrt{n}} \leq \min_{c_3^{(s)}\geq 0} \left (\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\widehat{\gamma^{(s)}}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\widehat{\gamma^{(s)}}})\right ).\label{eq:ubmorsoph1} \end{equation} We summarize our results from this subsection in the following lemma. \begin{lemma} Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\xi_p$ be as in (\ref{eq:sqrtposham1}). Let $\widehat{\gamma^{(s)}}$ be such that \begin{equation} \widehat{\gamma^{(s)}}=\frac{2c_3^{(s)}+\sqrt{4(c_3^{(s)})^2+16\alpha}}{8}.\label{eq:gamaiden3thm} \end{equation} and let $\xi_p^{(u)}$ be a scalar such that \begin{equation} \xi_p^{(u)}=\min_{c_3^{(s)}\geq 0} \left (\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\widehat{\gamma^{(s)}}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\widehat{\gamma^{(s)}}})\right ).\label{eq:condxipuposgenlemma} \end{equation} Then \begin{equation} \frac{E\xi_p}{\sqrt{n}}\leq\xi_p^{(u)}.\label{eq:posgenexplemma} \end{equation} Moreover, \begin{eqnarray} & & \lim_{n\rightarrow\infty}P(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\leq \xi_p^{(u)})\geq 1\nonumber \\ & \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_p\leq \xi_p^{(u)})\geq 1 \nonumber \\ & \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_p^2\leq (\xi_p^{(u)})^2)\geq 1. \label{eq:posgenproblemma} \end{eqnarray} In particular, when $\alpha=1$ \begin{equation} \frac{E\xi_p}{\sqrt{n}}\leq\xi_p^{(u)}=1.7832.\label{eq:posgenexplemma1} \end{equation} \label{lemma:posgenlemma} \end{lemma} \begin{proof} The first part of the proof, related to the expected values, follows from the above discussion. The probability part follows by concentration arguments that are easy to establish (see, e.g., the discussion in \cite{StojnicHopBnds10}). \end{proof} One way to see how the above lemma works in practice is (as specified in the lemma) to choose $\alpha=1$ to obtain $\xi_p^{(u)}=1.7832$. This value is substantially better than the $1.7978$ offered in \cite{StojnicHopBnds10}. \section{Negative Hopfield form} \label{sec:neghop} In this section we will look at the following optimization problem (which clearly is the key component in estimating the ground state energy in the thermodynamic limit) \begin{equation} \min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2^2.\label{eq:negham1} \end{equation} For a deterministic (given, fixed) $H$ this problem is of course known to be NP-hard (as (\ref{eq:posham1}), it essentially falls under the class of binary quadratic optimization problems). Instead of looking at the problem in (\ref{eq:negham1}) in a deterministic way, i.e.\ in a way that assumes that the matrix $H$ is deterministic, we will adopt the strategy of the previous section and look at it in a statistical scenario. Also, as in the previous section, we will assume that the elements of matrix $H$ are i.i.d. standard normals. In the remainder of this section we will look at possible ways to estimate the optimal value of the optimization problem in (\ref{eq:negham1}).
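Since the problem in (\ref{eq:negham1}) is NP-hard, in practice one typically resorts to heuristics; the bit-flipping local searches that we use for the numerical comparisons in Section \ref{sec:alghop} are a canonical example. A minimal greedy variant (our own sketch; it only produces an upper estimate of the minimum, with no optimality guarantee) can be written as follows:
\begin{verbatim}
import numpy as np

def bit_flip_min(H, seed=0):
    """Greedy single-bit-flipping local search for min ||Hx||_2
    over x in {-1/sqrt(n), +1/sqrt(n)}^n."""
    rng = np.random.default_rng(seed)
    m, n = H.shape
    x = rng.choice([-1.0, 1.0], size=n) / np.sqrt(n)
    y = H @ x
    best = y @ y
    improved = True
    while improved:
        improved = False
        for i in range(n):
            y_new = y - 2.0 * x[i] * H[:, i]  # effect of flipping x_i
            val = y_new @ y_new
            if val < best:
                best, y, x[i] = val, y_new, -x[i]
                improved = True
    return np.sqrt(best)

rng = np.random.default_rng(2)
n = 200
H = rng.standard_normal((n, n))
print(bit_flip_min(H) / np.sqrt(n))  # compare with the bound below
\end{verbatim}
Such heuristics give a feel for the true optimal values; the remainder of this section develops the complementary theoretical lower bound.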
In fact, we will introduce a strategy similar to the one presented in the previous section to create a lower bound on the optimal value of (\ref{eq:negham1}). \subsection{Lower-bounding ground state energy of the negative Hopfield form} \label{sec:neghoplb} In this section we will look at the problem from (\ref{eq:negham1}). In fact, to be a bit more precise, as in the previous section, in order to make the exposition as simple as possible, we will look at a slight variant of it, given below \begin{equation} \xi_n=\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2.\label{eq:sqrtnegham1} \end{equation} As mentioned above, we will assume that the elements of $H$ are i.i.d. standard normal random variables. First we recall (and slightly extend) the following result from \cite{Gordon85} that relates to statistical properties of certain Gaussian processes. This result is essentially a negative counterpart to the one given in Theorem \ref{thm:Gordonpos1}. \begin{theorem}(\cite{Gordon85}) \label{thm:Gordonneg1} Let $X_{ij}$ and $Y_{ij}$, $1\leq i\leq n,1\leq j\leq m$, be two centered Gaussian processes which satisfy the following inequalities for all choices of indices \begin{enumerate} \item $E(X_{ij}^2)=E(Y_{ij}^2)$ \item $E(X_{ij}X_{ik})\geq E(Y_{ij}Y_{ik})$ \item $E(X_{ij}X_{lk})\leq E(Y_{ij}Y_{lk}), i\neq l$. \end{enumerate} Let $\psi()$ be an increasing function on the real axis. Then \begin{equation*} E(\min_{i}\max_{j}\psi(X_{ij}))\leq E(\min_{i}\max_{j}\psi(Y_{ij})). \end{equation*} Moreover, let $\psi()$ be a decreasing function on the real axis. Then \begin{equation*} E(\max_{i}\min_{j}\psi(X_{ij}))\geq E(\max_{i}\min_{j}\psi(Y_{ij})). \end{equation*} \begin{proof} The proof of all statements but the last one is of course given in \cite{Gordon85}. Here we just briefly sketch how to get the last statement as well. So, let $\psi()$ be a decreasing function on the real axis. Then $-\psi()$ is an increasing function on the real axis and by the first part of the theorem we have \begin{equation*} E(\min_{i}\max_{j}-\psi(X_{ij}))\leq E(\min_{i}\max_{j}-\psi(Y_{ij})). \end{equation*} Changing the inequality sign we also have \begin{equation*} -E(\min_{i}\max_{j}-\psi(X_{ij}))\geq -E(\min_{i}\max_{j}-\psi(Y_{ij})), \end{equation*} and finally \begin{equation*} E(\max_{i}\min_{j}\psi(X_{ij}))\geq E(\max_{i}\min_{j}\psi(Y_{ij})). \end{equation*} \end{proof} \end{theorem} In our recent work \cite{StojnicHopBnds10} we relied on the above theorem to create a lower bound on the ground state energy of the negative Hopfield model. However, as was the case with the positive form, the strategy employed in \cite{StojnicHopBnds10} relied only on a basic version of the above theorem, namely the one with $\psi(x)=x$. Similarly to what was done in the previous subsection, we will here substantially upgrade the strategy from \cite{StojnicHopBnds10} by looking at a very simple (but much more effective) choice of $\psi()$. We start by reformulating the problem in (\ref{eq:sqrtnegham1}) in the following way \begin{equation} \xi_n=\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}\y^TH\x.\label{eq:sqrtnegham2} \end{equation} As was the case with the positive form, we do mention without going into details that the ground state energies will again concentrate in the thermodynamic limit and hence we will mostly focus on the expected value of $\xi_n$ (one can then easily adapt our results to describe more general probabilistic concentration properties of the ground state energies).
The following is then a direct application of Theorem \ref{thm:Gordonneg1}. \begin{lemma} Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $\g$ and $\h$ be $m\times 1$ and $n\times 1$ vectors, respectively, with i.i.d. standard normal components. Also, let $g$ be a standard normal random variable and let $c_3$ be a positive constant. Then \begin{equation} E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\min_{\|\y\|_2=1}e^{-c_3(\y^T H\x + g)})\leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\min_{\|\y\|_2=1}e^{-c_3(\g^T\y+\h^T\x)}).\label{eq:negexplemma} \end{equation}\label{lemma:negexplemma} \end{lemma} \begin{proof} As mentioned above, the proof is a standard/direct application of Theorem \ref{thm:Gordonneg1}. We will sketch it for completeness. Namely, one starts by defining the processes $X_{ij}$ and $Y_{ij}$ in the following way \begin{equation} Y_{ij}=(\y^{(j)})^T H\x^{(i)} + g\quad X_{ij}=\g^T\y^{(j)}+\h^T\x^{(i)}.\label{eq:negexplemmaproof1} \end{equation} Then clearly \begin{equation} EY_{ij}^2=EX_{ij}^2=2.\label{eq:negexplemmaproof2} \end{equation} One then further has \begin{eqnarray} EY_{ij}Y_{ik} & = & (\y^{(k)})^T\y^{(j)}+1 \nonumber \\ EX_{ij}X_{ik} & = & (\y^{(k)})^T\y^{(j)}+1,\label{eq:negexplemmaproof3} \end{eqnarray} and clearly \begin{equation} EX_{ij}X_{ik}=EY_{ij}Y_{ik}.\label{eq:negexplemmaproof31} \end{equation} Moreover, \begin{eqnarray} EY_{ij}Y_{lk} & = & (\y^{(j)})^T\y^{(k)}(\x^{(i)})^T\x^{(l)}+1 \nonumber \\ EX_{ij}X_{lk} & = & (\y^{(j)})^T\y^{(k)}+(\x^{(i)})^T\x^{(l)}.\label{eq:negexplemmaproof32} \end{eqnarray} And after a small algebraic transformation \begin{eqnarray} EY_{ij}Y_{lk}-EX_{ij}X_{lk} & = & (1-(\y^{(j)})^T\y^{(k)})-(\x^{(i)})^T\x^{(l)}(1-(\y^{(j)})^T\y^{(k)}) \nonumber \\ & = & (1-(\x^{(i)})^T\x^{(l)})(1-(\y^{(j)})^T\y^{(k)})\nonumber \\ & \geq & 0.\label{eq:negexplemmaproof4} \end{eqnarray} Combining (\ref{eq:negexplemmaproof2}), (\ref{eq:negexplemmaproof31}), and (\ref{eq:negexplemmaproof4}) and using the results of Theorem \ref{thm:Gordonneg1} one then easily obtains (\ref{eq:negexplemma}). \end{proof} Following what was done in Subsection \ref{sec:poshopub} one then easily has \begin{multline} E(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2+g)})= E(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\max_{\|\y\|_2=1}\y^TH\x+g)})\\ =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\min_{\|\y\|_2=1}(e^{-c_3(\y^TH\x+g)})).\label{eq:chneg1} \end{multline} Connecting (\ref{eq:chneg1}) and the results of Lemma \ref{lemma:negexplemma} we have \begin{multline} E(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2+g)})= E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\min_{\|\y\|_2=1}(e^{-c_3(\y^TH\x+g)}))\\ \leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\min_{\|\y\|_2=1}(e^{-c_3(\g^T\y+\h^T\x)})) =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{-c_3\h^T\x})\min_{\|\y\|_2=1}(e^{-c_3\g^T\y}))\\ =E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{-c_3\h^T\x}))E(\min_{\|\y\|_2=1}(e^{-c_3\g^T\y})),\label{eq:chneg2} \end{multline} where the last equality follows because of the independence of $\g$ and $\h$.
Connecting the beginning and end of (\ref{eq:chneg2}) one has \begin{equation} E(e^{-c_3g})E(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)})\leq E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{-c_3\h^T\x}))E(\min_{\|\y\|_2=1}(e^{-c_3\g^T\y})).\label{eq:chneg3} \end{equation} Applying $\log$ on both sides of (\ref{eq:chneg3}) we further have \begin{equation} \hspace{-.5in}\log(E(e^{-c_3g}))+\log(E(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}))\leq \log(E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{-c_3\h^T\x})))+\log(E(\min_{\|\y\|_2=1}(e^{-c_3\g^T\y}))),\label{eq:chneg4} \end{equation} or in a slightly more convenient form \begin{equation} \hspace{-.5in}\log(E(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}))\leq -\log(E(e^{-c_3g}))+\log(E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{-c_3\h^T\x})))+\log(E(\min_{\|\y\|_2=1}(e^{-c_3\g^T\y}))).\label{eq:chneg5} \end{equation} It is also relatively easy to see that \begin{equation} \log(E(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}))\geq E\log(e^{-c_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)})=-Ec_3(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2), \label{eq:chneg6} \end{equation} and as earlier \begin{equation} -\log(E(e^{-c_3g}))=-\log(e^{\frac{c_3^2}{2}})=-\frac{c_3^2}{2}.\label{eq:chneg7} \end{equation} Connecting (\ref{eq:chneg5}), (\ref{eq:chneg6}), and (\ref{eq:chneg7}) one can finally establish a lower bound on the expected value of the ground state energy of the negative Hopfield model \begin{equation} E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)\geq \frac{c_3}{2}-\frac{1}{c_3}\log(E(\max_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(e^{-c_3\h^T\x}))) -\frac{1}{c_3}\log(E(\min_{\|\y\|_2=1}(e^{-c_3\g^T\y}))).\label{eq:chneg8} \end{equation} Let $c_3=c_3^{(s)}\sqrt{n}$ where $c_3^{(s)}$ is a constant independent of $n$. Then (\ref{eq:chneg8}) becomes \begin{eqnarray} \hspace{-.5in}\frac{E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}{\sqrt{n}} & \geq & \frac{c_3^{(s)}}{2}-\frac{1}{nc_3^{(s)}}\log(E(\max_{\x\in\{-1,1\}^n}(e^{-c_3^{(s)}\h^T\x}))) -\frac{1}{nc_3^{(s)}}\log(E(\min_{\|\y\|_2=1}(e^{-c_3^{(s)}\sqrt{n}\g^T\y})))\nonumber \\ & = & \frac{c_3^{(s)}}{2}-\frac{1}{c_3^{(s)}}\log(E(e^{c_3^{(s)}|\h_1|})) -\frac{1}{nc_3^{(s)}}\log(E(\min_{\|\y\|_2=1}(e^{-c_3^{(s)}\sqrt{n}\g^T\y})))\nonumber \\ & = & \frac{c_3^{(s)}}{2}-\frac{c_3^{(s)}}{2}-\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) -\frac{1}{nc_3^{(s)}}\log(E(\min_{\|\y\|_2=1}(e^{-c_3^{(s)}\sqrt{n}\g^T\y})))\nonumber \\ & = & -\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) -\frac{1}{nc_3^{(s)}}\log(E(\min_{\|\y\|_2=1}(e^{-c_3^{(s)}\sqrt{n}\g^T\y}))).\label{eq:chneg9lift} \end{eqnarray} One should now note that the above bound is effectively correct for any positive constant $c_3^{(s)}$. The only thing that is then left to be done so that the above bound becomes operational is to estimate $E(\min_{\|\y\|_2=1}(e^{-c_3^{(s)}\sqrt{n}\g^T\y}))=Ee^{-c_3^{(s)}\sqrt{n}\|\g\|_2}$. Pretty good estimates for this quantity can be obtained for any $n$. However, to facilitate the exposition we will focus only on the large $n$ scenario. Again, in that case one can use the saddle point concept applied in \cite{SPH}.
However, as earlier, here we will try to avoid the entire presentation from there and instead present the core neat idea, which has much wider applications. Namely, we start with the following identity \begin{equation} -\|\g\|_2=\max_{\gamma\geq 0}(-\frac{\|\g\|_2^2}{4\gamma}-\gamma).\label{eq:gamaidenneg} \end{equation} Then \begin{multline} \frac{1}{nc_3^{(s)}}\log(Ee^{-c_3^{(s)}\sqrt{n}\|\g\|_2})=\frac{1}{nc_3^{(s)}}\log(Ee^{c_3^{(s)}\sqrt{n}\max_{\gamma\geq 0}(-\frac{\|\g\|_2^2}{4\gamma}-\gamma)}) \doteq \frac{1}{nc_3^{(s)}}\max_{\gamma\geq 0}\log(Ee^{-c_3^{(s)}\sqrt{n}(\frac{\|\g\|_2^2}{4\gamma}+\gamma)})\\ =\max_{\gamma\geq 0}(-\frac{\gamma}{\sqrt{n}}+\frac{1}{c_3^{(s)}}\log(Ee^{-c_3^{(s)}\sqrt{n}(\frac{\g_i^2}{4\gamma})})),\label{eq:gamaiden1lift} \end{multline} where as earlier $\doteq$ stands for equality when $n\rightarrow \infty$. Also, as mentioned earlier, the relation $\doteq$ is exactly what was shown in \cite{SPH}. Now if one sets $\gamma=\gamma^{(s)}\sqrt{n}$ then (\ref{eq:gamaiden1lift}) gives \begin{multline} \frac{1}{nc_3^{(s)}}\log(Ee^{-c_3^{(s)}\sqrt{n}\|\g\|_2}) =\max_{\gamma^{(s)}\geq 0}(-\gamma^{(s)}+\frac{1}{c_3^{(s)}}\log(Ee^{-c_3^{(s)}(\frac{\g_i^2}{4\gamma^{(s)}})})) =\max_{\gamma^{(s)}\geq 0}(-\gamma^{(s)}-\frac{\alpha}{2c_3^{(s)}}\log(1+\frac{c_3^{(s)}}{2\gamma^{(s)}}))\\ =\max_{\gamma^{(s)}\leq 0}(\gamma^{(s)}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\gamma^{(s)}})).\label{eq:gamaiden2lift} \end{multline} After solving the last maximization one obtains \begin{equation} \widehat{\gamma_n^{(s)}}=\frac{2c_3^{(s)}-\sqrt{4(c_3^{(s)})^2+16\alpha}}{8}.\label{eq:gamaiden3lift} \end{equation} Connecting (\ref{eq:chneg9lift}), (\ref{eq:gamaiden1lift}), (\ref{eq:gamaiden2lift}), and (\ref{eq:gamaiden3lift}) one finally has \begin{equation} \hspace{-.5in}\frac{E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}{\sqrt{n}} \geq -\left (\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\widehat{\gamma_n^{(s)}}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\widehat{\gamma_n^{(s)}}})\right ),\label{eq:ubmorsophneg} \end{equation} where clearly $\widehat{\gamma_n^{(s)}}$ is as in (\ref{eq:gamaiden3lift}). As mentioned earlier, the above inequality holds for any $c_3^{(s)}$. Of course, to make it as tight as possible, one then has \begin{equation} \hspace{-.5in}\frac{E(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}\|H\x\|_2)}{\sqrt{n}} \geq - \min_{c_3^{(s)}\geq 0} \left (\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\widehat{\gamma_n^{(s)}}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\widehat{\gamma_n^{(s)}}})\right ).\label{eq:ubmorsophneg1} \end{equation} We summarize our results from this subsection in the following lemma. \begin{lemma} Let $H$ be an $m\times n$ matrix with i.i.d. standard normal components. Let $n$ be large and let $m=\alpha n$, where $\alpha>0$ is a constant independent of $n$. Let $\xi_n$ be as in (\ref{eq:sqrtnegham1}).
Let $\widehat{\gamma_n^{(s)}}$ be such that \begin{equation} \widehat{\gamma_n^{(s)}}=\frac{2c_3^{(s)}-\sqrt{4(c_3^{(s)})^2+16\alpha}}{8}.\label{eq:gamaiden3thmneg} \end{equation} and let $\xi_n^{(l)}$ be a scalar such that \begin{equation} \xi_n^{(l)}=-\min_{c_3^{(s)}\geq 0} \left (\frac{1}{c_3^{(s)}}\log(\mbox{erfc}(-\frac{c_3^{(s)}}{\sqrt{2}})) +\widehat{\gamma_n^{(s)}}-\frac{\alpha}{2c_3^{(s)}}\log(1-\frac{c_3^{(s)}}{2\widehat{\gamma_n^{(s)}}})\right ).\label{eq:condxipuneggenlemma} \end{equation} Then \begin{equation} \frac{E\xi_n}{\sqrt{n}}\geq\xi_n^{(l)}.\label{eq:neggenexplemma} \end{equation} Moreover, \begin{eqnarray} & & \lim_{n\rightarrow\infty}P(\min_{\x\in\{-\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}}\}^n}(\|H\x\|_2)\geq \xi_n^{(l)})\geq 1\nonumber \\ & \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_n\geq \xi_n^{(l)})\geq 1 \nonumber \\ & \Leftrightarrow & \lim_{n\rightarrow\infty}P(\xi_n^2\geq (\xi_n^{(l)})^2)\geq 1. \label{eq:neggenproblemma} \end{eqnarray} In particular, when $\alpha=1$ \begin{equation} \frac{E\xi_n}{\sqrt{n}}\geq\xi_n^{(l)}=0.32016.\label{eq:neggenexplemma1} \end{equation} \label{lemma:neggenlemma} \end{lemma} \begin{proof} The first part of the proof, related to the expected values, follows from the above discussion. The probability part follows by concentration arguments that are easy to establish (see, e.g., the discussion in \cite{StojnicHopBnds10}). \end{proof} One way to see how the above lemma works in practice is (as specified in the lemma) to choose $\alpha=1$ to obtain $\xi_n^{(l)}=0.32016$. This value is substantially better than the $0.2021$ offered in \cite{StojnicHopBnds10}. \section{Practical algorithmic observations of Hopfield forms} \label{sec:alghop} We briefly comment on the quality of the results obtained above when compared to their optimal counterparts. Of course, we do not know what the optimal values for the ground state energies are. However, we conducted a solid set of numerical experiments using various implementations of bit flipping algorithms (we of course restricted our attention to the $\alpha=1$ case). Our feeling is that both bounds provided in this paper are very close to the exact values. We believe that the exact value of $\lim_{n\rightarrow\infty}\frac{E\xi_p}{\sqrt{n}}$ is somewhere around $1.78$. On the other hand, we believe that the exact value of $\lim_{n\rightarrow\infty}\frac{E\xi_n}{\sqrt{n}}$ is somewhere around $0.328$. Another observation is actually probably more important. In terms of the size of the problems, the limiting value seems to be approached substantially faster for the negative form. Even for a fairly small size $n=50$ the optimal values are already approaching the $0.34$ barrier on average. However, for the positive form even dimensions in the several hundreds are not remotely enough to come close to the optimal value (of course, for larger dimensions we solved the problems only approximately, but the solutions were sufficiently far away from the bound that it was hard for us to believe that even the exact solution in those scenarios is anywhere close to it). Of course, the positive form is naturally an easier problem, but there is a price to pay for being easier. One may say that one way this price reveals itself is in the slow convergence (in $n$) of the optimal values to their limits. \section{Conclusion} \label{sec:conc} In this paper we looked at the classic positive and negative Hopfield forms and their behavior in the zero-temperature limit, which essentially amounts to the behavior of their ground state energies.
We introduced fairly powerful mechanisms that can be used to provide bounds on the ground state energies of both models. To be a bit more specific, we first provided purely theoretical upper bounds on the expected values of the ground state energy of the positive Hopfield model. These bounds present a substantial improvement over the classical ones we presented in \cite{StojnicHopBnds10}. Moreover, they in a way also present the first set of rigorous theoretical results that emphasize the combinatorial structure of the problem. Also, we do believe that in the most widely known/studied square case (i.e. $\alpha=1$) the bounds are fairly close to the optimal values. We then translated our results related to the positive Hopfield form to the case of the negative Hopfield form. We again targeted the ground state regime and provided a theoretical lower bound for the expected behavior of the ground state energy. The bounds we obtained for the negative form are an even more substantial improvement over the corresponding classical ones we presented in \cite{StojnicHopBnds10}. In fact, we believe that the bounds for the negative form are very close to the optimal value. As was the case in \cite{StojnicHopBnds10}, the purely theoretical results we presented are for the so-called Gaussian Hopfield models, whereas in reality a binary Hopfield model is often preferred. However, all results that we presented can easily be extended to the case of binary Hopfield models (and, for that matter, to an array of other statistical models as well). There are many ways in which this can be done. Instead of recalling them here, we refer to the brief discussion about it that we presented in \cite{StojnicHopBnds10}. We should add that the results we presented in \cite{StojnicHopBnds10} are tightly connected with the ones that can be obtained through the very popular replica methods from statistical physics. In fact, what was shown in \cite{StojnicHopBnds10} essentially provided a rigorous proof that the replica symmetry type of results are actually rigorous upper/lower bounds on the ground state energies of the positive/negative Hopfield forms. In that sense, what we presented here essentially confirms that a similar set of bounds, which can be obtained assuming a variant of the first level of symmetry breaking, are also rigorous upper/lower bounds. Showing this is relatively simple but does require a bit of a technical exposition, and we find it more appropriate to present it in a separate paper. We also recall (as in \cite{StojnicHopBnds10}) that in this paper we were mostly concerned with the behavior of the ground state energies. A vast majority of our results can be translated to characterize the behavior of the free energy when viewed at any temperature. While such a translation does not require any further insights, it does require paying attention to a whole lot of little details, and we will present it elsewhere. \begin{singlespace} \bibliographystyle{plain}
\section*{Graphical Abstract} \vskip1pc \fbox{ \begin{tabular}{p{.4\textwidth}p{.5\textwidth}} \bf \input{./source/title} \\ Youngkyoon Jang, Hatice Gunes, Ioannis Patras\\[1pc] \includegraphics[width=.3\textwidth]{top-elslogo-fm1} & \input{./source/abstract} \end{tabular} } \end{table*} \clearpage \thispagestyle{empty} \ifpreprint \vspace*{-1pc} \else \fi \begin{table*}[!t] \ifpreprint\else\vspace*{-15pc}\fi \section*{Research Highlights} \vskip1pc \fboxsep=6pt \fbox{ \begin{minipage}{.95\textwidth} \vskip1pc \begin{itemize} \item Face-SSD does not rely on a pre-normalisation step such as face detection and cropping. \item Face-SSD is a generic architecture that can be utilised for many face analysis tasks. \item Face-SSD provides real-time performance for a number of face-related applications. \item We evaluate and analyse the best combination of data augmentation methods for each application. \item We demonstrate several example applications of face analysis using the proposed Face-SSD. \end{itemize} \vskip1pc \end{minipage} } \end{table*} \clearpage \ifpreprint \setcounter{page}{1} \else \setcounter{page}{1} \fi \begin{frontmatter} \title{\input{./source/title}} \author[1]{Youngkyoon \snm{Jang}\corref{cor1}} \cortext[cor1]{Corresponding author: Tel.: +44-(0)752-214-2643;} \ead{[email protected]} \author[2]{Hatice \snm{Gunes}} \author[3]{Ioannis \snm{Patras}} \address[1]{University of Bristol, 1 Cathedral Square, Trinity Street, Bristol BS1 5DD, UK} \address[2]{University of Cambridge, William Gates Building, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK} \address[3]{Queen Mary University of London, Mile End Road, London E1 4NS, UK} \received{1 May 2013} \finalform{10 May 2013} \accepted{13 May 2013} \availableonline{15 May 2013} \communicated{S. Sarkar} \begin{abstract} \input{./source/abstract} \end{abstract} \begin{keyword} \KWD Face Analysis\sep Smile Recognition\sep Facial Attribute Prediction\sep Affect Recognition\sep Valence and Arousal Estimation\sep Single Shot MultiBox Detector \end{keyword} \end{frontmatter} \section{Introduction} \label{sec: introduction} \input{./source/introduction} \section{Related Work} \label{sec: related work} \input{./source/related_work} \section{The Proposed Framework: Face-SSD} \label{sec: method} \input{./source/methodology} \section{Experiments and Results} \label{sec: experiments} \input{./source/experiments} \section{Conclusions} \label{sec: conclusion} \input{./source/conclusion} \section*{Acknowledgments} \input{./source/acknowledgement} \bibliographystyle{model2-names} \subsection{Model Construction} \label{model_construction} Face-SSD consists of layers performing, at various stages, feature extraction (VGG16 conv.\ layers), face detection, and face analysis, as shown in Fig.~\ref{fig:basic_structure}(a). $G[1:10]$ represents convolution and pooling layer groups with the same input resolution. For example, G2 consists of two convolution layers and one pooling layer, whereas G6 consists of two convolution layers.
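The overall structure can also be summarised in code. The following PyTorch-style sketch is our own illustration and not the released implementation: the group parameters follow Table~\ref{table: network parameter detail}, while the choice of which feature maps feed the six output convolution layers (here G4, G6, G7, G8, G9 and G10, described in detail next) is an assumption that mirrors SSD.
\begin{verbatim}
import torch.nn as nn

def conv_group(in_ch, specs, pool=None):
    # specs: list of (out_channels, kernel, stride, padding)
    layers = []
    for out_ch, k, s, p in specs:
        layers += [nn.Conv2d(in_ch, out_ch, k, stride=s, padding=p),
                   nn.ReLU(inplace=True)]
        in_ch = out_ch
    if pool is not None:          # pool: (kernel, stride, padding)
        layers.append(nn.MaxPool2d(*pool))
    return nn.Sequential(*layers)

class FaceSSDSketch(nn.Module):   # unofficial sketch, not the paper's code
    def __init__(self, n_tasks=1):
        super().__init__()
        self.g1 = conv_group(3,    [(64, 3, 1, 1)] * 2,  pool=(2, 2, 0))
        self.g2 = conv_group(64,   [(128, 3, 1, 1)] * 2, pool=(2, 2, 0))
        self.g3 = conv_group(128,  [(256, 3, 1, 1)] * 3, pool=(2, 2, 0))
        self.g4 = conv_group(256,  [(512, 3, 1, 1)] * 3, pool=(2, 2, 0))
        self.g5 = conv_group(512,  [(512, 3, 1, 1)] * 3, pool=(3, 1, 1))
        self.g6 = conv_group(512,  [(1024, 3, 1, 1), (1024, 1, 1, 0)])
        self.g7 = conv_group(1024, [(256, 1, 1, 0), (512, 3, 2, 1)])
        self.g8 = conv_group(512,  [(128, 1, 1, 0), (256, 3, 2, 1)])
        self.g9 = conv_group(256,  [(128, 1, 1, 0), (256, 3, 1, 0)])
        self.g10 = conv_group(256, [(128, 1, 1, 0), (256, 3, 1, 0)])
        # one (face, box, tasks) output convolution per source scale;
        # sigmoid/linear activations are applied as described in the text
        src_ch = [512, 1024, 512, 256, 256, 256]
        self.heads = nn.ModuleList(
            nn.Conv2d(c, 1 + 4 + n_tasks, 3, padding=1) for c in src_ch)

    def forward(self, x):
        f4 = self.g4(self.g3(self.g2(self.g1(x))))
        f6 = self.g6(self.g5(f4))
        f7 = self.g7(f6)
        f8 = self.g8(f7)
        f9 = self.g9(f8)
        f10 = self.g10(f9)
        return [h(s) for h, s in
                zip(self.heads, [f4, f6, f7, f8, f9, f10])]
\end{verbatim}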
Similarly to SSD {\citep{conf/ECCV/Liu16}}, Face-SSD outputs six-scale ($S=6$) heatmap volumes generated by multiple output convolution layers [(f1, t1):(f6, t6)]. f[1:6] is produced by the face detection part, while t[1:6] is produced by the face analysis part. The output convolution layers of the two different parts are aligned and concatenated at the end. Each concatenated output convolution layer outputs a pixel-wise heatmap volume consisting of $(1+4+n)$ heatmap planes. For example, the concatenated output convolution layer for the second scale ($s=2$) outputs a three-dimensional volume ($HM_2 \times HM_2 \times (1+4+n)$) consisting of $(1+4+n)$ heatmap planes having the same resolution ($HM_2 \times HM_2$) of the second scale, as shown in Fig. \ref{fig:basic_structure}(b). The first plane indicates the existence of a face. The next four heatmap planes at each spatial position $i$ contain the centre $(cx, cy) \in R^2$ of the face bounding box and its width $w$ and height $h$. The former is relative to the location $i$ (i.e., $(cx, cy)$ are actually offsets) and the latter is relative to the current heatmap scale $s$. The remaining $n$ heatmap planes contain the confidences for the $n$ face analysis tasks -- note that these are also heatmaps, that is, they have spatial dimensions as well. All of the convolution layers are followed by a ReLU activation function except for the output convolution layers. For the output convolution layers, for binary classification tasks, such as face classification, smile recognition and attribute prediction, we use the sigmoid function (see Fig. \ref{fig:basic_structure}(b), (b-1) and (b-2), respectively). For regression tasks such as bounding box offsets and valence-arousal estimation, we use linear functions similarly to SSD {\citep{conf/ECCV/Liu16}} (see Fig. \ref{fig:basic_structure}(b) and (b-3)). The parameters for the layers in Face-SSD are summarised in Table \ref{table: network parameter detail}. The parameters of the convolution layers are listed in the order: number of kernels, kernel size, stride and padding, while the parameters of the pooling layers follow the order: kernel size, stride and padding. During training, the output (prediction) values that appear in the heatmaps responsible for the bounding box and tasks are examined only when the corresponding face label exists at that pixel (see details in Sec. \ref{subsubsec: face detection}). During testing, the values for the bounding box and the task-related output are examined only when the corresponding face confidence score exceeds a threshold. The face detection threshold is determined by selecting the optimal value that provides the best performance on the face detection task. \begin{table}[!t] \small \caption{The detailed parameters of Face-SSD layers (see text)} \label{table: network parameter detail} \centering \begin{tabular}{|c|c|c|} \hline Group ID & Conv.
ID: Parameters & Pool \\ \hline\hline G1 & [1:2]: (64, 3, 1, 1) & (2, 2, 0) \\ \hline G2 & [1:2]: (128, 3, 1, 1) & (2, 2, 0) \\ \hline G3 & [1:3]: (256, 3, 1, 1) & (2, 2, 0) \\ \hline G4 & [1:3]: (512, 3, 1, 1) & (2, 2, 0) \\ \hline G5 & [1:3]: (512, 3, 1, 1) & (3, 1, 1) \\ \hline \multirow{2}{*}{G6} & 1: (1024, 3, 1, 1) & $\cdot$ \\ & 2: (1024, 1, 1, 0) & $\cdot$ \\ \hline \multirow{2}{*}{G7} & 1: (256, 1, 1, 0) & $\cdot$ \\ & 2: (512, 3, 2, 1) & $\cdot$ \\ \hline \multirow{2}{*}{G8} & 1: (128, 1, 1, 0) & $\cdot$ \\ & 2: (256, 3, 2, 1) & $\cdot$ \\ \hline \multirow{2}{*}{G9} & 1: (128, 1, 1, 0) & $\cdot$ \\ & 2: (256, 3, 1, 0) & $\cdot$ \\ \hline \multirow{2}{*}{G10} & 1: (128, 1, 1, 0) & $\cdot$ \\ & 2: (256, 3, 1, 0) & $\cdot$ \\ \hline \multirow{3}{*}{Out. Conv.} & $C_{f}$: (1, 3, 1, 1) & $\cdot$ \\ & $B$: (4, 3, 1, 1) & $\cdot$ \\ & $C_{t}$: (n, 3, 1, 1) & $\cdot$ \\ \hline \end{tabular} \end{table} \subsubsection{Implementation details} \label{subsec:SmileNet_Insignts} \noindent {\textbf{Single aspect ratio:}} We utilise only one aspect ratio (square) configuring a default box to assign a ground truth label to a pixel position in a heatmap, as shown in Fig. \ref{fig:default_box_matching_ex}. This is because face deformations, caused by expression and pose, result in similar aspect ratios. This is in accordance with the related work in the literature -- e.g., Hao et al. {\citep{conf/CVPR/Hao17}} proposed Single-Scale RPN utilising one anchor box and Zhang et al. {\citep{conf/ICCV/SZhang17}} proposed S$^{3}$FD utilising one default box. \noindent {\textbf{Usage of pre-trained models:}} Several works including Liu et al. {\citep{conf/ICCV/Liu15}} demonstrate that models pre-trained on object recognition (e.g., ImageNet {\citep{conf/CVPR/Deng09}}) are useful for face localisation. Similarly, networks pre-trained on face recognition (e.g., CelebFaces {\citep{conf/CVPR/Sun14}}) are useful for capturing face attributes at a more detailed level. For this reason, we selectively use pre-trained parameters (trained with an object dataset {\citep{jour/IJCV/Russakovsky15, conf/ICLR/Simonyan115}} and a face dataset {\citep{conf/IWBFIAT/Koestinger11}}) to initialise the convolution filters for face detection and analysis tasks (see details in Sec. \ref{subsec: training}). This usage of pretrained models helps with improving the Face-SSD performance for both face detection (utilising large patterns) and analysis (utilising relatively smaller patterns) tasks. \subsection{Training} \label{subsec: training} Training of Face-SSD follows the following four steps: \begin{enumerate} \item Copying parameters of the VGG16 network {\citep{conf/ICLR/Simonyan115}} (convolution layers) to the VGG16 (feature extraction) part $G[1:5]$ of Face-SSD and subsampling\footnote{For example, the first fully connected layer $fc6$ of the VGG16 network {\citep{conf/ICLR/Simonyan115}} connects all the positions of a $T_{i} = (f_{vi}, m, m) = (512, 7, 7)$ dimensional input feature map, where $f_{vi}$ is the feature (kernel) dimension at each of the $m^2$ spatial locations, to a $f_{vo} = 4096$ dimension output vector $T_{o}$. Let us organise the weights in a tensor $W_{vgg}$ with dimensions $(f_{vo}, f_{vi}, m, m) = (4096, 512, 7, 7)$. On the other hand, Face-SSD takes an input feature map with dimensions $(512, 18, 18)$ and outputs a feature map with dimensions $T^{\prime}_{o}$ = $(a, m^{\prime}, m^{\prime}) = (1024, 18, 18)$ using filters with kernel size $3\times3$. 
The weight tensor $W_{fssd}$ is then of dimensions $(1024, 512, 3, 3)$. In order to initialise the $W_{fssd}$, we uniformly subsample the $W_{vgg}$ along each of its modes -- in our case by a factor $(4,1,3,3)$. This corresponds to subsampling by a factor of $4$ along the dimension of the output feature vector $T_{o}$ and by a factor of $3$ along each spatial dimension of the input tensor $T_{i}$ of the VGG16 network -- we copy the corresponding weights.} the parameters from fully connected layers ($fc6$ and $fc7$) of VGG16 network to the $G6$ layers of Face-SSD, as described in SSD {\citep{conf/ECCV/Liu16}}. \item Freezing the face analysis part and finetuning the face detection part by using the AFLW (face) dataset {\citep{conf/IWBFIAT/Koestinger11}}. \item Copying the parameters of the layers $G[4:10]$ constituting the face detection part to the corresponding layers of the face analysis part. \item Freezing the face detection part and finetuning the layers $G[4:10]$ constituting the face analysis part by using task-related datasets (e.g., CelebA {\citep{conf/ICCV/Liu15}} or GENKI-4K {\citep{GENKI_DB}} for smile recognition, CelebA {\citep{conf/ICCV/Liu15}} for facial attribute prediction, AffectNet {\citep{jour/TAC/Mollahosseini17}} for valence-arousal estimation). \end{enumerate} The first and second steps are similar to the initialisation and end-to-end learning process of SSD network {\citep{conf/ECCV/Liu16}}. We use the same cost function as the SSD to finetune the face detection part of Face-SSD. \begin{figure*} [t!] \centering \includegraphics[width=0.95\linewidth]{./imgs/methodology_default_box_matching_ex.png} \caption{ {Example of matched default box for the face confidence heatmaps ${C_f}_{[4:5]}$, produced by $f4$ and $f5$ output convolution layers (see Fig. \ref{fig:basic_structure}). (a) Dotted boxes (grey) represent multiple candidate default boxes with multiple different aspect ratios. Face-SSD (b) uses only one aspect ratio in the matching process of the default box $d$. The example image is one of the sample images of AFLW dataset {\citep{conf/IWBFIAT/Koestinger11}}.}} \label{fig:default_box_matching_ex} \end{figure*} \subsubsection{Face Detection} \label{subsubsec: face detection} As described above, finetuning of the face detection part is based on the use of an objective loss function $L_{face}$, which is a weighted sum of the face classification loss $L_{cls}$ and the bounding box regression loss $L_{reg}$ defined as: \begin{equation} L_{face}(x_{f}, c, l, g) = \frac{1}{N}( L_{cls}(x_{f}, c) + \lambda x_{f} L_{reg}(l, g) ), \label{eq:objective_function} \end{equation} where N is the total number of matched default boxes. For the regression loss $L_{reg}$, smooth L1 loss {\citep{conf/ICCV/Girshick15}} is used for calculating the distance between the predicted $l=\{l_{cx}, l_{cy}, l_w, l_h\}$ and the ground truth $g=\{g_{cx}, g_{cy}, g_w, g_h\}$ bounding boxes {\citep{conf/ECCV/Liu16}}, as shown in Eq. \ref{eq:reg_loss} and \ref{eq:smooth function}. 
Specifically, \begin{equation} \begin{split} L_{reg}(l, g) = \sum\limits_{m \in \{cx,cy,w,h\}} smooth_{L_{1}} (l_m - \hat{g}_m), \\ \hat{g}_{cx} = (g_{cx} - d_{cx}) / d_w,\quad \hat{g}_{cy} = (g_{cy} - d_{cy}) / d_h, \\ \hat{g}_{w} = \log(g_{w} / d_{w}),\quad \hat{g}_{h} = \log(g_{h} / d_{h}), \label{eq:reg_loss} \end{split} \end{equation} where \begin{equation} smooth_{L_{1}} (k) = \begin{cases} 0.5 k^{2}, & \text{if } | k | < 1 \\ | k | - 0.5, & \text{otherwise} \end{cases} \label{eq:smooth function} \end{equation} The face classification loss $L_{cls}$ is based on binary cross entropy over face confidence scores $c$, as shown in Eq. \ref{eq:cls loss}. \begin{equation} L_{cls}(x_{f},c) = -x_{f} \log(c) - (1 - x_{f}) \log (1 - c) \label{eq:cls loss} \end{equation} The flag $x_{f}\in\{0,1\}$ used in the equations above is set to 1 when the overlap between the ground truth and the default bounding box $d=\{d_{cx}, d_{cy}, d_w, d_h\}$ exceeds a threshold. Note that the regression loss is only used when $x_{f}=1$, and is disabled otherwise. At the later stages of training, similarly to {\citep{conf/ECCV/Liu16}}, we use Hard Negative Mining (HNM): we sort the losses calculated in the background region ($x_{f}=0$) in descending order and backpropagate only the largest ones. Following {\citep{conf/ECCV/Liu16}}, we set the loss-balancing weight $\lambda$ (in Eq. \ref{eq:objective_function}) to $1$. \subsubsection{Face Analysis} \label{subsubsec: task analysis} This section describes how to apply Face-SSD to various face analysis tasks. We address three problems: smile recognition as binary classification, facial attribute prediction as multi-class recognition and valence-arousal estimation as multi-task regression. In all three problems, the architecture of the network differs only in terms of the number $n$ of facial task heatmaps. For datasets that have multiple annotations for the same image, Face-SSD supports multi-task learning by defining a multi-task loss function as in Eq. \ref{eq:task loss}. \begin{equation} L_{total} = \sum\limits_{t = 1}^{T} ||w_{t} L_{t}(g_{t}, p_{t})||_{2}, \label{eq:task loss} \end{equation} That is, the multi-task loss $L_{total}$ sums the $L2$ norms of the weighted individual face analysis task losses $\{w_{t} L_{t}\}$, where $L_{t}$ calculates the error between a ground truth $g_{t}$ and a prediction $p_{t}$ for a given task $t$, and $T$ denotes the total number of face analysis tasks. In what follows, we define the loss functions used for the different problems we address. \vspace{5mm} \label{subsubsec: smile recognition} \noindent {\textbf{Smile Recognition.}} The smile classification loss $L_{smile}$ is the binary cross entropy over smile confidence scores $e$ and the ground truth $x_{e}\in\{0,1\}$, as defined in Eq. \ref{eq:emotion_loss}. \begin{equation} L_{smile}(x_{e}, e) = -x_{e} \log(e) - (1 - x_{e}) \log (1 - e) \label{eq:emotion_loss} \end{equation} The ground truth $x_{e}$ at each location is set using the default box matching strategy {\citep{conf/ECCV/Liu16}}. The loss is defined at each spatial location of the output heatmap, and in this case we do not use Hard Negative Mining (HNM), which was only required to select negative samples for face detection (see Sec. \ref{subsubsec: face detection}). Finetuning the network for face analysis tasks (e.g., smile recognition) does not impair the face detection performance, because the parameters of the face detection part of Face-SSD are frozen.
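To make the training objectives concrete, the following NumPy sketch implements Eqs. \ref{eq:objective_function}--\ref{eq:cls loss} for one image; the smile loss of Eq. \ref{eq:emotion_loss} has the same binary cross entropy form. This is a simplified illustration rather than our actual Theano implementation: Hard Negative Mining is omitted, and the function and variable names are ours, introduced only for this sketch.

\begin{verbatim}
import numpy as np

def bce(x, c, eps=1e-7):
    # binary cross entropy between {0,1} flags x and confidences c
    return -(x * np.log(c + eps) + (1 - x) * np.log(1 - c + eps))

def smooth_l1(k):
    # quadratic for |k| < 1, linear otherwise (elementwise)
    k = np.abs(k)
    return np.where(k < 1.0, 0.5 * k ** 2, k - 0.5)

def encode(g, d):
    # ground-truth offsets relative to a default box; boxes are (cx, cy, w, h)
    return np.array([(g[0] - d[0]) / d[2], (g[1] - d[1]) / d[3],
                     np.log(g[2] / d[2]), np.log(g[3] / d[3])])

def face_loss(x_f, c, l, g, d, lam=1.0):
    # x_f: match flags (B,), c: face confidences (B,),
    # l: predicted offsets (B, 4), g, d: ground-truth and default boxes (B, 4)
    N = max(int(x_f.sum()), 1)         # number of matched default boxes
    total = bce(x_f, c).sum()          # classification term
    for i in np.flatnonzero(x_f):      # regression term, active only if x_f = 1
        total += lam * smooth_l1(l[i] - encode(g[i], d[i])).sum()
    return total / N
\end{verbatim}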
\vspace{5mm} \label{subsubsec: multi-attribute learning} \noindent {\textbf{Facial Attribute Prediction.}} Facial attribute prediction is treated as multiple binary classification problems, where a number of attributes may exist simultaneously. For example, a face attribute (such as smiling) can appear independently of other attributes (such as gender or hair colour). Therefore, we define the facial attribute prediction loss $L_{att}$ as the average of independent attribute losses, that is \begin{equation} L_{att} (G, P) = - \frac{1}{N_a} \sum\limits_{a = 1}^{N_a} (g_{a} \log(p_{a}) + (1 - g_{a}) \log (1 - p_{a})), \label{eq:att_loss} \end{equation} where $N_a$ denotes the total number of attributes. $g_{a} \in G$ and $p_{a} \in P$ denote the ground truth (1 or 0) label and the predicted attribute confidence score of the $a$-th attribute, respectively. For calculating the single attribute prediction loss associated with an individual attribute $a$, we use the binary cross entropy over attribute confidence scores $p_{a}$. \vspace{5mm} \label{subsubsec: v-a estimation} \noindent {\textbf{Valence and Arousal Estimation.}} Similarly to several previous works (e.g. {\citep{jour/IVC/Koelstra13}}, {\citep{jour/TAC/Mollahosseini17}}), we treat arousal and valence prediction as a regression problem. Valence is related to the degree of positiveness of the affective state, whereas arousal is related to the degree of excitement {\citep{jour/PR/Russell03, jour/JPSP/Russell99}}. We use the Euclidean (L2) distance between the predicted value $\hat{y}_n$ and the ground truth value of valence/arousal $y_{n}$, as shown in Eq. \ref{eq:euclidean_dist}. The loss is then defined as the sum of the valence $E_v$ and arousal $E_a$ losses, that is \begin{equation} \begin{split} L_{emo} = E_v + E_a, \\ E = \frac{1}{2N} \sum\limits_{n=1}^{N} || \hat{y}_n - y_{n} ||_{2}^{2}, \label{eq:euclidean_dist} \end{split} \end{equation} where $N$ is the number of image samples in a mini-batch. \subsubsection{Data Augmentation in Training} \label{subsec:data_augmentation} Face-SSD takes a $300 \times 300$, $3$-channel colour image as input. Prior to data augmentation, all pixel values of the R, G, and B channels of a sample image are normalised based on the mean and standard deviation values of the entire dataset. Each sample image is first flipped in the horizontal direction with a probability of 0.5. During training, we randomly select one of the following data augmentation mechanisms to create noisy data samples for each epoch: shrinking, cropping, gamma correction and Hide-and-Seek (H-a-S) {\citep{conf/ICCV/Singh17}}. Both shrinking and cropping maintain the aspect ratio. Gamma correction is applied separately to the individual R, G, B channels. In H-a-S {\citep{conf/ICCV/Singh17}}, we hide image subareas and force the network to seek more context in areas that are less discriminative than key distinctive areas such as lip corners. We first randomly select a division number from 3, 4, 5 and 6; if we select 3, for example, the image region is divided into $9$ ($3 \times 3$) sub-image patches. Each sub-image patch is then hidden (filled with the mean R, G, B values of all data samples in the dataset) with a probability of $0.25$. \subsection{Testing} \label{subsubsec: testing} The registration-free Face-SSD decision for a specific face analysis task (e.g., smile recognition) is based on both the face and the task (e.g., smile) confidence scores.
First, the locations in the face confidence heatmap for which the score exceeds a threshold ($th_{face} = 0.1$) are selected. Then the Non-Maximum Suppression (NMS) method (with a Jaccard overlap value of $0.35$, as in S$^{3}$FD {\citep{conf/ICCV/SZhang17}}) is used to extract the final bounding boxes. Subsequently, a task-specific threshold $th_{t}$ is applied to the task-related score of the final bounding boxes (Fig. \ref{fig:ex_face_emotion_detection}). In the case of regression (e.g., valence-arousal estimation), the output value of the final bounding box is used. As mentioned in Sec. \ref{model_construction}, each output layer of Face-SSD generates several heatmaps: one for face detection, four for the offset coordinates of the face bounding box and $n$ for the $n$ face analysis tasks, as shown in Fig. \ref{fig:basic_structure}(b). Specifically, Figs. {\ref{subfig:smile_results_ex}} and {\ref{subfig:v_a_results_ex}} visualise the heatmaps generated by Face-SSD's second and third-scale output layers ($s=2, 3$), which handle the second and third smallest face sizes appearing in the image, respectively. Thus, activations in the heatmap are high when a face of the corresponding size is detected. For the given example of smile recognition, as shown in Fig. {\ref{subfig:smile_results_ex}}, the forefront heatmap shows two clusters of pixels, indicating the existence of two faces. The rearmost heatmap highlights the corresponding pixels only when a task is detected; in this example, the heatmap has high values when the detected face is a smiling face. \begin{figure} [t!] \centering \includegraphics[width=0.95\linewidth]{./imgs/methodology_detection_result_face_only.png} \subfigure[Smile Recognition] { \label{subfig:smile_results_ex} \includegraphics[width=0.95\linewidth]{./imgs/methodology_detection_result_face_and_emotion.png} } \subfigure[Valence-Arousal Estimation] { \label{subfig:v_a_results_ex} \includegraphics[width=0.95\linewidth]{./imgs/methodology_detection_result_face_and_va_ar.png} } \caption{{Examples of face detection and face analysis tasks. As representative examples of classification and regression, we visualise the output heatmaps for smile recognition and valence-arousal estimation. \subref{subfig:smile_results_ex} The heatmaps represent face classification, bounding box regression and smile recognition results. \subref{subfig:v_a_results_ex} For the valence-arousal example, we only visualise the output heatmaps for face classification, valence and arousal estimation in the bottom row. We rescaled the range of output values of the valence-arousal estimation heatmap from $[-1:1]$ to $[0:255]$ for the visualisation. The median (127) in this example represents the neutral valence or arousal value (0).}} \label{fig:ex_face_emotion_detection} \end{figure} \subsection{Datasets} \label{subsec: exp datasets} In this paper, we show the performance of the proposed Face-SSD on three representative face analysis applications: smile recognition (binary classification), facial attribute prediction (multi-class recognition), and valence-arousal estimation (multi-task regression). We stress that the structure of the network, including the number of filters and the filter sizes, remains the same -- the only change is the number of output layer heatmaps. We used the GENKI-4K {\citep{GENKI_DB}}, CelebA {\citep{conf/ICCV/Liu15}}, and AffectNet {\citep{jour/TAC/Mollahosseini17}} datasets to test the three representative applications using Face-SSD.
Beginning with {\citep{jour/TPAMI/Whitehill09}}, which performed the first extensive smile detection study, most subsequent studies used the GENKI-4K\footnote{The GENKI-4K {\citep{GENKI_DB}} dataset is a subset of the GENKI dataset used in {\citep{jour/TPAMI/Whitehill09}}. This dataset consists of $4,000$ face images, each labelled with smile and head pose (yaw, pitch, roll). Only the GENKI-4K dataset is publicly available.} dataset for performance evaluation {\citep{GENKI_DB}}. In this paper, the smiling face detection experiments were performed not only on the GENKI-4K dataset but also on the CelebA dataset {\citep{conf/ICCV/Liu15}}, which also contains smile labels. For the facial attribute prediction experiments, we used the CelebA dataset {\citep{conf/ICCV/Liu15}}, which is the most representative dataset for this task. Finally, for the valence-arousal estimation experiment we used the AffectNet dataset {\citep{jour/TAC/Mollahosseini17}}, which consists of face images captured in the wild with continuous-level (valence-arousal) labels. The AFLW dataset {\citep{conf/IWBFIAT/Koestinger11}} used for face detection and the datasets used for the face analysis tasks (i.e., GENKI-4K {\citep{GENKI_DB}}, CelebA {\citep{conf/ICCV/Liu15}}, AffectNet {\citep{jour/TAC/Mollahosseini17}}) have different bounding box positions and shapes. To solve this problem, we empirically adjusted the bounding box positions of these datasets to create a square box that surrounds the entire face area centred on the nose (similar to the bounding box of the AFLW dataset). To do this, we first used the trained Face-SSD to detect a face bounding box. Then, we double-checked whether the detected bounding box was correct; if it was incorrect, we modified the bounding box manually. In particular, when using the CelebA {\citep{conf/ICCV/Liu15}} dataset, we only examined smile recognition and facial attribute prediction performance for annotated faces. Each image sample in the CelebA dataset has only one bounding box with its corresponding attribute labels, even if the image contains multiple faces. Therefore, when multiple bounding boxes were detected (black boxes in Fig. \ref{fig:multi-detection case for attribute testing}) at test time, we only calculated the accuracy for the detected bounding box that matched the ground truth position (red box in Fig. \ref{fig:multi-detection case for attribute testing}). If no bounding box was detected at the ground truth location, it was counted as a false negative when calculating the accuracy. \begin{figure} [t!] \small \begin{center} \includegraphics[width=0.95\linewidth]{./imgs/imp_attribute_multibox_case.png} \end{center} \vspace{-0.4cm} \caption{{If multiple faces were detected (black boxes), only the annotated faces with the ground truth label ({\color{R}{red box}}) were evaluated for attribute prediction. The face detected in the background was not used for accuracy measurement.}} \label{fig:multi-detection case for attribute testing} \end{figure} \subsection{Face Detection} \label{subsec: face detection performance} First, we evaluate the face detection performance. Although Face-SSD performs face detection in parallel with one or more tasks, the face analysis task results appearing in the output heatmap are only examined at the corresponding pixel positions that indicate successful face detection (as discussed in Sec. \ref{subsubsec: testing}).
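Schematically, this gating of task outputs by the face detection result can be summarised by the following simplified NumPy sketch; the function name is ours, and the NMS step of Sec. \ref{subsubsec: testing} is omitted.

\begin{verbatim}
import numpy as np

def masked_task_scores(face_heatmap, task_heatmap, th_face=0.1):
    # keep task scores only at locations where a face is detected;
    # all other locations are ignored (set to NaN here for clarity)
    out = np.full(task_heatmap.shape, np.nan)
    keep = face_heatmap > th_face
    out[keep] = task_heatmap[keep]
    return out
\end{verbatim}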
Here, we evaluate the face detection performance of Face-SSD on the face analysis task datasets, including GENKI-4K {\citep{GENKI_DB}}, CelebA {\citep{conf/ICCV/Liu15}}, and AffectNet {\citep{jour/TAC/Mollahosseini17}}. According to the experimental results of {\citep{jour/JoV/Du11}}, the visual recognition ability of a human is degraded when the image resolution falls below $20 \times 30$ pixels. For this reason, the face detection of Face-SSD aims to support face analysis tasks rather than to detect tiny faces, which is beyond the scope of this work. To this end, we evaluate the face detection performance on the face analysis task (e.g., smile, attribute, valence-arousal) datasets, which do not include severe occlusion or very small faces. Instead, these datasets consist of images that typically contain faces of high resolution compared to $20 \times 30$ pixels and that are captured in the wild (with naturalistic variations in pose, occlusion, and/or scale). The face detection results are shown in Table \ref{table: FD_HNM_HaS_Effects} in terms of Equal Error Rate (EER) and Average Precision (AP) {\citep{jour/IJCV/Everingham10}}. First, we investigated the face detection performance using the same training strategy as SSD {\citep{conf/ECCV/Liu16}}; we call this configuration the Face-SSD Baseline (Face-SSD-B) {\citep{conf/ICCVW/Jang17}}. The AFLW dataset {\citep{conf/IWBFIAT/Koestinger11}} was used for training the face detection part of Face-SSD. For data augmentation, Face-SSD-B used shrinking, cropping, and gamma correction (see details in Sec. \ref{subsec:data_augmentation}). With this data augmentation, Face-SSD-B trained on the non-challenging AFLW face dataset did not achieve competitive performance (EER=$05.42\%$ and AP=$99.50$) in comparison to models trained on other face detection datasets. Note that, unlike general face detection evaluations, we used the simplest face analysis task dataset (GENKI-4K {\citep{GENKI_DB}}) to provide a performance comparison between the different strategy combinations. \begin{table*}[!t] \small \caption{Effects of using Hard Negative Mining (HNM) and Hide-and-Seek (H-a-S) methods when training face detection in Face-SSD.
(See the text for details on the abbreviations and configurations.)} \label{table: FD_HNM_HaS_Effects} \centering \begin{tabular}{|c||c c|c|c c|c c||c|c|} \hline \multirow{2}{*}{ } & \multicolumn{2}{ |c| }{IoU for GTs} & \multirow{2}{*}{HNM} & \multicolumn{2}{ |c| }{H-a-S for All} & \multicolumn{2}{ |c|| }{H-a-S for Half} & \multicolumn{2}{ |c| }{GENKI-4K Test Results} \\ \cline{2-3}\cline{5-10} & 0.50 & 0.35 & & Fine & Coarse & Fine & Coarse & EER ($\%$) & AP \\ \hline\hline {\textbf{Face-SSD-B}}aseline {\citep{conf/ICCVW/Jang17}} & $\checkmark$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & 05.42 & 99.50 \\ \hline Face-SSD-B with More GTs & $\cdot$ & $\checkmark$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & {\textbf{03.68}} & {\textbf{99.91}} \\ \hline Face-SSD-B with HNM & $\cdot$ & $\checkmark$ & $\checkmark$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & {\textbf{01.72}} & {\textbf{99.88}} \\ \hline \multirow{4}{*}{Face-SSD-B with H-a-S} & $\cdot$ & $\checkmark$ & $\cdot$ & $\checkmark$ & $\cdot$ & $\cdot$ & $\cdot$ & 34.83 & 93.54 \\ & $\cdot$ & $\checkmark$ & $\cdot$ & $\cdot$ & $\checkmark$ & $\cdot$ & $\cdot$ & 08.26 & 97.79 \\ & $\cdot$ & $\checkmark$ & $\cdot$ & $\cdot$ & $\cdot$ & $\checkmark$ & $\cdot$ & 01.95 & 99.89 \\ & $\cdot$ & $\checkmark$ & $\cdot$ & $\cdot$ & $\cdot$ & $\cdot$ & $\checkmark$ & {\textbf{01.16}} & {\textbf{99.91}} \\ \hline {\textbf{Face-SSD}} & $\cdot$ & $\checkmark$ & $\checkmark$ & $\cdot$ & $\cdot$ & $\cdot$ & $\checkmark$ & {\textbf{00.66}} & {\textbf{99.88}} \\ \hline \end{tabular} \end{table*} To improve the face detection performance, we first lowered the IoU threshold from $0.50$ to $0.35$ when assigning ground truths, similarly to S$^{3}$FD {\citep{conf/ICCV/SZhang17}}. Lowering the IoU threshold when matching default boxes increases the number of positive examples. By doing so, the accuracy was improved from EER=$05.42\%$ and AP=$99.50$ to EER=$03.68\%$ and AP=$99.91$. In order to improve the performance further, we applied a Hard Negative Mining (HNM) strategy to the training data samples in a minibatch. Specifically, we extracted the $30\%$ of the data samples with the largest loss in a minibatch, and then re-used these data samples in the next minibatch. By doing so, we further reduced the detection error from EER=$03.68\%$ and AP=$99.91$ to EER=$01.72\%$ and AP=$99.88$. Finally, we applied H-a-S {\citep{conf/ICCV/Singh17}} as one of our data augmentation strategies. However, unlike what is reported in the original H-a-S paper {\citep{conf/ICCV/Singh17}}, when the H-a-S method was applied to all training samples, the detection performance dropped significantly, to EER=$34.83\%$ and AP=$93.54$. Applying the H-a-S method randomly to approximately half of the training samples reduced the error to EER=$01.95\%$ and AP=$99.89$. In addition, as shown in Table \ref{table: FD_HNM_HaS_Effects}, our results indicate that for face detection it is better to hide coarsely divided patches (EER=$01.16\%$ and AP=$99.91$) than finely divided ones (EER=$01.95\%$ and AP=$99.89$), because face detection relies on relatively large continuous patterns. In Table \ref{table: FD_HNM_HaS_Effects}, for H-a-S, the coarse patch division process randomly selects the division number from 3, 4, 5 and 6 (see Sec. \ref{subsec:data_augmentation}), whereas the fine patch division process randomly selects the division number from 16, 32, 44 and 56, as proposed originally in {\citep{conf/ICCV/Singh17}}. \begin{figure} [t!]
\begin{center} \includegraphics[width=0.95\linewidth]{./imgs/exp_face_detection_precision_recall.png} \includegraphics[width=0.95\linewidth]{./imgs/exp_face_detection_ROC.png} \end{center} \vspace{-0.4cm} \caption{{Experimental curves for face detection performance on the GENKI-4K {\citep{GENKI_DB}}, CelebA {\citep{conf/ICCV/Liu15}} and AffectNet {\citep{jour/TAC/Mollahosseini17}} datasets: Precision-Recall curves and Receiver Operating Characteristic (ROC) curves.}} \label{fig:exp_face_detection_curves} \end{figure} By applying the training strategies of a low IoU threshold, HNM and H-a-S, we achieved EER=$0.66\%$ and AP=$99.88$ on the GENKI-4K dataset. For the CelebA dataset, we achieved EER=$0.56\%$ and AP=$99.88$, as shown in Fig. \ref{fig:exp_face_detection_curves}. For the AffectNet dataset, we achieved EER=$0.65\%$ and AP=$99.41$. These results indicate that Face-SSD can robustly detect faces in unconstrained environments, and that it can be used for further face analysis tasks such as facial attribute prediction and affect prediction along the dimensions of valence and arousal. The optimal thresholds for the best face detection accuracy were $0.20$ for the GENKI-4K dataset, $0.16$ for the CelebA dataset, and $0.11$ for the AffectNet dataset. \subsection{Face Analysis} \label{subsec: smile recognition performance} Face-SSD is inspired by SSD {\citep{conf/ECCV/Liu16}}, which promises real-time detection performance. Accordingly, the hyperparameter values used when finetuning the face detection and face analysis parts of Face-SSD are initialised with those used for training the base network of SSD {\citep{conf/ECCV/Liu16}}. We used SGD with an initial learning rate of $10^{-3}$, momentum of $0.9$, weight decay of $0.0005$, and batch size of $16$. We used a learning rate of $10^{-3}$ for the first $40K$ iterations, then continued training for $40K$ iterations with a learning rate of $10^{-2}$. We then continuously reduced the learning rate every $40K$ iterations until it reached $10^{-5}$. Increasing the learning rate for the second $40K$ iterations speeds up the optimisation process; however, we started the training with a learning rate of $10^{-3}$, because the optimisation tends to diverge if a larger learning rate is used at the beginning. The following sections detail the experiments we conducted to evaluate the two main performance factors of Face-SSD, namely prediction accuracy and processing time, for the tasks of smile recognition, facial attribute prediction and valence-arousal estimation. \begin{table*}[!t] \small \caption{A detailed comparison with the state-of-the-art methods on the GENKI-4K dataset {\citep{GENKI_DB}}. We summarise the features, classifiers, detection / registration methods and input image resolution (width, height, and channel) used in previous studies, in order of publication. All previous studies require a normalised (cropped and aligned) input image, which necessarily requires face detection and registration steps in advance (except {\citep{jour/MVA/Chen17}}-II and III).
Some works {\citep{jour/TIP/Shan12, conf/ICSPRA/Jain13, conf/ACPR/Zhang15, conf/ICIP/Li16, jour/MVA/Chen17}} do not specify how the face is detected and aligned (indicated by `?'), while {\citep{conf/ECCVW/Kahou14}} mentions that the original image is used if the face detection fails.} \label{table: comparison} \centering \begin{tabular}{|c|c|c|c|c|c|l|} \hline Method & Feature & Classifier & Detection & Registration & Input ($W \times H \times C$) & Accuracy $(\%)$ \\ \hline\hline {\citep{jour/TIP/Shan12}} & Pixel comparison & AdaBoost & ? & Eyes (manual) & $48\times48\times1$ & $89.70 \pm 0.45$ \\ {\citep{conf/ACCV/Liu12}} & HOG & SVM & VJ* & Eyes & $48\times48\times1$ & $92.26 \pm 0.81$ \\ {\citep{conf/ICSPRA/Jain13}} & Multi-Gaussian & SVM & VJ* & ? & $64\times64\times1$ & $92.97$ \\ {\citep{conf/ECCVW/Kahou14}} & LBP & SVM & VJ*+Sun* / ori. & $5+6$ Pts & $96\times96\times1$ & $93.20 \pm 0.92$ \\ {\citep{jour/NeuroCom/An15}} & HOG & ELM & VJ* & Flow-based* & $100\times100\times1$ & $88.20$ \\ {\citep{conf/ACPR/Zhang15}} & CNN & Softmax & ? & Face Pts & $90\times90\times1$ & $94.60 \pm 0.29$ \\ {\citep{conf/ICIP/Li16}} & Gabor-HOG & SVM & VJ* / manual & ? & $64\times64\times1$ & $91.60 \pm 0.89$ \\ {\citep{jour/MVA/Chen17}}-I & CNN & SVM & Liu* & ? & $64\times64\times1$ & $92.05 \pm 0.74$ \\ \hline {\citep{jour/MVA/Chen17}}-II & CNN & SVM & Liu* & $\cdot$ & $64\times64\times1$ & ${\textbf{90.60}} \pm 0.75$ \\ {\citep{jour/MVA/Chen17}}-III & CNN & SVM & $\cdot$ & $\cdot$ & $64\times64\times1$ & $78.10 \pm 0.56$ \\ \hline {\textbf{Face-SSD}} & {\textbf{CNN}} & {\textbf{Sigmoid}} & $\cdot$ & $\cdot$ & ${\textbf{300}}\times{\textbf{300}}\times{\textbf{3}}$ & ${\textbf{95.76}} \pm 0.56$ \\ \hline \end{tabular} \\ * VJ: {\citep{jour/IJCV/Viola04}}, Liu: {\citep{conf/ICCV/Liu15}}, Sun: {\citep{conf/CVPR/Sun13}}, Flow-based: {\citep{jour/NeuroCom/An15}} \end{table*} \subsubsection{Smile Recognition} \label{subsec: quantitative results} Accuracy for this task refers to the smile recognition performance including the face detection results: if face detection fails, the result of smile recognition is considered to be a non-smile. {\textbf{Testing on the GENKI-4K dataset:}} Experiments that use this dataset are conventionally based on four-fold validation procedures. However, as the GENKI-4K dataset contains a relatively small number of data samples ($4,000$), for training we initially utilised the CelebA dataset, which contains a rich set of images. When Face-SSD was trained on the CelebA dataset, we used the entire GENKI-4K dataset for testing. We obtained a smile recognition accuracy of $95.23\%$, as shown in Fig. \ref{fig:ROC_graph_smile_GENKI}. Despite being trained on a completely different dataset with different characteristics, Face-SSD already surpasses all the latest methods that used the GENKI-4K dataset for testing, as shown in Table \ref{table: comparison}. \begin{figure} [t!] \small \begin{center} \includegraphics[width=0.95\linewidth]{./imgs/exp_ROC_curve_smile_accuracy_GENKI.png} \end{center} \vspace{-0.4cm} \caption{{Receiver Operating Characteristic (ROC) curve for smiling face detection accuracy using the GENKI-4K {\citep{GENKI_DB}} dataset. Tr and Te represent training and testing, respectively.}} \label{fig:ROC_graph_smile_GENKI} \end{figure} To provide a fair comparison with other methods that use the four-fold validation strategy, we used the GENKI-4K dataset together with the bounding box annotations obtained with our method (as explained in Sec.
\ref{subsec: exp datasets}) to finetune the Face-SSD that was trained on the CelebA dataset. In this case, the smile recognition accuracy improves further. This is due to the fact that the training samples in the GENKI-4K dataset are more similar to the testing samples than those in the CelebA dataset. Although the training and testing samples do not overlap, using the same dataset (GENKI-4K) for training helps Face-SSD learn the characteristics of the test samples of the same (GENKI-4K) dataset. Our four-fold validation results were $96.33\%$, $96.30\%$, $95.30\%$ and $95.10\%$, as shown in Fig. \ref{fig:ROC_graph_smile_GENKI}. Compared to the accuracies reported by the existing works listed in Table \ref{table: comparison}, our method obtains the best results, with mean=${\textbf{95.76}}\%$ and standard deviation=$0.56\%$. Although Face-SSD does not require separate steps for face detection and registration, its smile recognition results rely on the face detection performed in parallel on the same architecture (as explained in Sec. \ref{subsec: face detection performance}). Among the existing works listed in Table \ref{table: comparison}, Chen's work ({\citep{jour/MVA/Chen17}}-II) reports the testing accuracy when the registration process is not used. We therefore compare Face-SSD's smile recognition performance more closely to the method of Chen ({\citep{jour/MVA/Chen17}}-II). Our experimental results show that Face-SSD (${\textbf{95.76}}\%$) outperforms the most recently reported smile recognition result of Chen ($90.60\%$), which is based on a deep learning architecture ({\citep{jour/MVA/Chen17}}-II). \begin{figure*} [!t] \begin{center} \includegraphics[width=0.95\linewidth]{./imgs/exp_eer_for_att_prediction.png} \end{center} \vspace{-0.4cm} \caption{{Performance comparison in terms of accuracy ($\%$) on the CelebA {\citep{conf/ICCV/Liu15}} dataset for facial attribute prediction. Face-SSD delivers excellent prediction performance, very close to that of the state-of-the-art models, without modifying the Face-SSD architecture. The state-of-the-art models are PANDA {\citep{conf/CVPR/Zhang14}}, LNets+ANet {\citep{conf/ICCV/Liu15}}, CTS-CNN {\citep{conf/ICB/Zhong16}}, MT-RBM PCA {\citep{conf/CVPRW/Ehrlich16}}, Walk and Learn {\citep{conf/CVPR/Wang16}}, MCNN-AUX {\citep{conf/AAAI/Hand17}}, DMTL {\citep{jour/PAMI/Han17}} and R-Codean {\citep{jour/PRL/Sethi18}}. (See Table \ref{table: comparison_att_CelebA} for more detailed accuracy comparisons.)}} \label{fig:ACC_multi_att} \end{figure*} {\textbf{Testing on the CelebA dataset:}} In the second experiment, we used the CelebA dataset to train and test Face-SSD. In this experiment, we randomly selected $75\%$ of the dataset for training and used the remaining $25\%$ for testing. We performed several experiments using different combinations of randomly selected training and test samples. Our experimental results show that Face-SSD detects smiling faces accurately (mean=${\textbf{92.81}}\%$), similarly to the state-of-the-art methods ({\citep{conf/ICCV/Liu15}}: $92.00\%$ and {\citep{conf/FG/Ranjan16}}: $93.00\%$), as shown in Table \ref{table: comparison_CelebA}. However, Face-SSD is much faster (${\textbf{47.28}}$ $ms$) than the other methods ({\citep{conf/ICCV/Liu15}}: $139$ $ms$, {\citep{conf/FG/Ranjan16}}: $3,500$ $ms$), which require region proposal methods for smile recognition (see Table \ref{table: comparison_CelebA}). \begin{table}[!t] \small \caption{Comparison to the state-of-the-art methods on the CelebA dataset in terms of accuracy $(\%)$ and time (ms.).
RP, EB and SS refer to Region Proposal, EdgeBox {\citep{conf/ECCV/Zitnick14}} and Selective Search {\citep{conf/ICCV/Sande11}}, respectively.} \label{table: comparison_CelebA} \centering \begin{tabular}{|c|c|c|c|} \hline Method & RP & Acc. $(\%)$ & Time (ms.) \\ \hline\hline Liu et al. {\citep{conf/ICCV/Liu15}} & EB {\citep{conf/ECCV/Zitnick14}} & $92.00$ & $139.00$ \\ Ranjan et al. {\citep{conf/FG/Ranjan16}} & SS {\citep{conf/ICCV/Sande11}} & $93.00$ & $3,500.00$ \\ {\textbf{Face-SSD}} & $\cdot$ & ${\textbf{92.81}}$ & ${\textbf{47.28}}$ \\ \hline \end{tabular} \end{table} \begin{table*}[!t] \tiny \caption{Comparison to the state-of-the-art methods for facial attribute prediction on the CelebA dataset in terms of prediction accuracy. The average accuracies of PANDA {\citep{conf/CVPR/Zhang14}}, LNets+ANet {\citep{conf/ICCV/Liu15}}, CTS-CNN {\citep{conf/ICB/Zhong16}}, MT-RBM PCA {\citep{conf/CVPRW/Ehrlich16}}, Walk and Learn {\citep{conf/CVPR/Wang16}}, MCNN-AUX {\citep{conf/AAAI/Hand17}}, DMTL {\citep{jour/PAMI/Han17}}, R-Codean {\citep{jour/PRL/Sethi18}}, and the proposed Face-SSD are $85.42\%$, $87.30\%$, $86.60\%$, $86.97\%$, $88.65\%$, $91.29\%$, $92.60\%$, $90.14\%$ and $90.29\%$, respectively.} \label{table: comparison_att_CelebA} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c||c|} \hline & {\rotatebox[origin=c]{90}{5 o'Clock Shadow}} & {\rotatebox[origin=c]{90}{Arched Eyebrows}} & {\rotatebox[origin=c]{90}{Attractive}} & {\rotatebox[origin=c]{90}{Bags Under Eyes}} & {\rotatebox[origin=c]{90}{Bald}} & {\rotatebox[origin=c]{90}{Bangs}} & {\rotatebox[origin=c]{90}{Big Lips}} & {\rotatebox[origin=c]{90}{Big Nose}} & {\rotatebox[origin=c]{90}{Black Hair}} & {\rotatebox[origin=c]{90}{Blond Hair}} & {\rotatebox[origin=c]{90}{Blurry}} & {\rotatebox[origin=c]{90}{Brown Hair}} & {\rotatebox[origin=c]{90}{Bushy Eyebrows}} & {\rotatebox[origin=c]{90}{Chubby}} & {\rotatebox[origin=c]{90}{Double Chin}} & {\rotatebox[origin=c]{90}{Eyeglasses}} & {\rotatebox[origin=c]{90}{Goatee}} & {\rotatebox[origin=c]{90}{Gray Hair}} & {\rotatebox[origin=c]{90}{Heavy Makeup}} & {\rotatebox[origin=c]{90}{High Cheekbones}} & \\ \hline PANDA (CVPR14) &88.0 & 78.0 & 81.0 & 79.0 & 96.0 & 92.0 & 67.0 & 75.0 & 85.0 & 93.0 & 86.0 & 77.0 & 86.0 & 86.0 & 88.0 & 98.0 & 93.0 & 94.0 & 90.0 & 86.0 & \\ LNets+ANet (ICCV15) &91.0 & 79.0 & 81.0 & 79.0 & 98.0 & 95.0 & 68.0 & 78.0 & 88.0 & 95.0 & 84.0 & 80.0 & 90.0 & 91.0 & 92.0 & 99.0 & 95.0 & 97.0 & 90.0 & 87.0 & \\ CTS-CNN (ICB16) &89.0 & 83.0 & 82.0 & 79.0 & 96.0 & 94.0 & 70.0 & 79.0 & 87.0 & 93.0 & 87.0 & 79.0 & 87.0 & 88.0 & 89.0 & 99.0 & 94.0 & 95.0 & 91.0 & 87.0 & \\ MT-RBM PCA (CVPRW16) &90.0 & 77.0 & 76.0 & 81.0 & 98.0 & 88.0 & 69.0 & 81.0 & 76.0 & 91.0 & 95.0 & 83.0 & 88.0 & 95.0 & 96.0 & 96.0 & 96.0 & 97.0 & 85.0 & 83.0 & \\ Walk and Learn (CVPR16) &84.0 & {\textbf{87.0}} & 84.0 & 87.0 & 92.0 & 96.0 & 78.0 & 91.0 & 84.0 & 92.0 & 91.0 & 81.0 & {\textbf{93.0}} & 89.0 & 93.0 & 97.0 & 92.0 & 95.0 & {\textbf{96.0}} & {\textbf{95.0}} & \\ \hline MCNN-AUX (AAAI17) &94.5 & 83.4 & 83.1 & 84.9 & 98.9 & {\textbf{96.0}} & 71.5 & 84.5 & {\textbf{89.8}} & {\textbf{96.0}} & 96.2 & 89.2 & 92.8 & 95.7 & 96.3 & {\textbf{99.6}} & 97.2 & {\textbf{98.2}} & 91.5 & 87.6 & \\ DMTL (TPAMI17) &{\textbf{95.0}} & 86.0 & {\textbf{85.0}} & {\textbf{99.0}} & 99.0 & 96.0 & {\textbf{88.0}} & {\textbf{92.0}} & 85.0 & 91.0 & 96.0 & {\textbf{96.0}} & 85.0 & {\textbf{97.0}} & {\textbf{99.0}} & 99.0 & {\textbf{98.0}} & 96.0 & 92.0 & 88.0 & \\ R-Codean (PRL18) &92.9 & 81.6 & 79.7 & 83.2
& {\textbf{99.5}} & 94.5 & 79.9 & 83.7 & 84.8 & 95.0 & {\textbf{96.6}} & 83.0 & 91.4 & 95.5 & 96.5 & 98.2 & 96.8 & 97.9 & 89.7 & 86.7 & \\ Face-SSD &92.9 & 82.0 & 81.3 & 82.5 & 98.6 & 95.2 & 77.8 & 82.3 & 87.9 & 93.6 & 95.0 & 83.5 & 89.6 & 95.1 & 96.0 & 99.2 & 96.3 & 97.6 & 90.7 & 86.8 & \\ \hline \hline & {\rotatebox[origin=c]{90}{Male}} & {\rotatebox[origin=c]{90}{Mouth Slightly Open}} & {\rotatebox[origin=c]{90}{Mustache}} & {\rotatebox[origin=c]{90}{Narrow Eyes}} & {\rotatebox[origin=c]{90}{No Beard}} & {\rotatebox[origin=c]{90}{Oval Face}} & {\rotatebox[origin=c]{90}{Pale Skin}} & {\rotatebox[origin=c]{90}{Pointy Nose}} & {\rotatebox[origin=c]{90}{Receding Hairline}} & {\rotatebox[origin=c]{90}{Rosy Cheeks}} & {\rotatebox[origin=c]{90}{Sideburns}} & {\rotatebox[origin=c]{90}{Smiling}} & {\rotatebox[origin=c]{90}{Straight Hair}} & {\rotatebox[origin=c]{90}{Wavy Hair}} & {\rotatebox[origin=c]{90}{Wearing Earrings}} & {\rotatebox[origin=c]{90}{Wearing Hat}} & {\rotatebox[origin=c]{90}{Wearing Lipstick}} & {\rotatebox[origin=c]{90}{Wearing Necklace}} & {\rotatebox[origin=c]{90}{Wearing Necktie}} & {\rotatebox[origin=c]{90}{Young}} & Mean \\ \hline PANDA (CVPR14) &97.0 & 93.0 & 93.0 & 84.0 & 93.0 & 65.0 & 91.0 & 71.0 & 85.0 & 87.0 & 93.0 & 92.0 & 69.0 & 77.0 & 78.0 & 96.0 & 93.0 & 67.0 & 91.0 & 84.0 & 85.42 \\ LNets+ANet (ICCV15) &98.0 & 92.0 & 95.0 & 81.0 & 95.0 & 66.0 & 91.0 & 72.0 & 89.0 & 90.0 & 96.0 & 92.0 & 73.0 & 80.0 & 82.0 & 99.0 & 93.0 & 71.0 & 93.0 & 87.0 & 87.30 \\ CTS-CNN (ICB16) &{\textbf{99.0}} & 92.0 & 93.0 & 78.0 & 94.0 & 67.0 & 85.0 & 73.0 & 87.0 & 88.0 & 95.0 & 92.0 & 73.0 & 79.0 & 82.0 & 96.0 & 93.0 & 73.0 & 91.0 & 86.0 & 86.60 \\ MT-RBM PCA (CVPRW16) &90.0 & 82.0 & {\textbf{97.0}} & 86.0 & 90.0 & 73.0 & 96.0 & 73.0 & 92.0 & 94.0 & 96.0 & 88.0 & 80.0 & 72.0 & 81.0 & 97.0 & 89.0 & 87.0 & 94.0 & 81.0 & 86.97 \\ Walk and Learn (CVPR16) &96.0 & {\textbf{97.0}} & 90.0 & 79.0 & 90.0 & {\textbf{79.0}} & 85.0 & 77.0 & 84.0 & {\textbf{96.0}} & 92.0 & {\textbf{98.0}} & 75.0 & 85.0 & {\textbf{91.0}} & 96.0 & 92.0 & 77.0 & 84.0 & 86.0 & 88.65 \\ \hline MCNN-AUX (AAAI17) &98.2 & 93.7 & 96.9 & 87.2 & 96.0 & 75.8 & {\textbf{97.0}} & 77.5 & 93.8 & 95.2 & 97.8 & 92.7 & 83.6 & 83.9 & 90.4 & {\textbf{99.0}} & {\textbf{94.1}} & 86.6 & 96.5 & 88.5 & 91.29 \\ DMTL (TPAMI17) &98.0 & 94.0 & 97.0 & 90.0 & {\textbf{97.0}} & 78.0 & 97.0 & {\textbf{78.0}} & {\textbf{94.0}} & 96.0 & {\textbf{98.0}} & 94.0 & {\textbf{85.0}} & {\textbf{87.0}} & 91.0 & 99.0 & 93.0 & 89.0 & {\textbf{97.0}} & {\textbf{90.0}} & {\textbf{92.60}} \\ R-Codean (PRL18) &95.9 & 89.8 & 96.3 & {\textbf{90.6}} & 94.6 & 76.5 & 96.9 & 77.0 & 93.6 & 95.3 & 97.6 & 92.8 & 81.2 & 75.4 & 82.7 & 97.9 & 92.0 & {\textbf{89.8}} & 95.9 & 86.6 & 90.14 \\ Face-SSD &97.3 & 91.9 & 96.0 & 89.0 & 94.9 & 74.8 & 95.7 & 74.9 & 93.1 & 94.3 & 96.6 & 91.8 & 83.4 & 85.1 & 86.9 & 98.5 & 92.6 & 87.8 & 95.6 & 87.6 & 90.29 \\ \hline \end{tabular} \end{table*} \subsubsection{Facial Attribute Prediction} \label{subsec: attr prediction performance} In this section, we evaluated the performance of attribute prediction using Face-SSD for the prediction of $40$ attributes such as gender, age, etc. Our framework treats this problem as multiple binary classification problems using $40$ heatmaps at the output layers. The only difference with the smile recognition case is the number of filter kernels used at the final layer -- everything else remains the same, including the learning hyperparameters. 
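For illustration, at a single matched heatmap location Eq. \ref{eq:att_loss} reduces to the following NumPy sketch (the function name is ours, introduced only for this sketch):

\begin{verbatim}
import numpy as np

def attribute_loss(g, p, eps=1e-7):
    # mean binary cross entropy over the N_a = 40 attributes;
    # g: ground-truth labels in {0,1}, p: predicted confidences in (0,1)
    g, p = np.asarray(g, float), np.asarray(p, float)
    return -np.mean(g * np.log(p + eps) + (1 - g) * np.log(1 - p + eps))
\end{verbatim}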
The effects of modifying various settings during training are presented in Table \ref{table: att_color_HaS_Effects}. Our experiment focuses specifically on the effects of the Gamma Correction (GC) and Hide-and-Seek (H-a-S) strategies used in the data augmentation process. Depending on the attribute label, these two data augmentation strategies might affect the accuracy of facial attribute prediction in different ways. Gamma correction (colour value adjustment) affects the accuracy of predicting colour-related attributes, such as hair colour (e.g., Black, Blond, Brown and Gray Hair), skin colour (e.g., Pale Skin and Rosy Cheeks) and the presence of cosmetics (e.g., Heavy Makeup and Wearing Lipstick). Hide-and-Seek, which forces Face-SSD to seek more of the overall face area, is expected to affect the accuracy of predicting attributes related to the overall face area, including ``Attractive, Blurry, Chubby, Heavy Makeup, Oval Face, Pale Skin and Young''. \begin{figure} [t!] \small \begin{center} \includegraphics[width=0.95\linewidth]{./imgs/exp_accuracy_improve_using_GC_HaS.png} \end{center} \caption{{Removing Gamma Correction (GC) when training Face-SSD (Case C in Table \ref{table: att_color_HaS_Effects}) improves the accuracy of predicting colour-related attributes compared to using GC (Case A in Table \ref{table: att_color_HaS_Effects}). Using Hide-and-Seek (H-a-S) (Case B in Table \ref{table: att_color_HaS_Effects}) does not improve the overall face area-related attributes as expected.}} \label{fig:effect of using GC and H-a-S} \end{figure} As shown in Table \ref{table: att_color_HaS_Effects}, we tested Face-SSD with all possible combinations of Gamma Correction and Hide-and-Seek during training, while all other settings remained the same as for the face detection part of Face-SSD (see Table \ref{table: FD_HNM_HaS_Effects}). As expected, using Gamma Correction (Cases A and B in Table \ref{table: att_color_HaS_Effects}), which modifies the original colour of the training image, degrades the attribute recognition performance compared to training without Gamma Correction (Cases C and D in Table \ref{table: att_color_HaS_Effects}). Although training without Gamma Correction primarily improves the accuracy of the colour-related attributes (e.g., Black Hair, Blond Hair, Brown Hair and Heavy Makeup), it also helps improve the accuracy of other attributes, as shown in Fig. \ref{fig:effect of using GC and H-a-S}. By removing Gamma Correction, Face-SSD achieves an accuracy of $90.29\%$, which is competitive ($> 90\%$) with MCNN-AUX {\citep{conf/AAAI/Hand17}}, DMTL {\citep{jour/PAMI/Han17}} and R-Codean {\citep{jour/PRL/Sethi18}} (see Fig. \ref{fig:ACC_multi_att}). Interestingly, the use of Hide-and-Seek improves accuracy, but it does not primarily improve the accuracy of attributes related to large facial areas, such as ``Attractive, Blurry, Chubby, Heavy Makeup, Oval Face, Pale Skin and Young'', as originally expected. On the contrary, it helps to identify more details in certain face areas (e.g., Bushy Eyebrows, Mouth Slightly Open, Straight Hair, Wavy Hair, Wearing Earrings, Wearing Necktie), as shown in Fig. \ref{fig:effect of using GC and H-a-S}. When training without Gamma Correction, Face-SSD benefits less from the use of Hide-and-Seek, as shown in Table \ref{table: att_color_HaS_Effects} (Case D).
The reason is that training without Gamma Correction already has a larger impact on improving the accuracy of those same attributes, as shown in Fig. \ref{fig:effect of using GC and H-a-S}. The Face-SSD results shown in Table \ref{table: comparison_att_CelebA} are obtained by training Face-SSD with Hide-and-Seek but without Gamma Correction (Case D in Table \ref{table: att_color_HaS_Effects}). Although we use the generalised Face-SSD architecture, as opposed to an architecture specially designed for facial attribute prediction, we achieve highly competitive accuracy (in the top three among the related works). \begin{table}[!t] \small \caption{The effect of using Gamma Correction (GC) and Hide-and-Seek (H-a-S) in the data augmentation process when training Face-SSD for attribute prediction using the CelebA dataset.} \label{table: att_color_HaS_Effects} \centering \begin{tabular}{|c|c|c||c|} \hline & Using GC & Using H-a-S & Accuracy ($\%$) \\ \hline\hline Face-SSD A & $\checkmark$ & $\cdot$ & 89.57 \\ \hline Face-SSD B & $\checkmark$ & $\checkmark$ & 90.06 \\ \hline Face-SSD C & $\cdot$ & $\cdot$ & 90.15 \\ \hline Face-SSD D & $\cdot$ & $\checkmark$ & 90.29 \\ \hline \end{tabular} \end{table} \subsubsection{Valence and Arousal Estimation} \label{subsec: V-A estimation performance} In this section, we investigate the performance of valence-arousal estimation using Face-SSD. Unlike the previous sections, which address binary classification (smile recognition) and multi-class recognition (facial attribute prediction) problems, Face-SSD for valence-arousal solves a regression problem. To this end, we used a state-of-the-art dataset called AffectNet {\citep{jour/TAC/Mollahosseini17}}. AffectNet consists of face images captured in the wild with corresponding valence-arousal and emotion annotations. To confirm the regression ability of Face-SSD, we only investigated the valence-arousal estimation performance. Note that, as AffectNet consists only of cropped face images, we trained Face-SSD using a data augmentation strategy that allows only minor variations in face size. Therefore, during testing, Face-SSD typically handles large faces for valence-arousal estimation. Despite this limitation during training, however, Face-SSD is able to handle not only large faces but also faces of medium size during testing, as shown in Fig. \ref{fig:teaser}(c). The performance of the valence-arousal estimation is shown in Table \ref{table:ACC_val_aro}. For valence estimation, AffectNet yields slightly better results than Face-SSD; in terms of arousal, on the other hand, Face-SSD provides better results. Overall, Face-SSD provides close to state-of-the-art performance without any modification to the original architecture of the Face-SSD network. See {\citep{jour/TAC/Mollahosseini17}} for a detailed description of the units in Table \ref{table:ACC_val_aro}. \begin{table}[!t] \small \caption{Experimental results of valence and arousal estimation using the AffectNet {\citep{jour/TAC/Mollahosseini17}} dataset.
Experimental results are reported using the Root Mean Square Error (RMSE), Pearson's Correlation Coefficient (CORR), Sign Agreement Metric (SAGR) and Concordance Correlation Coefficient (CCC) (see {\citep{jour/TAC/Mollahosseini17}} for a detailed description of the metrics).} \label{table:ACC_val_aro} \centering \begin{tabular}{|c||c|c|c|c|} \hline & \multicolumn{2}{ |c| }{Valence} & \multicolumn{2}{ |c| }{Arousal}\\ \cline{2-5} & AffectNet & Face-SSD & AffectNet & Face-SSD \\ \hline\hline RMSE & {\textbf{0.37}} & 0.4406 & 0.41 & {\textbf{0.3937}} \\ \hline CORR & {\textbf{0.66}} & 0.5750 & {\textbf{0.54}} & 0.4953 \\ \hline SAGR & {\textbf{0.74}} & 0.7284 & 0.65 & {\textbf{0.7129}} \\ \hline CCC & {\textbf{0.60}} & 0.5701 & 0.34 & {\textbf{0.4665}} \\ \hline \end{tabular} \end{table} \subsection{Computational Speed and Complexity} \label{subsec: computation and complexity} For all of the Face-SSD applications presented in this paper, we obtained an average processing time of ${\textbf{47.39}}$ $ms$ (${\textbf{21.10}}$ $FPS$) during testing, in an experimental environment consisting of an Intel Core i7-6700HQ CPU and an NVIDIA GeForce GTX 960M GPU, with 23.5GB of DRAM. We used Theano for the Face-SSD implementation. As shown in Table \ref{table:proc_time_param_cnt}, most Face-SSD applications achieve near real-time processing speed. Smile recognition (binary classification), facial attribute prediction (40-class recognition) and valence-arousal estimation (multi-task regression) take $47.28$ $ms$ ($21.15$ $FPS$), $47.55$ $ms$ ($21.03$ $FPS$) and $47.37$ $ms$ ($21.11$ $FPS$), respectively. Using the proposed generic Face-SSD for face analysis, the number of model parameters (an indicator of complexity) does not increase linearly as the number of face analysis tasks and classes increases. Although facial attribute prediction performs $40$ times more tasks than smile recognition, the processing time of the attribute prediction task increases by only $0.27$ $ms$, and only a small number of additional parameters ($0.09$ $M$) is required. As shown in Table \ref{table: comparison_CelebA}, the proposed Face-SSD is significantly faster than traditional methods that use separate region proposal and task prediction steps to analyse faces. For example, the work of Liu et al. {\citep{conf/ICCV/Liu15}} requires $35$ $ms$ to generate the face confidence heatmap and $14$ $ms$ to classify the attributes. In addition, this method requires another $90$ $ms$ to find the candidate bounding boxes (EdgeBox {\citep{conf/ECCV/Zitnick14}}) for localising the final bounding box, which adds up to a total processing time of $139$ $ms$ ($7.19$ $FPS$). The work of Ranjan et al. {\citep{conf/FG/Ranjan16}} takes an average of $3,500$ $ms$ ($0.29$ $FPS$) to process an image. Ranjan et al. {\citep{conf/FG/Ranjan16}} explain that the main bottleneck for speed is the region proposal process (Selective Search {\citep{conf/ICCV/Sande11}}) and the repetitive CNN processing of every individual proposal.
\begin{table}[!t] \small \caption{The total number of parameters and the processing time for various face analysis applications using Face-SSD.} \label{table:proc_time_param_cnt} \centering \begin{tabular}{|c|c|c|} \hline Face Analysis Task & Parameter Number & ms (FPS) \\ \hline\hline Face Detection Part (only) & $2.31$ $M$ & 25.57 (39.11) \\ \hline \hline Smile Recognition & $4.44$ $M$ & 47.28 (21.15) \\ \hline Facial Attribute Prediction & $4.53$ $M$ & 47.55 (21.03) \\ \hline Valence-Arousal Estimation & $4.46$ $M$ & 47.37 (21.11) \\ \hline \hline Average of All Applications & $4.48$ $M$ & 47.39 (21.10) \\ \hline \end{tabular} \end{table} To ensure a fair comparison of the processing times, the time should be measured in the same experimental environment. However, Liu et al. {\citep{conf/ICCV/Liu15}} do not provide detailed information about their experimental environment, except that they use GPUs. Ranjan et al. {\citep{conf/FG/Ranjan16}} implemented their all-in-one network using $8$ CPU cores and GTX TITAN-X GPUs. The processing speed of the proposed Face-SSD is about $74$ times faster than that of the all-in-one network, even in a less powerful experimental environment. Although Face-SSD is faster than other face analysis methods, its processing speed is lower than that of the base object detection model (SSD) {\citep{conf/ECCV/Liu16}}, as the complexity of Face-SSD is nearly twice that of SSD, as shown in Table \ref{table:proc_time_param_cnt}. Placing more layers to perform face analysis tasks increased the number of parameters in Face-SSD. However, the structure of the all-in-one network {\citep{conf/FG/Ranjan16}} shows that sharing more convolutional features does not degrade the performance of the various tasks. Capitalising on this idea, we expect to further reduce the complexity of Face-SSD by sharing more layers and assigning a relatively small number of layers to the individual face analysis tasks.
\section{Introduction} \label{sec:1} The spectrum of the Sun is the outcome of the physics governing the outer layers of our star. Understanding the formation of the solar spectrum is a necessary step towards predicting its variability along the solar magnetic cycle and measuring the solar surface composition. The solar spectrum at wavelengths longer than about 140 nm is variable only at the few-percent level, and given the exquisite accuracy of the solar parameters, observations of the Sun may provide the best available standard to calibrate and guide the construction of theoretical model atmospheres for late-type stars. Ultimately, our ability to predict the luminosities of other stars and entire galaxies can be tested and improved by studying the solar spectrum. The UV part of the spectrum is of particular relevance for us, as it is closely connected to the chemistry of the Earth's atmosphere and the evolution of life on Earth. Astrophysically, the UV is exciting for its wealth of information: the strongest atomic lines concentrate in this spectral window, and so do ionization edges. Although the Sun is not a particularly luminous star, it shares atmospheric physics with other F-G-K late-type stars, which contribute significant mass and light to distant galaxies, as shown in many of the papers included in this volume. Perhaps the most severe difficulty in modelling the outer layers of the Sun is related to the existence of an 'upper atmosphere', where the time-averaged thermal gradient is reversed and a combination of high temperature and low density drives the plasma far from equilibrium conditions (see Judge 2005 and Rutten 2007 for recent reviews). Semi-empirical time-independent models of the upper atmosphere have provided significant insight (see, e.g., the classical Vernazza, Avrett \& Loesser 1981 paper). Increasingly sophisticated hydrodynamical simulations are making their way upwards into the lower chromosphere (Wedemeyer et al. 2004; Wedemeyer-B\"ohm et al. 2007). Space imagery of the upper atmosphere reveals a complicated interaction of magnetic fields and waves. Such images contrast with the much simpler picture that we get from optical observations of the photosphere, where the magnetic field that permeates our star causes only a small distortion from {\it field-free} conditions, and the temperature contrast of the granulation is only a few percent. Fortunately, it is possible to study the lower atmosphere independently of the higher layers. In the optical and infrared the upper atmosphere is optically thin, and the opacity, dominated by the H$^{-}$ continuum, is only superseded by metal opacity at wavelengths shorter than about 300 nm. As we move further into the UV, the rapidly increasing metal opacity shifts the spectrum formation into the lower chromosphere. The change of character is reflected in the time variability of the integrated solar spectrum, which exceeds 10 \% at $\lambda \sim 140$ nm and 50 \% at $\lambda < 120$ nm. An array of empirical models representing the different magnetic structures on the solar surface (e.g. sunspots, plage, network, etc.) needs to be considered to describe the variability of the solar spectrum throughout the solar cycle (see, e.g., Fligge, Solanki \& Unruh 2000, Fontenla et al.
1999), but at $\lambda > 200$ nm a single model is expected to be a reasonable approximation, given that the vast majority of the solar surface is typically free from regions with strong magnetic fields (what is usually referred to as the 'quiet' Sun or the internetwork). There is an extensive literature on the comparison of calculated and observed solar UV fluxes. Most readers will remember the UV {\it missing opacity} problem, but the literature on this subject has been sparse over the last decade. We first review recent results, and then move on to describe our current efforts to improve models of the solar photosphere and compile updated opacities. \section{Anybody said 'missing' opacity?} \label{sec:2} Early studies found too much UV flux in model atmosphere calculations (Houtgast \& Namba 1968, Labs \& Neckel 1968, Matsushima 1968). Based on a linelist from semi-empirical calculations of atomic structure (Kurucz \& Peytremann 1975), completed with literature values, Kurucz (1992) concluded that the problem was solved, but his proposal was criticized by Bell et al. (1994), as high-resolution observations did not confirm many of the predicted features. Bell, Balachandran \& Bautista (2001) revisited this issue armed with updated Fe I opacity from the R-matrix calculations of Bautista (1997), concluding that the problem was significantly reduced, but still present. They found that if iron opacity was responsible for the deficit, the new data could only account for half of the missing opacity. More recently, we performed a similar study using Gaussian-averaged photoionization cross-sections from the Opacity Project for elements with atomic numbers 6--14 and scaled hydrogenic cross-sections for Fe I, arriving at the opposite conclusion (Allende Prieto, Hubeny \& Lambert 2003a). Independently, Fontenla et al. (1999, see also Fontenla et al. 2006) used a combination of semi-empirical models to model the solar spectrum. They noticed an opacity deficit around 410 nm. Nonetheless, the use of semi-empirical models whose temperature structure has been modified to reproduce observed fluxes makes the discussion of absolute fluxes somewhat circular. Note also that the continuum metal opacities considered in these studies are outdated and neglect atomic iron. There were several differences between the calculations of Balachandran et al. and ours. First of all, different model atmospheres were used: a MARCS model versus an interpolated Kurucz (1993) solar model. A different solar surface composition was adopted by the two groups. Most relevantly, Balachandran et al. adopted $\log \epsilon$(Mg)$=7.44$ and $\log \epsilon$(Fe)$=7.55$, while we used $\log \epsilon$(Mg)$=7.58$ and $\log \epsilon$(Fe)$=7.50$. Our higher magnesium abundance can explain up to about 5 \% less flux in our calculations at 400 nm and up to 20 \% shortwards of 300 nm (see Section \ref{sec:4}), but the difference between the iron abundances, although smaller, goes in the wrong direction. Our calculations had (at least!) one prominent shortcoming: molecular opacity was neglected. We also made a mistake by including natural damping in L$\alpha$ too far from the transition frequency. Mathematically, the natural damping contribution to the Lorentzian wings of L$\alpha$ is strong enough to reach very far, even into the optical. Physically, however, natural damping in Lyman alpha far from the transition frequency becomes Rayleigh scattering, and should be treated as such.
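To make the last point explicit, consider a single classical oscillator of resonance frequency $\omega_0$ as a schematic stand-in for L$\alpha$ (the actual hydrogen cross-section involves a sum over the whole Lyman series). Far from resonance, the scattering cross-section of the bound electron behaves as
\begin{equation}
\sigma(\omega) \simeq \sigma_{\rm T} \, \frac{\omega^4}{(\omega^2-\omega_0^2)^2}
\;\rightarrow\;
\sigma_{\rm T} \left( \frac{\omega}{\omega_0} \right)^4
\qquad (\omega \ll \omega_0),
\end{equation}
where $\sigma_{\rm T}$ is the Thomson cross-section, i.e.\ it falls off as $\omega^4$, as expected for Rayleigh scattering, whereas a Lorentzian damping wing tends to a finite value far from the line center and therefore grossly overestimates the contribution of L$\alpha$ in the optical.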
The opacity deficit, if any, has not been clearly linked to photoionization of atomic iron, and the solar photospheric abundances of several major elements have been systematically reduced over the last few years (see Asplund 2005, Asplund, Grevesse \& Sauval 2005). It is time to take a closer look at this issue. \section{Revisiting the problem: opacities, equation of state, chemical composition and model atmospheres} \label{sec:3} The problem of atmospheric structure, regardless of geometry, is intrinsically coupled to the chemical composition of the star. The relevant atomic and molecular opacities need to be accounted for, not only to predict accurately the spectrum shape, but also to describe properly the energy balance, the equation of state and, ultimately, the atmospheric structure (see the paper by Hubeny in this volume). The chemical composition of the solar atmosphere, in turn, is determined from spectral synthesis calculations based on a model atmosphere computed for a given composition. Thus, abundances, opacities, equation of state, and model atmospheres are intrinsically coupled: changing one of these elements in isolation may be meaningless. Below, we briefly describe the main updates that we are implementing in our calculations. \subsection{Abundances} Over the last 7 years, a number of spectroscopic investigations of the solar chemical composition have significantly modified the standard values generally adopted for the solar photosphere. The largest updates affect some of the most abundant elements, such as carbon or oxygen (Allende Prieto, Asplund \& Lambert 2001, 2002, Asplund et al. 2004, 2005), but minor changes also affect iron (Asplund et al. 2000b), silicon (Asplund 2000), and calcium (Asplund et al. 2005). The latter reference summarizes these revisions, which are based on a new generation of three-dimensional time-dependent (non-magnetic) simulations of the solar surface. Updates have also been made for heavier elements (Sneden \& Lawler 2005), albeit their impact on the solar absolute fluxes is only marginal. In our calculations, we have adopted the mixture proposed by Asplund et al. (2005). Note, however, that this compilation is not based on a homogeneous analysis with a single model atmosphere and a uniform protocol. The abundances for a number of elements are derived afresh, but for others it represents a critical evaluation of new and old results by different authors with various degrees of simplification, such as a strict adherence to LTE or the adoption of NLTE corrections for some species, corrections which are still unavailable or unreliable for many others, in particular when it comes to 3D calculations. \subsection{Opacities} After the widely-used photoionization cross-sections of Peach (1970), a significant improvement came with the calculations of atomic structure and opacities performed by the international collaboration known as the Opacity Project (Seaton et al. 1992). Until very recently, the Opacity Project (OP) provided two extreme products: cross-sections for each atomic state, or Rosseland mean opacities. For calculating synthetic fluxes, or model atmospheres, one needs monochromatic opacities, but LTE codes deal most comfortably with opacities per species, and do not need detailed photoionization cross-sections for every single energy configuration.
This situation has recently changed with the release of monochromatic opacities for each element as a function of temperature and electron density (Seaton 2005), but the inconvenience of having to deal with detailed cross-sections is likely responsible for the slow integration of the OP data into astronomical codes. We have implemented model atoms and ions for the most relevant species for F-G-K-type atmospheres using the OP photoionization cross-sections (Allende Prieto et al. 2003b). The data format follows the specifications for the NLTE model atmosphere code Tlusty (Hubeny \& Lanz 1995) and the spectral synthesis code Synspec (Hubeny \& Lanz 2000). As the computed energy levels are relatively inaccurate, the location of the predicted resonances (associated with two-electron autoionization; see the review article by Sultana Nahar in this volume) in the cross-sections is uncertain, and therefore we have smoothed them following the prescription proposed by Bautista, Romano \& Pradhan (1998). These models continue to be updated periodically, and are publicly available\footnote{{\tt http://hebe.as.utexas.edu/at/} and {\tt http://nova.astro.umd.edu/}}. The OP calculations cover most elements from hydrogen through calcium, but for the iron ions they have been superseded by newer results from the Iron Project (see Bautista 1996, 1997, Bautista \& Pradhan 1997, Nahar \& Pradhan 1996, 1999, and Nahar's paper in this volume). The distribution of data for Fe I and Fe II (the relevant iron ions for late-type stellar atmospheres) through the Iron Project data base is still patchy, but working in collaboration with Manuel Bautista and Sultana Nahar, I have {\it translated} the data files to the same format employed by the OP, and new model atoms for Tlusty/Synspec have been produced. \begin{figure} \centering \includegraphics[height=12cm,angle=90]{callende_f1.ps} \caption{Ratio of the solar irradiances computed with simplified and full-blown iron model atoms (Fe and Fe$^{+}$). The full-blown models account for the radiative opacity from more than a thousand levels, while the boiled-down models include only about a hundred.} \label{fig:1} \end{figure} The Iron Project model ions are significantly larger than those for lighter elements based on the OP data, comprising of the order of 700 energy levels per ion. Assuming the relative populations of levels with similar energies and the same quantum numbers L and S are in equilibrium at a given temperature, it is possible to combine the cross-sections of these levels into {\it super-levels}. The concept of super-levels, introduced by Anderson (1989; see also Hubeny \& Lanz 1995), can be exploited to effectively reduce the complexity of the opacity calculations, as well as to speed up the solution of the rate equations in NLTE problems. For a solar-like atmosphere, using this simplification for Fe I (assuming $T=5000$ K) and for Fe II ($T=7000$ K) leads to errors in the computed absolute flux of less than 1 \% when the size of the model atoms is reduced tenfold, as shown in Fig.~1.
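The super-level construction itself is simple. As a purely schematic illustration, the following sketch (our own, assuming Boltzmann-weighted level populations; it is not code from Tlusty or Synspec) combines the photoionization cross-sections of a group of levels into a single population-weighted super-level cross-section:
\begin{verbatim}
import numpy as np

def superlevel_cross_section(E, g, sigma, T):
    # E: level energies [erg]; g: statistical weights
    # sigma: cross-sections on a common frequency grid, shape (nlev, nfreq)
    # T: temperature [K] at which the levels are assumed in equilibrium
    kB = 1.380649e-16                    # Boltzmann constant [erg/K]
    w = g * np.exp(-E / (kB * T))        # LTE relative populations
    w = w / w.sum()
    return w @ sigma                     # population-weighted average
\end{verbatim}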
\subsection{Equation of state} By adopting a model atmosphere that has been precalculated, all relevant thermodynamical quantities are readily available as a function of the location in the atmosphere. As we discussed above, the input atomic and molecular data, as well as the abundances, will determine the resulting structure and energy flux, but some quantities, such as the emergent flux, are expected to be more sensitive to small variations in the basic inputs than others, such as the thermal structure of the model atmosphere. We have explored the effect of small changes in the input chemical composition on the emergent fluxes by keeping the thermal atmospheric structure fixed (see Section \ref{sec:4}). Under this approximation, we still recompute consistently the electron density and solve the molecular equilibrium. This step involves a major upgrade from our earlier calculations, in order to consider the presence of molecules, their impact on the electron density and, ultimately, on the atomic species (I. Hubeny, private communication). For this purpose, the most recent versions of Synspec include routines kindly provided by U. J{\o}rgensen. Both atomic and molecular partition functions are adopted from Irwin (1981 and private communication), while other molecular data are from Tsuji (1973). \subsection{Model atmospheres} As argued above, computing absolute fluxes involves solving consistently the problem of atmospheric structure and calculating the radiation field for any given set of abundances. We are using an NLTE model atmosphere code, but including in detail all the relevant sources of opacity for late-type atmospheres while simultaneously accounting for departures from LTE is a massive problem. On the other hand, mild or no departures from LTE are expected for many atomic and molecular species. Thus, we are working towards a hybrid scheme where the contribution to the opacity from most species is computed in LTE and stored in a look-up table, while only the most relevant ions are treated in NLTE. We have already mentioned recent updates in the solar photospheric abundances associated with a new kind of model atmospheres based on 3D hydrodynamics. Surface inhomogeneities, in particular solar granulation, may have an important effect on the absolute flux emerging from the solar surface. Radiative transfer solvers for 3D models are typically equipped to handle only simple line opacities: one line profile or a few. Computing absolute fluxes, especially in the UV domain, requires including a very large number of overlapping atomic and molecular transitions, in addition to detailed metal photoionization cross-sections. To this end, a new radiative transfer code able to handle full-blown opacities, including electron and Rayleigh scattering, has been developed by L. Koesterke (private communication). Koesterke et al. (2007) find that the solar 3D model by Asplund et al. (2000a) performs similarly to 1D models regarding limb darkening in the continuum, despite a simplified description of the radiation field. In addition, the same model vastly outperforms 1D models regarding line formation, and in particular the center-to-limb variation of line profiles. The ability of 3D models to match the solar limb darkening had been put into question by Ayres et al. (2006). Based on tests using a horizontally- and time-averaged structure from the simulations by Asplund et al. (2000a), these authors predicted a dramatic failure of the new models. The more rigorous calculations by Koesterke et al. show that the limb darkening of a three-dimensional model is very different from that of a 1D model derived by taking the average over surfaces of constant vertical optical depth.
The effects of surface convection on the absolute solar fluxes are currently being investigated with the new radiative transfer code. \section{The role of chemical composition: a seven-pipe problem} \label{sec:4} Comparing absolute solar fluxes predicted by model atmospheres with observations usually involves adopting a standard set of chemical abundances, but can we consider the chemical composition as a fixed set of parameters? The recent revisions for carbon and oxygen, together with the typical error bars still quoted in solar abundance studies, which sometimes exceed 0.1 dex, suggest that the answer is NO. \begin{figure} \centering \includegraphics[height=12cm,angle=90]{callende_f2.ps} \caption{Relative variations in the solar surface flux emergent from a 1D solar model atmosphere resulting from changes in the adopted chemical composition. The atmospheric structure (the run of temperature versus mass column density) is considered constant in these calculations.} \label{fig:2} \end{figure} Only a few elements can make an important impact on the computed solar fluxes: directly through contributed opacity, or indirectly, through their effect on the atmospheric structure or the number of free electrons they release through ionization. We have calculated, using a solar Kurucz model, the effect of changing the abundances of the most relevant elements on the solar spectrum. The results of 0.2 dex variations in the X/H ratios, where X is He, C, O, Mg, and Fe, are shown in Fig.~2. Ca, Si, and some iron peak elements can also have an effect. H$^{-}$ dominates the continuum opacity in the solar optical and infrared. In the blue and UV, atomic iron and magnesium contribute significant continuum opacity through photoionization, and iron also provides abundant line opacity. At wavelengths shorter than 200 nm, aluminum and silicon need to be considered as well (but see comments in \S \ref{sec:1}). Molecules, mainly CH, CO, and OH, dominate relatively narrow bands of the optical, IR and UV solar spectrum. Besides H, at least iron, magnesium and silicon are significant contributors to the pool of free electrons, which has a tremendous impact on the continuum opacity, as the number density of free electrons is smaller than that of hydrogen atoms and therefore controls the formation of H$^{-}$. At first sight, the impact of changing the helium abundance in Fig.~2 may come as a surprise. This is truly an indirect effect: as all abundances are normalized to H and He is very abundant, N(He)/N(H) $\sim 0.07$, an increase in He/H involves a significant reduction in N(H), and consequently in the atomic hydrogen opacity, which results in an increased irradiance. Fortunately, the solar He/H ratio is known precisely from helioseismology. Inspection of Fig.~2, considering that the observed solar absolute fluxes are likely accurate to a level of $\sim 1$ \% or better, indicates that the current uncertainties in the chemical composition of the solar surface may be a dominant source of error in the flux calculations. This situation is similar to the case of the predicted solar neutrino fluxes (Bahcall \& Serenelli 2005)! \section{Conclusions} \label{sec:5} Observations of the solar angular diameter by different authors and methods still show rather significant discrepancies (see, e.g., Basu 1998, and the references discussed by Wittmann \& Neckel 1996), and probably a poorly-understood time variation.
Nonetheless, this quantity is known with a relative accuracy many orders of magnitude higher than for any other star, opening the possibility of comparing detailed observed absolute fluxes with the predictions from model atmospheres to learn about physics and astronomy. So far, no assessment has been made of the potential impact of the new generation of 3D hydrodynamical model atmospheres on the computed solar irradiance, but the presence of inhomogeneities introduced by convective overshooting could alter the solar spectrum significantly. Computing fluxes from 3D models involves 3D radiative transfer; using horizontally-averaged structures to explore 3D models is bound to lead to erroneous conclusions. The availability of several detailed hydrodynamical simulations of the solar surface (e.g., Asplund et al. 2000a, Wedemeyer et al. 2004, V\"ogler et al. 2005) contrasts with the scarcity of detailed radiative transfer calculations using them. Computing absolute fluxes is more demanding than computing relative values, and much more sensitive to input values such as the adopted chemical composition. Modern opacities should be employed, in particular state-of-the-art photoionization cross-sections for atomic iron, magnesium, aluminum, and silicon, as well as line opacity from the most important diatomic molecules. Our tests indicate the need for fully consistent calculations in order to disentangle the impact of changes in composition and input micro-physics. The blue and UV fluxes of the Sun are particularly sensitive to the abundances of hydrogen, carbon, oxygen, magnesium, aluminum, silicon, calcium and iron. Our preliminary results hint that the uncertainties in the composition of the solar atmosphere may be a dominant source of error in predicting the radiation output of the Sun. {\it Acknowledgments.} It is my pleasure to recognize significant contributions to this work from Martin Asplund, Manuel Bautista, Lars Koesterke, Sultana Nahar, David Lambert, Thierry Lanz, and in particular Ivan Hubeny. I thank Emanuele Bertone, Miguel Ch\'avez, Lino Rodr\'{\i}guez-Merino and Daniel Rosa-Gonz\'alez for their kind hospitality. Support from NASA (NAG5-13057, NAG5-13147) is gratefully acknowledged.
\section{INTRODUCTION} \label{sec-intro} Extreme star formation rates of $\gtrsim10^3~M_\odot$~yr$^{-1}$ have been derived for luminous infrared galaxies discovered by deep $IR$ surveys \citep{ds98,younger08,riechers09}. In such radiation-pressure supported galactic disks with the maximum starburst \citep{thompson05}, large scale outflows can be triggered by superwinds from massive young stars and supernova explosions, which may play important roles in galaxy formation and evolution \citep{cooper08}. Galactic outflows can regulate star formation by heating cool gas \citep{tang09}. They can enrich both the intergalactic medium and galactic disks \citep{heckman90}. Their feedback can also explain the apparent discrepancy between the theoretical prediction of the dark matter halo mass function and the measured stellar mass function for galaxies in the successful $\Lambda$CDM scenario \citep{sh03}. In this Letter, we probe kinematic signatures of molecular outflows in a sample of 27 ultra-luminous infrared galaxies (ULIRGs) recently studied by \citet{chung09}. A large fraction of ULIRGs are mainly powered by merger-induced starbursts \citep{sanders88} and hence make good targets for investigating associated outflows. Evidence for such an outflow has been reported in individual ULIRG systems such as Arp~220 \citep{sakamoto09} and Mrk~231 \citep{feruglio10}. Such outflow signatures are, however, too faint to be detected individually in our ULIRG sample. Therefore we employ a stacking analysis to look for faint and broad high velocity line wings in the $^{12}$CO line profile. In order to investigate the outflow driving mechanism, our ULIRG sample is partitioned into two groups based on optical emission line diagnostics: starburst dominated galaxies and galaxies with large AGN contributions. With the reduced noise in the stacked composite spectrum, we also measure the average brightness of other, weaker molecular lines such as $^{13}$CO(1--0) and $^{12}$CN(1--0), which have so far been detected only in the nearest $IR$ luminous galaxies \citep[e.g.][]{aalto95,aalto02}, as an independent probe of the molecular gas properties. \section{SAMPLE and STACKING} \label{sec-sample} We use the sample and the data from the recent Redshift Search Receiver (RSR) $^{12}$CO $J=1\rightarrow0$ survey of local ULIRGs by \citet{chung09}. The observations were carried out with the Five College Radio Astronomy Observatory (FCRAO) 14m Telescope in 2007 and 2008, targeting 29 ULIRGs at $z=0.043-0.11$. As discussed in detail by \citet{chung09}, this is a representative subset of ULIRGs, as the primary selection criteria were those related to observational scheduling and the redshift range that brings the $^{12}$CO $J=1\rightarrow 0$ line within the bandpass of the RSR system. In our stacking analysis, we include only the 27 CO detected objects. The CO line luminosity $L_{\rm co}^\prime$ of the sample ranges from 1.2 to 15.3$\times10^9~$K~km~s$^{-1}$~pc$^{2}$ with a median value of 6.7$\times10^9~$K~km~s$^{-1}$~pc$^{2}$. The stacking of the RSR spectra is performed using the following procedure. First, each coadded spectrum is shifted to the rest frequency by multiplying the observed frequency by (1+$z_{\rm co}$), where $z_{\rm co}$ is the CO redshift of each ULIRG derived from the line fitting \citep{chung09}. Each CO spectrum is normalized by the best fit Gaussian peak, and then all spectra are {\it ``aligned''} at the frequency centroid. A linear interpolation is used in the alignment process.
The normalized spectra are averaged, weighted by the rms noise measured in the normalized spectra, excluding the $\pm0.5~$GHz regions around the three transitions of interest, $^{13}$CO, $^{12}$CN, and $^{12}$CO, as well as the noisy end channels. Finally, the averaged spectrum is Hanning smoothed to produce the final spectral resolution of 61~MHz (158~km~s$^{-1}$ at 115.27~GHz). A ``non-ULIRG'' comparison spectrum was derived by stacking the RSR spectra of 19 $z=0.037-0.066$ galaxies selected for their high HI mass ($M_{\rm HI}\gtrsim2\times10^{10}~M_\odot$; Haynes et al., in prep.; O'Neil et al., in prep.). These galaxies come from another RSR commissioning program, a CO survey of 29 HI rich galaxies at similar redshifts; 19 of them were detected in CO with comparable $S/N$ and sensitivity as our ULIRG sample, but they are otherwise normal in their star formation and nuclear activities. These HI rich galaxies mostly look like normal spirals in the optical and their mean $FIR$ luminosity is $2.8\pm1.4\times10^{10}~L_\odot$, 30 times lower than that of our ULIRG sample. Their $L_{\rm co}^{\prime}$ of 0.4--3.2$\times10^9$~K~km~s$^{-1}$~pc$^{2}$ is only slightly smaller than that of our ULIRG sample. Further details of the RSR CO observations of these HI-rich spirals will be presented elsewhere (Chung et al., in prep.). The rms noise in the final spectra for the ULIRG sample and the control sample, normalized by the $^{12}$CO peak flux, is 0.014 and 0.027, respectively, yielding a S/N better than that of the individual spectra by a factor of 5 to 47 (see Table~\ref{tbl-stack}). Figure~\ref{fig-chemi} shows the stacked, normalized spectrum over a broad frequency range (109.65--116.25~GHz) which includes all three transitions, $^{13}$CO (1--0), $^{12}$CN (1--0), and $^{12}$CO (1--0). In Figure~\ref{fig-stack}, we zoom in on the 4500~km~s$^{-1}$ range around the $^{12}$CO line to show the characteristics of the profile in more detail. \begin{figure} \plotone{fig1.ps} \caption{Full composite Redshift Search Receiver (RSR) spectra of the ULIRG and comparison samples. The rest frequencies of $^{13}$CO (110.20~GHz), $^{12}$CN (113.50~GHz), and $^{12}$CO (115.27~GHz) are indicated with dotted vertical lines. The stacked spectrum of 27 $^{12}$CO detected ULIRGs is shown on the top, the stacked spectra of a subsample of 14 ``Sbrst'' or ``H{\sc ii}'' ULIRGs (SB group, $L_{IR}^{\rm AGN}\lesssim0.12L_{IR}$) and of 13 Seyfert or LINER type ULIRGs (AGN group, $L_{IR}^{\rm AGN}\approx0.32L_{IR}$) are shown in the middle, followed by the stacked spectrum of the 19 non-ULIRG galaxies at the bottom. The FWZI spectral regions used to measure the line and wing flux density are shown as boxes. \label{fig-chemi}} \end{figure}
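For concreteness, the stacking procedure described above can be summarized in the following minimal sketch (purely illustrative; the variable names, the inverse-variance weighting, and the 5-channel Hanning kernel are our own assumptions, not the actual reduction code):
\begin{verbatim}
import numpy as np

def stack(freqs, spectra, z_co, peaks, rms):
    # freqs, spectra: observed frequency grids [GHz] and spectra, per galaxy
    # z_co: CO redshifts; peaks: best-fit Gaussian peaks; rms: noise levels
    grid = np.linspace(109.65, 116.25, 2048)   # common rest-frame grid [GHz]
    shifted = [np.interp(grid, np.asarray(f) * (1 + z), np.asarray(s) / p)
               for f, s, z, p in zip(freqs, spectra, z_co, peaks)]
    w = 1.0 / np.asarray(rms)**2               # inverse-variance weights
    stacked = (w[:, None] * np.array(shifted)).sum(axis=0) / w.sum()
    kernel = np.hanning(5); kernel /= kernel.sum()
    return grid, np.convolve(stacked, kernel, mode='same')
\end{verbatim}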
\section{STARBURST POWERED OUTFLOWS} \label{sec-outflow} The stacked spectrum of the ULIRG group reveals broad wings around the CO line, as seen at the top of Figure~\ref{fig-chemi}. The wings are blue- and redshifted by $\approx1000~$km~s$^{-1}$ from the main CO line peak (FWZI of 2000~km~s$^{-1}$), and their line integral amounts to 19$\pm5$\% of the total (see Fig.~\ref{fig-stack}). The line wings of the ULIRG stacked spectrum are detected with S/N$\sim3$ in each channel, which would be difficult to achieve in individual spectra. In fact, the effective integration time of the stacked ULIRG spectrum is 115.8~hrs, 10--60 times more than the integration on individual ULIRGs. Note that such wings are not present in the control sample of HI rich galaxies, with an effective integration time of 127.7~hrs, as shown at the bottom of Figure~\ref{fig-chemi}. The comparison of the ULIRGs with the non-ULIRG population is better shown in the upper two panels of Figure~\ref{fig-stack}. \begin{table*} \centering \scriptsize \caption{Outflow Properties and Molecular Abundance\label{tbl-stack}} \begin{tabular}{rccccccc} \hline\hline \multicolumn{1}{l}{Group} &\multicolumn{4}{c}{----------------------------- Mean -----------------------------} &\multicolumn{3}{c}{------------ Stacked ------------}\\ &$z_{\rm co}$ &rms ($T_A^*$, mK) &$T_{\rm peak}$/rms &$W_{\rm co}$ (km/s) &$\frac{{\rm wing}}{{\rm 12CO}}$ &$\frac{{\rm 12CO}}{{\rm 13CO}}$ &$\frac{{\rm 12CO}}{{\rm 12CN}}$\\ \hline \multicolumn{1}{l}{ULIRGs................}&&&&&&&\\ All (27) &0.072$\pm$0.023& 0.46$\pm$0.15&6.4$\pm$4.1& 263$\pm$~59&0.19$\pm$0.05 & $\geq$~16.6 & $\geq$~16.6\\ Sbrst+H{\sc ii} (14) &0.071$\pm$0.027& 0.45$\pm$0.15&7.3$\pm$4.2& 266$\pm$~67&0.25$\pm$0.06 & $\geq$~13.3 & $\geq$~13.3\\ Sy+LIN (13) &0.073$\pm$0.026& 0.47$\pm$0.16&5.4$\pm$4.0& 262$\pm$~54&$\leq$~0.12 & 11.1$\pm$6.2 & $9.3\pm4.3$\\ \multicolumn{1}{l}{Non-ULIRGs.........~~}&&&&&&&\\ HI-rich spirals (19) &0.050$\pm$0.007& 0.39$\pm$0.09&5.0$\pm$2.7& 266$\pm$103&$\leq$~0.11 & $\geq$~~7.3 & $\geq$~7.3\\ \hline \end{tabular} \end{table*} Such broad wings can form when entrained cool gas is ejected along with the hot ionized outflowing gas \citep{curran99,narayanan06}. In order to examine whether a starburst or an AGN is powering the outflow, we have divided the ULIRG sample into two groups: (1) 14 ULIRGs which are classified as ``Sbrst'' or ``H{\sc ii}'' (SB group) with no obvious sign of an AGN; and (2) 13 ULIRGs with ``Seyfert'' spectra (AGN group). This grouping is based on the classification from the NASA/IPAC Extragalactic Database (NED)\footnote{This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.}. Objects with a ``LINER'' classification are included in the AGN group unless they also have a ``Sbrst'' or ``H{\sc ii}'' designation. Galaxies with a hybrid (``Sy+SB/H{\sc ii}'') classification are included in the AGN group. Among our sample, eight ULIRGs from each group have been modeled in their spectral energy distribution by \citet{farrah03}, who found an AGN contribution to the $IR$ luminosity of $\ge$27\% for the eight in the AGN group, a factor of two higher than that of the eight in the SB group ($\sim12$\%). The mean S/N, measured by the ratio of the CO line peak to the rms, is highest for the SB group due to a few objects with the strongest CO emission in the sample, but this ratio does not vary significantly from group to group. In fact, the mean rms of each group and the CO linewidths of the different groups are very similar, as summarized in Table~\ref{tbl-stack}, making our results robust. We show the comparison of the two groups at the bottom of Figure~\ref{fig-stack}. The rms noise in the normalized stacked spectra is 0.0189 and 0.032 for the SB group and the AGN group, respectively. The wings around the $^{12}$CO line appear to be even stronger in the SB group than in the entire ULIRG sample, with the flux in the wings amounting to about 25\% of the total CO flux.
These broad features, however, disappear when only the ULIRGs with Seyfert spectra are combined. The total line flux was measured by integrating the line flux density within the FWZI regions (thin solid lines in Fig.~1) for both groups with and without wings. The wing-only flux was measured within the same FWZI as for the AGN group and the control sample, and the difference between the total and the line wing flux has been adopted as the CO flux for these groups. The fractional flux in the wings and the upper limits for the different subgroups are summarized in Table~\ref{tbl-stack}. \citet{rvs05} also found a lower frequency of neutral winds among Seyfert 2 ULIRGs in their study of the Na~I~D absorption line in 26 AGN/starburst-composite ULIRGs at $0.03<z<0.44$, further supporting the starburst origin of this massive neutral wind. \begin{figure} \plotone{fig2.ps} \caption{A zoom-in view of the composite $^{12}$CO $J=1\rightarrow0$ Redshift Search Receiver (RSR) spectra. On the top, the ULIRG sample of 27 CO detected galaxies from \citet[][]{chung09} is compared to the non-ULIRG sample of 19 HI-rich galaxies. On the bottom, the CO spectra of the SB group and the AGN group are compared.\label{fig-stack}} \end{figure} The energetics of the neutral wind traced by the broad CO wings is also consistent with being powered by the ongoing starburst traced by the far-infrared emission. The energy injection rate ($dE/dt$) in a wind-blown bubble of radius $R$ expanding at velocity $v$ into an infinite homogeneous medium with density $n_0$ can be expressed as \citep{weaver77}, \begin{equation} \frac{dE}{dt}\sim3.3\times10^{35}\,R_{kpc}^2\,v_{km/s}^3\,n_{0,cm^{-3}}~{\rm erg~s}^{-1}. \end{equation} Adopting the bubble size of $0.2~$kpc that \citet{sakamoto09} found for the high velocity molecular wind in Arp~220, an outflow velocity of $1000$~km~s$^{-1}$ from the line wing velocity in the stacked spectrum, and an ambient density of 10~cm$^{-3}$ \citep[e.g.][]{veilleux95}, we derive an energy injection rate of $\sim1.3\times10^{44}$~ergs~s$^{-1}$. Assuming an energy output per supernova of $\sim10^{51}$~ergs \citep{veilleux95}, our estimated $dE/dt$ yields a supernova rate $\nu_{\rm SN}$ of $4$~yr$^{-1}$. Using a Scalo initial mass function (IMF) with a mass range of $5-100~M_\odot$, the star formation rate inferred from this supernova rate \citep[$SFR_{M>5M_\odot}=24.4~\nu_{\rm SN,yr^{-1}}~M_\odot$~yr$^{-1}$,][]{condon92,rosa05} is $SFR\approx100~M_\odot~$yr$^{-1}$. This agrees well with the $SFR$ derived from the $FIR$ luminosity for these ULIRGs, 134--352~$M_\odot~$yr$^{-1}$ \citep[see Eq. 8 of][]{hopkins03}. The outflow speed implied by the CO line wings, $1000$~km~s$^{-1}$, is comparable to the wind velocities measured in other phases of the superwind, such as H$\alpha$ and Na~D \citep[400-800~km~s$^{-1}$,][]{martin99} and OH \citep[1400~km~s$^{-1}$,][]{fischer10}. If they all trace the same outflow, then Eq.~(1) suggests that the spatial extent of the Na~D and H$\alpha$ winds should be larger ($\sim0.4~$kpc), while the spatial scale of the OH winds measured by Herschel would be more compact ($\lesssim0.1$~kpc) than the CO wind. The fact that the wind velocities measured in molecular outflows are larger than those in the optical may indicate that a supernova driven wind embedded in molecular gas slows down by the time it breaks out of the starburst region. The inferred outflow speed exceeds the escape velocity and is high enough to blow away the molecular gas that hosts the star formation activity and to pollute the surrounding IGM significantly.
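These order-of-magnitude numbers are straightforward to reproduce. The short sketch below (our own, purely illustrative, with rounded constants) follows the chain of Eq.~(1) and also shows the $R \propto v^{-3/2}$ scaling behind the Na~D/H$\alpha$ and OH size estimates:
\begin{verbatim}
R_kpc, v_kms, n0 = 0.2, 1000.0, 10.0        # bubble radius, wind speed, density
dE_dt = 3.3e35 * R_kpc**2 * v_kms**3 * n0   # Eq. (1): ~1.3e44 erg/s
nu_SN = dE_dt / 1e51 * 3.156e7              # SNe per year at 1e51 erg each: ~4
SFR = 24.4 * nu_SN                          # Msun/yr, Scalo IMF: ~100

# At fixed dE/dt, Eq. (1) implies R ~ v**(-3/2):
for v in (600.0, 1400.0):                   # Na D / H-alpha and OH wind speeds
    print(v, R_kpc * (v_kms / v)**1.5)      # ~0.43 kpc and ~0.12 kpc
\end{verbatim}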
Depending on whether the CO line is optically thin or thick, the outflowing molecular gas mass ranges between 1 and 6$\times10^9~M_\odot$. The line wings are symmetric in intensity and shape on both sides of the line, which suggests that the outflow is bipolar in geometry and the wing emission likely optically thin. This does not contradict the highly asymmetric CO $3\rightarrow2$ line wings in Arp~220 (FWZI = 1000~km~s$^{-1}$) found by \citet{sakamoto09}, since the $J=3\rightarrow2$ line has a higher optical depth. These observations suggest that more than $10^9~M_\odot$ of molecular gas can be removed from the central starburst region through such a wind, rapidly depleting the gas supply for the starburst. Some of this gas may eventually rain back onto the galaxy, enriching the galactic disk \citep{heckman03}. Our conclusion that the central starburst can power the observed massive outflow contradicts the conclusion by \citet{feruglio10} that the 750~km~s$^{-1}$ wind in CO $1\rightarrow0$ found in Mrk~231 is powered by the AGN activity and is thus an example of ``quasar feedback'' at work. Although \citet{feruglio10} adopted a smaller-than-Galactic CO-to-H$_2$ conversion factor, they may still have over-estimated the gas mass and mass outflow rate, as an optically thin estimate leads to a $\ge2$ times smaller gas mass, bringing the mass outflow rate much closer to the current $SFR$. A stronger case for an AGN-driven molecular outflow is found in NGC~1266, where the optically thin CO mass outflow rate clearly exceeds the observed current $SFR$ (Alatalo et al., in prep.). The CO outflow velocity is much lower ($\sim400$~km~s$^{-1}$), however, and this phenomenon may not be very common, at least at low $z$, since this is the only object with an AGN-driven molecular outflow found in their survey of a large number of early-type galaxies. \citet{narayanan08} have shown, using a numerical simulation, that an AGN-driven molecular outflow may persist longer than a SB-driven outflow, but the accuracy of such model predictions and the sub-grid physics included need to be tested further using a large sample of AGN+SB systems. \section{13CO AND 12CN ABUNDANCES} \label{sec-abundance} Two important molecular transitions also appear in our stacked composite spectra, and we examine their line strengths to gain further insight into the molecular ISM in these ULIRGs. These are the lowest transitions of $^{13}$CO and $^{12}$CN (CN hereafter) at 110.20 and 113.50~GHz, which have been detected in local starburst and Seyfert galaxies \citep{casoli92,aalto95,aalto02,perez07}. In the RSR composite spectra, we find $\gtrsim3~\sigma$ bumps at both the $^{13}$CO and CN frequencies only in the AGN ULIRG group, with flux ratios of $11.1\pm6.2$ and $9.3\pm4.3$ for $^{12}$CO/$^{13}$CO and $^{12}$CO/CN, respectively (Fig.~\ref{fig-chemi}). The other groups do not show such features, and the lower limits on $^{12}$CO/$^{13}$CO and $^{12}$CO/CN are summarized in Table~\ref{tbl-stack}. The same linewidths as for the AGN group have been adopted to calculate the upper limits of $^{13}$CO (660~km~s$^{-1}$) and CN (600~km~s$^{-1}$) for the other groups. The ratio $^{12}$CO/$^{13}$CO ($R_{10}$) has been reported to be generally larger in starburst galaxies \citep[$R_{10}\geq20$,][]{gh01} than in optically thick normal spirals \citep[$10<R_{10}<20$,][]{casoli92} or Seyfert galaxies \citep[$R_{10}\approx12$,][]{ps98}.
It has been suggested that the overproduction of $^{12}$C, a primary product of nucleosynthesis \citep{bb96}, in actively star forming galaxies is responsible for this trend \citep{casoli92}. Alternatively, \citet[][also 1995]{aalto91} have suggested that $R_{10}$, which gauges the optical depth of the $^{13}$CO gas in LTE \citep[$I_{\rm^{12}CO}/I_{\rm^{13}CO}\approx1/\tau_{\rm^{13}CO}$;][]{paglione01}, can increase when molecular clouds are disturbed by the powerful tidal forces in merger driven starburst galaxies. Increased velocity dispersion within GMCs and a broader cloud-to-cloud velocity distribution can reduce the $^{13}$CO opacity within these starburst nuclei. Meanwhile, CN is known to be a tracer of dense gas, with a lower critical density than HCN \citep[by a factor of 5;][]{perez07,baan08}. The CN molecule is a photo- or X-ray dissociation product of HCN and HNC \citep{baan08}, and is predicted to be abundant in both PDRs and XDRs \citep{kohno08}. \citet{meijerink07}, however, found the CN/HCN ratio to be more enhanced in XDRs than in PDRs, and toward the edges of the dissociation regions where the gas is highly ionized, as in XDRs. Our finding of the lowest CO/CN ratio in the AGN ULIRG group may imply a higher ionization rate of the dense molecular gas, as predicted by the Meijerink et al. models. We have only 8 objects (four SB and four AGN) whose $^{13}$CO lines and 14 objects (nine SB and five AGN) whose CN lines fall within the RSR frequency band, and the significance of these results will have to be confirmed with a larger sample. \section{FUTURE PROSPECTS} \label{sec-future} There are ongoing theoretical efforts to model galactic outflows in order to understand their detailed properties, such as their frequency and energetics \citep[e.g.][]{cn10}, and their feedback on the scaling relations of galaxies \citep[e.g.][]{sales10}. Even for objects at cosmological distances, where more direct morphological clues such as superbubbles, filaments, and chimneys are not visible, these outflow models can be tested by examining their spectroscopic signatures. Presently there is no consensus on the driving mechanism for the observed outflows \citep[i.e. starburst --][vs. AGN -- Feruglio et al. 2010; Fischer et al. 2010]{sakamoto09,riechers09}. Obtaining a better understanding is a pre-requisite for evaluating the importance of the role outflow feedback plays in galaxy evolution. The Redshift Search Receiver on the Large Millimeter Telescope (LMT) -- a 50m single-dish facility being built at Volc\'an Sierra Negra, near Puebla, Mexico -- will extend our capability to study galaxy outflows and winds at higher redshifts with its vastly improved sensitivity. Spatially resolved morphological and kinematical details obtainable with ALMA will offer the most stringent observational test of the origin of these massive molecular outflows. We are grateful to Mike Brewer, Don Lydon, Kamal Souccar, Gary Wallace, Ron Grosslein, John Wielgus, Vern Fath, and Ronna Erickson for their technical support of the Redshift Search Receiver commissioning. This work was supported by NSF grants AST 0096854, AST 0540852, and AST 0704966. We also thank K. Alatalo, D. Sanders, and N. Scoville for helpful discussions. Support for this work was also provided by the National Research Foundation of Korea to the Center for Galaxy Evolution Research.
\section{Introduction}\label{sect_intro} The temperature of the plasma within coronal loops has traditionally been determined using spectroscopic methods such as emission line ratios \citep{phi08} and emission measure locii techniques \citep{jor87, lan02, del02}. In \cite{me09}, the temperature along a coronal loop structure was seismologically determined using Solar Terrestrial Relations Observatory/Extreme-Ultraviolet Imager \citep[STEREO/EUVI,][]{wue04} observations of wave propagation along a loop system. These waves were interpreted as manifestations of the slow magnetoacoustic mode. The stereoscopic observations were used to derive the propagation geometry, with an inclination of ${37 \pm 6} ^{\circ}$ to the local normal, and the true coronal slow mode phase speed of $132 \pm 9$ km~s$^{-1}$. This sound speed was then used to infer a plasma temperature of $0.84 \pm 0.12$~MK. \cite{me09} was the first direct measurement of the slow mode speed within a coronal loop and inference of the loop plasma temperature using this technique. The work presented here aims to provide an independent observational test of those results and conclusions. The plasma temperature measured using spectroscopic emission line diagnostics is compared to the result obtained using the seismological technique, and confirmed to be in agreement. \section{Observations} The observations were conducted on 2008 January 10, as part of the Joint Observing Program (JOP) 200 -- `Multi-point, High Cadence EUV Observations of the Dynamic Solar Corona' -- using the Extreme-Ultraviolet Imaging Spectrometer \citep[EIS,][]{cul07} on the HINODE satellite. The HINODE/EIS observations complement the stereoscopic STEREO/EUVI observations with spectroscopic observations from a third point, viewed along the Sun-Earth line. The EIS observations consist of a rastered image using the 2\arcsec\ slit at $90$ positions with 25~s exposures to build up a $180\arcsec\times512\arcsec$ rastered image from 18:07:32~UT until 18:47:52~UT. The raster study includes data for 24 emission line windows with a width of 48 pixels and a wavelength dispersion of 0.0223~\AA/pixel. \begin{figure}[t] \epsscale{0.8} \centering \plotone{fig1.eps} \caption{\ion{Fe}{12} 195\mbox{\AA} intensity image of the coronal loop system analyzed in \cite{me09}. The loop intensity is extracted from the loop system along the indicated path. The EM locii method is applied using the mean intensity along the path to determine the temperature profile as a function of distance.} \label{fig1} \end{figure} The active region loop system discussed in \cite{me09} is analyzed here, where the loop footpoints are centered on solar $x,y$ coordinates (60\arcsec, 0\arcsec) in the Heliocentric-Cartesian reference frame of the HINODE spacecraft. Figure~\ref{fig1} shows the loop system viewed in emission from \ion{Fe}{12} 195\mbox{\AA}. \section{Analysis} \subsection{Preparation of the data} The EIS data are calibrated using EIS\_PREP within the {\it Solarsoft} database, with standard corrections for dark current, cosmic rays, hot/warm pixels, and dusty pixels, and an absolute calibration is applied to obtain the data in units of ergs~cm$^{-2}$~s$^{-1}$~sr$^{-1}$~\AA$^{-1}$. Every two pixels are binned along the $y$ axis to increase the signal to noise ratio within the data, resulting in $2\arcsec \times 2\arcsec$ pixels. The emission line profiles are then fit with multiple Gaussians using the {\it Solarsoft} routine EIS\_AUTO\_FIT\_GEN.
The effects on the line centroids due to the tilt of the EIS slit and the orbital variation are corrected. \subsection{EM locii} The temperature of the plasma is investigated using the emission measure locii technique \citep[see][and references within]{jor87, del02, lan02}. If the EM locii curves intersect at a single point (see Figure~\ref{fig2}), it may be assumed that the plasma is isothermal, and the point of intersection can be used to estimate the plasma temperature and emission measure. To minimize the uncertainties introduced by the elemental abundances, emission lines from the same ion are analyzed. In the case of this data set, the available emission lines of iron are: \ion{Fe}{10} 184.0\mbox{\AA}, \ion{Fe}{11} 188.23\mbox{\AA}, \ion{Fe}{11} 188.30\mbox{\AA}, \ion{Fe}{12} 186.89\mbox{\AA} and \ion{Fe}{12} 195.12\mbox{\AA}. The emission measure locii technique is applied by overplotting the emission measure curves for each line, given by: \begin{equation}\label{eqn1} EM(T)=\frac{4\pi d^{2}I}{G(T)}, \end{equation} where $I$ is the intensity of the emission line, $d$ is the distance between the emission source and the observer, and $G(T)$ is the contribution function for a particular ionization state. The contribution functions are calculated here using the iron ionization equilibrium of \cite{arn92} and the coronal abundances of \cite{fel92a}. To reduce the contamination by emission from background plasma, the background emission is estimated using the intensity of pixel [14,22], which is observed to lie within a low emission region in all the lines and is adjacent to the loop system, and is subtracted from the data. \begin{figure}[t] \epsscale{1.} \centering \plotone{fig2.eps} \caption{Example of the EM locii curves for pixel [46,44]. The dashed lines indicate the points of intersection between each of the curves.} \label{fig2} \end{figure} \subsection{Isothermal map} The emission measure locii technique is based on the assumption of an isothermal plasma. To determine regions where this assumption may be appropriate, a simple algorithm is applied to classify candidate isothermal pixels. Figure~\ref{fig2} shows an example of the EM locii curves for a single pixel within the loop system. The points of intersection between the curves are determined, indicated by dashed lines in Figure~\ref{fig2}. The variance of the temperature values at these intersections is used to classify a pixel as isothermal or not. Pixels with an intersection temperature variance of less than 0.004 in Log $T$ are defined to be isothermal. Figure~\ref{fig3} shows a map of the isothermal pixels obtained by applying the algorithm to all pixels within the data. The isothermal map indicates that the region at the base of the loop system is classified as isothermal by the algorithm, suggesting that it is reasonable to apply the emission measure locii technique to estimate the temperature at the base of these loops.
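The intersection-variance classification is straightforward to implement. The following minimal sketch (ours, purely illustrative; the function and variable names are assumptions, not the actual analysis code) finds the pairwise crossings of the EM locii curves on a common Log $T$ grid and applies the variance threshold:
\begin{verbatim}
import numpy as np

def crossings(logT, em1, em2):
    # Log T values where two EM locii curves intersect
    d = np.log10(em1) - np.log10(em2)
    i = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0]
    return logT[i] - d[i] * (logT[i+1] - logT[i]) / (d[i+1] - d[i])

def is_isothermal(logT, em_curves, var_limit=0.004):
    # em_curves: list of EM(T) = 4*pi*d^2*I/G(T) arrays, one per line
    t = np.concatenate([crossings(logT, a, b)
                        for k, a in enumerate(em_curves)
                        for b in em_curves[k+1:]])
    return t.size > 0 and np.var(t) < var_limit
\end{verbatim}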
\begin{figure}[t] \epsscale{.8} \centering \plotone{fig3.eps} \caption{Map of isothermal pixels defined by applying the intersection temperature variance algorithm, where isothermal pixels, marked as black, are overplotted on the \ion{Fe}{12} 195\mbox{\AA} intensity image from Figure~\ref{fig1}.} \label{fig3} \end{figure} \begin{figure}[t] \epsscale{1.} \centering \plotone{fig4.eps} \caption{EM locii curves calculated using the mean intensity of each cross-section along the path shown in Figure~\ref{fig1}, where Log $T$ [K] is along the abscissa and emission measure [cm$^{-3}$] is along the ordinate.} \label{fig4} \end{figure} \begin{figure}[t] \epsscale{1.} \centering \plotone{fig5.eps} \caption{Temperature profile as a function of distance along the path indicated in Figure~\ref{fig1}, derived using the EM locii method; the errors give the standard deviation of the intersection temperatures.} \label{fig5} \end{figure} \section{Results} \subsection{Temperature profile} The temperature profile of the loop system is investigated by applying the EM locii method along the loops. The intensity profile of the loop system is extracted by defining a path parallel to the loops with a width of 6 pixels (Figure~\ref{fig1}). In each cross-section perpendicular to the path axis, the mean intensity of the pixels is calculated. Thus the mean intensity is determined as a function of distance along the loops. The emission measure locii curves are calculated using the mean intensity of each cross-section along the path, as shown in Figure~\ref{fig4}. The temperature profile along the loop path is derived using the EM locii curves, as shown in Figure~\ref{fig5}, where the temperature error is calculated using the standard deviation of the intersection temperature values. The temperature profile suggests a uniform temperature as a function of distance along the loop, within the errors. The mean temperature along the whole length of the temperature profile is Log $T =5.95 \pm 0.04$~K or, equivalently, $T =0.89 \pm 0.09$~MK, compared to the seismological result of $0.84 \pm 0.12$~MK obtained in \cite{me09}. \section{Conclusions} The results presented here determine, spectroscopically, the temperature of the active region loop system presented in \cite{me09}, along which slow magnetoacoustic waves were found to propagate. The seismological technique applied in \cite{me09} derived a plasma temperature of $0.84 \pm 0.12$~MK. This temperature is independently confirmed here using the emission measure locii technique, with a derived temperature of $T =0.89 \pm 0.09$~MK, consistent with the \cite{me09} results. The agreement between the results validates the technique applied in \cite{me09} and further strengthens the slow magnetoacoustic mode interpretation of the observed waves. The consistency between the two independent estimates of the temperature also suggests that the assumption of an isothermal plasma in the loop system is valid. The temperature measured at the base of the loop system shown in Figure~\ref{fig5} displays a uniform profile. This result is also in agreement with the EUVI observations presented in \cite{me09}, where the observed waves have a constant phase speed as a function of distance, indicating a uniform temperature profile along the base of the loop system, at least to the extent of where the waves are observed. \cite{wan09} recently investigated waves similar to those reported in \cite{me09} and found consistent results.
Using HINODE/EIS observations interpreted as slow magnetoacoustic waves at the footpoint of a coronal loop, they used the Doppler shift amplitude to estimate the loop inclination and derived a temperature of $0.7 \pm 0.3$~MK. Again, this is consistent with the results of the spectroscopic diagnostics applied here. In \cite{me09}, it was suggested that it may be possible to use the propagating slow mode to measure the coronal magnetic field strength. The HINODE/EIS observations presented here indicate that, at least within the current observational diagnostic precision, at the temperature and density of this region, it is not possible to discriminate the slow mode tube speed from the sound speed to make such a measurement. This may be possible in higher temperature structures where, due to its dependence on $T$, the sound speed would be expected to diverge more strongly from the tube speed. This may be an achievable goal for the Solar Orbiter mission, with high resolution observations of wave propagation in different structures. \acknowledgments This research is supported by the Science and Technology Facilities Council (STFC) under grant number ST/F002769/1. Hinode is a Japanese mission developed and launched by ISAS/JAXA, with NAOJ as domestic partner and NASA and STFC (UK) as international partners. It is operated by these agencies in co-operation with ESA and NSC (Norway). M.S. Marsh would like to acknowledge the encouragement of L.E. Marsh. {\it Facilities:} \facility{HINODE (EIS)}. \bibliographystyle{apj}
\section{Introduction} After the discovery of quasicrystals by Shechtman in 1982, which was only published two years later \cite{Danny}, many people realised that our common understanding of what `long-range order' might mean is incomplete (to put it mildly). In particular, little is known in the direction of a classification, and this --- despite the effort of many --- is still the situation to date. One powerful tool for the analysis of order phenomena is provided by Fourier analysis, as is clear from the pioneering work of Meyer \cite{M2}. Moreover, it is not surprising that methods from physical diffraction theory, most notably the diffraction spectrum and measure of a spatial structure, have been adopted and developed. {}From another mathematical perspective, taking into account proper notions of equivalence (which are needed for any meaningful classification attempt), a similar situation is well-known from dynamical systems theory. Here, the spectrum defined by Koopman \cite{Koop} and later developed by von Neumann \cite{vN} and Halmos--von Neumann \cite{HvN} led to a complete classification of ergodic dynamical systems with pure point spectrum up to (metric) isomorphism. It is an obvious question how these spectral notions are related, and part of this article aims at a systematic comparison, building on the progress of the last 15 years or so. Since this means that large parts of the paper will have review character, our exposition will be informal in style. In particular, there will be no formal theorems. Instead, we discursively present relevant statements, concepts and underlying ideas and refer to the original literature for more details and formal proofs as well as for generalisations. We hope that the general ideas and results come across more naturally this way, and that the general flavour of the development is transmitted, too. \smallskip The paper is organised as follows. After the introduction of some notions from point sets and spectral theory in Section~\ref{Section:Prelim}, we begin with the diffraction spectrum of an \emph{individual} Delone set in Section~\ref{Section:individual}. This part is motivated by the description of the physical process of (kinematic) diffraction, where one considers a single solid in a particle beam (photons or neutrons, say) in order to gain insight into its internal structure, and by the general mathematical aspects of Delone sets highlighted in \cite{Lag-Delone}. Next, in Section~\ref{Section:Diffraction}, we extend the view by forming a dynamical system out of a given Delone set and by extending the notion of the individual diffraction to that of a diffraction measure of an (ergodic) dynamical \emph{system}. The two pictures (diffraction of individual sets and diffraction of dynamical systems) are equivalent when the dynamical system is uniquely ergodic, but we will not only look at this case. Then, in a third step, we look at the dynamical spectrum of a Delone dynamical system in Section~\ref{Section:The-dynamical-spectrum}, and how it is related to its diffraction spectrum in Section~\ref{sec:connections}. Beyond the equivalence in the pure point case, which has been known for a while and is discussed in Section~\ref{Section:Pure-point-diffraction}, we also look into the more general case of mixed spectra, at least for systems of finite local complexity (Section~\ref{Section:Factor}). In this case, the entire dynamical spectrum can still be described by diffraction.
However, one might have to consider the diffraction of a whole \emph{family of systems} that are constructed from factors. We then turn to the maximal equicontinuous factor in Section~\ref{Section:MEF}. This factor stores information on continuous eigenfunctions. It can be used to understand a hierarchy of Meyer sets via dynamical systems. Continuous eigenfunctions also play a role in diffraction theory in the investigation of the so-called Bombieri--Taylor approach. Finally, in Section~\ref{Section:qpf}, we have a look at our theory when the Delone set is replaced by suitable quasiperiodic functions. We compute autocorrelation and diffraction in this case and discuss how the arising dynamical hull can be seen as the maximal equicontinuous factor of the hull of a Delone set. Moreover, we discuss an important difference between the diffraction of quasiperiodic functions and that of Delone sets. Our article gives an introduction to a field which has seen tremendous developments over the last two decades, with steadily increasing activity. In our presentation of the underlying concepts and ideas of proofs, we do not strive for maximal generality but rather concentrate (most of) the discussion on Delone sets and present examples in various places. We have also included some pointers to work in progress, as well as to some open questions. Part of the material, such as the ideas concerning an expansion of sets into eigenfunctions in Section~\ref{Section:Pure-point-diffraction} and the discussion of (diffraction of) quasiperiodic functions in Section~\ref{Section:qpf}, does not seem to have appeared in print before (even though it is certainly known in the community). \section{Preliminaries}\label{Section:Prelim} Let us begin by recalling some basic notions tailored to our later needs. We do not aim at maximal generality here but will rather mainly be working in Euclidean space $\mathbb{R}\ts^{d}$. Some extensions will be mentioned in the form of remarks. We start by discussing point sets; see \cite[Sec.~2.1]{TAO} and references therein for further details. A set consisting of one point is called a \emph{singleton set}, while countable unions of singleton sets are referred to as \emph{point sets}. A point set $\varLambda\subset \mathbb{R}\ts^{d}$ is called \emph{locally finite} if $K\cap\varLambda$ is a finite set (or empty), for any compact $K\subset\mathbb{R}\ts^{d}$. Next, $\varLambda$ is \emph{discrete} if, for any $x\in\varLambda$, there is a radius $r>0$ such that $\varLambda\cap B_{r} (x) = \{ x \}$, where $B_{r} (x)$ denotes the open ball of radius $r$ around $x$. If one radius $r>0$ works for all $x\in\varLambda$, our point set is called \emph{uniformly discrete}. Next, $\varLambda$ is called \emph{relatively dense} if a compact $K\subset \mathbb{R}\ts^{d}$ exists such that $K+\varLambda=\mathbb{R}\ts^{d}$, where $A+B := \{ a+b \mid a\in A , \hspace{0.5pt} b\in B\}$ denotes the Minkowski sum of two sets. Clearly, if $\varLambda$ is relatively dense, there is a radius $R>0$ such that we see the condition satisfied with $K = \overline{B_R (0)}$. A \emph{Delone set} in $\mathbb{R}\ts^{d}$ is a point set that is both uniformly discrete and relatively dense, so it can be characterised by two radii $r$ and $R$ in the above sense. They are therefore also called $(r,R)$-sets in the literature. Delone sets are mathematical models of atomic positions in solids, which motivates their detailed study in our context.
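Although no numerics are needed for the formal development, the two radii are easy to estimate in examples. The following small sketch (purely illustrative; all names are ours) generates a finite patch of the classic Fibonacci point set on the line, via the substitution $a \mapsto ab$, $b \mapsto a$ with interval lengths $\varphi = (1+\sqrt{5}\hspace{0.5pt})/2$ and $1$, and reads off the two radii from the gaps; since the gaps only take the values $1$ and $\varphi$, one finds (up to boundary effects) $r = 1$ and $R = \varphi/2$:
\begin{verbatim}
import numpy as np

phi = (1 + 5**0.5) / 2

def fibonacci_word(n):
    w = 'a'
    for _ in range(n):                  # substitution a -> ab, b -> a
        w = ''.join('ab' if c == 'a' else 'a' for c in w)
    return w

def fibonacci_points(n=12):
    lengths = [phi if c == 'a' else 1.0 for c in fibonacci_word(n)]
    return np.concatenate(([0.0], np.cumsum(lengths)))

gaps = np.diff(fibonacci_points())
print(gaps.min(), gaps.max() / 2)       # r = 1, R = phi/2 ~ 0.809
\end{verbatim}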
A point set $\varLambda\subset\mathbb{R}\ts^{d}$ is said to have \emph{finite local complexity} (FLC) with respect to translations if, for any compact neighbourhood $K$ of $0$, the collection of \emph{$K$-clusters of $\varLambda$}, \[ \{ K \cap (\varLambda - x) \mid x \in \varLambda \} \] is a finite set. Again, it suffices to consider closed $R$-balls around $0$ for all $R>0$, and $\varLambda$ is an FLC set if and only if $\varLambda - \varLambda$ is locally finite; compare \cite[Prop.~2.1]{TAO}. Note that clusters (or $R$-patches in the case we use a ball) are always defined around a point of $\varLambda$, so that the empty set is \emph{not} a cluster in our sense. A considerably stronger notion than that of a Delone set with finite local complexity is that of a \emph{Meyer set}, where one demands that $\varLambda$ is relatively dense while $\varLambda - \varLambda$ is uniformly discrete; see \cite[Lemma~2.1 and Rem.~2.1]{TAO} for details and \cite{M-Nato,M-beyond} for a thorough review. Clearly, every lattice in Euclidean space is a Meyer set. Thus, Meyer sets can be thought of as natural generalisations of lattices, and this has been a very fruitful point of view for the theory of Meyer sets. Meyer sets are always subsets of model sets \cite{M2,M-Nato,TAO}, and important idealisations of the atomic positions of quasicrystals. A classic example with eightfold symmetry in the plane is illustrated in Figure~\ref{fig:ABtil}. \smallskip \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{abpara.eps} \end{center} \caption{A central patch of the eightfold symmetric Ammann--Beenker tiling, which can be generated by an inflation rule and is thus a self-similar tiling; see \cite[Sec.~6.1]{TAO} for details. The set of its vertex points is an example of a Meyer set, hence it is also an FLC Delone set. Moreover, it is a regular model set, as described in detail in \cite[Ex.~7.8]{TAO}.} \label{fig:ABtil} \end{figure} There is a natural topology on the set of all Delone sets in Euclidean space. This topology can be introduced in various ways. A very structural way is to identify a Delone set $\varLambda$ with a measure by considering its \emph{Dirac comb} \[ \delta^{}_{\! \varLambda} \, = \sum_{x\in\varLambda} \delta^{}_{x} \hspace{0.5pt} , \] where $\delta_{x}$ is the normalised point measure (or Dirac measure) at $x$. Clearly, different Delone sets correspond to different measures. The vague topology on the measures then induces a topology on the Delone sets \cite{BL-1}. To identify Delone sets with measures is more than a convenient mathematical trick. It is of great unifying power as it allows us to treat sets, functions and measures on the same footing. We will have more to say about this later. At this point, we note that the topology on the Delone sets can be generated by a metric as follows. Let \[ j \! : \, \mathbb{S}^d \xrightarrow{\quad} \mathbb{R}\ts^{d}\cup\{\infty\} \] be the stereographic projection. Here, $\mathbb{S}^d$ denotes the $d$-dimensional sphere in $\mathbb{R}\ts^{d+1}$ and the point $\infty$ denotes the additional point in the one-point compactification of $\mathbb{R}\ts^{d}$, which is the image of the `north pole' under $j$. Let $d^{}_{\mathrm{H}}$ be the Hausdorff metric on the set of compact subsets of $\mathbb{S}^d$. Then, for any Delone set $\varLambda\subset \mathbb{R}\ts^{d}$, the set $j^{-1} (\varLambda \cup\{\infty\}) $ is a closed and hence compact subset of $\mathbb{S}^d$.
Thus, via \[ d (\varLambda_1,\varLambda_2) \, := \, d^{}_{\mathrm{H}} \bigl( j^{-1} (\varLambda_1 \cup\{\infty\}), j^{-1} (\varLambda_2 \cup\{\infty\}) \bigr), \] we obtain a topology on the set of all Delone sets. It can be shown that this is the same topology as the one discussed above \cite{LS}. In this topology, the set of all $(r,R)$-Delone sets is compact \cite{BL-1,LS}. There is a canonical action of $\mathbb{R}\ts^{d}$ on the set of all Delone sets by translations via \[ \mathbb{R}\ts^{d} \times \mbox{Delone sets}\xrightarrow{\quad} \mbox{Delone sets} \hspace{0.5pt} , \quad (t,\varLambda) \mapsto t + \varLambda \hspace{0.5pt} . \] Clearly, this action is continuous. For any $(r,R)$-Delone set $\varLambda$, its \emph{hull} \[ \mathbb{X} (\varLambda) \, := \, \overline{\{ t + \varLambda \mid t \in \mathbb{R}\ts^{d} \}} \] is a closed and hence compact subset of the $(r,R)$-Delone sets. By construction, the hull is invariant under the translation action of $\mathbb{R}\ts^{d}$. Thus, the pair consisting of the compact hull $\mathbb{X} (\varLambda)$ and the restriction of the translation action of $\mathbb{R}\ts^{d}$ on this hull is a \emph{dynamical system}, which we denote by $(\mathbb{X} (\varLambda),\mathbb{R}\ts^{d})$. As usual, this dynamical system is called \emph{minimal} if the translation orbit of $\varLambda'$, which is $\{ t + \varLambda' \mid t \in \mathbb{R}\ts^{d} \}$, is dense for every $\varLambda' \in \mathbb{X} (\varLambda)$, and it is called \emph{uniquely ergodic} if it possesses exactly one probability measure which is invariant under the translation action. The convolution $\varphi \ast \psi$ of $\varphi,\psi\in C_{\mathsf{c}} (\mathbb{R}\ts^{d})$ is an element of $C_{\mathsf{c}} (\mathbb{R}\ts^{d})$ with \[ \bigl(\varphi \ast \psi\bigr) (x) \, := \int_{\mathbb{R}\ts^{d}} \varphi (x-y) \, \psi (y) \, \mathrm{d} y \] for all $x\in \mathbb{R}\ts^{d}$. We will identify measures on $\mathbb{R}\ts^{d}$ with linear functionals on $C_{\mathsf{c}} (\mathbb{R}\ts^{d})$ by means of the Riesz--Markov theorem. By the convolution of a measure $\nu$ with a function $\varphi \in C_{\mathsf{c}} (\mathbb{R}\ts^{d})$, we mean the continuous function $\nu * \varphi$ defined by \[ \bigl(\nu \ast \varphi) (x) \, = \int_{\mathbb{R}\ts^{d}} \varphi (x - y) \, \mathrm{d} \nu (y) \hspace{0.5pt} . \] A particular role will be played by \emph{positive definite measures}, which are measures $\nu$ with \[ \bigl(\nu \ast \widetilde{\varphi}\ast \varphi \bigr)(0) \, \geqslant \, 0 \] for all $\varphi \in C_{\mathsf{c}} (\mathbb{R}\ts^{d})$, where $\widetilde{\varphi}$ is defined by $\widetilde{\varphi} (x) = \overline{\varphi (-x) }$. Any positive definite measure is \emph{translation bounded}, meaning that $ \nu * \varphi$ is a bounded function for all $\varphi \in C_{\mathsf{c}} (\mathbb{R}\ts^{d})$. We will also need the Fourier transform of functions, measures and distributions. For a complex-valued function $f$ on $\mathbb{R}\ts^{d}$ that is integrable with respect to Lebesgue measure, we define its Fourier transform $\widehat{f}$ as the complex-valued function given by \[ \widehat{f} (k) \, := \, \int_{\mathbb{R}\ts^{d}} \mathrm{e}^{- 2 \pi \mathrm{i} k x} f(x) \, \mathrm{d} x \hspace{0.5pt} . \] Clearly, this definition can be extended to finite measures; see \cite[Ch.~8]{TAO} for details. It turns out that it can also be extended to various other classes of objects, including tempered distributions. 
More delicate is the extension to unbounded measures, where we refer to \cite{BF} for background. In particular, we note that the Fourier transform of a positive definite measure exists and is a positive measure. \section{Diffraction of individual objects}\label{Section:individual} Here, we begin by considering a single Delone set $\varLambda\subset \mathbb{R}\ts^{d}$ and recall a spectral notion from the pioneering paper \cite{Hof}, which is known as the \emph{diffraction measure} of $\varLambda$; compare \cite[Sec.~9.1]{TAO} for a more detailed account. In order to put our approach in the general perspective of mathematical diffraction theory, we will identify a Delone set $\varLambda$ with its Dirac comb $\delta^{}_{\! \varLambda}$. In our setting, the diffraction measure emerges as the Fourier transform of the (natural) autocorrelation measure, in extension of the classic Wiener diagram for integrable functions; compare \cite[Sec.~9.1.2]{TAO}. Since $\delta^{}_{\! \varLambda}$ is an infinite measure, it cannot be convolved with itself, wherefore one needs to proceed via restrictions to balls (or, more generally, to elements of a van Hove sequence \cite[Def.~2.9]{TAO}). Setting $\delta^{R}_{\!\varLambda} := \delta^{}_{\!\varLambda \cap \overline{B^{}_{R} (0)}}$, we consider \[ \gamma^{R}_{\varLambda} \, := \, \frac{\delta^{R}_{\!\varLambda} * \widetilde{\delta^{R}_{\!\varLambda}}}{\vol (B^{}_{R} (0))} \hspace{0.5pt} , \] where $\widetilde{\mu}$ is the `flipped-over' version of a measure $\mu$, defined by $\widetilde{\mu} (g) = \overline{\mu (\widetilde{g}\hspace{0.5pt} )}$ for $g\in C_{\mathsf{c}} (\mathbb{R}\ts^{d})$ and $\widetilde{g}$ as above. Complex conjugation is not relevant in our point set situation, but is needed for any extension to (complex) weighted Dirac combs and general measures. Every accumulation point of the family $\{ \gamma^{R}_{\varLambda} \mid R > 0\}$ in the vague topology, as $R\to\infty$, is called an \emph{autocorrelation} of the Delone set $\varLambda$. By standard arguments, compare \cite[Prop.~9.1]{TAO}, any Delone set possesses at least one autocorrelation, and any autocorrelation is translation bounded. If only one accumulation point exists, the autocorrelation measure \[ \gamma^{}_{\hspace{-0.5pt} \varLambda} \, = \, \lim_{R\to\infty} \gamma^{R}_{\varLambda} \] is well-defined (we will only consider this situation later), and called the \emph{natural autocorrelation}. Here, the term `natural' refers to the use of balls as averaging objects, as they are closest to the typical situation met in the physical process of diffraction. In `nice' situations, the autocorrelation will not depend on the choice of averaging sequences, as long as they are of van Hove type (where, roughly speaking, the surface to volume ratio vanishes in the infinite volume limit). The volume-averaged convolution in the definition of $\gamma^{}_{\!\varLambda}$ is also called the \emph{Eberlein convolution} of $\delta^{}_{\!\varLambda}$ with its flipped-over version, written as \[ \gamma^{}_{\hspace{-0.5pt} \varLambda} \, = \delta^{}_{\!\varLambda}\circledast \widetilde{\delta^{}_{\!\varLambda}} \hspace{0.5pt} . \] We refer to \cite[Sec.~8.8]{TAO} for some basic properties and examples. A particularly nice situation emerges when $\varLambda$ is an FLC set, so $\varLambda-\varLambda$ is locally finite. Then, assuming the natural autocorrelation to exist, a short calculation shows that \[ \gamma^{}_{\hspace{-0.5pt} \varLambda} \, = \sum_{z\in\varLambda-\varLambda} \! \eta(z) \, \delta_{z} \quad \text{with} \quad \eta (z) \, = \lim_{R\to\infty} \frac{\card \bigl( \varLambda^{}_{R} \cap (\varLambda^{}_{R} - z)\bigr) }{\vol \bigl( B^{}_{R} (0) \bigr)} \hspace{0.5pt} , \] where $\varLambda^{}_{R} := \varLambda \cap \overline{B^{}_{R} (0)}$. According to its definition, $\eta (z)$ can be seen as the frequency, per unit volume, of the difference vector $z$ from the difference set $\varLambda - \varLambda$. Thus, the autocorrelation of $\varLambda$ stores information on the set of difference vectors of $\varLambda$ and their frequencies. Note that $\gamma^{}_{\hspace{-0.5pt} \varLambda}$ in this case is a pure point measure on $\mathbb{R}\ts^{d}$. By construction, the autocorrelation of any Delone set $\varLambda$ is a positive measure, which is also positive definite. As a consequence, its Fourier transform, denoted by $\widehat{\gamma^{}_{\hspace{-0.5pt} \varLambda}}$, exists, and is a positive (and positive definite) measure. This measure describes the outcome of a scattering (or diffraction) experiment with our `idealised solid' when exposed to a coherent light or particle source; see \cite{Cow} for background. By continuity of the Fourier transform, we have \[ \widehat{\gamma^{}_{\hspace{-0.5pt} \varLambda}} \, = \lim_{R\to \infty} \widehat{ \gamma^{R}_{\varLambda}} \, = \lim_{R\to\infty} \, \frac{1}{\vol (B^{}_{R}(0))} \sum_{x,y\in \varLambda\cap B^{}_{R} (0)} \mathrm{e}^{ 2 \pi \mathrm{i} (x-y)(\cdot)} \hspace{0.5pt} . \] Here, the function on the right-hand side is considered as a measure (namely the measure which has the function as its density with respect to Lebesgue measure) and the limit is taken in the sense of vague convergence of measures. Given the interpretation of the diffraction measure as the outcome of a diffraction experiment, it is natural that special attention is paid to the set \[ \mathcal{B} \, := \, \big\{ k\in\mathbb{R}\ts^{d} \mid \widehat{\gamma^{}_{\hspace{-0.5pt} \varLambda}} \bigl( \{k\} \bigr) > 0 \big\} \hspace{0.5pt} . \] This set is called the \emph{Bragg spectrum} (after the fundamental contributions of the Braggs, father and son, to the structure analysis of crystals via diffraction, which were honoured with the Nobel Prize in Physics in 1915). The point measures of $\widehat{\gamma^{}_{\hspace{-0.5pt} \varLambda}}$ on the Bragg spectrum are known as \emph{Bragg peaks}, and for any $k \in \mathcal{B}$, the value $\widehat{\gamma^{}_{\hspace{-0.5pt} \varLambda}} (\{k\})$ is called the \emph{intensity} of the Bragg peak. In this context, there is a widespread idea that one should have \[ \widehat{\gamma^{}_{\hspace{-0.5pt} \varLambda} } (\{k\}) \, = \lim_{R\to \infty} \biggl| \frac{1}{ \vol (B^{}_{R} (0))} \sum_{x\in \varLambda\cap B^{}_{R} (0)} \mathrm{e}^{2 \pi \mathrm{i} k x} \biggr|^2 \hspace{0.5pt} . \] Indeed, this formula is quite reasonable, as it says that the intensity of the diffraction at $k$ is given as the absolute square of a mean Fourier coefficient. We will have more to say about its validity as we go along. \begin{remark}\label{Remark-BT} The validity of such a formula is discussed in \cite{Hof} with reference to work of Bombieri and Taylor \cite{BT1,BT2}, who used the formula without justification for certain systems coming from primitive substitutions. This was later justified in \cite{GK}. For regular model sets, the formula was shown in \cite{Martin}, but is also contained in \cite{M1}; see \cite[Prop.~9.9]{TAO} as well. In both cases, the special structure at hand is used. We will discuss a structural approach to it in Section \ref{Section:MEF}.
More recently, the approach via amplitudes in the form of averaged exponential sums was extended to weak model sets of extremal density, where different methods have to be used; see \cite[Prop.~8]{BHS} for details. We shall come back to this topic later. \hfill $\Diamond$ \end{remark} From now on, whenever the meaning is unambiguous, we will drop the Delone set index and simply write $\gamma$ and $\widehat{\gamma}$ for the autocorrelation and diffraction of $\varLambda$. Our approach is not restricted to Delone sets (see various remarks below), though we will mainly consider this case for ease of presentation. \begin{example}\label{ex:integers} The set ${\ts \mathbb{Z}}$ of integers, in our formulation, is described by the Dirac comb $\delta^{}_{{\ts \mathbb{Z}}}$, and possesses the natural autocorrelation $\gamma = \delta^{}_{{\ts \mathbb{Z}}}$, as follows from a straightforward Eberlein convolution; compare \cite[Ex.~8.10]{TAO}. Its Fourier transform is then given by $\widehat{\gamma} = \delta^{}_{{\ts \mathbb{Z}}}$, as a consequence of the Poisson summation formula (PSF); see \cite[Sec.~9.2.2]{TAO} for details. More generally, given a crystallographic (or fully periodic) Delone set $\varLambda\subset\mathbb{R}\ts^d$, its Dirac comb is of the form $\delta^{}_{\!\varLambda} = \delta^{}_{S} * \delta^{}_{\varGamma}$, where $\varGamma=\{ t\in\mathbb{R}\ts^{d} \mid t+\varLambda = \varLambda\}$ is the lattice of periods of $\varLambda$ and $S$ is a finite point set that is obtained by the restriction of $\varLambda$ to a (true) fundamental domain of $\varGamma$; compare \cite[Prop.~3.1]{TAO}. Now, a simple calculation gives the natural autocorrelation \[ \gamma \, = \, \dens (\varGamma) \bigl( \delta^{}_{S} * \widetilde{\delta^{}_{S}} \hspace{0.5pt} \bigr) * \delta^{}_{\varGamma} \hspace{0.5pt} , \] which is easily Fourier transformable by an application of the convolution theorem together with the general PSF in the form $\widehat{\delta^{}_{\varGamma}} = \dens (\varGamma) \, \delta^{}_{\varGamma^{*}}$, where $\varGamma^{*}$ is the dual lattice of $\varGamma$. The result is the diffraction measure \[ \widehat{\gamma} \, = \, \bigl( \dens (\varGamma) \bigr)^{2} \lvert h \rvert ^{2} \delta^{}_{\varGamma^{*}} \hspace{0.5pt} , \] where $h = \widehat{\delta_{S}}$ is a bounded continuous function on $\mathbb{R}\ts^{d}$; see \cite[Sec.~9.2.4]{TAO} for further details. We thus see that the diffraction measure is a pure point measure that is concentrated on the points of the dual lattice. It is perhaps worth noting that the finite set $S$ in the above decomposition of the Dirac comb $\delta^{}_{\!\varLambda}$ is not unique, and neither is then the function $h$, because there are infinitely many distinct possibilities to choose a fundamental domain of $\varGamma$. Still, all functions $h$ that emerge this way share the property that the values of $\lvert h \rvert^{2}$ agree on all points of $\varGamma^{*}$, so that the formula for the diffraction measure is unique and unambiguous. \hfill $\Diamond$ \end{example} \begin{remark}\label{rem:measure-dyn} As is quite obvious from our formulation, the Dirac comb of a Delone set is an example of a translation bounded measure on Euclidean space. This suggests that one can extend the entire setting to general translation bounded measures; compare \cite{BF,Hof,BL-1} as well as \cite[Chs.~8 and 9]{TAO}. Given such a measure, $\omega$ say, one then defines its autocorrelation measure as $\gamma^{}_{\omega} = \omega \circledast \widetilde{\omega}$, provided the corresponding limit exists.
It is then a translation bounded, positive definite measure, hence Fourier transformable by standard arguments \cite{BF}, and $\widehat{\gamma^{}_{\omega}}$ is a translation bounded, positive measure, called the \emph{diffraction measure} of $\omega$. This point of view was first developed in \cite{Hof}, and has been generalised in a number of articles; see \cite{TAO} and references therein for background, and \cite{LS} for a general formulation. \hfill $\Diamond$ \end{remark} Figure~\ref{fig:ABspec} below shows an example of a diffraction measure for an aperiodic point set, namely that of the Ammann--Beenker point set introduced in Figure~\ref{fig:ABtil}. For the detailed calculation in the context of regular cyclotomic model sets, we refer to \cite[Secs.~7.3 and 9.4.2]{TAO}. \smallskip Although the notion of a diffraction measure is motivated by the physical process of diffraction, so that this approach looks very natural for Delone sets as mathematical models of atomic positions in a solid, the concept is by no means restricted to Delone sets, or even to measures. \begin{example}\label{ex:temp-distr} Let $\mathcal{S} (\mathbb{R}\ts)$ denote the space of Schwartz functions on $\mathbb{R}\ts$ and $\mathcal{S}' (\mathbb{R}\ts)$ its dual, the space of \emph{tempered distributions}; see \cite{Schwartz,Wal} for general background. In this context, $\delta^{\prime}_{x}$ is a distribution with compact support, defined by $(\delta^{\prime}_{x} , \varphi) = - \varphi^{\hspace{0.5pt}\prime} (x)$, where we follow the widely used convention to write $(T,\varphi)$ for the evaluation of a distribution $T \in \mathcal{S}' (\mathbb{R}\ts)$ at a test function $\varphi \in \mathcal{S} (\mathbb{R}\ts)$. Note that $\delta^{\prime}_{x}$ is \emph{not} a measure. Tempered distributions of compact support are convolvable, and one checks that $\delta^{\hspace{0.5pt}\prime}_{x} * \delta^{\hspace{0.5pt}\prime}_{y} = \delta^{\hspace{0.5pt} \prime\prime}_{x+y}$. Let us now consider $\omega = \delta^{\prime}_{{\ts \mathbb{Z}}} := \sum_{x\in{\ts \mathbb{Z}}} \delta^{\prime}_{x}$, which clearly is a tempered distribution. Also, we have $\omega = \delta^{\prime}_{0} * \delta^{}_{{\ts \mathbb{Z}}}$, so that standard arguments imply the existence of the Eberlein convolution of $\omega$. A simple calculation, which uses $\widetilde{\omega} = - \hspace{0.5pt} \delta^{\prime}_{{\ts \mathbb{Z}}}$, gives \[ \gamma^{}_{\omega} \, = \, \omega \circledast \widetilde{\omega} \, = \, - \hspace{0.5pt} \delta^{\prime\prime}_{{\ts \mathbb{Z}}} \, = \, - \hspace{0.5pt} \delta^{\prime\prime}_{0} * \delta^{}_{{\ts \mathbb{Z}}}\hspace{0.5pt} . \] This is a tempered distribution of positive type, so $( \gamma^{}_{\omega}, \varphi * \widetilde{\varphi} \, ) \geqslant 0$ for all $\varphi \in \mathcal{S} (\mathbb{R}\ts)$. Its Fourier transform, which always exists as a tempered distribution, is then actually a positive \emph{measure}, by an application of the Bochner--Schwartz theorem. Observing that $\widehat{\hspace{0.5pt}\delta^{\prime\prime}_{0}\hspace{0.5pt}}$ is a regular distribution, and thus represented by a smooth function, one can check that \[ \widehat{\hspace{0.5pt}\delta^{\prime\prime}_{0}\hspace{0.5pt}} (y) \, = \, (2 \pi \mathrm{i} y)^{2} \, = \, - 4 \pi^{2} y^{2} . \] Now, using the convolution theorem together with the PSF, it is routine to check that \[ \widehat{\gamma^{}_{\omega}} \, = \, - \hspace{0.5pt} \widehat{\delta^{\prime\prime}_{{\ts \mathbb{Z}}}} \, = \, 4 \pi^2 (\cdot)^{2} \, \delta^{}_{{\ts \mathbb{Z}}} \, = \sum_{y\in{\ts \mathbb{Z}}} 4 \pi^2 y^2 \delta_{y} \hspace{0.5pt} . \] This is a positive pure point measure, the (natural) \emph{diffraction measure} of the tempered distribution $\omega$. In comparison to previous examples, it is \emph{not} translation bounded, which makes it an interesting extension of the measures in Example~\ref{ex:integers}. More generally, let us consider a lattice $\varGamma \subset \mathbb{R}\ts^{d}$. If $p = (p^{}_{1}, \dots , p^{}_{d})$ denotes a multi-index (so all $p_{i} \in \mathbb{N}^{}_{0}$) with $\lvert p \rvert = p^{}_{1} + \ldots +\hspace{0.5pt} p^{}_{d}$ and $x^{p} = x^{p^{}_{1}}_{1} \cdots\hspace{0.5pt} x^{p^{}_{d}}_{d}$, as well as the differential operator \[ D^{p} \, = \,\frac{\partial^{\lvert p \rvert}} {\partial x^{\hspace{0.5pt} p^{}_{1}}_{1} \cdots\hspace{0.5pt} \partial x^{\hspace{0.5pt} p^{}_{\hspace{-0.5pt} d}}_{d}} \hspace{0.5pt} , \] see \cite{Wal} for background, we get $\delta^{(p)}_{x} \! * \delta^{(q)}_{y} = \delta^{(p+q)}_{x+y}$, where $(\delta^{(p)}_{x} , \varphi) := (-1)^{\lvert p \rvert} \bigl( D^{p} \varphi\bigr) (x)$ as usual. Now, for fixed $p$, consider the lattice-periodic tempered distribution $\omega = \delta^{(p)}_{\varGamma} = \delta^{(p)}_{0} * \delta^{}_{\varGamma}$. As before, the natural autocorrelation $\gamma^{}_{\omega}$ exists, and, using $\widetilde{\delta^{(p)}_{0}} = (-1)^{\lvert p \rvert} \delta^{(p)}_{0}$, is given by \[ \gamma^{}_{\omega} \, = \, (-1)^{\lvert p \rvert} \dens (\varGamma)\, \delta^{(2p)}_{\varGamma} \, = \, (-1)^{\lvert p \rvert} \dens (\varGamma) \, \delta^{(2p)}_{0} \hspace{-0.5pt} * \delta^{}_{\varGamma} \hspace{0.5pt} . \] This is a tempered distribution of positive type again, so its Fourier transform is a positive tempered measure. Observing \[ \widehat{\delta^{(2p)}_{0}} (y) \, = \, (2 \pi \mathrm{i})^{2 \lvert p \rvert} \, y^{2p} \, = \, (-1)^{\lvert p \rvert} \hspace{0.5pt} (4 \pi^{2})^{\lvert p \rvert} \, y^{2p} \] in analogy to above, one can employ the convolution theorem together with the general PSF from Example~\ref{ex:integers} to calculate the diffraction, which results in \[ \widehat{\gamma^{}_{\omega}} \, = \, ( 4 \pi^{2} ) ^{\lvert p \rvert} \, \dens (\varGamma)^{2} \, (\cdot)^{2p} \, \delta^{}_{\varGamma^{*}} \, = \, \dens (\varGamma)^{2} \, (4 \pi^{2})^{\lvert p \rvert} \sum_{y\in \varGamma^{*}} y^{2p} \hspace{0.5pt} \delta^{}_{y} \hspace{0.5pt} . \] This measure is only translation bounded for $p=0$, where it reduces to the diffraction measure of the lattice Dirac comb $\delta^{}_{\varGamma}$ of Example~\ref{ex:integers}, as it must. Due to the convolution structure, one can further generalise as follows. Let $\varLambda \subset \mathbb{R}\ts^{d}$ be a Delone set with natural autocorrelation $\gamma^{}_{\!\varLambda}$, let $\nu$ be a tempered distribution of compact support, and consider $\omega = \nu * \delta^{}_{\!\varLambda}$. Clearly, this is a tempered distribution, with existing (natural) autocorrelation. The latter is given by $\gamma^{}_{\omega} = (\nu * \widetilde{\nu}\,) * \gamma^{}_{\!\varLambda}$, which is of positive type again. Fourier transform then results in the diffraction \[ \widehat{\gamma^{}_{\omega}} \, = \, \lvert \widehat{\nu} \rvert^{2} \, \widehat{\hspace{0.5pt}\gamma^{}_{\!\varLambda}\hspace{0.5pt}} \hspace{0.5pt} , \] where $\widehat{\nu}$ is a smooth function on $\mathbb{R}\ts^{d}$. \hfill $\Diamond$ \end{example} \begin{remark}\label{rem:function-space} As one can see from the general structure of the volume-weighted convolution, the concept of a diffraction measure can be put to use in a wider context. Let us thus start from a locally convex space $\mathcal{F}$ of functions on $\mathbb{R}\ts^{d}$ and let $\mathcal{F}\hspace{0.5pt} '$ be its dual, the space of continuous linear functionals on $\mathcal{F}$.
Examples include $C_{\mathsf c} (\mathbb{R}\ts^{d})$, whose dual consists of the regular Borel measures, equipped with the vague topology, and $\mathcal{S} (\mathbb{R}\ts^{d})$, with the space $\mathcal{S}\hspace{0.5pt} '(\mathbb{R}\ts^{d})$ of tempered distributions as its dual, but also the space $\mathcal{D} (\mathbb{R}\ts^{d})$ of $C^{\infty}$-functions with compact support, then leading to the space $\mathcal{D}\hspace{0.5pt} ' (\mathbb{R}\ts^{d})$ of distributions \cite{Schwartz,Wal}. Various other combinations will work similarly. What we need is the concept of a functional of compact support, or a suitable variant of it, and the convolution of two linear functionals $G,H$ of that kind, as defined by \[ (G*H , \varphi) \, := \, (G \times H , \varphi^{\times}) \hspace{0.5pt} , \] where $\varphi \in \mathcal{F}$ and $\varphi^{\times} \! : \, \mathbb{R}\ts^{d} \times \mathbb{R}\ts^{d} \xrightarrow{\quad} \mathbb{C}\ts$ is defined by $\varphi^{\times} (x,y) = \varphi ( x+y)$. To expand on this, let us assume that a distribution $F\in \mathcal{D}\hspace{0.5pt} ' (\mathbb{R}\ts^{d})$ is given. Fix some $\varepsilon > 0$ and let $c^{}_{r,\varepsilon} \in \mathcal{D} (\mathbb{R}\ts^{d})$ be a non-negative function that is $1$ on the ball $B_{r} (0)$ and $0$ outside the ball $\overline{B_{r+\varepsilon} (0)}$. Such functions exist for any $r>0$. Now, consider \[ \gamma^{\, (r)}_{F,\varepsilon} \, := \, \frac{c^{}_{r,\varepsilon} F * \widetilde{c^{}_{r,\varepsilon} F}}{\int_{\mathbb{R}\ts^{d}} c^{}_{r,\varepsilon} (x) \, \mathrm{d} x} \hspace{0.5pt} , \] which is well-defined, with $\int_{\mathbb{R}\ts^{d}} c^{}_{r,\varepsilon} (x) \, \mathrm{d} x = \vol (B_{r} (0)) \bigl( 1 + \mathcal{O} (1/r)\bigr)$ as $r\to\infty$. If $\lim_{r\to\infty} \gamma^{\, (r)}_{F,\varepsilon}$ exists and is also independent of $\varepsilon$, which will be the case under some mild assumptions on $F$, we call the limit $\gamma^{}_{F}$ the \emph{natural autocorrelation} of the distribution $F$. More generally, one can work with accumulation points as well. If $\gamma^{}_{F}$ happens to be a tempered distribution, we are back in the situation that $\widehat{\gamma^{}_{F}}$ is a positive measure, called the \emph{diffraction measure} of the distribution $F$. This setting provides a versatile generalisation of the diffraction theory of translation bounded measures; see \cite{BLPS,ST} for a detailed account. \hfill $\Diamond$ \end{remark} \section{Diffraction of dynamical systems}\label{Section:Diffraction} The diffraction measure of an individual Delone set is a concept that emerges from the physical situation of a diffraction experiment. It is both well founded and useful. Still, it has a number of shortcomings that are related to the fact that it is not obvious how $\widehat{\gamma}$ `behaves' when one changes the Delone set. Since the mapping between autocorrelation and diffraction is the Fourier transform, and thus one-to-one, we can address this issue on the level of the autocorrelation. Let us assume we have a Delone set $\varLambda$ whose natural autocorrelation exists. Clearly, any translate of the set should have the same autocorrelation, so \[ \gamma^{}_{t+\varLambda} \, = \, \gamma^{}_{\hspace{-0.5pt} \varLambda} \quad \text{for all} \quad t\in\mathbb{R}\ts^{d} \hspace{0.5pt} , \] and this is indeed a simple consequence of the van Hove property of the family of balls $\{ B^{}_{R} (0) \mid R > 0\}$. In fact, a proof only uses the (slightly weaker) F{\o}lner property of this family for individual translations.
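To illustrate the kind of averaging that is involved here, the following sketch (again in Python; the use of a Fibonacci chain, as well as all names and the rounding tolerance, are merely our own illustrational choices) estimates the autocorrelation coefficients $\eta(z)$ from Section~\ref{Section:individual} on a large finite patch of an FLC Delone set in one dimension.
\begin{verbatim}
import numpy as np

TAU = (1 + 5**0.5) / 2   # the golden ratio

def fibonacci_points(n_iters=20):
    """Left endpoints of a geometric Fibonacci chain, with intervals of
    length TAU for the letter a and length 1 for the letter b, generated
    by the substitution a -> ab, b -> a."""
    word = 'a'
    for _ in range(n_iters):
        word = ''.join('ab' if c == 'a' else 'a' for c in word)
    lengths = np.where(np.array(list(word)) == 'a', TAU, 1.0)
    return np.concatenate([[0.0], np.cumsum(lengths)[:-1]])

def eta(points, z, ndigits=6):
    """Finite-patch estimate of eta(z): the number of x in the patch with
    x + z again in the patch, per unit length. Rounding stands in for
    exact arithmetic in Z[TAU]; it is adequate for this illustration."""
    members = set(np.round(points, ndigits))
    hits = sum(round(x + z, ndigits) in members for x in points)
    return hits / (points[-1] - points[0])

pts = fibonacci_points()
for z in (1.0, TAU, TAU + 1.0, 2.0):
    print(f"z = {z:.4f}   eta(z) ~ {eta(pts, z):.4f}")
# z = 2 would require two adjacent intervals of length 1, which never
# occur, so the estimate for eta(2.0) is 0: 2 is not in Lambda - Lambda.
\end{verbatim}
Translating the patch leaves all difference vectors, and hence all estimates, unchanged, in line with the translation invariance of the autocorrelation just discussed.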
Less obvious is what happens if one goes to the compact hull $\mathbb{X} (\varLambda)$ as introduced above. Nevertheless, at least from a dynamical systems point of view, it is very natural to define an autocorrelation for the dynamical system as a whole. Here, one best starts with a measure-theoretic dynamical system $(\mathbb{X} (\varLambda), \mathbb{R}\ts^{d}, \mu)$, where $\mu$ is an invariant probability measure on $\mathbb{X} (\varLambda)$. In the large and relevant subclass of uniquely ergodic Delone dynamical systems with FLC, the unique measure $\mu$ is the patch frequency measure. Then, any such measure-theoretic dynamical system $(\mathbb{X} (\varLambda), \mathbb{R}\ts^{d}, \mu)$ comes with an autocorrelation $\gamma_{\mu}$ associated to it via a closed formula (as opposed to a limit). This is discussed next, where we follow \cite{BL-1}; see \cite{Gou-1} as well. Choose a function $\chi \in C_{\mathsf{c}} (\mathbb{R}\ts^{d})$ and consider the map $\gamma^{}_{\mu, \chi}\! : \, C_{\mathsf{c}} (\mathbb{R}\ts^{d}) \xrightarrow{\quad} \mathbb{C}\ts $ defined by \[ \varphi \,\mapsto \int_{\mathbb{X} (\varLambda)} \sum_{x,y\in \varLambda'} \varphi (x - y) \, \chi (x) \, \mathrm{d} \mu (\varLambda') \hspace{0.5pt} . \] Clearly, $\gamma^{}_{\mu,\chi}$ is a continuous linear functional on $ C_{\mathsf{c}} (\mathbb{R}\ts^{d})$. By the Riesz--Markov theorem, it can then be viewed as a measure. Now, for fixed $\varphi \in C_{\mathsf{c}} (\mathbb{R}\ts^{d})$, the map \[ C_{\mathsf{c}} (\mathbb{R}\ts^{d})\xrightarrow{\quad} \mathbb{C}\ts \hspace{0.5pt} , \quad \chi \mapsto \gamma^{}_{\mu,\chi} (\varphi), \] is a continuous linear functional as well, and hence also given by a measure. Moreover, as $\mu$ is translation invariant, this measure can easily be seen to be invariant under replacing $\chi$ by any of its translates. Hence, it must be a multiple of Lebesgue measure. Consequently, it will take the same value for all $\chi$ that are \emph{normalised} in the sense that they satisfy $\int_{\mathbb{R}\ts^{d}} \chi (t) \, \mathrm{d} t = 1$. So, the map $\gamma^{}_{\mu, \chi}$ will be independent of $\chi$ provided $\chi$ is normalised. Thus, we can unambiguously define \[ \gamma^{}_{\mu} \, := \, \gamma^{}_{\mu,\chi} \] for any such normalised $\chi$. This is then called the \emph{autocorrelation} of the dynamical system $(\mathbb{X} (\varLambda), \mathbb{R}\ts^{d}, \mu)$. If $\mu$ is an ergodic measure, it can be shown that, for $\mu$-almost every element $\varLambda'$ in the hull $\mathbb{X} (\varLambda)$, the individual autocorrelation $\gamma^{}_{\! \varLambda'}$ of $\varLambda'$ exists and equals $\gamma^{}_{\mu}$. In general, the assessment of equality is difficult, unless one knows that $\varLambda'$ is generic for $\mu$ in the hull. However, if the dynamical system $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d})$ is even uniquely ergodic, the autocorrelation can be shown to exist and to be equal to $\gamma^{}_{\mu}$ for every element in the hull. We refer to \cite{BL-1} for further details and references. Important cases include Delone sets derived from primitive substitution rules via their geometric realisations, see \cite[Chs.~4 and 9]{TAO} for details and many examples, and regular model sets in Euclidean space, such as the Ammann--Beenker point set from Figures~\ref{fig:ABtil} and \ref{fig:ABspec}; compare \cite[Chs.~7 and 9]{TAO} for more.
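In the simplest case of the integer lattice, the hull is the circle, parametrised as $\{ s + {\ts \mathbb{Z}} \mid s \in [0,1) \}$, with Lebesgue measure on $[0,1)$ as its unique invariant probability measure, and the closed formula can be checked numerically. The following sketch (Python; the test functions, truncations and the quadrature rule are our own choices, and rapidly decaying functions stand in for compactly supported ones) compares $\gamma^{}_{\mu,\chi} (\varphi)$ for two different normalised choices of $\chi$ with the value $\delta^{}_{{\ts \mathbb{Z}}} (\varphi) = \sum_{z \in {\ts \mathbb{Z}}} \varphi(z)$, which is the expected answer by Example~\ref{ex:integers}.
\begin{verbatim}
import numpy as np

phi  = lambda x: np.exp(-x**2)                # test function phi
chi1 = lambda x: np.exp(-np.pi * x**2)        # normalised: integral is 1
chi2 = lambda x: 0.5 * np.exp(-np.abs(x))     # normalised: integral is 1

def gamma_mu(phi, chi, n_lat=30, n_s=2000):
    """Quadrature approximation of the closed formula for gamma_{mu,chi},
    with the hull of Z parametrised by s in [0,1) and the lattice
    truncated to [-n_lat, n_lat]."""
    lat = np.arange(-n_lat, n_lat + 1)
    total = 0.0
    for s in (np.arange(n_s) + 0.5) / n_s:    # midpoint rule on [0,1)
        pts = s + lat                         # hull element s + Z, truncated
        total += (phi(pts[:, None] - pts[None, :]) * chi(pts)[:, None]).sum()
    return total / n_s

direct = phi(np.arange(-30, 31)).sum()        # delta_Z applied to phi
print(gamma_mu(phi, chi1), gamma_mu(phi, chi2), direct)
\end{verbatim}
All three numbers agree to good accuracy, illustrating both the independence of the normalised $\chi$ and the consistency with the individual autocorrelation $\gamma = \delta^{}_{{\ts \mathbb{Z}}}$ of Example~\ref{ex:integers}.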
\begin{figure} \begin{center} \includegraphics[width=0.86\textwidth]{abdiffract.eps} \end{center} \caption{Illustration of a central patch of the diffraction measure of the Ammann--Beenker point set of Figure~\ref{fig:ABtil}, which has pure point diffraction. A Bragg peak of intensity $I$ at $k\in\mathcal{B}$ is represented by a disc of area proportional to $I$, centred at $k$. Here, $\mathcal{B}$ is a scaled version of ${\ts \mathbb{Z}}[\mathrm{e}^{\pi \mathrm{i}/4}]$, which is a group; see \cite[Sec.~9.4.2]{TAO} for details. Clearly, $\mathcal{B}$ is dense, while the figure only shows Bragg peaks above a certain intensity threshold. In particular, there are no extinctions in this case. At the same time, this measure is the diffraction measure of the Delone dynamical system defined by the (strictly ergodic) hull of the Ammann--Beenker point set, and $\mathcal{B}$ is its dynamical spectrum.} \label{fig:ABspec} \end{figure} For any $(\mathbb{X} (\varLambda),\mathbb{R}\ts^{d},\mu)$, the autocorrelation $\gamma^{}_{\mu}$ can be shown to be a positive definite measure. Hence, its Fourier transform exists and is a measure. This measure will be called the \emph{diffraction measure} of the dynamical system, and denoted by $\widehat{\hspace{0.5pt}\gamma^{}_{\hspace{-0.5pt} \mu}\hspace{0.5pt} }$. As in the case of the diffraction of an individual set, we will be particularly interested in the point part of the diffraction measure. The set of atoms of this pure point part is again denoted by $\mathcal{B}$ and called the \emph{Bragg spectrum}. It is then possible to compute the Bragg spectrum via the following functions, defined for each $k\in\mathbb{R}\ts^{d}$ by \[ c_{k}^{(R)} \! : \, \mathbb{X} (\varLambda) \xrightarrow{\quad} \mathbb{C}\ts \hspace{0.5pt} , \quad c_{k}^{(R)} (\varLambda'):= \frac{1}{ \vol (B^{}_{R} (0))} \sum_{x\in \varLambda'\cap B^{}_{R} (0)} \mathrm{e}^{2 \pi \mathrm{i} k x} . \] More specifically, as shown in \cite{Lenz}, we have \[ \widehat{\gamma^{}_{\mu}} ( \{k\}) \, = \lim_{R\to \infty} \|c_{k}^{(R)}\|^{2}_{L^2} \hspace{0.5pt} , \] where $\|\cdot\|_{L^2}$ denotes the norm of the Hilbert space $L^2 (\mathbb{X} (\varLambda), \mu)$, and if the dynamical system is ergodic, we even have \[ \widehat{\gamma^{}_{\mu}} ( \{k\}) \,= \lim_{R\to \infty} \bigl| c_{k}^{(R)} (\varLambda') \bigr|^2 \] for $\mu$-almost every $\varLambda'\in \mathbb{X} (\varLambda)$. Note that, in these cases, the corresponding limit will vanish for all $k\in\mathbb{R}\ts^{d} \setminus \mathcal{B}$. One may expect that convergence holds for all $\varLambda' \in \mathbb{X} (\varLambda)$ in the uniquely ergodic case. However, this is not clear at present. We will have more to say about this in Section~\ref{Section:MEF}. \begin{remark}\label{rem:Choquet} In the preceding discussion, ergodicity of the measure on the hull has played some role. Thus, one may wonder what happens for general measures. So, let $\nu$ be an arbitrary invariant probability measure on the hull that can be written as a convex combination $\nu = \sum_{i\in I} \alpha_{i} \, \mu_{i}$ of other invariant probability measures on the hull, with $\alpha_{i} >0$ and $\sum_{i\in I} \alpha_{i} = 1$. Then, using the same function $\chi$ for all autocorrelations, one sees that \[ \gamma^{}_{\nu} \, = \sum_{i\in I} \alpha_{i} \, \gamma^{}_{\mu_{i}} \hspace{0.5pt} . \] Invoking Choquet's theorem, compare \cite{Phelps} for background, one can thus see that the analysis of the autocorrelations of extremal and thus \emph{ergodic} invariant probability measures on the hull is the essential step in the diffraction analysis of a Delone dynamical system. \hfill $\Diamond$ \end{remark} \begin{remark}\label{rem:auto-relate} At this point, we have discussed two ways of defining an autocorrelation, namely via a limiting procedure for individual Delone sets and via integration for hulls of Delone sets. While these may seem very different procedures at first, we would like to stress that both have in common that they involve some form of \emph{averaging}. Indeed, in the limiting procedure, this is an average over $\mathbb{R}\ts^{d}$, while in the closed formula given above, it is an average over the hull. The connection between these two averages is then made by an ergodic theorem. \hfill $\Diamond$ \end{remark} \section{The dynamical spectrum}\label{Section:The-dynamical-spectrum} In the preceding section, we have seen that any Delone dynamical system $(\mathbb{X} (\varLambda),\mathbb{R}\ts^{d},\mu)$ comes with an autocorrelation measure $\gamma^{}_\mu$ (and thus also with a diffraction measure $\widehat{\gamma^{}_\mu}$). We have also seen that this autocorrelation measure agrees, for a (typical) element of the hull, with the individual autocorrelation of this element, provided the measure $\mu$ is ergodic. This suggests that there is a close connection between properties of the dynamical system and the diffraction. As was realised by Dworkin \cite{Dwo}, this is indeed the case, as we discuss in this section. In order to discuss this properly, we will first have to introduce the spectral theory of a dynamical system. This is the spectral theory of what we call (in line with various other authors) the \emph{Koopman representation} of the dynamical system, in recognition of Koopman's pioneering work \cite{Koop}. \smallskip A Delone dynamical system $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d},\mu)$ gives rise to a unitary representation $T$ of $\mathbb{R}\ts^{d}$ on the Hilbert space $L^2 (\mathbb{X}(\varLambda),\mu)$ via \[ T \! : \, \mathbb{R}\ts^{d}\xrightarrow{\quad} \mbox{unitary operators on $L^2 (\mathbb{X}(\varLambda),\mu)$} \hspace{0.5pt} , \quad t \mapsto T_t \hspace{0.5pt} , \] with \[ T_t f \, = \, f(\cdot - t)\hspace{0.5pt} . \] Indeed, we obviously have $T_{t+s} = T_t T_s$ for any $t,s\in\mathbb{R}\ts^{d}$ as well as $T_0 = \mathbbm{1}$. So, $T$ is a representation of $\mathbb{R}\ts^{d}$. Also, as the measure $\mu$ is invariant, any $T_t$, with $t\in\mathbb{R}\ts^{d}$, is isometric, and, clearly, $T_{-t}$ is the inverse of $T_t$. Thus, any $T_t$ is an invertible isometry, hence unitary. Moreover, it is not hard to see that $T$ is strongly continuous, which means that, for any fixed $f\in L^2 (\mathbb{X}(\varLambda),\mu)$, the map \[ \mathbb{R}\ts^{d}\xrightarrow{\quad} L^2 (\mathbb{X}(\varLambda),\mu) \hspace{0.5pt} , \quad t \mapsto T_t f \hspace{0.5pt} , \] is continuous. We call the map $T$ the \emph{Koopman representation} of the dynamical system. As $T$ is a strongly continuous unitary representation, Stone's theorem (compare \cite{Loomis}) guarantees the existence of a projection-valued measure \[ E_T \!
: \, \mbox{Borel sets on $\mathbb{R}\ts^{d}$} \xrightarrow{\quad} \mbox{projections on $L^2 (\mathbb{X}(\varLambda),\mu)$} \] with \[ \langle f, T_t f \rangle \, = \int_{\mathbb{R}\ts^{d}} \mathrm{e}^{2 \pi \mathrm{i} t k} \, \mathrm{d} \rho^{}_f (k) \, = \, \widehat{\rho^{}_f} (-t) \hspace{0.5pt} , \] for all $t\in\mathbb{R}\ts^{d}$, where $\rho^{}_f$ is the measure on $\mathbb{R}\ts^{d}$ defined by \[ \rho^{}_f (B) \, := \, \langle f, E_T (B)f \rangle\hspace{0.5pt} . \] The measure $\rho^{}_f$ is known as the \emph{spectral measure of $f$} (with respect to $T$). It is the unique measure on $\mathbb{R}\ts^{d}$ with $\langle f, T_t f \rangle = \widehat{\rho^{}_f} (-t)$ for all $t\in\mathbb{R}\ts^{d}$. The study of the properties of the spectral measures is then known as the \emph{spectral theory} of the dynamical system; see \cite{Q} for a general exposition in the one-dimensional case. In particular, the \emph{spectrum of the dynamical system} is given as the support of $E_T$, defined by \[ \{ k\in\mathbb{R}\ts^{d} \mid E_T (B_\varepsilon (k)) \neq 0 \;\mbox{for all $\varepsilon >0$}\}. \] Of course, the spectrum is a set and as such does not carry any information on the type of the spectral measures. For this reason, one is mostly not interested in the spectrum alone, but also in determining a spectral measure of maximal type (that is, a spectral measure having the same null sets as $E_T$). We discuss a substitution-based system with mixed spectrum below in Example~\ref{ex:TM}. For us, the following subset of the spectrum will be particularly relevant. The \emph{point spectrum} of the dynamical system is given as \[ \{ k\in\mathbb{R}\ts^{d} \mid E_T (\{k\}) \neq 0 \} \hspace{0.5pt} . \] A short consideration reveals that $k\in\mathbb{R}\ts^{d}$ belongs to the point spectrum if and only if it is an eigenvalue of $T$. Here, an $f\neq 0$ with $f\in L^2 (\mathbb{X}(\varLambda),\mu)$ is called an \emph{eigenfunction} to the \emph{eigenvalue} $k\in\mathbb{R}\ts^{d}$ if \[ T_t f \, = \, \mathrm{e}^{2 \pi \mathrm{i} t k} f \] holds for all $t\in \mathbb{R}\ts^{d}$. Note that, following common practice, we call $k$ (rather than $\mathrm{e}^{2 \pi \mathrm{i} k x}$) the eigenvalue, as this matches nicely with the structure of the translation group as well as its dual (the latter written additively). If our dynamical system is ergodic, the modulus of any eigenfunction must be constant (as it is an invariant function). So, in this case, all eigenfunctions are bounded. If the system fails to be ergodic, eigenfunctions need not be bounded. However, by suitable cut-off procedures, one can always find bounded eigenfunctions to each eigenvalue; compare \cite{BL-1} for a recent discussion. It is not hard to see that the eigenvalues form a group. Indeed, \begin{itemize} \item the constant function is an eigenfunction to the eigenvalue $0$, \item whenever $f$ is an eigenfunction to $k$, then $\overline{f}$ is an eigenfunction to $-k$, and \item whenever $f$ and $g$ are bounded eigenfunctions to $k$ and $\ell$, respectively, the product $f g$ is an eigenfunction to $k + \ell$. \end{itemize} We denote this group of eigenvalues by $\mathcal{E} (\mu)$. Standard reasoning also shows that eigenfunctions to different eigenvalues are orthogonal. We will have more to say on eigenvalues and eigenfunctions later. \section{Connections between dynamical and diffraction spectrum}\label{sec:connections} Having introduced the dynamical spectrum, we now turn to the connection with diffraction.
The crucial ingredient is that the Schwartz space $\mathcal{S}(\mathbb{R}\ts^{d})$ can be embedded into $C (\mathbb{X} (\varLambda))$ via \[ f \! : \, \mathcal{S}(\mathbb{R}\ts^{d})\xrightarrow{\quad} C (\mathbb{X} (\varLambda)) \hspace{0.5pt} , \quad \varphi \mapsto f_\varphi \hspace{0.5pt} , \] with \[ f_\varphi (\varLambda') \, := \,\bigl( \varphi * \delta^{}_{\! \varLambda'}\bigr) (0) \, = \sum_{x\in \varLambda'} \varphi (-x) \, . \] \begin{remark} We could also work with the corresponding embedding of $C_{\mathsf{c}} (\mathbb{R}\ts^{d})$ into $C (\mathbb{X} (\varLambda))$, and indeed this is often done. Note also that such embeddings need not exist for general dynamical systems; they require the possibility of a `pairing' between the elements of the dynamical system and functions. Indeed, it is possible to extend (some of) the considerations below whenever such a pairing is possible \cite{BLPS,ST,LM}. \hfill $\Diamond$ \end{remark} Based on this embedding, one can provide the connection between diffraction and dynamical spectrum. Here, we follow \cite{DM} (see \cite{LM} as well), to which we refer for further details and proofs. The key formula emphasised in \cite{Dwo} is \[ \bigl(\gamma^{}_\mu \ast \widetilde{\varphi} \ast \varphi\bigr) (0) \, = \, \langle f_\varphi, f_\varphi \rangle \] for $\varphi \in \mathcal{S}(\mathbb{R}\ts^{d})$. This result was quite influential in the field, as it highlighted a connection that was implicitly also known in point process theory, compare \cite{Daley}, but had not been observed in the diffraction context. Taking Fourier transforms and using the denseness of $\{ \widehat{\varphi} \mid \varphi \in \mathcal{S}(\mathbb{R}\ts^{d}) \}$ in $L^2 (\mathbb{R}\ts^{d}, \widehat{\gamma^{}_{\mu}})$, one can use this formula to obtain a (unique) isometric map \[ \Theta \! : \, L^2 (\mathbb{R}\ts^{d}, \widehat{\gamma^{}_{\mu}}) \xrightarrow{\quad} L^2 (\mathbb{X}(\varLambda),\mu)\hspace{0.5pt} , \quad \mbox{with} \; \Theta( \widehat{\varphi}\hspace{0.5pt} ) = f_\varphi \] for all $\varphi \in \mathcal{S}(\mathbb{R}\ts^{d})$. Now, both $L^2$-spaces in question admit a unitary representation of $\mathbb{R}\ts^{d}$. Indeed, we have already met the Koopman representation $T$. Moreover, for any $t\in\mathbb{R}\ts^{d}$, we have a unitary map \[ S_t \! : \, L^2 (\mathbb{R}\ts^{d} , \widehat{\gamma^{}_{\mu}}) \xrightarrow{\quad} L^2 (\mathbb{R}\ts^{d}, \widehat{\gamma^{}_{\mu}}) \hspace{0.5pt} , \quad S_t h = \mathrm{e}^{2 \pi \mathrm{i} t (\cdot) } h, \] and these maps yield a representation $S$ of $\mathbb{R}\ts^{d}$ on the Hilbert space $L^2 (\mathbb{R}\ts^{d},\widehat{\gamma^{}_\mu})$. Then, it is not hard to see that $\Theta$ intertwines $S$ and $T$, which means that \[ \Theta S_{t} \, = \, T_{t} \hspace{0.5pt} \Theta \] holds for all $t\in\mathbb{R}\ts^{d}$. In fact, this is clear when applying both sides to functions of the form $\widehat{\varphi}$ for $\varphi \in \mathcal{S} (\mathbb{R}\ts^{d})$ and then follows by a denseness argument in the general case. Consider now \[ \mathcal{U} \, := \, \Theta \bigl( L^2 (\mathbb{R}\ts^{d},\widehat{\gamma^{}_{\mu}}) \bigr) \hspace{-0.5pt} \, = \, \overline{\mbox{Lin}\{ f_\varphi \mid \varphi \in \mathcal{S}(\mathbb{R}\ts^{d})\}} \, \subset \, L^2 (\mathbb{X}(\varLambda),\mu) \hspace{0.5pt} , \] where the closure is taken in $L^2 (\mathbb{X}(\varLambda),\mu)$. Then, $\mathcal{U}$ is a closed subspace.
As $\Theta$ intertwines $S$ and $T$ and is an isometry, this subspace is invariant under $T$, and the action of $S$ is equivalent to the restriction of $T$ to this subspace. In this sense, the diffraction measure completely controls a subrepresentation of $T$. \textbf{This is the fundamental connection between diffraction and dynamics.} Using the map $\Theta$, we can easily provide a closed formula for the pure point part of the diffraction measure. Any $k\in \mathcal{B}$ is an eigenvalue of $S$ (with the characteristic function $1^{}_{\{k\}}$ being an eigenfunction). Hence, any $k\in \mathcal{B}$ is an eigenvalue of $T$ with eigenfunction \[ c^{}_{k} \, := \, \Theta (1^{}_{\{k\}}) \hspace{0.5pt} . \] So, for any Bragg peak, there exists a canonical eigenfunction. This is quite remarkable, as eigenfunctions are usually only determined up to some phase. The function $c^{}_{k}$ is not normalised in $L^2$. Instead, using that $\Theta$ is an isometry, we obtain \[ \big\langle c^{}_{k}, c^{}_{k}\big\rangle^{}_{L^2 (\mathbb{X}(\varLambda),\mu)} \, = \, \big\langle \Theta(1^{}_{\{k\}}), \Theta (1^{}_{\{k\}}) \big\rangle^{}_{L^2 (\mathbb{X}(\varLambda),\mu)} \, = \, \big\langle 1^{}_{\{k\}}, 1^{}_{\{k\}} \big\rangle^{}_{L^2 (\mathbb{R}\ts^{d},\widehat{\gamma^{}_{\mu}})} \, = \, \widehat{\gamma^{}_{\mu}} ( \{k\} ) \hspace{0.5pt}. \] For the pure point part of the diffraction, we thus get \label{diffraction-formula} \[ \bigl(\widehat{\gamma^{}_{\mu}}\bigr)_{\mathsf{pp}} \, = \sum_{k\in \mathcal{B}} \|c^{}_{k}\|^2 \, \delta^{}_{k} \hspace{0.5pt} . \] For a given Delone set $\varLambda$, we have now met two routes to the associated diffraction, one via a limiting procedure and one via the hull. A short summary of how these two compare may be given as follows: \begin{eqnarray*} \mbox{\textbf{point set $\varLambda$}} & \; \longleftrightarrow \; & \mbox{\textbf{dynamical system $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d},\mu)$}}\\ \mbox{ $\gamma $ as a limit} & \longleftrightarrow & \mbox{closed formula for $\gamma$}\\ \mbox{$S$ on $L^2 (\mathbb{R}\ts^{d},\widehat{\gamma})$} & \longleftrightarrow & \mbox{restriction of $T$ to $\mathcal{U}$}\\ \mbox{Bragg spectrum}\; \mathcal{B} & \xrightarrow{\quad\;} & \mbox{group of eigenvalues}\; \mathcal{E} \\ \mbox{intensity}\; \widehat{\gamma} (\{k\}) & \longleftrightarrow & \mbox{norm}\; \|c^{}_{k} \|^2 . \end{eqnarray*} There is more to be said about the connection between the group of eigenvalues and the Bragg spectrum, as we shall see later. \section{Pure point diffraction and expansion in eigenfunctions}\label{Section:Pure-point-diffraction} The phenomenon of (pure) point diffraction lies at the heart of aperiodic order, both in terms of physical experiments and in terms of mathematical investigations. In this section, we take a closer look at it. \smallskip We consider an ergodic Delone dynamical system $(\mathbb{X} (\varLambda), \mathbb{R}\ts^{d},\mu)$. This system comes with a unitary representation $T$ of $\mathbb{R}\ts^{d}$ and a diffraction measure $\widehat{\gamma_\mu}$. It is said to have \emph{pure point diffraction} if this measure is a pure point measure. It is said to have \emph{pure point dynamical spectrum} if there exists an orthonormal basis of $L^2 (\mathbb{X}(\varLambda),\mu)$ consisting of eigenfunctions. We have already seen in the previous section that the diffraction measure controls a subspace of the whole $L^2 (\mathbb{X}(\varLambda),\mu)$.
Accordingly, it should not come as a surprise that any $k\in \mathcal{B}$ is an eigenvalue of $T$, and that pure point dynamical spectrum implies pure point diffraction spectrum. Somewhat surprisingly, it turns out that the converse also holds. So, the Delone dynamical system $(\mathbb{X} (\varLambda), \mathbb{R}\ts^{d},\mu)$ has pure point diffraction if and only if it has pure point dynamical spectrum. Thus, the two notions of pure pointedness are equivalent. Following \cite{BL-1}, we can sketch a proof as follows. By the discussion above, the diffraction measure $\widehat{\gamma}$ is a pure point measure if and only if the subrepresentation of $T$ obtained by restricting to $\mathcal{U}$ has pure point spectrum. Clearly, if $T$ has pure point spectrum, then this must be true of any subrepresentation as well, and pure point diffraction follows. To show the converse, note that pure point diffraction implies that all spectral measures $\varrho^{}_{\hspace{-0.5pt} f_\varphi}$, with $\varphi \in \mathcal{S}(\mathbb{R}\ts^{d})$, are pure point measures (as these are equivalent to the spectral measures of $\widehat{\varphi}$ with respect to $S$). We have to show that then all spectral measures $\varrho^{}_{\hspace{-0.5pt} f}$, with $f\in L^2 (\mathbb{X}(\varLambda),\mu)$, are pure point measures. Consider \[ \mathcal{A} \, := \, \{f\in C(\mathbb{X}(\varLambda)) \mid \varrho^{}_f \;\mbox{is a pure point measure}\} \hspace{0.5pt} . \] Then, $\mathcal{A}$ is a vector space with the following properties. \begin{itemize} \item It is an algebra. (This ultimately follows as the product of eigenfunctions is again an eigenfunction.) \item It is closed under complex conjugation. (This ultimately follows as the complex conjugate of an eigenfunction is an eigenfunction.) \item It contains all constant functions (as these are continuous eigenfunctions to the eigenvalue $0$). \item It contains all functions of the form $f_\varphi$ (as has just been discussed) and these functions clearly separate the points of $\mathbb{X} (\varLambda)$. \end{itemize} Given these properties of $\mathcal{A}$, we can apply the Stone--Weierstrass theorem to conclude that $\mathcal{A}$ is dense in $C(\mathbb{X}(\varLambda))$ with respect to the supremum norm. Hence, $\mathcal{A}$ is also dense in $L^2 (\mathbb{X} (\varLambda),\mu)$ with respect to the Hilbert space norm, and the desired statement follows. A closer inspection of the proof also shows that the group $\mathcal{E}(\mu)$ of eigenvalues is generated by the Bragg spectrum $\mathcal{B}$ if the system has pure point diffraction spectrum. Note that the Bragg spectrum itself need not be a group. The eigenvalues of $T$ that do not show up as Bragg peaks are called \emph{extinctions}. We refer to \cite[Rem.~9.10]{TAO} for an explicit example. However, it is an interesting observation in this context that $\mathcal{B}$, in many examples, actually \emph{is} a group, in which case one has identified the pure point part of the dynamical spectrum as well. This is the case for the Ammann--Beenker point set, so that Figure~\ref{fig:ABspec} also serves as an illustration of the dynamical spectrum. In the case of pure point diffraction, the diffraction agrees with its pure point part, and the corresponding formula of the previous section on p.~\pageref{diffraction-formula} gives $ \widehat{\gamma^{}_{\mu}} = \sum_{k\in \mathcal{B}} \|c^{}_{k}\|^2 \, \delta^{}_{k} $ with $c^{}_{k} = \Theta (1^{}_{\{k\}})$.
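In the uniquely ergodic case, one can explore the Bragg spectrum and the intensities numerically via the averaged exponential sums $c^{(R)}_{k}$ from Section~\ref{Section:Diffraction}, evaluated along a single orbit. The following sketch (Python; the patch size and the scanning grid are our own choices, and peak positions are only located up to the grid resolution) scans the finite-volume intensities $\bigl| c^{(R)}_{k} \bigr|^{2}$ for a Fibonacci chain and prints the most prominent candidates for Bragg peaks.
\begin{verbatim}
import numpy as np

# Fibonacci chain patch, as in the earlier sketch
TAU = (1 + 5**0.5) / 2
word = 'a'
for _ in range(18):
    word = ''.join('ab' if c == 'a' else 'a' for c in word)
lengths = np.where(np.array(list(word)) == 'a', TAU, 1.0)
pts = np.concatenate([[0.0], np.cumsum(lengths)[:-1]])
L = pts[-1]                       # patch length, the 'volume' for d = 1

def intensity(k):
    """Finite-volume intensity |c_k|^2, with c_k = (1/L) sum_x e^{2 pi i k x}."""
    return abs(np.exp(2j * np.pi * k * pts).sum() / L)**2

ks = np.linspace(0.0, 2.0, 20001)             # scan grid of resolution 1e-4
vals = np.array([intensity(k) for k in ks])
for i in sorted(np.argsort(vals)[-8:]):       # the eight largest values
    print(f"k = {ks[i]:.4f}   |c_k|^2 = {vals[i]:.5f}")
# k = 0 gives the squared density; the other large values sit near points
# of the (dense) Fourier module of the chain.
\end{verbatim}
Such a scan can only ever suggest candidates, of course; the mathematical statements above concern the limit $R\to\infty$ and, for individual orbits, hold almost surely or require further justification, as discussed around Remark~\ref{Remark-BT}.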
In this way, the diffraction measure can actually be used very efficiently to calculate the dynamical spectrum (in additive formulation, as we use it here). \begin{remark} The result on the equivalence of the two types of pure point spectrum has quite some history. As mentioned above, the work of Dworkin \cite{Dwo} provides the basic connection between the diffraction and the dynamical spectrum and gives in particular that pure point dynamical spectrum implies pure point diffraction spectrum; see \cite{Hof,Martin} for a discussion as well. In fact, for quite a while this was the main tool to show pure point diffraction spectrum \cite{Robbie,Sol}. For uniquely ergodic Delone dynamical systems with finite local complexity, the equivalence between the two notions of pure pointedness was then shown in \cite{LMS}. These considerations are modelled after a treatment of a related result for one-dimensional subshifts given in \cite{Q}. A different proof (sketched above), which permits a generalisation to arbitrary dynamical systems consisting of translation bounded measures, was then given in \cite{BL-1}. There, one can also find the statement that the Bragg spectrum generates the group of eigenvalues. The setting of \cite{BL-1} does not require any form of ergodicity and applies to all Delone dynamical systems (irrespective of whether they are FLC or not), though it might be difficult then to actually determine the autocorrelation explicitly, despite the closed formula given in Section~\ref{Section:Diffraction}. A generalisation of \cite{LMS} to a large class of point processes was given in \cite{Gou-1}. This work applies to all Delone dynamical systems and requires neither ergodicity nor finite local complexity. In fact, it does not even require translation boundedness of the point process, but only the weaker condition of existence of a second moment. A treatment containing both the setting of \cite{Gou-1} and \cite{BL-1} was then provided in \cite{LS} and, in a slightly different form, in \cite{LM}. These are the most general results to date. The statement on the intensity of a Bragg peak being given by the square of an $L^2$-norm and the formula for $\widehat{\gamma^{}_{\mu}}$ can be found in \cite{Lenz}. It is worth noting that the equivalence between the dynamical spectrum and the diffraction spectrum only holds in the pure point case and does not extend to other spectral types, as follows from corresponding examples in \cite{vEM}; compare Section~\ref{Section:Factor} as well. It turns out, however, that --- under suitable assumptions --- the dynamical spectrum is equivalent to a \emph{family} of diffraction spectra \cite{BLvE}. Details will be discussed in Section~\ref{Section:Factor}. \hfill $\Diamond$ \end{remark} We finish this section with a short discussion of how pure point spectrum can be thought of as providing a `Fourier expansion' for the underlying Delone sets. To achieve this, we will need a normalised version of the $c^{}_k$, with $k\in\mathcal{B}$, given by \[ \widetilde{c^{}_k} \, := \, \frac{c^{}_{k}} {\bigl(\widehat{\gamma^{}_{\mu}} (\{k\})\bigr)^{1/2} } \hspace{0.5pt} .
\] As $\Theta$ is an isometry, we obtain, from the very definition of $c^{}_k$, for any $k\in\mathcal{B}$ and any $\varphi \in \mathcal{S} (\mathbb{R}\ts^{d})$, \[ \big\langle f_\varphi \hspace{0.5pt} , \widetilde{c^{}_k}\big\rangle \, = \, \frac{ \big\langle \Theta(\widehat{\varphi}) \hspace{0.5pt} , \Theta (1^{}_{\{k\}}) \big\rangle} {\bigl(\widehat{\gamma^{}_{\mu}} ( \{k\} )\bigr)^{1/2}} \, = \, \frac{ \big\langle \widehat{\varphi} \hspace{0.5pt} , 1^{}_{\{k\}} \big\rangle } {\bigl(\widehat{\gamma^{}_{\mu}} (\{k\}) \bigr)^{1/2}} \, = \, \bigl(\widehat{\gamma^{}_{\mu}} ( \{k\} )\bigr)^{1/2} \, \widehat{\varphi} (k). \] This in particular implies the relation \begin{equation}\label{eq:fun-relation} \langle f_\varphi \hspace{0.5pt} , \widetilde{c^{}_k}\rangle \, \widetilde{c^{}_k} \, = \, \widehat{\varphi} (k) \, c^{}_k \hspace{0.5pt} . \end{equation} This formula can be found in \cite{Lenz} (with a different proof). It will be used shortly. Consider a Delone dynamical system $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d},\mu)$ with pure point diffraction and, hence, pure point dynamical spectrum. The basic aim is now to make sense out of the `naive' formula \[ \widehat{\delta^{}_{\!\varLambda'}} \, = \hspace{0.5pt} \sum_{k\in \mathcal{B}} c^{}_{k} (\varLambda') \, \delta^{}_{k} \hspace{0.5pt} . \] To do so, we consider this equation in a weak sense. Thus, we pair both sides with $\widehat{\varphi}$ for some $\varphi \in \mathcal{S} (\mathbb{R}\ts^{d})$. With $\varphi^{}_{\hspace{-0.5pt}\text{-}}$ defined by $\varphi^{}_{\hspace{-0.5pt}\text{-}} (x) = \varphi (-x)$, we can then calculate \[ \begin{split} \bigl(\widehat{\delta^{}_{\!\varLambda'}} \hspace{0.5pt} , \widehat{\varphi} \bigr) \; & = \; \bigl( \delta^{}_{\!\varLambda'} \hspace{0.5pt} , \varphi^{}_{\hspace{-0.5pt}\text{-}} \bigr) \; = \; f^{}_{\varphi} (\varLambda') \; \overset{(!)}{=} \, \sum_{k\in \mathcal{B}} \langle f^{}_{\varphi} \hspace{0.5pt} , \widetilde{c^{}_{k}}\rangle \, \widetilde{c^{}_{k}} (\varLambda') \\ & \overset{\eqref{eq:fun-relation}}{=} \sum_{k\in\mathcal{B}} \widehat{\varphi} (k) \, {c^{}_{k}} (\varLambda') \; = \, \biggl( \, \sum_{k\in\mathcal{B}} c^{}_{k} (\varLambda') \, \delta^{}_{k} \hspace{0.5pt} , \widehat{\varphi} \biggr) . \end{split} \] Here, the second step follows from the definition of $f^{}_{\varphi}$, while (!) requires a justification that is presently missing. Note, however, that $(!)$ does hold in the $L^2$ sense. Indeed, by the definition of pure point diffraction, the $\widetilde{c^{}_{k}}$, with $k\in\mathcal{B}$, form an orthonormal basis of $\mathcal{U}$, and hence \[ f^{}_{\varphi} \, = \hspace{0.5pt} \sum_{k \in \mathcal{B}} \langle f_\varphi, \widetilde{c^{}_k}\rangle \,\widetilde{c^{}_k} \] is valid. In fact, this is just the expansion of a function in an orthonormal basis. So, the problem in the above reasoning is the \emph{pointwise} evaluation of the Fourier series at $\varLambda'$. We consider this an intriguing open problem. \begin{remark} Already in the original work of Meyer \cite{M1,M2}, it was an important point to capture the harmonic properties of point sets via trigonometric approximations. This led to the theory of harmonious sets; see \cite{M-Nato,M-beyond} for a detailed summary. More recently, Meyer has revisited the problem \cite{M3} and designed new schemes of almost periodicity that should help to come closer to a direct interpretation in the sense of an expansion.
\hfill $\Diamond$ \end{remark} \section{Further relations between dynamical and diffraction spectra}\label{Section:Factor} Our approach so far has been guided by the physical process of diffraction. The latter is usually aimed at the determination (in our terminology) of the Delone set, or of as much information about it as possible, from the diffraction measure. This is a hard inverse problem, generally without a unique solution. As mentioned before, diffraction is thus tailored to one set, or to one dynamical system, and \emph{not} invariant under (metric) isomorphism of dynamical systems. This is probably the reason why, from a mathematical perspective, it has not received the attention it certainly deserves. If one comes from dynamical systems theory, which has a huge body of literature on spectral properties, it appears more natural to define a spectrum in such a way that invariance under metric isomorphism is automatic, and this was achieved by Koopman \cite{Koop}, and later systematically explored by von Neumann \cite{vN}. One celebrated result in this context then is the Halmos--von Neumann theorem, which states that two ergodic dynamical systems with pure point spectrum are (metrically) isomorphic if and only if they have the same spectrum, and that any such system has a representative in the form of an ergodic group addition on a compact Abelian group \cite{vN,HvN, CFS}. This is well in line with the discussion of pure point diffraction in the previous section (see \cite{LM} as well). There, we have seen that pure point diffraction and pure point dynamical spectrum are equivalent. So, in this case the diffraction captures essentially the whole spectral theory. A priori, it is not clear what diffraction has to say about Delone dynamical systems with mixed spectra, and the situation is indeed more complex in that case. \begin{example}\label{ex:TM} As was observed in \cite{vEM}, the subshift $\mathbb{X}^{}_{\mathrm{TM}}$ defined by the Thue--Morse (TM) substitution \[ \sigma^{}_{\mathrm{TM}} \! : \, a\mapsto ab \hspace{0.5pt} , \, b\mapsto ba \hspace{0.5pt} , \] has a mixed dynamical spectrum that is \emph{not} captured by the diffraction measure of the system; see also \cite[Secs.~4.6 and 10.1]{TAO} for a detailed discussion. Note that we use a formulation via substitutions here, but that one can easily obtain a Delone set as well, for instance by using the positions of all letters of type $a$ in a bi-infinite TM sequence (letters correspond to unit intervals this way). To expand on the structure, the dynamical spectrum consists of the pure point part ${\ts \mathbb{Z}} \bigl[ \frac{1}{2}\bigr]$ together with a singular continuous part that can be represented by a spectral measure in Riesz product form, \[ \varrho^{}_{\mathrm{TM}} \, = \prod_{\ell=0}^{\infty} \bigl( 1 - \cos(2^{\ell+1} \pi x) \bigr) , \] where convergence is understood in the vague topology (not pointwise) and where $\varrho^{}_{\mathrm{TM}}$ is a spectral measure of maximal type in the ortho-complement of the pure point sector. Now, the diffraction measure picks up $\varrho^{}_{\mathrm{TM}}$ completely, but only the trivial part of the point spectrum, which is ${\ts \mathbb{Z}}$ in this case. Nevertheless, there is a single factor, the so-called \emph{period doubling} subshift (as defined by the substitution $\sigma^{}_{\mathrm{pd}} \! : \, a \mapsto ab \hspace{0.5pt}, \, b \mapsto aa$), which has pure point spectrum (both diffraction and dynamical).
Via the equivalence in this case, one picks up the entire point spectrum, namely ${\ts \mathbb{Z}} \bigl[ \frac{1}{2}\bigr]$. The period doubling subshift emerges from the TM subshift via a simple sliding block map; see \cite[Sec.~4.6]{TAO} for details. In fact, it is possible to replace the TM system by a topologically conjugate one, also based upon a primitive substitution rule (hence locally equivalent in the sense of mutual local derivability), with the property that one restores the equivalence of the two spectral types for this system. The simplest such possibility emerges via the induced substitution for legal words of length $2$; see \cite[Sec.~5.4.1]{Q} or \cite[Sec.~4.8.3]{TAO} for details on this construction. Here, this leads to a primitive substitution rule of constant length over a $4$-letter alphabet. \hfill $\Diamond$ \end{example} It turns out \cite{BLvE} that even in the case of mixed diffraction one can capture the whole dynamical spectrum via diffraction (at least in the case of systems with finite local complexity). However, one will have to consider not only the diffraction of the original system (which is not an isomorphism invariant) but also the diffraction of a suitable set of factors (which, when taken together, provide an isomorphism invariant). This is discussed in this section, where we follow \cite{BLvE}. Let $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d},\mu)$ be a Delone dynamical system of finite local complexity. Let $T$ be the associated Koopman representation and $E_T$ the corresponding projection-valued measure. A family $ \{ \sigma_\iota \}$ of measures on $\mathbb{R}\ts^{d}$ (with $\iota$ in some index set $J$) is called a \emph{complete spectral invariant} if, for any Borel set $A\subset \mathbb{R}\ts^{d}$, one has $E_T (A) = 0$ if and only if $\sigma_\iota (A) = 0$ holds for all $\iota \in J$. An example of a complete spectral invariant is given by the family of all spectral measures $\varrho^{}_f$, with $f\in L^2 (\mathbb{X}(\varLambda),\mu)$. We will meet another such invariant shortly. Recall that a dynamical system $(\mathbb{Y},\mathbb{R}\ts^d)$ (i.e. a compact space $\mathbb{Y}$ with a continuous action of $\mathbb{R}\ts^d$) is called a \emph{factor} of $(\mathbb{X}(\varLambda),\mathbb{R}\ts^d)$ if there exists a surjective continuous map \[ \Phi \! : \, \mathbb{X} (\varLambda)\xrightarrow{\quad} \mathbb{Y} \] which intertwines the respective actions of $\mathbb{R}\ts^d$. In our context, the dynamical systems will naturally be equipped with measures, and we will additionally require that the factor map sends the measure on $\mathbb{X} (\varLambda)$ to the measure on $\mathbb{Y}$. If $\mathbb{Y} $ is the hull of a Delone set with finite local complexity, then $(\mathbb{Y}, \mathbb{R}\ts^{d},\nu)$ is called an FLC Delone factor. Of course, any FLC Delone factor comes with an autocorrelation $\gamma^{}_{(\mathbb{Y},\mathbb{R}\ts^{d},\nu)}$ and a diffraction $\widehat{\gamma}^{}_{(\mathbb{Y},\mathbb{R}\ts^{d},\nu)}$. The main abstract result of \cite{BLvE} then states that the family $ \widehat{\gamma}^{}_{(\mathbb{Y},\mathbb{R}\ts^{d},\nu)}$, where $(\mathbb{Y},\mathbb{R}\ts^{d},\nu)$ runs over all FLC Delone factors of $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d},\mu)$, is a complete spectral invariant for $T$. In fact, it is not even necessary to know the diffraction of all such factors. It suffices to know the diffraction of so-called derived factors that arise as follows. Let $P$ be a $K$-cluster of $\varLambda$.
For any $\varLambda' \in \mathbb{X} (\varLambda)$, the set of $K$-clusters of $\varLambda'$ is a subset of the $K$-clusters of $\varLambda$, as a consequence of the construction of the hull $\mathbb{X} (\varLambda)$. We may thus define the \emph{locator set} \[ T^{}_{K,P} (\varLambda') \, = \, \{ t \in \mathbb{R}\ts^{d} \mid (\varLambda' - t)\cap K = P \} \, = \, \{ t \in \varLambda' \mid (\varLambda' - t)\cap K = P \} \, \subset \, \varLambda' \hspace{0.5pt} , \] which contains the cluster reference points of all occurrences of $P$ in $\varLambda'$. Then, any $K$-cluster $P$ of $\varLambda$ gives rise to a factor \[ \mathbb{Y} \, = \, \mathbb{Y}_{K,P} \, := \, \{T^{}_{K,P} (\varLambda') \mid \varLambda' \in \mathbb{X} (\varLambda) \} \] with factor map \[ \varPhi \, = \, \varPhi_{K, P}\hspace{-0.5pt} : \; \mathbb{X} \xrightarrow{\quad} \mathbb{Y} \hspace{0.5pt} , \quad X \mapsto T^{}_{K,P} (X) \hspace{0.5pt} . \] This factor will be called the factor \emph{derived from $(\mathbb{X},\mathbb{R}\ts^{d})$ via the $K\hspace{-0.5pt}$-cluster $P$ of $\varLambda$}. It is the diffraction of these factors (for all clusters) that is a complete spectral invariant. This result is relevant on many levels. On the abstract level, it shows that the diffraction spectrum and the dynamical spectrum are equivalent in a certain sense. This may then be used to gather information on the dynamical spectrum via diffraction methods. On the concrete level, the result may even be relevant in suitably devised experimental setups. The considerations presented in this section naturally raise various questions and problems. For example, it seems that, in concrete examples, finitely many factors often suffice. Thus, it would be of interest to find criteria for when this happens. Also, it is not unreasonable to expect that, in such situations, the diffraction of a single factor (or rather of one topologically conjugate system) already suffices. Finally, it would certainly be of interest to extend the considerations to situations where FLC does not hold.
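Before moving on, let us make the locator sets underlying the derived factors more tangible. The following Python sketch is a toy illustration only: the one-dimensional point set, the window $K=[-K,K]$ and the rounding tolerance are hypothetical choices, not objects from the references. It lists, for each $K$-cluster $P$ occurring in a finite patch, the corresponding (finite-patch approximation of the) locator set $T^{}_{K,P}$:

\begin{verbatim}
# Toy computation of locator sets T_{K,P} for a finite patch of a
# one-dimensional point set; all concrete numbers are illustrative.
def locator_sets(points, K=1.0, ndig=9):
    """Group the reference points t by the K-cluster (points - t)
    restricted to [-K, K], rounded to ndig digits for comparison."""
    pts = sorted(points)
    locators = {}
    for t in pts:
        P = tuple(round(x - t, ndig) for x in pts if abs(x - t) <= K)
        locators.setdefault(P, []).append(t)
    return locators

# a periodic toy patch with one defect, standing in for Lambda'
sample = [0.0, 1.0, 2.0, 3.5, 4.5, 5.5, 6.5]
for P, T in locator_sets(sample).items():
    print("cluster", P, "-> locator set", T)
\end{verbatim}

Each key of the resulting dictionary plays the role of a cluster $P$, and its value approximates $T^{}_{K,P}$ on the patch; in any serious use, reference points closer than $K$ to the boundary of the patch would of course have to be discarded.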
\section{Continuous eigenfunctions and the maximal equicontinuous factor} \label{Section:MEF} Let $\varLambda$ be a Delone set with hull $\mathbb{X} (\varLambda)$. Then, there is a natural embedding \[ \mathbb{R}\ts^{d}\xrightarrow{\quad} \mathbb{X} (\varLambda) \hspace{0.5pt} , \quad t\mapsto t + \varLambda, \] with dense range. In this way, the hull can be seen as a compactification of $\mathbb{R}\ts^{d}$. As $\mathbb{R}\ts^{d}$ is an Abelian group, it is then a natural question whether the hull carries a group structure such that this natural embedding becomes a group homomorphism. In general, this will not be the case. Indeed, as shown in \cite{KL} for an FLC Delone set $\varLambda$, such a group structure on the hull $\mathbb{X} (\varLambda)$ will exist if and only if $\varLambda$ is completely periodic. So, the general question then becomes how close the hull is to being a group. An equivalent formulation would be how much the metric on the hull differs from being translation invariant. The concept of the maximal equicontinuous factor (which we will recall below) allows one to deal with these questions. This concept is not specific to Delone dynamical systems. It can be defined for arbitrary dynamical systems, and this is how we will introduce it. Throughout this section, we will assume that the occurring dynamical systems are minimal (meaning that each orbit is dense). This is a rather natural assumption, as we want to compare the dynamical systems to dynamical systems on groups, which are automatically minimal. \smallskip A dynamical system $(\mathbb{T}, \mathbb{R}\ts^{d})$ is called a \emph{rotation on a compact group} if $\mathbb{T}$ is a compact group and there exists a group homomorphism \[ \xi \! : \, \mathbb{R}\ts^{d} \xrightarrow{\quad} \mathbb{T} \] with dense range inducing the action of $\mathbb{R}\ts^{d}$ on $\mathbb{T}$ via \[ t \cdot u \, := \, \xi (t) \hspace{0.5pt} u \] for all $u\in\mathbb{T}$ and $t\in\mathbb{R}\ts^{d}$. (Here, $\xi (t) u$ denotes the product in the group $\mathbb{T}$ of the two elements $\xi(t)$ and $u$.) As $\xi$ has dense range, the group $\mathbb{T}$ must necessarily be Abelian. It is well known (see e.g.\ \cite{ABKL} for a recent discussion) that any rotation on a compact group is strictly ergodic (meaning uniquely ergodic and minimal) and has pure point spectrum with only continuous eigenfunctions (and the eigenvalues are just given by the dual of the group $\mathbb{T}$). The \emph{maximal equicontinuous factor} (MEF) of a minimal dynamical system $(\mathbb{X}, \mathbb{R}\ts^{d})$ is then the largest rotation on a compact group $(\mathbb{T},\mathbb{R}\ts^{d})$ which is a factor of $(\mathbb{X},\mathbb{R}\ts^{d})$. It will be denoted as $(\mathbb{X}_{\mathsf{mef}},\mathbb{R}\ts^{d})$ and the factor map will be denoted as \[ \Psi_{\mathsf{mef}} \! : \, \mathbb{X} \xrightarrow{\quad} \mathbb{X}_{\mathsf{mef}} \hspace{0.5pt} . \] With this factor map at our disposal, the question of how close $\mathbb{X}$ is to being a group becomes the question of how much $\Psi_{\mathsf{mef}}$ differs from being bijective. In this context, one can naturally distinguish three different regimes: \begin{itemize} \item The map $\Psi_{\mathsf{mef}}$ is one-to-one everywhere (so, every point of $\mathbb{X}_{\mathsf{mef}}$ has exactly one inverse image). In this case, the hull carries the structure of a compact Abelian group (as it is isomorphic to $\mathbb{X}_{\mathsf{mef}}$). \item The map $\Psi_{\mathsf{mef}}$ is one-to-one almost everywhere (so, almost every point of $\mathbb{X}_{\mathsf{mef}}$ has exactly one inverse image). In this case, the hull is called an \emph{almost one-to-one extension} of its MEF. \item The map $\Psi_{\mathsf{mef}}$ is one-to-one in (at least) one point (meaning that there exists a point of $\mathbb{X}_{\mathsf{mef}}$ with exactly one inverse image). In this case, the hull is called an \emph{almost automorphic system}. \end{itemize} \begin{remark} Indeed, quite a substantial part of the general theory of the MEF is devoted to studying these three regimes \cite{Auslander88}. However, various other cases have been considered as well. This concerns in particular situations where the condition to be one-to-one is replaced by being $m$-to-one with a fixed integer $m$. In this context, there is an emerging theory centered around the notion of coincidence rank; see \cite{ABKL} for a recent survey. In the special case $m=2$, which occurs for instance for the TM subshift of Example~\ref{ex:TM} or for the twisted silver mean chain \cite{BG}, interesting and strong results are possible because such an index-$2$ extension is quite restrictive; compare \cite{Hel} and \cite[Sec.~3.6]{Q} for background. \hfill $\Diamond$ \end{remark} Here, we are concerned with the situation that $\mathbb{X} = \mathbb{X} (\varLambda)$ is the hull of a Delone set $\varLambda$. 
In this case, particular attention has been paid to the situation that $\varLambda$ is a Meyer set. Then, the corresponding parts of \cite{Aujogue,BLM,KS} can be summarised as saying that these three regimes correspond exactly to $\varLambda$ being crystallographic, a regular model set, and a model set, respectively. We refrain from giving precise definitions or proofs but rather refer the reader to \cite{ABKL} for a recent discussion; see \cite{Kel} as well. Next, we will provide an explicit description of the MEF for Delone dynamical systems. In fact, it is not hard to see that a similar description can be given for rather general dynamical systems as well. For further details and references, we refer the reader to \cite{ABKL}; see \cite{BLM} as well. Let $\mathcal{E}_{\mathsf{top}}$ be the set of continuous eigenvalues of $(\mathbb{X}(\varLambda), \mathbb{R}\ts^{d})$. Here, an eigenvalue $k\in \mathbb{R}\ts^{d}$ is called a \emph{continuous eigenvalue} of $(\mathbb{X}(\varLambda), \mathbb{R}\ts^{d})$ if there exists a continuous non-vanishing function $f \! : \, \mathbb{X} (\varLambda) \xrightarrow{\quad} \mathbb{C}\ts$ with \[ f (t + \varLambda') \, = \, \mathrm{e}^{2 \pi \mathrm{i} k t} f(\varLambda') \] for all $t\in\mathbb{R}\ts^{d}$ and $\varLambda'\in \mathbb{X} (\varLambda)$. It is not hard to see that the set of continuous eigenvalues is an (Abelian) group. We equip this set with the discrete topology. Then, the Pontryagin dual $\widehat{\mathcal{E}_{\mathsf{top}}}$ of this group, which is the set of all group homomorphisms \[ \mathcal{E}_{\mathsf{top}} \xrightarrow{\quad} \mathbb{S}^{1} \, = \, \{z \in \mathbb{C}\ts : |z| =1\} \hspace{0.5pt} , \] will be a compact group. In line with our previous convention, we shall write this group additively and denote it by $\mathbb{T}$. There is a natural group homomorphism \[ \xi \! : \, \mathbb{R}\ts^{d} \xrightarrow{\quad} \mathbb{T} \quad \text{with } \, \xi(t) (k) := \mathrm{e}^{2 \pi \mathrm{i} t k} \] for all $t\in\mathbb{R}\ts^{d}$ and $k\in \mathcal{E}_{\mathsf{top}}$. In this way, $(\mathbb{T},\mathbb{R}\ts^{d})$ becomes a rotation on a compact Abelian group. Also, $(\mathbb{T},\mathbb{R}\ts^{d})$ is a factor of $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d})$. Indeed, choose for each $k\in \mathcal{E}_{\mathsf{top}}$ the unique continuous eigenfunction $f_k$ with $f_k (\varLambda) =1$. Then, the map \[ \mathbb{X}(\varLambda)\xrightarrow{\quad} \mathbb{T} \, = \, \widehat{\mathcal{E}_{\mathsf{top}}} \hspace{0.5pt} , \quad \varLambda'\mapsto (k\mapsto f_k (\varLambda')) \hspace{0.5pt} , \] can easily be seen to be a factor map. Via this factor map, the dynamical system $(\mathbb{T},\mathbb{R}\ts^{d})$ is the MEF of $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d})$. The preceding considerations show that there is a strong connection between continuous eigenfunctions and the MEF. Somewhat loosely speaking, one may say that the MEF stores all information on continuous eigenvalues. In this context, dynamical systems coming from Meyer sets $\varLambda$ play a special role. Indeed, this could already be seen from the discussion above relating a hierarchy of Meyer sets to injectivity properties of the factor map $\Psi_{\mathsf{mef}}$. 
It is also visible in recent results in \cite{KS}, showing that the dynamical system $(\mathbb{X} (\varLambda),\mathbb{R}\ts^{d})$ coming from a Delone set with FLC has $d$ linearly independent continuous eigenvalues if and only if it is conjugate to a dynamical system $(\mathbb{X}(\widetilde{\varLambda}),\mathbb{R}\ts^{d})$ with $\widetilde{\varLambda}$ a Meyer set. In this sense, Delone dynamical systems with FLC and `many' continuous eigenvalues are systems coming from Meyer sets. Continuous eigenvalues also play a role in diffraction theory, as we discuss next. In Section~\ref{Section:Diffraction}, we have seen how the autocorrelation of $(\mathbb{X}(\varLambda),\mathbb{R}\ts^{d},\mu)$ can be computed by a limiting procedure for $\mu$-almost every element $\varLambda' \in \mathbb{X} (\varLambda)$ if $\mu$ is ergodic, and for all $\varLambda' \in \mathbb{X} (\varLambda)$ if the system is uniquely ergodic. In this context, we have also discussed the validity of the formula \[ \widehat{\gamma } (\{k\}) \, = \lim_{R\to \infty} \biggl| \frac{1}{ \vol (B^{}_{R} (0))} \sum_{x\in \varLambda'\cap B^{}_{R} (0)} \mathrm{e}^{2 \pi \mathrm{i} k x} \biggr|^2 \] for almost every $\varLambda'$ in the ergodic case. Now, in the uniquely ergodic case, this formula can be shown to hold even for all $\varLambda'$, provided the eigenvalue $k$ is continuous \cite{Lenz}; see \cite{Rob} for related earlier work as well. \begin{remark} As discussed in Remark \ref{Remark-BT}, the validity of such a formula is known for sets coming from primitive substitutions as well as for regular model sets. In both cases, the associated Delone dynamical system is uniquely ergodic with only continuous eigenvalues. So, the mentioned work \cite{Lenz} provides a unified structural treatment. \hfill $\Diamond$ \end{remark} It is an interesting open problem to what extent such a formula is valid beyond the case of continuous eigenfunctions. For example, it is shown in \cite{Lenz} that such a formula holds for all linearly repetitive systems, even though such systems may have discontinuous eigenfunctions \cite{BDM}. Also, the formula can be shown for weak model sets of extremal density \cite[Prop.~6]{BHS}, where continuity of eigenfunctions generally fails. It then also holds for generic elements in the corresponding hull, equipped with a natural patch frequency measure. Moreover, nonperiodic measures with locally finite support \emph{and} spectrum, as recently constructed in \cite{M-new}, are further examples with well-defined amplitudes. So, there is room for generalisation, and hence work to be done to clarify the situation. \section{Quasicrystals and hulls of quasiperiodic functions}\label{Section:qpf} So far, we have (mostly) considered the dynamical system $(\mathbb{X} (\varLambda), \mathbb{R}\ts^{d})$ arising from a Delone set $\varLambda$. Special emphasis has been placed on the case that this system is minimal and uniquely ergodic with pure point dynamical spectrum and only continuous eigenfunctions. Indeed, these are the systems to which all results of the preceding four sections apply. In particular, these systems have pure point diffraction, the set of (continuous) eigenvalues is a group generated by the Bragg spectrum, and \[ \Psi_{\mathsf{mef}} \! : \, \mathbb{X} (\varLambda)\xrightarrow{\quad} \mathbb{T} \] is the factor map to its MEF, where $\mathbb{T}$ is given as the dual group of the group of eigenvalues. 
While it is not clear at present what a mathematical definition of a quasicrystal should be, it seems reasonable that such systems should fall into the class of quasicrystals. At the same time, certain quasiperiodic functions are also sometimes treated under the label of quasicrystals. In this section, we compare these two approaches and also compute the diffraction of a quasiperiodic function. This will actually reveal an important structural difference in the diffraction measure, which seems to favour Delone sets as mathematical models for quasicrystals over a description via quasiperiodic functions. \bigskip Let $\mathcal{C}$ be a countable subset of $\mathbb{R}\ts^{d}$. Let $a^{}_{k}$, $k\in \mathcal{C}$, be non-vanishing complex numbers that satisfy the summability condition \[ \sum_{k\in \mathcal{C} } | a^{}_{k} | \, < \, \infty \hspace{0.5pt} . \] Denote the subgroup of $\mathbb{R}\ts^{d}$ generated by $\mathcal{C}$ by $\mathcal{E}'$, so $\mathcal{E}' = \langle \mathcal{C} \rangle$. Define \[ u \! : \, \mathbb{R}\ts^{d}\xrightarrow{\quad} \mathbb{C}\ts \hspace{0.5pt} , \quad u (x) \, = \sum_{k\in \mathcal{C}} a^{}_{k} \, \mathrm{e}^{ 2 \pi \mathrm{i} k x} . \] By the summability condition, the sum is absolutely convergent and the function is continuous and bounded. In fact, such functions are known as \emph{quasiperiodic functions} in the sense of Bohr; see \cite{Cord,Katz}, or \cite[Sec.~8.2]{TAO} for a short summary. Clearly, we can view a bounded continuous function $f$ as a Radon--Nikodym density relative to Lebesgue measure, and then identify $f$ with the translation bounded measure defined that way. Consequently, we can equip the set of such functions with the vague topology induced from measures. In particular, we can consider the \emph{hull} of $f$ defined by \[ \mathbb{X} (f) \, := \, \overline{\{ f(\cdot - t) \mid t\in\mathbb{R}\ts^{d}\} } \hspace{0.5pt} , \] where the closure is taken in the vague topology on measures. Then, $\mathbb{X} (f)$ is compact and $\mathbb{R}\ts^{d}$ acts continuously via translations on it (see e.g.\ \cite{BL-1}). Thus, we are given a dynamical system $(\mathbb{X} (f),\mathbb{R}\ts^{d})$. Assume now that $f = u$ is the quasiperiodic function introduced above. Then, the closure $\mathbb{X} (u)$ actually agrees with the closure of the translates of $u$ in the topology of uniform convergence, so \[ \mathbb{X} (u) \, = \, \overline{\{ u(\cdot - t) \mid t\in \mathbb{R}\ts^{d} \} }^{\|\cdot\|_\infty} . \] In particular, all elements of $\mathbb{X} (u)$ (which are a priori only measures) are continuous bounded functions. Moreover, by the standard theory of almost periodic functions, compare \cite{Loomis} or \cite[Sec.~8.2]{TAO} and references given there, this closure has the structure of an Abelian group. More specifically, define \[ \xi \! : \, \mathbb{R}\ts^{d} \xrightarrow{\quad} \mathbb{X} (u) \hspace{0.5pt} , \quad t\mapsto u ( \cdot -t) \hspace{0.5pt} . \] Then, there exists a unique group structure on $\mathbb{X} (u)$ making $\xi$ a homomorphism of Abelian groups (see \cite{LR} as well for a recent discussion). This homomorphism has dense range, and the translation action of $\mathbb{R}\ts^{d}$ on $\mathbb{X} (u)$ is given by \[ \mathbb{R}\ts^{d}\times \mathbb{X} (u)\xrightarrow{\quad} \mathbb{X} (u) \hspace{0.5pt} , \quad (t, v) \mapsto v(\cdot - t) = \xi (t) \cdot v \hspace{0.5pt} . \] Thus, $(\mathbb{X}(u),\mathbb{R}\ts^{d})$ is a rotation on a compact group (in the notation of Section~\ref{Section:MEF}). 
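It is instructive to probe the almost periodic structure of $u$ numerically. The following Python sketch (in $d=1$, with an arbitrarily chosen finite frequency set $\mathcal{C}$ and coefficients $a_k$; all concrete numbers are purely illustrative) approximates the volume-averaged exponential sums that will reappear below as the Fourier--Bohr coefficients of $u$:

\begin{verbatim}
# Numerical illustration (d = 1): the averaged integral
#   (1 / vol(B_R)) \int_{B_R} u(x) exp(-2 pi i k x) dx
# approaches a_k for k in C, and 0 otherwise, as R grows.
# Frequency set C and amplitudes a_k are arbitrary toy choices.
import numpy as np

C = np.array([0.0, 1.0, np.sqrt(2)])      # toy frequency set
a = np.array([0.5, 0.3, 0.2])             # summable coefficients

def u(x):
    return np.sum(a[:, None] * np.exp(2j*np.pi*C[:, None]*x), axis=0)

def fourier_bohr(k, R=2000.0, n=400000):
    x = np.linspace(-R, R, n)             # uniform grid on B_R(0)
    return np.mean(u(x) * np.exp(-2j*np.pi*k*x))

for k in [0.0, 1.0, np.sqrt(2), 0.5]:     # the last k is not in C
    print(k, abs(fourier_bohr(k)))
\end{verbatim}

For the first three values of $k$, the printed moduli approach $|a_k|$, while the last one tends to zero as $R$ grows; this is precisely the mean-value structure that is exploited in the computation of $\gamma^{}_{u}$ below.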
In particular, $(\mathbb{X}(u),\mathbb{R}\ts^{d})$ is strictly ergodic and has pure point dynamical spectrum with only continuous eigenfunctions. Now, it is not hard to see that \[ \mathcal{C} \, = \, \Big\{ k\in \mathbb{R}\ts^{d} \, \Big| \lim_{R\to \infty} \frac{ \int_{B^{}_{R} (0)} u (x) \, \mathrm{e}^{- 2 \pi \mathrm{i} k x} \, \mathrm{d} x } {\vol (B^{}_{R} (0))} \neq 0 \Big\} . \] So, by the standard theory of almost periodic functions, we infer \[ \mathcal{E}' \, = \, \langle \mathcal{C} \rangle \, = \, \widehat{\mathbb{X} (u)} \hspace{0.5pt} . \] Dualising once more, we infer \[ \mathbb{X} (u) \, = \, \widehat{\mathcal{E}'} \hspace{0.5pt} . \] Assume now that the group $\mathcal{E}'$ is the group of eigenvalues of the uniquely ergodic minimal system $(\mathbb{X} (\varLambda),\mathbb{R}\ts^{d})$ with pure point spectrum, which has only continuous eigenfunctions. Then, its dual group $\widehat{\mathcal{E}'}$ is the MEF of $(\mathbb{X} (\varLambda),\mathbb{R}\ts^{d})$, as discussed at the beginning of this section. Moreover, as just derived, this dual group is isomorphic to $\mathbb{X} (u)$. Putting this together, we see that the map $\Psi_{\mathsf{mef}}$ can be considered as a map \[ \Psi_{\mathsf{mef}} \! : \, \mathbb{X} (\varLambda)\xrightarrow{\quad} \mathbb{X} (u) \hspace{0.5pt} . \] In terms of the associated dynamical systems, we thus find a precise relationship between the hulls of $\varLambda$ and of $u$: one is a factor of the other and, in fact, a special one via the connection with the MEF. These considerations can be slightly generalised as follows. Let $(\mathbb{X} (\varLambda),\mathbb{R}\ts^{d})$ be uniquely ergodic with pure point spectrum, only continuous eigenfunctions, and group of eigenvalues $\mathcal{E}$. If the group $\mathcal{E}' = \langle \mathcal{C} \rangle$ is only a subgroup of $\mathcal{E}$, we would still get a factor map \[ \Psi \! : \, \mathbb{X}(\varLambda)\xrightarrow{\quad} \mathbb{X} (u) \hspace{0.5pt} , \] as, in this case, the dual of the group $\mathcal{E}'$ can easily be seen to be a factor of $\widehat{\mathcal{E}}$. \begin{remark} The preceding considerations naturally raise the question whether, for any countable set $\mathcal{C}$ and the induced group $\mathcal{E}'$, one can find a uniquely ergodic minimal Delone dynamical system with pure point spectrum, only continuous eigenvalues, and dynamical spectrum $\mathcal{E}'$. The answer to this question is positive. In fact, it is even possible to find a Meyer set $\varLambda$ such that its hull $\mathbb{X} (\varLambda)$ has the desired properties. Indeed, the work of Robinson \cite{Rob} gives that, for any countable subgroup of $\mathbb{R}\ts^{d}$, one can find a cut and project scheme whose torus is just the dual of the subgroup. Then, any model set arising from a regular window in this cut and project scheme will be such a Meyer set \cite{Martin}. \hfill $\Diamond$ \end{remark} It is possible to set up a diffraction theory for the elements of $\mathbb{X} (u)$ along the same lines as for $\mathbb{X} (\varLambda)$. Indeed, if both $u$ and $\varLambda$ are considered as translation bounded measures, there is virtually no difference in the framework, and this is the point of view proposed in \cite{BL-1}. As it is instructive, let us discuss the diffraction theory of $u$. As before, we consider $u$ as a measure by viewing it as a Radon--Nikodym density relative to Lebesgue measure $\lambda$. Then, the measure $\widetilde{u \lambda}$ is given by $\widetilde{u} \lambda$. 
Consequently, the autocorrelation of $u$ can then simply be written as \[ \gamma^{}_{u} \, := \lim_{R\to \infty} \frac{u^{}_{R} \ast \widetilde{u^{}_{R}}} {\vol (B^{}_{R} (0))} \hspace{0.5pt} , \] where we use the shorthand $u^{}_{R} = u|^{}_{B_{R} (0)}$ for the restriction of $u$ to the ball of radius $R$ around $0$. Of course, the existence of the limit has still to be established. Before we do this, via an explicit calculation, let us pause for a very simple special case. \begin{example}\label{ex:one-diffraction} Consider $u\equiv 1$, hence Lebesgue measure itself. Then, a simple calculation with the volume-averaged convolution, compare \cite[Ex.~8.10]{TAO}, gives $\gamma^{}_{u} = \lambda$, and thus diffraction $\widehat{\gamma^{}_{u}} = \delta^{}_{0}$, which is a \emph{finite} pure point measure. Indeed, as we shall see later, this is an important distinction to the diffraction of a Delone set. \hfill $\Diamond$ \end{example} To proceed with the general case, we will need two ingredients: \begin{itemize} \item One of the characteristic functions can be removed in the definition of $\gamma_u$. In particular, assuming existence of the limit, we have \[ \gamma^{}_{u} \, := \lim_{R\to \infty} \frac{u^{}_{R} \ast \widetilde{u}}{\vol (B^{}_{R} (0))} \hspace{0.5pt} . \] (This is well-known and can be seen by a direct computation; compare \cite{Martin,BL-1}). \item For any $k\in\mathbb{R}\ts^{d}$, the limit \[ \lim_{R\to \infty} \frac{1}{\vol (B^{}_{R} (0))} \int_{B_{R} (0)} \mathrm{e}^{- 2 \pi \mathrm{i} k x} u (x) \, \mathrm{d} x \] exists. It is $a^{}_{k}$ if $k \in \mathcal{C}$ and $0$ otherwise. (This is the formula for the Fourier--Bohr coefficient of $u$. It is easy to see by direct computation and well-known in the theory of Bohr almost periodic functions; see \cite{Cord,Katz} or \cite[Thm.~8.2]{TAO}.) \end{itemize} Equipped with these two pieces of preparation, we are now going to compute $\gamma^{}_{u}$. Let $g\in \mathcal{S}$ be arbitrary. Using the first ingredient, we find \[ \gamma^{}_{u} (g) \, = \lim_{R\to \infty} \frac{\bigl( u^{}_{R} \hspace{-0.5pt} \ast \widetilde{u} \hspace{0.5pt} \bigr) (g)} {\vol (B^{}_{R} (0))} \hspace{0.5pt} . \] Direct computations then give \begin{eqnarray*} \bigl( u^{}_{R} \hspace{-0.5pt} \ast \widetilde{u}\hspace{0.5pt} \bigr) (g) &=& \int_{\mathbb{R}\ts^{d}} \int_{\mathbb{R}\ts^{d}} u^{}_{R} (y) \, \overline{u} (y-x) \, g (x) \, \mathrm{d} y \, \mathrm{d} x \\[2mm] &=& \int_{\mathbb{R}\ts^{d}} \int_{\mathbb{R}\ts^{d}} u^{}_{R} (y) \sum_{k\in\mathcal{C}} \overline{a^{}_{k}}\, \mathrm{e}^{ - 2 \pi \mathrm{i} k (y-x)} g(x) \, \mathrm{d} y \, \mathrm{d} x \\ &=& \sum_{k\in\mathcal{C}} \overline{a^{}_{k}} \int_{\mathbb{R}\ts^{d}} \int_{\mathbb{R}\ts^{d}} u^{}_{R} (y) \, \mathrm{e}^{ - 2 \pi \mathrm{i} k y}\hspace{0.5pt} \mathrm{e}^{ 2 \pi \mathrm{i} k x} g(x) \, \mathrm{d} y \, \mathrm{d} x \\ &=& \sum_{k\in\mathcal{C}} \overline{a^{}_{k}} \int_{\mathbb{R}\ts^{d}} u^{}_{R} (y)\, \mathrm{e}^{ - 2 \pi \mathrm{i} k y} \left( \int_{\mathbb{R}\ts^{d}} \mathrm{e}^{ 2 \pi \mathrm{i} k x} \hspace{0.5pt} g(x) \, \mathrm{d} x \right) \, \mathrm{d} y \\ &=& \sum_{k\in\mathcal{C}} \overline{a^{}_{k}} \, F^{-1} (g)(k) \int_{B_{R} (0)} u (y) \, \mathrm{e}^{ -2 \pi \mathrm{i} k y} \, \mathrm{d} y \hspace{0.5pt} . \end{eqnarray*} Here, the second line follows from the definition of $u$, while Fubini's theorem was employed in the penultimate step. 
Finally, the last step relies on the observation that the integral over $x$ just gives the inverse Fourier transform $F^{-1} (g)$ of $g$. Using the preceding computation, the second ingredient, and the summability of the $(a^{}_{k})$, we then find \[ \gamma^{}_{u} (g) \, = \sum_{k\in \mathcal{C}} |a^{}_{k} |^2 \hspace{0.5pt} F^{-1} (g) (k) \hspace{0.5pt} . \] As this holds for all $g\in \mathcal{S}$, we obtain \[ \gamma^{}_{u} \, = \, \biggl(\, \sum_{k\in \mathcal{C}} |a^{}_{k} |^2 \, \delta^{}_{k} \biggr) \circ {F^{-1} } \hspace{0.5pt} . \] Taking one more Fourier transform, and recalling $(\widehat{T}, g) = (T, \widehat{g}\hspace{0.5pt} )$ for distributions $T$, we then find \[ \widehat{\gamma^{}_{u}} \, = \sum_{k\in \mathcal{C}} |a^{}_{k} |^2 \, \delta^{}_{k} \hspace{0.5pt} . \] So, $\widehat{\gamma^{}_{u}}$ is a pure point measure with its set of atoms given by $\mathcal{C}$. \begin{remark} Due to the summability of the $(a^{}_{k})$, the $|a^{}_{k}|^2$ are also summable, and the pure point measure $\widehat{\gamma_u}$ is \emph{finite}, thus generalising the finding of Example~\ref{ex:one-diffraction}. In fact, one has the relation \[ \sum_{k \in \mathcal{C}} \lvert a^{}_{k} \rvert^{2} \, = \lim_{R\to\infty} \frac{1}{\vol (B_{R} (0))} \int_{B_{R} (0)} \lvert u (x) \rvert^{2} \, \mathrm{d} x \hspace{0.5pt}. \] This formula, which is not hard to derive from our above considerations, is nothing but Parseval's identity for Bohr almost periodic functions \cite[Thm.~I.1.18]{Cord}. This way, one can see immediately why the diffraction measure $\widehat{\gamma^{}_{u}}$ must be a \emph{finite} measure. This is an important structural difference to the case of Delone sets. \hfill $\Diamond$ \end{remark} Let us add the comment that this innocent-looking observation, with hindsight, sheds some light on the old dispute about the `right' model for the description of quasicrystals, between the quasiperiodic function approach and the tiling or Delone set approach. While the former leads to finite diffraction measures, the latter does not; compare \cite[Rem.~9.11]{TAO} for a simple argument in the context of cut and project sets, and \cite{Nicu+} as well as \cite[Rem.~9.12]{TAO} for an argument in the more general situation of Meyer sets. Now, the experimental findings seem to indicate the existence of series of Bragg peaks with growing $k$ and converging intensity, which is not compatible with a finite diffraction measure in the infinite volume limit. \begin{remark} The Fourier--Bohr coefficients $a^{}_{k}$, as volume-averaged integrals, can once again be interpreted as \emph{amplitudes} in our above sense, and it is then no surprise that the intensities of the Bragg peaks are once again given as the absolute squares of these amplitudes. This is another indication that there is more to be done in this direction. \hfill $\Diamond$ \end{remark} \section*{Acknowledgements} The authors would like to thank the organizers of the \textit{3rd Bremen Winter School and Symposium:\ Diffusion on Fractals and Nonlinear Dynamics} (2015) for setting up a most stimulating event, which in particular inspired the material presented in Section~\ref{Section:qpf}. This work was supported by the German Research Foundation (DFG), within the CRC 701. \bigskip
\section{Introduction} The rate at which star clusters lose mass has been one of the enduring problems of stellar dynamics. In one of the earliest results, \citet{amb38} already highlighted the role of relaxation. The landmark survey of \citet{1990ApJ...351..121C} added a mass spectrum, stellar evolution and a tidal boundary, and also revealed the importance of the initial structure of the star cluster. But several other factors also influence the lifetime of star clusters, including the binary population (e.g. \citealt{2009PASJ...61..721T}), the form of the Galactic orbit (e.g. \citealt{2003MNRAS.340..227B}), the form of the Galactic potential and tidal shocking (e.g. \citealt{1997ApJ...474..223G}), and the crossing time scale \citep{2013ApJ...778..118W}. In this Letter we add one more influence: natal kicks of neutron stars (NS). Though neutron stars may account for less than 2\% of the cluster by mass, we find, astonishingly, that the presence or absence of kicks may change the lifetime of a star cluster by almost a factor of four. Though the existence of natal kicks of neutron stars is not in doubt, their distribution and dispersion are difficult to establish (see, for example, \citealt{2005ASPC..328..327P}). In order to isolate the effect of this one factor we consider models from another landmark survey of the evolution of star clusters: that by \citet[][hereafter BM03]{2003MNRAS.340..227B}. As it happens, they imposed no natal kicks on neutron stars, and it was the attempt to reproduce some of their results that led to our discovery. Indeed their principal models, which begin with a King profile with $W_0=5$, evolve very differently, both qualitatively and quantitatively, if natal kicks are applied. The particular models we considered are described in the following section, while Sect.~\ref{Results} presents our results in some detail, including some information on core collapse and mass segregation. The final section summarises our conclusions, and attempts to interpret them in the context of other recent work. \section{Description of the Runs}\label{Simulation} We simulate the evolution of a globular cluster as in \citetalias{2003MNRAS.340..227B}, but using \textsc{NBODY6} \citep{2012MNRAS.424..545N}. We have performed a survey of simulations in an accelerating, non-rotating frame, using a number of particles between $N=8192$ and $N=131072$, a Kroupa IMF \citep{2001MNRAS.322..231K}, with the mass of the stars between $0.1 \ M_\odot$ and $15 \ M_\odot$ (resulting in a theoretical mean mass $\left \langle m \right \rangle=0.547 \ M_\odot$), and metallicity $Z=0.001$. Natal kicks, when they were applied, had a Maxwellian distribution with $\sigma = 190 \ \text{km} \ \text{s}^{-1}$ \citep[see eq.~3 in][]{1997MNRAS.291..569H}. 
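For orientation, such a kick distribution is straightforward to sample: a Maxwellian speed distribution with one-dimensional dispersion $\sigma$ arises from drawing the three Cartesian velocity components as independent Gaussians of width $\sigma$. The short Python sketch below illustrates the procedure (the escape velocity used for the retention estimate is an arbitrary illustrative number, not taken from one of our models):

\begin{verbatim}
# Sample natal-kick speeds from a Maxwellian with 1d dispersion
# sigma = 190 km/s: each Cartesian component is an independent
# Gaussian of width sigma.  The escape velocity is illustrative only.
import numpy as np

rng = np.random.default_rng(1)
sigma = 190.0                     # km/s
n = 100000

vx, vy, vz = rng.normal(0.0, sigma, size=(3, n))
speed = np.sqrt(vx**2 + vy**2 + vz**2)

print("mean kick speed: %.0f km/s" % speed.mean())  # 2*sigma*sqrt(2/pi), ~300
v_esc = 20.0                      # km/s, toy cluster escape speed
print("retained fraction: %.5f" % np.mean(speed < v_esc))
\end{verbatim}

With typical kick speeds of a few hundred km\,s$^{-1}$, far above the escape velocities of the low-mass clusters modelled here, essentially all neutron stars leave the cluster promptly when kicks are applied; this prompt ejection is the physical origin of the effect studied below.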
\begin{table*} \begin{minipage}{126mm} \caption{$N$-body\ simulation properties} \label{parameter} \resizebox{\linewidth}{!}{ \begin{tabular}{@{}lcccccccccc} \Xhline{2\arrayrulewidth} \\[-2ex] Model & $N$ & $W_0$ & $e$ & $M_0$ & $r_h$ & $r_J$ & $T_{diss}$ & ${T_{diss}^{BM}}$ & $T_{cc}$&$T_{cc}^{BM}$\\ \ & \ & \ & \ & $\left[M_\odot\right]$ & $\left[pc\right]$ & $\left[pc\right]$ & [Myr] & [Myr] & [Myr]&[Myr]\\ \\[-2ex] \Xhline{2\arrayrulewidth} \\[-2ex] 8kK & 8192 & 5.0 & 0.0 & 4497.3 & 4.53 & 24.35 & 2426 & - & 2666 & -\\ 16kK & 16384 & 5.0 & 0.0 & 8990.7 & 5.73 & 30.67 & 2816 & - & - & -\\ 32kK & 32768 & 5.0 & 0.0 & 18419.2 & 7.23 & 38.96 & 3669 & - & - & -\\ 64kK & 65536 & 5.0 & 0.0 & 36183.1 & 9.10 & 48.79 & 4516 & - & - & -\\ 128kK* & 131072 & 5.0 & 0.0 & 71422.0 & 11.46 & 61.21 & 5927 & - & - & -\\ \\[-2ex] \Xhline{1.5\arrayrulewidth} \\[-2ex] 8kN & 8192 & 5.0 & 0.0 & 4497.3 & 4.53 & 24.35 & 4137 & 4149 & 3142 & 3329\\ 16kN & 16384 & 5.0 & 0.0 & 8990.7 & 5.73 & 30.67 & 5932 & 6348 & 4810 & 5062\\ 32kN & 32768 & 5.0 & 0.0 & 18419.2 & 7.23 & 38.96 & 9384 & 9696 & 7788 & 8412\\ 64kN & 65536 & 5.0 & 0.0 & 36183.1 & 9.10 & 48.79 & 14414 & 15197 & 12375 & 13193\\ 128kN & 131072 & 5.0 & 0.0 & 71659.0 & 11.46 & 61.27 & 22707 & 23769 & 20307 & 21339\\ \\[-2ex] \Xhline{2\arrayrulewidth} \\[-2ex] 128kKe & 131072 & 5.0 & 0.5 & 71453.0 & 5.50 & 29.43 & 5479 & - & 6859 & -\\ 128kNe & 131072 & 5.0 & 0.5 & 71453.0 & 5.50 & 29.43 & 11254 & 11675 & 8952 & 9332\\ 128kK7 & 131072 & 7.0 & 0.0 & 71780.9 & 7.14 & 61.31 & 18369 & - & 18267 & - \\ 128kN7 & 131072 & 7.0 & 0.0 & 71780.9 & 7.14 & 61.31 & 24494 & 25506 & 11886 & 12620\\ \\[-2ex] \Xhline{2\arrayrulewidth} \\[-2ex] \end{tabular}} \medskip Note. --- The capital letter in the model label indicates whether the model is characterized by the presence (\#K, e.g. 128kK) or the absence (\#N, e.g. 128kN) of NS initial kicks. The star (*) denotes a model for which two different numerical realizations have been evolved; the values are the average of those for the two simulations. \end{minipage} \end{table*} In our simulations, the cluster is in a circular orbit, or in an elliptical orbit with eccentricity $e=0.5$, in a logarithmic Galactic potential $\phi={V_G}^2\ln(R_G)$, where ${V_G}$ is the circular velocity and $R_G$ is the Galactocentric distance. For the majority of our runs we have used a Roche-lobe filling \citet{1966AJ.....71...64K} model with $W_0=5$ as initial condition. Additional simulations have been performed by increasing the initial concentration of the King profile ($W_0=7$). The clusters start at a Galactic radius of $8.5$ kpc, with an initial velocity of $220 \ \text{km} \ \text{s}^{-1}$ (in the circular case); in the elliptical case the apogalacticon is at $8.5$ kpc and the initial speed there is reduced appropriately, while the size of the cluster is determined by assuming a Roche-lobe filling condition at perigalacticon. The initial conditions for all the simulations have been generated using M{\sevensize C}L{\sevensize USTER} \citep{2011MNRAS.417.2300K}. The tidal radius of the cluster was defined as the Jacobi radius \begin{equation} r_J=\left(\frac{G M}{2{V_G}^2}\right)^{\nicefrac{1}{3}}{R_G^{\nicefrac{2}{3}}}, \end{equation} where $M$ is the ``bound'' cluster mass. The quantities $M$ and $r_J$ were determined self-consistently and iteratively by first assuming that all stars are still bound and calculating the tidal radius with this formula. In a second step, we calculated the mass of all stars inside $r_J$ relative to the density centre of all stars, and used it to obtain a new estimate for $r_J$. This method was repeated until convergence. 
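Schematically, this self-consistent determination amounts to the following fixed-point iteration (a minimal Python sketch only; the determination of the density centre and the bookkeeping of the production code are omitted, and the particle arrays in the demonstration are placeholders):

\begin{verbatim}
# Self-consistent bound mass M and Jacobi radius
#   r_J = (G M / (2 V_G^2))^(1/3) * R_G^(2/3),
# iterated by keeping only the stars inside r_J.
# Units: pc, M_sun, km/s; the input arrays are placeholders.
import numpy as np

G = 4.30e-3        # gravitational constant in pc (km/s)^2 / M_sun

def jacobi_radius(M, V_G=220.0, R_G=8500.0):
    return (G*M / (2.0*V_G**2))**(1.0/3.0) * R_G**(2.0/3.0)

def bound_mass_and_rJ(m, r, tol=1e-8, itmax=100):
    """m: stellar masses; r: distances from the density centre (pc)."""
    M = m.sum()                      # start: assume all stars bound
    for _ in range(itmax):
        rJ = jacobi_radius(M)
        M_new = m[r < rJ].sum()
        if abs(M_new - M) <= tol*max(M, 1.0):
            return M_new, rJ
        M = M_new
    return M, rJ

m = np.full(1000, 0.5)               # placeholder masses (M_sun)
r = np.linspace(0.1, 40.0, 1000)     # placeholder radii (pc)
print(bound_mass_and_rJ(m, r))
\end{verbatim}

As a consistency check, for $M \approx 7.2\times10^{4}\,M_\odot$, $V_G=220 \ \text{km} \ \text{s}^{-1}$ and $R_G=8.5$ kpc, the formula gives $r_J \approx 61$ pc, in line with the corresponding entries of Table~\ref{parameter}.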
Escapers were not removed from the simulations. The properties of the simulations are presented in Table~\ref{parameter}. The significance of the model label is stated in the note to the Table. Column 4 gives the orbital eccentricity; columns 5, 6 and 7 are the initial values of the total bound mass, the half-mass radius and the Jacobi radius, respectively. Column 8 gives the dissolution time, which, following \citetalias{2003MNRAS.340..227B}, is defined as the time when $95\%$ of the mass was lost from the cluster, while column 10 gives the core-collapse time. The corresponding quantities from \citetalias{2003MNRAS.340..227B} are reported in columns 9 and 11, respectively. In our analysis, the moment of core collapse $T_{cc}$ has been determined by inspecting the time evolution of the core radius and of the innermost Lagrangian radius, enclosing $1\%$ of the total mass. \section{Results}\label{Results} \subsection{Lifetime and main properties of the models} The main result of our investigation is that the presence or absence of NS natal velocity kicks can significantly affect the lifetime of star clusters, by up to almost a factor of four. This striking result is illustrated by a series of ``reference models'' (with $W_0=5$, $e=0$, and N = $128$k, $64$k, $32$k, $16$k, and $8$k), with or without NS velocity kicks; the time evolution of the bound mass of these models is presented in Fig.~\ref{fig:MboverM0_128k}. The difference in the behaviour of the models with or without NS kicks starts early in their evolution ($M/M_0 \approx 0.8$) and leads to a dramatic contrast in the slope of the graph at the final stages of evolution ($M/M_0<0.2$). An important aspect of the very rapid dissolution of the models with NS kicks is that, in almost all cases, they fail to reach core collapse during their evolution, as opposed to the models without NS kicks, which show signatures of core collapse at a time corresponding to $0.1 < M/M_0 < 0.2$. The only exception is given by model $8$kK, which reaches core collapse at the very late stages of evolution, $240$ Myr after the formal dissolution time (see Tab.~\ref{parameter}, row~1, and the corresponding black dot in Fig.~\ref{fig:MboverM0_128k}). The fact that it reaches core collapse at all, while larger models do not, is attributable to its short initial relaxation time. We have also considered two additional pairs of models, as representative cases of the regime of high initial concentration ($W_0=7$; models $128$kK7 and $128$kN7) and of the evolution of a star cluster on an elliptic orbit ($e=0.5$; models $128$kKe and $128$kNe). Here the effects of the presence of NS kicks on the star cluster lifetime are less severe than those on the ``reference models'', but they are still significant (see Fig.~\ref{fig:MboverM0_ECC_W7}). Both models $128$kK7 and $128$kKe reach core collapse, although at a very late stage of evolution. Of the systems without NS kicks, model $128$kNe reaches core collapse at a mass comparable to that of the ``reference models'' without NS kicks, while model $128$kN7 has the largest mass at $T_{cc}$ of all the models in our survey which reach core collapse; such a result is not surprising, given its initial concentration. 
Another useful diagnostic of the differences between models with and without NS kicks is provided by the mean mass of stars in the innermost Lagrangian shell, enclosing $1\%$ of the total bound mass of a system. Its time evolution is illustrated in Fig.~\ref{fig:mmean}, for all models in our survey with $N=128$k particles. In almost all cases, the mean mass in the innermost shell initially shows a decrease, which is due to the early evolution and escape of massive stars; as expected, this effect is more pronounced for systems with NS kicks. Nonetheless, after only a few Gyr, the value of the central mean mass starts to increase, reflecting the process of mass segregation. For models that reach core collapse, the mean mass in the final stages of evolution falls within the range $1.2\,M_{\odot} \le \left<m\right> \le 1.4\,M_{\odot}$, which indicates the dominance of neutron stars in the central regions of the system. Not surprisingly, the rapidly dissolving model $128$kK (Fig.~\ref{fig:mmean}, red line) shows a final mean mass which is comparable to the initial value. \begin{figure} \includegraphics[trim=0 20 0 50, clip, width=0.48\textwidth]{compare128k.eps} \caption{Time evolution of the fraction of bound mass (normalized to the initial value) of models with initial concentration $W_0=5$, on circular orbits. The models are characterized by different numbers of particles and by the presence (red lines; right to left: $128$kK, $64$kK, $32$kK, $16$kK and $8$kK) or the absence (blue lines; right to left: $128$kN, $64$kN, $32$kN, $16$kN and $8$kN) of NS kicks. The black dots mark when core collapse occurs. The corresponding models studied by \citetalias{2003MNRAS.340..227B}, i.e.\ without NS kicks, are also shown (green dashed lines; right to left: $128$k, $64$k, $32$k, $16$k and $8$k); these data were retrieved by means of the data extraction tool Dexter. }\label{fig:MboverM0_128k} \end{figure} \subsection{Detailed comparison with Baumgardt \& Makino (2003)} Despite our best efforts to reproduce the initial conditions and the numerical set-up described by \citetalias{2003MNRAS.340..227B}, we note that there are still non-negligible discrepancies between our models without NS natal kicks and the corresponding ones in their original investigation (see Table~\ref{parameter} and Figs.~\ref{fig:MboverM0_128k} and \ref{fig:MboverM0_ECC_W7}). We have sought the main reasons for these discrepancies in intrinsic differences between the $N$-body\ codes used to perform the simulations, and in particular in slightly different stellar evolution prescriptions. We performed our simulations by using the GPU version of \textsc{NBODY6} \citep{2012MNRAS.424..545N}, while \citetalias{2003MNRAS.340..227B} used the public \textsc{GRAPE-6} version of \textsc{NBODY4} \citep{1999PASP..111.1333A}. The latter treats the components of binaries as single stars, without collisions or exchange of mass, and the resulting differences might partially explain the increasing discrepancy after core collapse for the models depicted in Fig.~\ref{fig:MboverM0_ECC_W7}, because of the increase in the number of binaries at this time. Moreover, \citetalias{2003MNRAS.340..227B} used a prescription for the properties of stellar remnants by \citet{2000MNRAS.315..543H}, while in \textsc{NBODY6} the \citet{2004MNRAS.353...87E} recipe is now used. 
To test this, we carried out a simulation of model $128$kN with the \citet{2000MNRAS.315..543H} prescription for stellar remnants, but we obtained a dissolution time of $T_{diss}=23.0$ Gyr, which reduces the discrepancy by only about 30\% (see data for model 128kN in Table~\ref{parameter}). To assess stochastic effects (such as run-to-run variations), we also performed additional simulations of models $128$kN and $64$kN by evolving different numerical realizations of the same initial conditions, and by evolving the same realization in several independent simulations (as in \citetalias{2003MNRAS.340..227B}). Finally, we performed a simulation of model $128$kN in which the escapers were progressively removed (as in \citetalias{2003MNRAS.340..227B}), but again without any significant difference ($T_{diss}=22.9$ Gyr). None of these effects was able, individually, to account for the observed discrepancy. Therefore, we believe that the small but systematic discrepancy between our models without NS kicks and the corresponding ones in \citetalias{2003MNRAS.340..227B} results from a combination of all the effects mentioned above, and others which we have not studied, including possible differences in the way in which models are virialised and scaled in different codes. As we shall show later (Sect.~\ref{sec:2modes}), the sensitivity of these runs to small effects is such that apparently trivial differences could have significant consequences. \begin{figure} \includegraphics[trim=0 20 0 50, clip, width=0.48\textwidth]{./compareECC_W7.eps} \caption{Time evolution of the fraction of bound mass of models with (i) $W_0=5$, on elliptic orbits; (ii) $W_0=7$, on circular orbits. As in Fig.~\ref{fig:MboverM0_128k}, models with NS kicks are denoted by red lines (right to left: $128$kK7 and $128$kKe), and without NS kicks by blue lines (right to left: $128$kN7 and $128$kNe). Dashed green lines show the corresponding models (without kicks) from BM03. }\label{fig:MboverM0_ECC_W7} \end{figure} \begin{figure} \includegraphics[trim=0 20 0 50, clip, width=0.48\textwidth]{./compareMMEAN.eps} \caption{Evolution of the mean mass of the stars in the innermost Lagrangian shell, containing 1\% of the bound mass, evaluated for all models with N=$128$k. The vertical arrows mark the moment of core collapse (in the five models which exhibit core collapse).}\label{fig:mmean} \end{figure} \section{Discussion}\label{Discussion} \subsection{Two modes of star cluster dissolution}\label{sec:2modes} We have found that the presence or absence of neutron star kicks, in the models we have studied, can change the lifetime of a star cluster by a large factor. We shall now try to interpret our results in the context of previous studies of star cluster dissolution mechanisms, with the aim of understanding why it is that a process which affects such a small fraction of the mass can have such a dramatic effect. We consider initially tidally filling, multi-mass models with stellar evolution. Over the years, several numerical investigations have shown that the dissolution time is strongly affected by two factors: the initial relaxation time and the initial concentration (represented by the King parameter $W_0$). In particular, \citet{1990ApJ...351..121C} showed that, for a Salpeter-like IMF, their models with $W_0=1$ or $3$ dissolved quickly, in less than a Gyr, and without core collapse, while models with $W_0 = 7$ all entered core collapse, after about 10 Gyr or longer. 
Clusters with $W_0=3$ and a steeper IMF (and hence a longer time scale for mass loss by stellar evolution) could enter core collapse before dissolution, provided that relaxation was fast enough. Thus there is a tension between the time scales of stellar evolution and relaxation, which plays out differently depending on the concentration. Recently, \citet{2013ApJ...778..118W} noted that the clusters which dissolve by the effects of stellar evolution lose their mass in a qualitatively different way from those dominated by relaxation. The former, as they approach dissolution, lose the last fraction of their mass (which may be substantial) extremely rapidly, whereas the latter lose mass at a rate which is steady, and sometimes even declining. Whitehead et al.\ also noted that the dividing line between the two modes of dissolution is quite sharp. For that reason it would not be surprising if a very small effect, such as the loss or retention of NS, were to place a cluster in one mode of dissolution or the other. The two kinds of behaviour described by \citet{2013ApJ...778..118W} are plainly visible in several previous studies of star cluster evolution, such as \citet{2000ApJ...535..759T}, and they are visible in Fig.~\ref{fig:MboverM0_128k} of the present Letter, where all models with kicks end their evolution by losing mass precipitately (except for the case N = $8$k), whereas the others lose mass at a more moderate rate. We refer to these two cases as ``jumping'' and ``skiing'', respectively\footnote{This unconventional terminology was coined by Simon Portegies Zwart in conversation with one of us (DCH) several years ago. It vividly conveys the difference between skiing down a gentle slope and jumping off a cliff. We note that \citet{2013ApJ...778..118W} have conflated the two terms, with a different semantics.}. Fig.~\ref{fig:MboverM0_128k} also illustrates the point made by \citet{1990ApJ...351..121C}, i.e.\ the two modes of dissolution are characterised by the presence or absence of core collapse before dissolution. Indeed we see that the clusters with and without natal kicks (except for the case N = $8$k) lie on either side of the divide between the two modes. In order to visualise the transition between skiing and jumping models, it has been particularly instructive for us to adopt the point of view first suggested by \citet{1993ASPC...48..689W}, and to explore the evolution of our models in the plane defined by the concentration (parameterized by $c=\log(r_J/r_c)$, where $r_c$ is the core radius) and the mass which remains bound to the system. In this representation, a system which experiences exclusively stellar evolution effects would gradually lose mass, while reducing its concentration due to the progressive expansion, giving rise to a track moving down in the plane and to the left \citep[see][]{1993ASPC...48..689W}. The tracks qualitatively resemble those of some of our models, as shown in Fig.~\ref{fig:weinberg10}. These are four of the models with kicks, shown in red, and one of these (128kK) is also shown in Fig.~\ref{fig:weinberg_main}. In Weinberg's treatment, dealing with the slow evolution of spherical equilibrium models, the tracks terminate when equilibrium is no longer possible; they end at points along a curve, which is shown as a dashed near-vertical curve in these figures. 
$N$-body\ models can cross this curve, but then lose mass on a dynamical time scale, explaining the jumping profile of the corresponding curves in Figs.~\ref{fig:MboverM0_128k} and \ref{fig:MboverM0_ECC_W7}. Though its precise position may differ slightly when the simplifying assumptions of Weinberg's models are relaxed, we refer to this curve as ``Weinberg's cliff''. In Weinberg's models, mass-loss is driven by stellar evolution only, and his results should be applicable when this process dominates. When the effects of two-body relaxation are dominant, one of the natural consequences is the progressive increase of the central concentration, leading to core collapse. This results in a track oriented to the right-hand side of the plane, behaviour which can be immediately recognised in the remaining models in Figs.~\ref{fig:weinberg10} and \ref{fig:weinberg_main}. It should not come as a surprise now that all long-lived, ``skiing'' models show signatures of core collapse, in contrast with short-lived, ``jumping'' models. These figures strongly suggest the existence of trajectories in which the two processes, stellar evolution and relaxation, are in a delicate balance overall, even though stellar evolution dominates early on and relaxation dominates thereafter. We have been particularly fortunate to have included in our survey two models whose evolution almost perfectly delimits a ``separatrix'' between ``skiing'' and ``jumping'' systems (see the innermost pair of red lines in Fig.~\ref{fig:weinberg10}, which correspond to models $8$kK and $16$kK). Even more strikingly, the model $128$kKe, despite the oscillations generated by the time-dependent tide, offers an excellent representation of the ``separatrix'' (see the green line in Fig.~\ref{fig:weinberg_main}). Evidently, the models that we have studied lie close to the separatrix dividing jumping models (which are dominated by stellar evolution, lose mass rapidly at the end of their lives, and do not reach core collapse) and skiing models (which are dominated by two-body relaxation, lose mass gently towards the end of their lives, and reach core collapse). If neutron stars are given no natal kick, as in model $128$kN, or the models of \citet{2003MNRAS.340..227B}, the trend to mass segregation and core collapse is accentuated, and the model moves across the separatrix into the domain of relaxation-dominated evolution. But we warn the reader against interpreting this as a general rule. Kicks were applied to both model 8kK and model 16kK (the innermost pair of red lines in Fig.~\ref{fig:weinberg10}), and they lie on opposite sides of the separatrix. The 8kK model, because of the low particle number and consequently smaller relaxation time, is sufficiently dominated by relaxation to lie in the skiing regime. \begin{figure} \includegraphics[trim=0 20 0 50, clip, width=0.48\textwidth]{./weinberg10.eps} \caption{The plot shows the total mass remaining in the cluster as a function of concentration, for the models depicted in Fig.~\ref{fig:MboverM0_128k} with (red lines) and without (blue lines) NS kicks. The black dots show the moment of core collapse, the black horizontal dashed line at $M/M_0=0.05$ marks the formal dissolution condition, and the black vertical dashed line denotes ``Weinberg's cliff'' \citep[see][]{1993ASPC...48..689W}. }\label{fig:weinberg10} \end{figure} These considerations do not immediately explain, however, why the lifetimes should differ by as much as a factor of nearly four. 
But the example of models 32kK and 8kN, which lose mass in almost the same way until core collapse in the latter model (Fig.~\ref{fig:MboverM0_128k}), shows that the effects of skiing and jumping lead to different lifetimes. Though the difference is only a factor of 1.13 in this case, it seems plausible that the effect could be much bigger if the event which determines the mode of dissolution occurred very early in the lifetime of a model, e.g.\ the ejection of neutron stars. Furthermore, because our models lie so close to the separatrix between the two modes, it would not be surprising if very minor systematic differences in the initial conditions were to lead to significant systematic differences in the lifetime, as discussed in Sect.~3.2. While this Letter has focused on kicks by neutron stars, the lesson to learn is that apparently minor changes can have very large effects, especially for clusters close to the transition between different modes of dissolution. Other factors which should be taken into account include the presence and properties of primordial binaries, variations in the high-mass end of the IMF, and the degree of primordial mass segregation, which influences both the importance of mass loss by stellar evolution and the role of remnants, not only NS but also stellar-mass black holes. The importance of these factors depends on the location of the dividing line between the two modes of dissolution that we have discussed, which can be assessed only by means of appropriate numerical experiments. \subsection{Conclusions} We have presented evidence, based on $N$-body simulations of the evolution of initially tidally filling King models with stellar evolution, that the presence or absence of NS natal velocity kicks can play a crucial role in the long-term survival of model star clusters. In particular we show that some of the basic models in the landmark study of \citet{2003MNRAS.340..227B} are especially sensitive to this effect, which can change their lifetime by almost a factor of four. We explain this finding by showing that the models lie close to a dividing line between (i) models which are dominated by the effects of mass-loss from stellar evolution, and whose evolution ends with a steepening rate of mass loss, and (ii) models whose dynamical evolution is dominated by two-body relaxation, which reach core collapse before dissolving, and do so with a gently decreasing rate of mass loss. \section*{Acknowledgments}\label{acknowledgments} We thank Mark Gieles and Simon Portegies Zwart for useful comments and valuable discussions, and an anonymous referee for constructive comments. The simulations were carried out on GeForce GTX 780 graphics cards at the University of Surrey, and we thank Dave Munro for the hardware support. FC acknowledges support from the European Research Council (ERC-StG-335936, CLUSTERS), and ALV from the Royal Commission for the Exhibition of 1851. This work was initiated during the 2014 International Summer Institute for Modeling in Astrophysics, hosted by CITA at the University of Toronto. We are grateful to Pascale Garaud for its organisation, for financial support and, together with the other participants, for the stimulating research environment. \begin{figure} \includegraphics[trim=0 20 0 50, clip, width=0.48\textwidth]{./weinberg_128kmain.eps} \caption{As in Fig.~\ref{fig:weinberg10}, but presenting all models with N=$128$k. 
Note that model $128$kKe (green line) spans the region occupied by the separatrix, distinguishing ``skiing'' ($128$kN7, $128$kNe, $128$kN, $128$kK7) and ``jumping'' models ($128$kK). } \label{fig:weinberg_main} \end{figure} \bibliographystyle{mn2e}
\section{Introduction} The expectation value of the scalar density at vanishing quark mass, commonly named the quark or chiral condensate, plays a central r\^ole in QCD at low energies. Spontaneous chiral symmetry breaking is signalled by the formation of a non-vanishing condensate, and an accurate determination of its value is of great practical interest. Lattice simulations of QCD appear well suited for this task, but in order to guarantee a reliable error estimation, it is crucial to have control over systematic effects. In particular, to ensure that the quark condensate approaches the continuum limit as a power series in the lattice spacing~$a$, the renormalization of the bare scalar density must be known with good accuracy. It is well known that renormalization factors computed in perturbation theory at one loop are not reliable. Further complications arise if the lattice formulation breaks chiral symmetry explicitly. For instance, in the case of Wilson fermions, a cubically divergent term must be subtracted before multiplicative renormalization can be applied \cite{chiral_latt}. In this paper we report on a non-perturbative calculation of the renormalization factor $\zs$ of the scalar density, using the overlap operator \cite{NeubergerDirac} as our fermionic discretization in the quenched approximation. We employ the method proposed in \cite{HJLW} and compute $\zs$ at four different values of the lattice spacing, ranging from $a\approx0.12\,\fm$ to $0.075\,\fm$. By identifying the bare condensate with the low-energy constant $\Sigma$, which appears in effective low-energy descriptions of QCD, we can compute the renormalized quantity, given results for $\Sigma$ at the corresponding values of the bare coupling in the quenched theory. Our analysis of the scaling properties of the renormalized condensate indicates the presence of only very small cutoff effects of order~$a^2$, provided that the non-perturbative estimates for the renormalization factor $\zs$ are used throughout. Thus, an extrapolation to the continuum limit can be performed in a controlled way. Moreover, we have extended the scaling analysis to other quantities, such as the pseudoscalar meson decay constant and the mass in the vector channel. In all cases we observe an excellent scaling behaviour, with leading cutoff effects of order $a^2$, and thus consistent with expectation. To our knowledge, these results represent the first detailed scaling study for overlap fermions. Results for the quark condensate have already been published by a number of authors \cite{APE_cond,JLW_cond,RBC_cond,DeG_cond,GHR_cond,HJLW_lat01,Bern_cond, BeciLub_cond,GLMPR_cond,McNeile_cond}. The novelty in this paper is the extension of previous simulations with overlap fermions \cite{JLW_cond,DeG_cond,GHR_cond} to considerably finer lattice spacings, as well as the strict application of non-perturbative renormalization, enabling us to take the continuum limit. Overlap fermions, despite their larger numerical cost, have clear conceptual advantages when it comes to studying the problem of chiral symmetry breaking, which is encoded in the value for the quark condensate. We stress, though, that our results are valid for quenched QCD, and thus great care must be taken if they are to be interpreted in the context of the full theory. In particular, the chiral condensate is ill-defined in the quenched approximation \cite{Quen_Chiral}. 
\section{Renormalization conditions} Here we briefly recall the conditions that fix the renormalization of the scalar and pseudoscalar densities in simulations using fermionic discretizations that satisfy the Ginsparg-Wilson relation \cite{GinsWil,ExactChSy}. Full details can be found in refs.\,\cite{HJLW,HJLW_lat01}. If the regularization preserves chiral symmetry, then the chiral Ward identities imply that \be \zs=\zp=1/\zm. \ee The renormalization factor $\zshat$, which relates the bare scalar density to the renormalization group invariant (RGI) density, can then be defined by \cite{HJLW} \be \zshat(g_0) = \left. \frac{(r_0\,m)(g_0)}{\UM}\right|_{(r_0\,\mps)^2=\xref}. \label{eq_zsUM_def} \ee In this expression $\UM$ denotes the RGI quark mass in the continuum limit, in units of the hadronic radius $r_0$ \cite{r0_refs}, while $m$ is the bare quark mass that appears in the lattice Dirac operator satisfying the Ginsparg-Wilson relation. The expression on the right is evaluated at a given reference value, $\xref$, of the square of the pseudoscalar meson mass in units of $r_0$. A convenient choice, which we also adopt here, is $\xref=1.5736$. For $r_0=0.5\,\fm$ this corresponds to $\mps=\mk=495\,\MeV$. The original data required for the determination of $\UM$ were published in \cite{mbar:pap3}, and in eq.~(3.1) of \cite{HJLW} $\UM$ is listed for several choices of $\xref$. Since $\zs=\zp$ an alternative renormalization condition can be formulated in terms of the matrix element of the pseudoscalar density. If we introduce the shorthand notation \be \Gpb=\langle0|P^a(0)|{\rm{PS}}\rangle,\qquad P^a(x)=(\psibar\lambda^a\gamma_5\psi)(x), \ee where $\lambda^a$ is some flavour matrix, then $\zphat$ can be defined via \be \zphat = \left. \frac{\UP}{(r_0^2\Gpb)(g_0)}\right|_{(r_0\,\mps)^2=\xref}. \label{eq_zsUP_def} \ee The universal factor $\UP$ denotes the RGI matrix element of the pseudoscalar density in the continuum limit. Its value can be determined, for instance, using $\rmO(a)$ improved Wilson fermions, and the results presented in refs.\,\cite{mbar:pap1,mbar:pap3} then yield \be \UP = 1.802(42) \qquad\hbox{at}\quad (r_0\,\mps)^2=1.5736. \ee In order to compute $\zshat$ or $\zphat$, it is clear from eqs.\,(\ref{eq_zsUM_def}) and\,(\ref{eq_zsUP_def}) that the main task is the determination of the value of the bare quark mass, $m$, and the matrix element $\Gpb$ at the point where $(r_0\,\mps)^2=\xref$, for a fermionic discretization based on the overlap operator. \section{Numerical simulations} In our simulations we have computed mesonic two-point correlation functions in the pseudoscalar and vector channels. We have used the massive overlap operator $D_m$, defined by \cite{NeubergerDirac} \be D_m=\left(1-\half\abar{m}\right)D+m,\qquad D=\frac{1}{\abar}\left(1-\frac{A}{\sqrt{A^\dagger{A}}}\right), \label{eq_Ddef} \ee where \be A=1+s-aD_{\rm w},\qquad \abar=\frac{a}{1+s},\qquad |s|<1, \label{eq_Adef} \ee and $D_{\rm w}$ is the Wilson-Dirac operator. The calculation of the quark propagator proceeds as usual by solving \be D_m\psi = \eta \label{eq_Dpsi_eta} \ee for a source field $\eta$. As pointed out in \cite{numeps}, the determination of both chiralities of the solution $\psi$ requires some care in the presence of zero modes of the massless operator $D$, especially as the quark mass becomes small. To separate off the zero mode contribution we have implemented the strategy outlined in section\,7 of ref.~\cite{numeps}, which we briefly review here. 
To this end we shall consider a gauge configuration which has a number of zero modes with positive chirality. The solution to \eq{eq_Dpsi_eta} with negative chirality is given by \be P_{-}\psi = (D_m^\dagger D_m)^{-1} P_{-}D_m^\dagger\eta, \label{eq_Pm_psi} \ee and thus the inversion of $D_m^\dagger D_m$ takes place in the chirality sector that does not contain zero modes. The components with positive chirality are obtained from \be P_{+}\psi=\frac{1}{m}P_0P_{+}\eta + (P_{+}D_m P_{+})^{-1}\Big\{(1-P_0)P_{+}\eta -P_{-}D_m P_{-}\psi\Big\}, \label{eq_Pp_psi} \ee where $P_0$ is a projector onto the subspace spanned by the zero modes, and whose calculation is described in \cite{numeps}. When implemented in a computer program, \eq{eq_Pp_psi} offers complete control over the zero mode contribution. It is also clear that the necessary inversion of $(P_{+}D_m P_{+})$ is performed on a source where all zero mode contributions have been projected out. The r\^oles of the positive and negative chirality sectors are obviously reversed in the above expressions if the zero modes have negative chirality. In our programs we compute $P_{-}\psi$ and $P_{+}\psi$ using the Generalized Minimum Residual (GMRES) algorithm \cite{YSaad}, which also allows for an inversion of $D_m$ itself. To speed up the inversion we have incorporated ``low-mode preconditioning'', a technique designed to protect against numerical instabilities caused by very small eigenvalues of $D_m^\dagger D_m$ \cite{numeps}. As we shall see later, the quark masses considered in this work are relatively large and hence provide an infrared cutoff, but we found that the inversion can nevertheless be accelerated in this way. The presence of zero modes in conjunction with the fact that the low (non-zero) modes are only known with a certain numerical accuracy requires some care in the implementation of low-mode preconditioning for the solution in eq.~(\ref{eq_Pp_psi}). Details will be described elsewhere \cite{JW_thesis}. Since the goal of our study is the computation of the renormalized condensate in the continuum limit, we have chosen our simulation parameters to coincide with those of previous determinations of the bare condensate. To this end we have identified the latter with the parameter $\Sigma$ computed by matching the spectrum of low-lying eigenvalues of $D$ to the predictions of Random Matrix Theory \cite{rmt}. More precisely, we have concentrated on the dataset labeled ``B'' in that reference, which comprises three different lattice spacings at a fixed box size of $L=1.49\,\fm$. We note that a spatial volume of this size is sufficiently large to avoid large finite volume effects for masses and decay constants at $\mps\approx\mk$. In order to improve the accuracy of the continuum extrapolation we added a fourth $\beta$-value, $\beta=5.9256$, tuned to reproduce the same physical box size for $L/a=14$. Following the same procedure as in \cite{rmt}, we have determined the low-lying spectrum of the Dirac operator and extracted the parameter $\Sigma$. The computation of fermionic two-point functions proceeded by setting $T=2L$, to control the exponential decay of the correlation function in a more reliable way. Our simulation parameters are listed in Table~\ref{tab_simpar}. As in ref. \cite{rmt}, the parameter~$s$ in the definition of the overlap operator (c.f. \eq{eq_Adef}) was set to $s=0.4$. 
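As an illustration of the above construction, the following Python sketch (our own toy code, not part of the production programs) realizes the overlap operator of \eq{eq_Ddef} and the negative-chirality solve of \eq{eq_Pm_psi} for a small dense matrix, using exact linear algebra in place of GMRES; a random $\gamma_5$-hermitian matrix stands in for the true Wilson-Dirac operator.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, a, s, m = 8, 1.0, 0.4, 0.05
g5 = np.diag([1.0] * (N // 2) + [-1.0] * (N // 2))

# toy gamma5-hermitian "Wilson" operator: Dw^dag = g5 Dw g5
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Dw = g5 @ (H + H.conj().T) / 2

# overlap operator: D = (1 - A (A^dag A)^(-1/2)) / abar
abar = a / (1.0 + s)
A = (1.0 + s) * np.eye(N) - a * Dw
U, _, Vh = np.linalg.svd(A)      # A (A^dag A)^(-1/2) = U Vh
D = (np.eye(N) - U @ Vh) / abar
Dm = (1.0 - 0.5 * abar * m) * D + m * np.eye(N)

# Ginsparg-Wilson relation: {g5, D} = abar * D g5 D
assert np.allclose(g5 @ D + D @ g5, abar * D @ g5 @ D)

# negative-chirality solve: P_- psi = (Dm^dag Dm)^(-1) P_- Dm^dag eta;
# Dm^dag Dm commutes with g5, so the inversion never enters the
# chirality sector that contains the zero modes
Pm = 0.5 * (np.eye(N) - g5)
eta = rng.normal(size=N) + 1j * rng.normal(size=N)
psi_m = np.linalg.solve(Dm.conj().T @ Dm, Pm @ (Dm.conj().T @ eta))
assert np.allclose(psi_m, Pm @ np.linalg.solve(Dm, eta))
\end{verbatim}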
\begin{table}[ht] \begin{center} \vspace{0.25cm} \begin{tabular}{ccccc} \hline \\[-2.0ex] $\beta$ & $L/a$ & $r_0/a$ & $a\;[\fm]$ & $\#$cfgs \\[0.7ex] \hline \\[-2.0ex] $5.8458$ & $12$ & $4.026$ & 0.124 & 200 \\ $5.9256$ & $14$ & $4.697$ & 0.106 & 174 \\ $6.0000$ & $16$ & $5.368$ & 0.093 & 200 \\ $6.1366$ & $20$ & $6.710$ & 0.075 & 100 \\ \hline \\[-2.0ex] \end{tabular} \end{center} \caption{\footnotesize Simulation parameters for the determination of mesonic two-point functions \label{tab_simpar}} \end{table} At each value of the bare coupling we computed quark propagators for three bare masses straddling the reference point corresponding to $\xref=(r_0\mps)^2=1.5736$. We added a fourth, heavier value at all but the finest lattice spacing we considered, to study the quark mass dependence of mesonic quantities in more detail. Since the quark masses here are relatively large, the low-lying spectrum of $D$ cannot induce large fluctuations in correlation functions like those observed in the so-called $\epsilon$-regime \cite{HUB_eps,lma}. Therefore, we did not apply the method known as low-mode averaging \cite{lma,DeG_Schaef} to enhance the signal. In the pseudoscalar channel we used both the left-handed axial current $J_\mu$ and the pseudoscalar density $P$ as interpolating operators, i.e. \be J_\mu(x) = (\psibar_r\gamma_\mu P_{-}\psi_s)(x),\qquad P(x) = (\psibar_r\gamma_5\psi_s)(x), \ee where $P_\pm=\half(1\pm\gamma_5)$, and $r,\,s$ denote flavour labels. Choosing $r\neq{s}$, both of these composite fields were then combined into non-singlet two-point correlation functions \be C_{\rm QR}(x_0) = a^3\sum_{\xvec}\left\langle Q(x) R(0) \right\rangle, \qquad Q,\,R = J_0,\,P. \ee The correlation function $C_{\rm JJ}$ involves only the left-handed quark propagator such that zero modes cannot contribute. By contrast, $C_{\rm PP}$ includes components of the quark propagator whose chirality coincides with that of the zero modes (if any). The latter can be separated off by implementing the expression in \eq{eq_Pp_psi}. The pseudoscalar mass and decay constant, as well as the matrix element $\Gpb$ were extracted from single-cosh fits, after averaging the correlators over the forward and backward halves of the lattice. Good plateaus were observed, which served as a guideline for choosing our fit intervals. We also computed the current quark mass, $\mpcac$ from \be a\mpcac=\frac{1}{2} \frac{\half(\partial_0+\partial_0^*)C_{\rm JP}(x_0)}{C_{\rm PP}(x_0)}, \ee where $\partial_0,\,\partial_0^*$ denote the forward and backward lattice derivatives, respectively. In order to compute meson masses in the vector channel, we have considered the two-point correlator \be C_{\rm VV}(x_0) = a^3\sum_{\xvec}\sum_{k=1}^3 \left\langle V_k(x)V_k(0)\right\rangle,\qquad V_k(x) = (\psibar_r\gamma_k\psi_s)(x),\qquad k=1,2,3. \ee It turned out to be impossible, however, to obtain a stable plateau for the effective mass, by simply using a local source vector in the inversion step. Therefore we applied Jacobi smearing on the source $\eta$, as described in \cite{jacobi}. The parameters were chosen such that the rms.~smearing radius in units of $r_0$ was kept constant at approximately 0.6. With this choice we were able to improve the stability of the plateau in the vector channel considerably. \section{Determination of renormalization factors} Our results for masses and matrix elements are summarized in Table \ref{res_tab}. 
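Before turning to the results, we illustrate how the effective PCAC mass defined in the previous section is obtained from the measured correlators. The fragment below is a schematic Python sketch (the array names and the single-cosh toy data are ours), implementing the average of the forward and backward lattice derivatives in units $a=1$:
\begin{verbatim}
import numpy as np

def pcac_mass(c_jp, c_pp):
    # a*m_pcac(x0) = 0.5 * [C_JP(x0+1) - C_JP(x0-1)]/2 / C_PP(x0)
    sym_deriv = 0.5 * (c_jp[2:] - c_jp[:-2])
    return 0.5 * sym_deriv / c_pp[1:-1]

# toy correlators on a T=32 lattice: C_PP cosh-like, C_JP sinh-like
x0 = np.arange(32)
c_pp = np.cosh(0.25 * (x0 - 16.0))
c_jp = np.sinh(0.25 * (x0 - 16.0))
print(pcac_mass(c_jp, c_pp))   # flat plateau for this toy input
\end{verbatim}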
\begin{table}[ht] \begin{center} \vspace{0.25cm} \begin{tabular}{ccccccc} \hline \\[-2.0ex] $\beta$ & $am$ & $a\mpcac$ & $a\mps$ & $a\Fpb$ & $a^2\Gpb$ & $a\mv$ \\[0.7ex] \hline \\[-2.0ex] $5.8458$ & $0.040$ & $0.02359(6)$ & $0.262(9)$ & $0.0417(10)$ & $0.1185(61)$ & $0.532(37)$ \\ & $0.053$ & $0.03134(7)$ & $0.294(8)$ & $0.0424(9)$ & $0.0889(39)$ & $0.537(31)$ \\ & $0.067$ & $0.03973(8)$ & $0.327(8)$ & $0.0434(8)$ & $0.0718(28)$ & $0.556(24)$ \\ & $0.113$ & $0.06769(11)$ & $0.421(6)$ & $0.0469(7)$ & $0.0488(14)$ & $0.631(14)$ \\[0.7ex] $5.9256$ & $0.034$ & $0.02120(13)$ & $0.235(7)$ & $0.0389(10)$ & $0.0877(39)$ & $0.502(21)$\\ & $0.046$ & $0.02875(14)$ & $0.266(6)$ & $0.0397(10)$ & $0.0657(26)$ & $0.515(15)$ \\ & $0.057$ & $0.03569(15)$ & $0.292(6)$ & $0.0405(9)$ & $0.0547(19)$ & $0.529(12)$ \\ & $0.097$ & $0.06120(17)$ & $0.377(4)$ & $0.0433(9)$ & $0.0378(11)$ & $0.579(7)$ \\[0.7ex] $6.0000$ & $0.030$ & $0.01927(7)$ & $0.217(6)$ & $0.0346(7)$ & $0.0814(42)$ & $0.424(15)$ \\ & $0.040$ & $0.02576(7)$ & $0.247(5)$ & $0.0356(6)$ & $0.0612(27)$ & $0.445(11)$ \\ & $0.050$ & $0.03229(7)$ & $0.273(5)$ & $0.0366(6)$ & $0.0501(20)$ & $0.462(9)$ \\ & $0.085$ & $0.05543(9)$ & $0.352(3)$ & $0.0403(5)$ & $0.0342(10)$ & \\[0.7ex] $6.1366$ & $0.024$ & $0.01638(6)$ & $0.168(5)$ & $0.0296(7)$ & $0.0447(21)$ & $0.360(28)$ \\ & $0.032$ & $0.02185(6)$ & $0.195(4)$ & $0.0301(6)$ & $0.0356(14)$ & $0.378(20)$ \\ & $0.040$ & $0.02734(6)$ & $0.218(4)$ & $0.0309(6)$ & $0.0305(11)$ & $0.389(15)$ \\ \hline \\[-2.0ex] \end{tabular} \end{center} \caption{\footnotesize Results for meson masses and decay constants computed at several values of quark masses at each lattice spacing. The results for $a\mps$ and $a\Fpb$ were extracted from correlators of the left-handed axial current. \label{res_tab}} \end{table} \begin{figure}[t] \centerline{\includegraphics*[width=8cm]{mjj.eps}} \caption{\footnotesize $(r_0 \mps)^2$ as a function of $r_0 m$. The horizontal dashed line represents the reference point $(r_0\mps)^2=1.5736$. \label{fig_mps2_m}} \end{figure} In order to compute $\zshat$ according to \eq{eq_zsUM_def} we have to determine the quark mass at the reference point in units of $r_0$. In Fig.~\ref{fig_mps2_m} we have plotted $(r_0\mps)^2$ as a function of $(r_0m)$ at all four $\beta$-values. As can be seen, the data are easily fitted by straight lines, but a non-zero intercept is found at all but the largest value of $\beta$: the pseudoscalar mass at zero bare quark mass differs from zero by $1-2$ standard deviations. Since the correlation function of the left-handed axial current is free from contributions of zero modes, they cannot be responsible for the non-zero intercept. We note however that the chiral fits yield $\chi^2/\rm dof$ below~1 even if the extrapolation is forced through the origin. By performing local interpolations to the reference point using the three nearest data points and subsequently applying \eq{eq_zsUM_def}, we obtain the values of $\zshat$, which are tabulated at each $\beta$-value in Table~\ref{tab_zs_res}. The typical accuracy of our determination is around 5\%. It should be noted that the precision is partly limited by the accuracy of the published value of $\UM$, which is about 3\% \cite{mbar:pap3}. We estimate that pushing the precision of our determination of $\zshat$ to that level would require a four-fold increase in statistics. 
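The interpolation just described can be summarized in a few lines of Python. The sketch below is ours and purely illustrative; it performs the local linear interpolation of $r_0m$ in $(r_0\mps)^2$ through the three data points nearest to the reference point and then applies \eq{eq_zsUM_def}. The value of $\UM$ must be taken from eq.~(3.1) of ref.\,\cite{HJLW} and is left as an input here.
\begin{verbatim}
import numpy as np

X_REF = 1.5736   # reference point (r0*m_PS)^2

def zshat(r0m, r0mps_sq, u_m):
    # local linear interpolation through the 3 points nearest X_REF
    r0m, r0mps_sq = np.asarray(r0m), np.asarray(r0mps_sq)
    idx = np.argsort(np.abs(r0mps_sq - X_REF))[:3]
    slope, icept = np.polyfit(r0mps_sq[idx], r0m[idx], 1)
    r0m_ref = slope * X_REF + icept
    return r0m_ref / u_m   # Zs_hat = (r0*m)/U_M at the reference point
\end{verbatim}
Here \texttt{r0m} and \texttt{r0mps\_sq} are the bare quark masses and squared pseudoscalar masses of Table~\ref{res_tab} at one value of $\beta$, converted to units of $r_0$ via the values of $r_0/a$ listed in Table~\ref{tab_simpar}.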
\begin{table} \begin{center} \vspace{0.25cm} \begin{tabular}{cccc} \hline \\[-2.0ex] $\beta$ & $\zshat$ & $\zphat$ & $\za$\\[0.7ex] \hline \\[-2.0ex] $5.8458$ & $1.28(6)$ & $1.33(4)$ & $1.710(5)$ \\ $5.9256$ & $1.19(7)$ & $1.20(4)$ & $1.611(3)$ \\ $6.0000$ & $1.05(5)$ & $0.88(6)$ & $1.553(2)$ \\ $6.1366$ & $1.01(4)$ & $1.02(5)$ & $1.478(2)$ \\ \hline \\[-2.0ex] \end{tabular} \end{center} \caption{\footnotesize Non-perturbative determinations of $\zshat$, $\zphat$ and $\za$.\label{tab_zs_res}} \end{table} In Fig.~\ref{fig_zs_res} we plot our results for $\zshat$ versus $\beta$. It has become customary to represent results for renormalization factors at different values of the bare coupling by interpolating curves. Using a simple polynomial ansatz in $(\beta-6)$ yields \be \zshat(\beta)=1.045-0.899(\beta-6)+4.36(\beta-6)^2, \qquad s=0.4. \label{zsfit} \ee This formula describes $\zshat$ with an estimated error of 5\% in the studied range of $\beta$, i.e. $5.8458\leq\beta\leq6.1366$. We emphasize that our determination is valid only for the case $s=0.4$ in the definition of the Neuberger-Dirac operator, eqs.~(\ref{eq_Ddef}) and~(\ref{eq_Adef}). The perturbative expression for $\zshat$ at one loop is \be \zshat^{\rm pt}(g_0) = \frac{\mbar_\msbar(\mu)}{M} \left\{ 1+g_0^2\left[\frac{1}{2\pi^2}\ln(a\mu) +z_{\rm S}^{(1)}\right]+\rmO(g_0^4)\right\}, \label{eq_zspbare} \ee where $z_{\rm S}^{(1)}=0.147107$ for our choice of $s=0.4$ \cite{chiral:AlFoPaVi,SteLeo00}. The factor $\mbar_\msbar(\mu)/M$ was computed previously in \cite{mbar:pap3}. The mean-field improved version of $\zshat^{\rm pt}$ reads \cite{HJLW} \be \zshat^{\rm mf}(g_0) = \frac{\mbar_\msbar(\mu)}{M} \left(\frac{1+s}{1+\tilde s}\right)\left\{ 1+g^2 \left[\frac{1}{2\pi^2}\ln(a\mu)+z_{\rm S}^{(1)}+u_0^{(1)}\left( \frac{3-s}{1+s}\right) \right]\right\}\ , \label{eq_zspttad} \ee where $g^2=g_0^2/u_0^4$ is the boosted coupling, $\tilde{s}=3+(s-3)/u_0$, with $u_0^4$ being the average plaquette. The comparison of our numerical results for $\zshat$ with perturbation theory is shown in Fig.~\ref{fig_zs_res}. The mean-field improved perturbative expansion comes quite close to the non-perturbatively determined values for $\beta\;\gtaeq\;6.0$ but falls short by more than 20\% below $\beta=6.0$. Unsurprisingly, perturbation theory in the bare coupling $g_0^2$ fares a lot worse in the entire range of couplings studied here. \begin{figure}[ht] \begin{minipage}[t]{7cm} \centerline{\includegraphics*[width=7cm]{zsfit.eps}} \caption{\footnotesize $\zshat$ as a function of $\beta$. The solid line denotes the fit of \eq{zsfit}. The dotted and dashed curves represent the results of bare and mean-field improved perturbation theory at one loop order.\label{fig_zs_res}} \end{minipage} \hfill \begin{minipage}[t]{7cm} \centerline{\includegraphics*[width=7cm]{ZA.eps}} \caption{\footnotesize The quark mass dependence of $\frac{m}{\mpcac}$. The value of $\beta$ increases from top to bottom. $\za$ is defined as the value of this ratio in the limit of vanishing quark mass.\label{fig_massrat}} \end{minipage} \end{figure} The results for $\zphat$, computed according to \eq{eq_zsUP_def}, are listed alongside those for $\zshat$ in Table~\ref{tab_zs_res}. The renormalization conditions for $\zshat$ and $\zphat$ imply that the two must be identical up to effects of order~$a^2$. Indeed, we observe hardly any difference at our level of accuracy, except at $\beta=6.0$. In our view, the most likely explanation for this deviation is a statistical fluctuation. 
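The interpolating formula \eq{zsfit} is trivially evaluated; as a rough consistency check, the short snippet below (ours) reproduces the non-perturbative values of Table~\ref{tab_zs_res} within the quoted 5\% accuracy:
\begin{verbatim}
def zshat_fit(beta):
    # interpolating formula; valid for 5.8458 <= beta <= 6.1366, s = 0.4
    d = beta - 6.0
    return 1.045 - 0.899 * d + 4.36 * d ** 2

for beta in (5.8458, 5.9256, 6.0000, 6.1366):
    print(beta, round(zshat_fit(beta), 3))
# -> 1.287, 1.136, 1.045, 1.004
#    (cf. the quoted 1.28(6), 1.19(7), 1.05(5), 1.01(4))
\end{verbatim}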
In order to include the pseudoscalar decay constant in the scaling tests described below we also computed the renormalization factor of the axial current, $\za$. Using the PCAC relation and $\zm=1/\zp$ one can define \be \za = \lim_{m\to0}\frac{m}{\mpcac}. \ee We found the ratio ${m}/{\mpcac}$ to depend only weakly on the bare mass (c.f.~Fig.~\ref{fig_massrat}). $\za$ could then be determined by extrapolating $m/\mpcac$ linearly in $m$ to the chiral limit. \section{The renormalized condensate} Having determined the renormalization factor of the scalar density in a range of bare couplings, we can now compute the renormalized condensate in the continuum limit, by combining the results for $\zshat$ with estimates of the bare condensate. In effective low-energy descriptions of QCD with $\nf=3$ quark flavours, the quark condensate is identified with the low-energy constant $\Sigma$ via \be -\left\langle\psibar\psi\right\rangle = \Sigma. \ee In the quenched theory, however, the condensate $-\langle\psibar\psi\rangle$ is not defined, owing to the presence of infrared divergencies as the chiral limit is approached \cite{Quen_Chiral}. Nevertheless, the low-energy constant $\Sigma$ can be determined in quenched QCD, for instance, by comparing lattice data of suitable quantities to expressions of Chiral Perturbation Theory or chiral Random Matrix Theory. Although in this case the identification of $\Sigma$ with the quark condensate is rather dubious, we shall nevertheless proceed to compute a renormalized ``condensate'', by assuming that estimates of $\Sigma$ in the quenched theory renormalize like the scalar density. Our input quantities are thus the renormalization factors $\zshat$ of Table~\ref{tab_zs_res} and results for $\Sigma$, determined by matching the low-lying eigenvalues of the Dirac operator in the $\epsilon$-regime to the predictions of the chiral unitary random matrix model according to \cite{rmt} \be \left\langle\lambda_k\right\rangle_{\nu}{\Sigma}V =\left\langle\zeta_k\right\rangle_{\nu}, \qquad k=1,2,\ldots \label{eq_rmt_match} \ee Here, $\langle\lambda_k\rangle_{\nu}$ is the expectation value of the $k$th eigenvalue in the topological sector with index $\nu$, and $\zeta_k$ denotes the $k$th scaled eigenvalue in the matrix model. In ref. \cite{rmt} it was found that good agreement with random matrix behaviour is observed for lattice volumes $V$ of at least $(1.5\,\fm)^4$. In other words, the value of $\Sigma$ extracted from \eq{eq_rmt_match} depends neither on the particular eigenvalue, nor on the topological sector, within statistical errors. Using the results for $\Sigma$ from Table~3 of \cite{rmt} (i.e. the runs labelled $\rm B_0, B_1$ and $\rm B_2$), supplemented by our data at $\beta=5.9256$, we plot the renormalization group invariant condensate $\widehat\Sigma$ in units of $r_0$ versus $(a/r_0)^2$ in Fig.~\ref{fig_zshat}. If the non-perturbative estimates for $\zshat$ are used, the results for $r_0^3\widehat\Sigma$ show a remarkably flat behaviour, which not only indicates small residual cutoff effects, but is also consistent with the expectation that the leading lattice artefacts of our fermionic discretization should be of order $a^2$. Figure~\ref{fig_zshat} also reveals that employing mean-field improved perturbation theory for $\zshat$ produces a significant slope in $r_0^3\widehat\Sigma$ as the continuum limit is approached. 
Although this procedure apparently yields a consistent value of $r_0^3\widehat\Sigma$ in the continuum limit, it is equally obvious that the perturbatively renormalized result serves as a poor estimate for the condensate at non-zero lattice spacing. \begin{figure}[ht] \begin{minipage}[t]{7cm} \centerline{\includegraphics*[width=7cm]{sigma.eps}} \caption{\footnotesize Continuum extrapolation of $r_0^3\widehat\Sigma$. Full circles denote the results obtained using non-perturbative renormalization factors, while open squares represent values resulting from applying mean-field improved perturbation theory.\label{fig_zshat}} \end{minipage} \hfill \begin{minipage}[t]{7cm} \centerline{\includegraphics*[width=7cm]{contlim.eps}} \caption{\footnotesize The variation of $r_0^3\widehat\Sigma$ in the continuum limit, arising from choosing different eigenvalues and topological sectors in the determination of the bare condensate. The solid and dashed lines represent the result for $k=2,\, |\nu|=1$ which is used for our main result.\label{fig_fit_stab}} \end{minipage} \end{figure} Our results for $r_0^3\widehat\Sigma$ at all values of $\beta$ and in the continuum limit are listed in Table~\ref{tab_results}. Here we have used $\Sigma$ as determined from $\langle\lambda_k\rangle_\nu$ for $k=2$ and $|\nu|=1$. We note that the variation in the value of $r_0^3\widehat\Sigma$ from choosing different $\lambda_k$'s and topological sectors is well within the statistical fluctuations after taking the continuum limit. This is illustrated in Fig.~\ref{fig_fit_stab}, where we plot the continuum results for all possible choices of $\lambda_k$ and $|\nu|$. We emphasize that this variation should not be regarded as a systematic uncertainty, since all choices are equivalent, if random matrix theory does indeed give an accurate description of the low-lying eigenvalues, and hence we refrain from quoting an additional error. \begin{table}[hb] \begin{center} \vspace{0.25cm} \begin{tabular}{cccc} \hline \\[-2.0ex] $\beta$ & $r_0^3\widehat\Sigma$ & $r_0\Fk$ & $r_0m_{\rm K^*}$ \\[0.7ex] \hline \\[-2.0ex] $5.8458$ & $0.282(14)$ & $0.296(6)$ & $2.209(89)$ \\ $5.9256$ & $0.285(16)$ & $0.301(7)$ & $2.403(95)$ \\ $6.0000$ & $0.275(14)$ & $0.293(5)$ & $2.413(66)$ \\ $6.1366$ & $0.294(13)$ & $0.297(7)$ & $2.328(165)$ \\ $\infty$ & ${\it 0.293(21)}$ & ${\it 0.294(9)}$ & ${\it 2.32(29)}$\\ \hline \\[-2.0ex] \end{tabular} \end{center} \caption{\footnotesize Renormalization group invariant quark condensate, kaon decay constant and $K^*$-mass, in units of $r_0$.\label{tab_results}} \end{table} Our result in the continuum limit is thus \be r_0^3\widehat\Sigma = 0.293\pm0.021 \ee for the renormalization group invariant condensate. In the $\msbar$-scheme at $2\,\GeV$ we obtain after division by $\mbar_\msbar(2\,\GeV)/M=0.72076$ \cite{mbar:pap3} the value \be r_0^3\Sigma_\msbar(2\,\GeV) = 0.406\pm0.029. \ee These are the main results of our calculation. To our knowledge, these are the first estimates of a quantity in the continuum limit, computed using overlap fermions. We emphasize that the quoted errors include all uncertainties, except those due to quenching. As is well known, the calibration of the lattice spacing is ambiguous in the quenched approximation, and thus any conversion into physical units is only illustrative. Here we perform such a conversion using either the kaon decay constant or the nucleon mass to set the scale. Ref. 
\cite{mbar:pap3} quotes \be r_0\Fk\sqrt{2} = 0.415\pm0.009,\qquad \Fk=113\,\MeV, \ee in the continuum limit, while a continuum extrapolation of the nucleon mass data of \cite{CPPAC_quen02} in units of $r_0$ yields \be r_0m_{\rm N} = 2.670 \pm 0.042,\qquad m_{\rm N} = 939.6\,\MeV. \ee For the condensate in the $\msbar$-scheme at $2\,\GeV$ we then obtain \be \Sigma_\msbar(2\,\GeV) = \left\{\begin{array}{ll} (285 \pm 9\,\MeV)^3, & \quad \hbox{scale set by $\Fk$} \\ (261 \pm 8\,\MeV)^3, & \quad \hbox{scale set by $m_{\rm N}$} \end{array} \right.. \ee These findings are consistent with previous observations that the typical scale ambiguity for a quantity with mass dimension equal to one is of the order of 10\%. Recent calculations of the renormalized condensate \cite{APE_cond,JLW_cond,RBC_cond,DeG_cond,GHR_cond,HJLW_lat01,Bern_cond,BeciLub_cond,GLMPR_cond,McNeile_cond} yield values similar to ours. \section{Further scaling tests} The leading cutoff effects of fermionic discretizations based on the Ginsparg-Wilson relation are expected to be of order~$a^2$, and indeed, this expectation has been confirmed in our scaling study of the quark condensate. In this section we shall extend our analysis of cutoff effects to quantities like the pseudoscalar decay constant and the meson mass in the vector channel. \begin{figure}[ht] \begin{minipage}[t]{7cm} \centerline{\includegraphics*[width=7cm]{FK.eps}} \caption{\footnotesize Continuum extrapolation of $r_0 \Fk$. Full circles denote our results, while the open squares are the data of \cite{mbar:pap3}, employing O($a$) improved Wilson fermions. The full triangles are our data with $\za$ from mean-field improved perturbation theory. \label{FKplot}} \end{minipage} \hfill \begin{minipage}[t]{7cm} \centerline{\includegraphics*[width=7cm]{mkstar.eps}} \caption{\footnotesize Scaling behaviour of $r_0 m_{\rm K^*}$. The meaning of the full circles and open squares is as in Fig.~\ref{FKplot}. The open circle results from an alternative fit with a fit range of $x_0/a\in[5,11]$ instead of $x_0/a\in[8,11]$ (full circle). \label{mKstarplot}} \end{minipage} \end{figure} To this end we have assumed that $a\Fpb$ and $a\mv$ depend linearly on $(a\mps)^2$ and performed a linear interpolation to the point where $(r_0\mps)^2=(r_0\mk)^2=1.5736$. Thus, our aim is to investigate the scaling behaviour of $\Fk$ and $m_{\rm K^*}$. The renormalized kaon decay constant is obtained after multiplication with the factor $\za$ listed in Table~\ref{tab_zs_res}. In Table~\ref{tab_results} we have compiled the results for $r_0\Fk$ and $r_0 m_{\rm K^*}$ at the various values of $\beta$, as well as in the continuum limit. The corresponding continuum extrapolations are plotted in Figures~\ref{FKplot} and~\ref{mKstarplot}. For the kaon decay constant we observe a flat approach to the continuum limit, consistent with a linear fit in $(a/r_0)^2$, provided that the non-perturbative estimate for $\za$ is used. The perturbatively renormalized $\Fk$ is subject to larger lattice artefacts, although the resulting continuum value is roughly consistent with the non-perturbative one. In Fig.~\ref{FKplot} we also show the continuum extrapolation of the same quantity from ref.~\cite{mbar:pap3}, where $r_0\Fk$ was computed using O($a$) improved Wilson fermions. In the continuum limit our data agree remarkably well with those of ref.~\cite{mbar:pap3}, but for overlap fermions the residual cutoff effects at lattice spacings of around $0.1\,\fm$, i.e. at $(a/r_0)^2\approx0.035$, are apparently much smaller.
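As an aside, the weighted continuum extrapolation linear in $(a/r_0)^2$ and the scale-setting conversion used for the condensate in the previous section can be made explicit in a short Python sketch (ours, not the analysis code). With the inputs of Tables~\ref{tab_simpar} and~\ref{tab_results} it reproduces the central values quoted there; the errors given in the text, however, come from a more careful analysis than this naive fit.
\begin{verbatim}
import numpy as np

r0_a = np.array([4.026, 4.697, 5.368, 6.710])   # r0/a at the four betas
sig  = np.array([0.282, 0.285, 0.275, 0.294])   # r0^3 Sigma_hat
err  = np.array([0.014, 0.016, 0.014, 0.013])

# weighted linear fit in (a/r0)^2; the intercept is the continuum value
x = 1.0 / r0_a**2
coef = np.polyfit(x, sig, 1, w=1.0 / err)
print("r0^3 Sigma_hat =", round(np.polyval(coef, 0.0), 3))   # ~0.293

# MSbar scheme at 2 GeV, scale set by F_K via r0*F_K*sqrt(2) = 0.415
sig_msbar = np.polyval(coef, 0.0) / 0.72076      # divide by mbar(2GeV)/M
r0 = 0.415 / (np.sqrt(2.0) * 113.0)              # r0 in MeV^-1
print("Sigma^(1/3) =", round(sig_msbar**(1 / 3) / r0, 1), "MeV")  # ~285
\end{verbatim}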
The scaling behaviour of the $K^*$ mass is also flat, except at our coarsest lattice spacing. A closer inspection of our fits to the two-point function shows that the value of $a\mv$ at $\beta=5.8458$ depends strongly on the chosen fit range. Extending the fit interval to smaller timeslices leads to a significant increase in the value of $r_0\mkstar$, as indicated in Fig.~\ref{mKstarplot}. Owing to the uncertainty in the value of $r_0\mkstar$ as a result of using different fit intervals, we exclude the coarsest lattice from the continuum extrapolation, despite the fact that the alternative result is apparently consistent with a linear behaviour up to $(a/r_0)^2\approx 0.06$. Nevertheless we also confirm good scaling behaviour for the vector mass, as our values for $\beta>5.8458$ are mutually consistent with each other, as well as with the results of ref.~\cite{mbar:pap3}. \section{Conclusions} We have presented the first comprehensive scaling study of quantities computed using overlap fermions. A major part of our calculation was devoted to the determination of the renormalization factor $\zshat$ of the scalar density. Thereby we were able to present a conceptually clean determination of the renormalized low-energy constant $\Sigma$ in the continuum limit of quenched QCD, with a total accuracy of 7\%. Besides studying the continuum extrapolation of $r_0^3\widehat\Sigma$ we also performed scaling studies of the pseudoscalar decay constant and the mass in the vector channel. For all three quantities computed using overlap quarks we observed an excellent scaling behaviour, resulting in a flat approach to the continuum limit. This is reflected in the fact that the results in Table~\ref{tab_results} at any finite value of $\beta$ and in the continuum limit are practically the same, at least at our level of accuracy. We note, however, that a flat continuum behaviour is only observed for $\widehat\Sigma$ and $\Fk$ if non-perturbative estimates of the respective renormalization factors are employed. Our values for $r_0\Fk$ and $r_0\mkstar$ in the continuum limit are in very good agreement with those of refs.~\cite{mbar:pap3,chiLF_quen}. Owing to their good scaling properties, overlap fermions are an attractive discretization for the computation of phenomenologically interesting quantities, despite the large numerical effort involved in their simulation. \section*{Acknowledgements} We are grateful to Leonardo Giusti, Pilar Hern\'andez, Mikko Laine, Martin L\"uscher and Peter Weisz for interesting discussions and for computer code developed for related projects using overlap fermions. We thank Miho Koma for her work on optimizing parts of our programs. Our calculations have been performed on PC clusters at DESY Hamburg and LRZ Munich, as well as on the IBM Regatta at FZ J\"ulich. We thank all these institutions for support and the staff of their computer centers for technical help.
\section*{Introduction} \indent Three-dimensional topological insulators (TIs) have attracted significant attention as they possess topologically protected metallic states on their surfaces, known as surface states\cite{Hasan,Qi,Ando,Ando1,Cava}. These metallic surface states in 3D TIs originate from the nontrivial topology of the bulk band structure and are protected by time-reversal symmetry. Due to this topological protection, surface-state electrons have very high mobility and are less sensitive to impurities (provided the impurities are not magnetic). Recently, very large magnetoresistance and high mobility have also been observed in many topological systems\cite{Yan,Shrestha,Wang}, making TIs not only a playground for understanding novel quantum phenomena but also promising candidates for future electronic devices. Many bismuth-based TIs have been theoretically predicted and have later been experimentally verified by surface-sensitive techniques such as angle-resolved photoelectron spectroscopy and scanning tunneling microscopy\cite{Xia,Chen,Hsieh}. Electrical transport (or magnetization) measurements under high magnetic fields have often been used to study topological systems. In the presence of magnetic fields, the electrical conductivity (or magnetization) shows quantum oscillations known as Shubnikov--de Haas (de Haas--van Alphen) effects\cite{Kittel,Ashcroft,Shoenberg}. By analyzing oscillations at different tilt angles of the sample with respect to the magnetic field, one can map the two-dimensional Fermi surface of the surface states or the three-dimensional Fermi surface of the bulk states, and thus study many additional properties\cite{Wang,Shrestha1}. However, transport studies of surface states in 3D TIs are hindered by the presence of a parallel bulk conduction channel that arises as a result of crystal defects and imperfections\cite{Qu,Analytis,Eto,Cao}. Several efforts have been made to grow purer topological crystals by modifying the crystal growth technique, compensating excess bulk carriers by doping Sb or Ca ions, and extending research from binary to ternary topological compounds \cite{Taskin, Gaku, Hor, Bao, Xu, Lin}. \\ \indent In our recent magnetotransport studies\cite{Shrestha2,Shrestha3} on metallic Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystals, we have observed well-separated signals from the surface and bulk states. The surface states dominate at low fields (below 7 T) and the bulk states at high fields, with the crossover between the two signals taking place at 14 T. These results have shown that it may be possible to characterize surface-state properties even if the bulk is metallic.\\ \indent Due to the presence of strong spin-orbit coupling in topological materials, their magnetoconductance often shows a weak antilocalization (WAL) effect\cite{He,Shrestha4}. As a quantum correction to the classical conductance, the WAL effect in topological insulators may originate from spin-orbit coupling in either the surface or the bulk states. Whether the WAL effect originates from surface or bulk states can be determined by measuring the magnetoconductance at different angles between the sample and the magnetic field direction. Recently, numerous topological systems have been successfully investigated by means of the WAL effect\cite{Taskin,Shekhar}, and physical properties such as the phase coherence length and the number of conduction channels were determined; such measurements are not possible using the quantum oscillations method.
It would therefore be interesting to extend the study of the Bi$_2$Se$_{2.1}$Te$_{0.9}$ sample by the WAL method.\\ \indent In this work, we have carried out magnetoresistance studies on a Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal under high magnetic fields up to 35 T. The sample shows a large non-saturating magnetoresistance that reaches 1900\% under 35 T at $T$=0.33 K. The magnetoconductance in low magnetic fields shows a cusp due to the WAL effect. From the angle dependence of the WAL curves, we demonstrate the presence of topological surface states in the Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal. We have estimated several physical parameters using the Hikami-Larkin-Nagaoka formula and have studied their temperature dependence as well. \section*{Experimental} High-quality Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystals were grown using the modified Bridgman method. Stoichiometric amounts of high-purity Bi (99.9999\%), Se (99.9999\%), and Te (99.9999\%) were mixed together and enclosed in quartz ampoules. The mixture was melted at 875 $^\circ$C and kept at this temperature for 2 days. The molten mixture was slowly cooled to 670 $^\circ$C at a rate of 0.5 $^\circ$C/h and then to room temperature at a rate of 10 $^\circ$C/h. A shiny plate-like single crystal was selected from the boule of crystals. We used Scotch tape to peel off a very thin layer of the sample; the typical thickness of the sample is $\sim$0.05 mm. To ensure safe handling, the sample was attached to a magnesium oxide (MgO) substrate using GE varnish. Six gold contact pads were sputtered on the sample, and platinum wires were attached to these pads using silver paint to carry out the standard longitudinal and Hall measurements.\\ \indent Transport measurements of the Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal under magnetic fields up to 7 T were performed in a Physical Properties Measurement System (PPMS) at the Texas Center for Superconductivity at the University of Houston. The field range was extended to 35 T by performing measurements at the National High Magnetic Field Laboratory (NHMFL), Tallahassee, Florida. The angle-dependence measurements were carried out by mounting the sample on a rotating platform on a standard probe designed at NHMFL. Longitudinal and Hall resistances were measured using a lock-in technique, in which a Keithley (6221) source meter provides an AC current of amplitude 1 mA at a frequency of 47.77 Hz, and a lock-in amplifier (SR-830) measures the voltage signal at the same frequency. The standard probe with the sample mounted on it was inserted into a $^3$He Oxford cryostat that sits in the bore of a resistive magnet with a maximum field of 35 T. A Hall sensor was used to calibrate the position of the sample with respect to the direction of the applied field. \section*{Results and Discussion} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig1.jpg} \caption{(Color online) Temperature dependence of the longitudinal resistivity of a Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal. Upper inset: Hall measurement data of Bi$_2$Se$_{2.1}$Te$_{0.9}$ at $T$=5 K. Lower inset: the resistivity in the low-temperature range on a logarithmic temperature axis.}\label{Fig1} \end{figure} \indent Figure [1] shows the temperature dependence of the longitudinal resistivity of a Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal. The sample shows metallic behavior from 300 to 5 K. The high value of the residual resistance ratio, RRR=$\rho_{xx}$(300 K)/$\rho_{xx}$(5 K)=18, indicates good crystalline quality.
This RRR value is comparable with those of the Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystals used in our previous studies\cite{Shrestha2,Shrestha3}. The lower inset shows a zoomed-in view of the resistivity in the low-temperature range on a logarithmic temperature axis. The resistivity curve is almost flat below $T$ = 100 K. Such low-temperature behavior of the resistivity is also seen in other topological systems and is attributed to the dominance of topological surface states in this temperature regime\cite{Bansal, Chen, Chiatti}. The upper inset shows Hall measurements at $T$=5 K. The positive slope of the Hall resistance reveals the presence of hole-like bulk charge carriers. The Hall resistance shows non-linear behavior near the origin, $B$=0, indicating the presence of multiband effects (hole and electron bands), as has been observed in other bismuth-based topological systems\cite{Qu,Shrestha2}. From the slope of the Hall data, we have estimated the bulk carrier concentration to be $\approx$ 8.7$\times$10$^{18}$cm$^{-3}$ at 5 K.\\ \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig2.jpg} \caption{(Color online) (a) Angle dependence of the MR of a Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal, expressed as a percentage, in magnetic fields up to 34.5 T at $T$=0.33 K. (b) MR value at the highest magnetic field at different temperatures, measured at $\theta$=0$^{o}$. Inset: $\theta$ is the angle between the magnetic field and the normal to the sample surface.}\label{SdH} \end{figure} \indent The magnetoresistance of the Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal was measured under high magnetic fields up to 35 T at NHMFL. Figure [2(a)] shows the magnetoresistance of Bi$_2$Se$_{2.1}$Te$_{0.9}$ measured at different tilt angles, $\theta$. Here, the angle $\theta$ is defined as the angle between the magnetic field direction and the perpendicular to the sample surface, as shown in the inset to Fig. [2(b)]. The magnetoresistance is expressed as a percentage, MR=\big[${\rho}_{xx}(B)$/${\rho}_{xx}(0)$-1\big]$\times$100\%, where ${\rho}_{xx}(0)$ and ${\rho}_{xx}(B)$ are the resistivity values at zero field and at applied field $B$, respectively. The sample shows a positive MR that increases linearly with magnetic field. The MR reaches as high as 1900\% under 35 T at $\theta$=0$^{o}$ with no sign of saturation. In the low-field regime, the MR shows a sharp cusp-like feature, which indicates the presence of the weak antilocalization (WAL) effect in the Bi$_2$Se$_{2.1}$Te$_{0.9}$ sample. We will discuss the WAL effect in detail later. It should be noted that the MR shows clear quantum oscillations in fields above 10 T. The oscillations have two frequencies, at $F_1$$\approx$26 T and $F_2$$\approx$55 T in the frequency spectrum. From the angle dependence of $F_1$ and $F_2$ and Berry phase calculations, we have already resolved the origin of these frequencies in our previous studies\cite{Shrestha2,Shrestha3}. The MR value depends strongly on $\theta$: it is maximum at $\theta$=0$^{o}$ and decreases gradually at higher $\theta$ values. Similarly, the MR decreases with increasing temperature, as shown in Fig. [2(b)]. At $T$=20 K, MR=1475\%, which is about three quarters of the MR value at $T$=0.33 K. Topological materials are expected to show a large linear magnetoresistance due to the Dirac-like dispersion of the surface states in the band structure\cite{Yan,Shrestha,Wang, Zhang}.
Thus, the observation of a large non-saturating magnetoresistance suggests the presence of topological surface states in the Bi$_2$Se$_{2.1}$Te$_{0.9}$ sample.\\ \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig3.jpg} \caption{(Color online) Magnetoconductance curves of a Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal as a function of (a) $B$ and (b) $B\cos\theta$ under fields up to 6 T at $T$=0.33 K.}\label{FFT} \end{figure} \indent In order to detect topological surface states in Bi$_2$Se$_{2.1}$Te$_{0.9}$, we have studied the magnetoconductance at different tilt angles, $\theta$. The WAL-induced quantum corrections to the magnetoconductance can be obtained as\cite{He} \begin{equation} \Delta G (\theta, B)=1/{\rho}_{xx}(\theta,B)-1/{\rho}_{xx}(\theta=90^{o},B), \end{equation}\\ where $\rho_{xx}(\theta,B)$ and $\rho_{xx}(\theta=90^{o},B)$ are the resistivity values at a given $\theta$ value and at $\theta$=90$^{o}$, respectively. Figure [3(a)] shows $\Delta G (\theta, B)$ of the Bi$_2$Se$_{2.1}$Te$_{0.9}$ crystal measured along different tilt angles at $T$=0.33 K. The magnetoconductance shows a strong dependence on the $\theta$ values. All of the magnetoconductance curves merge together when they are plotted as a function of the normal component of the magnetic field, $B\cos\theta$, as shown in Fig. [3(b)]. This provides strong evidence of the dominance of topological surface states in the magnetoconductance of the Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal. This observation is consistent with the previous conclusions at low magnetic fields\cite{Shrestha2,Shrestha3}. \\ \indent For a deeper understanding of the WAL effect observed in Bi$_2$Se$_{2.1}$Te$_{0.9}$, we have used the Hikami-Larkin-Nagaoka (HLN) formula\cite{Hikami} and determined various physical parameters. According to the HLN formula, the magnetoconductance can be described as\\ \begin{equation}\label{Hikami} \Delta G(\theta=0, B)=-\alpha\frac{e^2}{2\pi^{2}\hbar}\bigg[\Psi\bigg(\frac{1}{2}+\frac{\hbar}{4eL^{2}_{\phi}B}\bigg)-\ln\bigg(\frac{\hbar}{4eL^{2}_{\phi}B}\bigg)\bigg]. \end{equation}\\ \noindent Here $\Psi$ is the digamma function, and $L_{\phi}$ is the phase coherence length, i.e., the distance traveled by an electron before its phase coherence is lost. The parameter $\alpha$=0.5 for a single coherent conduction channel. Equation (2) can be applied directly to samples that show semiconducting-like behavior. In the case of metallic samples, however, the magnetoconductance has to be calculated per conduction channel, i.e. $\sigma=\Delta G/Z^{*}$, where $Z^*$ is the number of conduction layers\cite{Chiatti}. Following Chiatti $et$ $al.$\cite{Chiatti}, one 2D layer contributes a conductance value of $\sim$ $e^2/h$; one such 2D layer corresponds to about 2 quintuple layers, with a thickness of $\sim$2 nm. Thus, the number of conduction layers can be calculated as $Z^*$=$t$/(2 nm), where $t$ is the sample thickness. Using equation (2) with our experimental data, the fitting parameters $L_{\phi}$ and $\alpha$ can be determined. Figure [4(a)] shows the $\sigma$ vs $B$ data at $T$=0.33 K in a low-field range (-1 to 1 T). The magnetoconductivity data are well described by the HLN formula, as shown by the dashed curve. The fit yields $L_{\phi}$=25.5 nm and $\alpha$=0.54 at $T$=0.33 K. These values are comparable to those previously reported for other topological systems\cite{Xu1, Checkelsky, Chiu}.
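In practice, the fit of Eq.~(\ref{Hikami}) is readily implemented numerically. The following Python sketch (ours, using SciPy's digamma function and a synthetic data set generated from the fitted parameters quoted above) illustrates the procedure; it is not the analysis code behind Fig. [4(a)].
\begin{verbatim}
import numpy as np
from scipy.special import digamma
from scipy.optimize import curve_fit

E, HBAR = 1.602176634e-19, 1.054571817e-34   # SI units

def hln(B, alpha, l_phi_nm):
    # WAL correction of Eq. (2), in siemens; B in tesla, L_phi in nm
    l_phi = l_phi_nm * 1e-9
    b_phi = HBAR / (4.0 * E * l_phi**2)
    pref = -alpha * E**2 / (2.0 * np.pi**2 * HBAR)
    return pref * (digamma(0.5 + b_phi / np.abs(B))
                   - np.log(b_phi / np.abs(B)))

# synthetic "data" with alpha = 0.54, L_phi = 25.5 nm, plus noise
B = np.linspace(0.02, 1.0, 50)
rng = np.random.default_rng(0)
dG = hln(B, 0.54, 25.5) + 1e-7 * rng.normal(size=B.size)

popt, _ = curve_fit(hln, B, dG, p0=(0.5, 20.0))
print("alpha = %.2f, L_phi = %.1f nm" % tuple(popt))
\end{verbatim}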
In order to explore the robustness of the surface conduction, we have calculated the parameters $L_{\phi}$ and $\alpha$ at different temperatures, as shown in Fig. [4(b, c)]. Both the parameter $\alpha$ and the phase coherence length remain independent of temperature up to $T$=5 K. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig4.jpg} \caption{(Color online) (a) Magnetoconductance curve of a Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal within the (-1 to 1 T) field range at $\theta$=0$^{o}$. The dashed curve shows the best fit obtained using the HLN formula. (b, c) Temperature dependence of $\alpha$ and of the phase coherence length $L_{\phi}$, respectively.}\label{Fig4} \end{figure} \section*{Conclusion} \indent One of the biggest challenges in transport studies of bismuth-based topological systems is the bulk conduction, which interferes with the surface conduction channel. Since the bulk contribution is larger than that of the surface, it is challenging to detect the surface-state signal and study its properties by transport measurements. In our previous works\cite{Shrestha2,Shrestha3}, we have separated the surface-state signal by detailed quantum oscillation analyses. In this work, we have used another transport measurement technique, the weak antilocalization (WAL) effect, to detect surface states in a metallic Bi$_2$Se$_{2.1}$Te$_{0.9}$ single crystal. The WAL curves at different tilt angles with respect to the magnetic field scale with the normal component of the magnetic field, further confirming the dominance of topological surface states in the magnetoconductivity of the Bi$_2$Se$_{2.1}$Te$_{0.9}$ sample. In order to investigate the WAL effect further, we have applied the Hikami-Larkin-Nagaoka formula to the magnetoconductivity data and determined various physical parameters. We have obtained $\alpha$=0.54 and a phase coherence length $L_{\phi}$ $\sim$ 25 nm at $T$=0.33 K. The values of $\alpha$ and $L_{\phi}$ remain almost constant with increasing temperature up to $T$ = 5 K. In addition to the WAL effect, the Bi$_2$Se$_{2.1}$Te$_{0.9}$ sample shows a large positive magnetoresistance that reaches 1900\% under 35 T at $T$=0.33 K without any sign of saturation. The large magnetoresistance of Bi$_2$Se$_{2.1}$Te$_{0.9}$ makes it a suitable candidate for future electronic applications such as sensors, spintronics, and memory devices.\\ \section*{acknowledgements} This work is supported in part by the U.S. Air Force Office of Scientific Research Grant FA9550-15-1-0236, the T. L. L. Temple Foundation, the John J. and Rebecca Moores Endowment, and the State of Texas through the Texas Center for Superconductivity at the University of Houston. V. Marinova acknowledges support from the Bulgarian Science Fund project DN 08/9. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1157490 and the State of Florida. The work at Idaho National Laboratory is supported by the Department of Energy, Office of Basic Energy Sciences, Materials Sciences, and Engineering Division.
\section{Conclusions} In this paper, we proposed a novel probabilistic framework for extracting hypernym subsequences from individual hypernymy relations. We also presented a minimum cost-flow optimization approach to taxonomy induction from a noisy hypernym graph. We demonstrated that our subsequence-based approach outperforms state-of-the-art taxonomy induction approaches that utilize individual hypernymy edge features. Unlike previous approaches, our taxonomy induction approach is robust to the significant presence of noise in the input terminology. It also provides a user-defined parameter for controlling the accuracy and coverage of terms and edges in output taxonomies. As a consequence, our approach is applicable to arbitrary domains without any manual intervention, thus truly automating the process of taxonomy induction. \label{sec:conc} \section{Introduction} \label{sec:intro} \paragraph{\textbf{Motivation.}}Lexical semantic knowledge in the form of term taxonomies has been beneficial in a variety of NLP tasks, including inference, textual entailment, question answering and information extraction~\citep{biemann2005ontology}. This widespread utility of taxonomies has led to multiple large-scale manual efforts towards taxonomy induction, such as WordNet~\citep{miller1995wordnet} and Cyc ~\citep{lenat1995cyc}. However, such manually constructed taxonomies suffer from low coverage~\citep{hovy2009toward} and are unavailable for specific domains or languages. Therefore, in recent years, there has been substantial interest in extending existing taxonomies automatically or building new ones ~\citep{snow2006semantic,yang2009metric,kozareva2010semi,velardi2013ontolearn,task17semeval2015,task13semeval2016}. Approaches towards automated taxonomy induction consist of two main stages: \begin{enumerate} \item \textbf{extraction of hypernymy relations} (i.e., ``is-a" relations between a term and its hypernym such as \textit{apple}$\rightarrow$\textit{fruit}) \item \textbf{ structured organization of terms into a taxonomy}, i.e., a coherent tree-like hierarchy. \end{enumerate} Extraction of hypernymy relations has been relatively well-studied in previous works. Its approaches can be classified into two main categories: \textit{Distributional} and \textit{Pattern-based} approaches. \textit{Distributional} approaches use clustering to extract hypernymy relations from structured or unstructured text. Such approaches draw primarily on the distributional hypothesis~\citep{harris1954distributional}, which states that semantically similar terms appear in similar contexts. The main advantage of distributional approaches is that they can discover relations not directly expressed in the text. \begin{figure*}[t] \centering \includegraphics[width=0.8\linewidth]{figures/embed_process.pdf} \caption{Traditional process for taxonomy induction from a domain-specific corpus~\cite{velardi2013ontolearn}.} \label{fig:process} \end{figure*} In contrast, \textit{Pattern-based} approaches utilize pre-defined rules or lexico-syntactic patterns to extract terms and hypernymy relations from text~\citep{hearst1992automatic,oakes2005using}. Patterns are either chosen manually~\citep{hearst1992automatic,kozareva2008semantic} or learnt automatically via bootstrapping~\citep{snow2004learning}. Pattern-based approaches usually result in higher accuracies. However, unlike the distributional approaches, which are fully unsupervised, they require a set of seed surface patterns to initiate the extraction process. 
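To make the idea of lexico-syntactic patterns concrete, a toy version of the classic Hearst pattern ``$X$ such as $Y_1$, $Y_2$ and $Y_3$''~\citep{hearst1992automatic} can be implemented with a single regular expression, as in the Python sketch below. This illustrative fragment is ours and is far simpler than the pattern sets used in the cited systems; in particular, it takes only the single token preceding ``such as'' as the hypernym.
\begin{verbatim}
import re

HEARST = re.compile(
    r"(\w+)\s+such\s+as\s+"
    r"((?:\w+(?:,\s*|\s+(?:and|or)\s+))*\w+)")

def hearst_pairs(text):
    # returns (term, hypernym) candidates, e.g. (apple, fruit)
    pairs = []
    for hyper, conj in HEARST.findall(text):
        for term in re.split(r",\s*|\s+(?:and|or)\s+", conj):
            pairs.append((term, hyper))
    return pairs

print(hearst_pairs("They sell fruits such as apples, pears and oranges."))
# -> [('apples', 'fruits'), ('pears', 'fruits'), ('oranges', 'fruits')]
\end{verbatim}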
Early work on the second stage of taxonomy induction, namely the structured organization of terms into a taxonomy, focused on extending existing partial taxonomies such as WordNet by inserting missing terms at appropriate positions~\cite{widdows2003unsupervised,snow2006semantic,yang2009metric}. Another line of work focused on taxonomy induction from Wikipedia by exploiting the semi-structured nature of the Wikipedia category network~\cite{suchanek2007yago,ponzetto2008wikitaxonomy,ponzetto2011taxonomy,nastase2010wikinet,flati2016multiwibi,guptarevisiting}. Subsequent approaches to taxonomy induction focused on building lexical taxonomies entirely \textit{from scratch}, i.e., from a domain corpus or the Web~\cite{kozareva2010semi,navigli2011graph,velardi2013ontolearn,bansal2014structured,alfarone2015unsupervised,panchenko2016taxi}. Automated taxonomy induction from scratch is preferred because it can be used over arbitrary domains, including highly specific or technical domains, such as Finance or Artificial Intelligence~\cite{navigli2011graph}. Such domains are usually under-represented in existing taxonomic resources. For example, WordNet is limited to the most frequent and the most important nouns, adjectives, verbs, and adverbs~\cite{gurevych2010expert,nakashole2012patty}. Similarly, Wikipedia is limited to popular entities~\cite{kliegr2014linked}, and its utility is further diminished by slowed growth~\cite{suh2009singularity}. Past approaches to taxonomy induction from scratch either assume the availability of a clean input vocabulary~\cite{panchenko2016taxi} or employ a time-consuming manual cleaning step over a noisy input vocabulary~\cite{velardi2013ontolearn}. For example, Figure~\ref{fig:process} shows the pipeline of a typical taxonomy induction approach from a domain corpus~\cite{velardi2013ontolearn}. An initial noisy vocabulary is automatically extracted from the domain corpus using a term extraction tool, such as \textit{TermExtractor}~\citep{sclano2007termextractor}, and is further cleaned manually to produce the final vocabulary. This requirement severely limits the applicability of such approaches in an automated setting because clean vocabularies are usually unavailable for specific domains. To handle these limitations, we designed our approach to induce a taxonomy directly from a noisy input vocabulary. Consequently, it is the first work to fully automate the taxonomy induction process for arbitrary domains. \paragraph{\textbf{Contributions.}} In this paper, we present a novel, semi-supervised approach for building lexical taxonomies given an input vocabulary of (potentially noisy) seed terms. We leverage the existing work on hypernymy relations extraction and focus on the second stage, i.e. the organization of terms into a taxonomy. Our main contributions are as follows: \begin{itemize} \item We propose a novel probabilistic framework for extracting longer hypernym subsequences from hypernymy relations, as well as a novel minimum-cost flow based optimization framework for inducing a tree-like taxonomy from a noisy hypernym graph. \item We empirically show that our approach outperforms state-of-the-art taxonomy induction approaches across four different languages, while achieving $>$32\% relative improvement in F1-measure over the Food domain. 
\item We demonstrate that our subsequence-based model is robust to the presence of noisy terms in the input vocabulary, and achieves a 65\% relative improvement in precision over an edge-based model while maintaining similar coverage. To the best of our knowledge, this is the first approach towards taxonomy induction from a noisy input vocabulary. \end{itemize} The rest of the paper is organized as follows. In Section~\ref{sec:tax}, we describe our taxonomy induction approach. In Section~\ref{sec:eval}, we discuss our experiments and performance results. In Section~\ref{sec:related}, we discuss related work. We conclude in Section~\ref{sec:conc}. \subsection{Evaluation with Noisy Vocabulary} \label{sec:noisy} In the previous experiment, we performed taxonomy induction under the simplifying assumption that a clean input vocabulary of relevant domain terms is available. However, as explained in Section~\ref{sec:intro}, in practice, this assumption is rarely satisfied for most domains. Hence, in this experiment, we evaluate the performance of SubSeq in the presence of significant noise in the input vocabulary. TAXI is inapplicable in this setting, as it assumes a clean input vocabulary consisting of both leaf and non-leaf terms. Instead, we compare SubSeq against a baseline, which is an edge-based variant of SubSeq. \paragraph{\textbf{Setup.}} We first build a corpus of relevant documents for the food domain by collecting all English Wikipedia articles with titles matching at least one seed term (post lemmatization) in the SemEval food vocabulary. In total, 1,344 matching Wikipedia articles are found from the initial set of 1,555 seed terms. We run \textit{TermSuite}~\citep{cram2016termsuite}, a state-of-the-art term extraction approach, to extract an initial terminology of 12,645 terms. All terms with occurrence counts $<5$ in the corpus are removed, resulting in a final terminology of 3,977 terms. The final terminology contains numerous noisy terms that are not food items, such as \textit{South Asia} and \textit{triangular}. We now describe the edge-based baseline, hereafter referred to as \textit{TopEdge}, which extracts individual hypernym edges for terms in the vocabulary. TopEdge is identical to SubSeq, except that rather than extracting hypernym subsequences, it extracts for each term the direct hypernym with the highest hypernym probability $\text{Pr}_e(x_1,x_2)$ (cf. Equation~\ref{eqn:features}). It starts with the seed terms, and recursively extracts hypernyms for terms that do not yet have a hypernym, for a fixed number of iterations. The aggregation and taxonomy construction steps are identical to SubSeq (cf. Sections~\ref{sec:agg} and~\ref{sec:flow}). Since the only difference between SubSeq and TopEdge is the extraction of hypernym subsequences compared to individual hypernym edges, this experiment also serves to evaluate the utility of extracting hypernym subsequences. \paragraph{\textbf{Evaluation Results.}}We compare the quality of the taxonomies induced by TopEdge and SubSeq against the sub-hierarchy of WordNet rooted at \textit{food} as the gold standard. More specifically, we compute two metrics, i.e., \textit{term precision} and \textit{edge precision}. Term precision of a taxonomy is computed over the set of input vocabulary terms retained by the taxonomy, as the ratio of the number of retained terms that belong to the food sub-hierarchy of WordNet to the total number of retained terms present in WordNet.
Edge precision is computed as the ancestor precision: all nodes from the taxonomy that are not present in WordNet are removed, and precision is computed on the hypernymy relations from the initial vocabulary to the root\footnote{Trivial edges $t\rightarrow$\textit{food} are ignored for all terms $t$.}. Figures~\ref{fig:alpha3} and~\ref{fig:alpha2} show the term precision and edge precision for the TopEdge and SubSeq taxonomy induction methods for varying values of the required coverage, i.e., $\alpha$ (cf. Section~\ref{sec:flow}). Both term and edge precision scores for SubSeq are significantly higher than for TopEdge across all values of $\alpha$, hence demonstrating the utility of hypernym subsequences. For both methods, precision scores decrease as $\alpha$ increases. This behavior is expected, because, as $\alpha$ increases, additional potentially noisy seed terms are included in the output taxonomies. Figure~\ref{fig:tax} shows a section of the SubSeq taxonomy for $\alpha$=$0.9$. We also performed a manual evaluation to judge the quality of the taxonomic edges that are \textit{not} present in WordNet. Two authors independently annotated 100 such edges each from the TopEdge and SubSeq taxonomies for $\alpha$$=$$0.5$. The precision for SubSeq was found to be 86\% compared to 52\% for TopEdge, with a high inter-annotator agreement (0.68). Both evaluations show that the precision of SubSeq taxonomies is quite high, thus demonstrating the efficacy of SubSeq in inducing taxonomies from noisy terminologies. When $\alpha$$=$$1$, i.e., all input terms are included in the final taxonomy, term precision is 30\%, indicating that only 30\% of the terms extracted by the terminology extraction algorithm belong to the WordNet food sub-hierarchy. In contrast, the term precision for the original seed terms provided by SemEval is 75.8\%, hence confirming the presence of significant noise in the output of the terminology extraction approach. Overall, this experiment demonstrates that SubSeq is an effective approach towards taxonomy induction under the presence of significant noise in input terminologies. It also shows that the extraction of hypernym subsequences is beneficial and results in significantly more accurate taxonomies. \paragraph{\textbf{Parameter Sensitivity.}} We now discuss the effect of parameters on the efficacy of subsequence extraction. To this end, we first construct a gold standard by randomly sampling a set of 100 terms from the food domain and extracting their generalization paths from WordNet. For a set of parameters, we run subsequence extraction and compute the precision and recall averaged over the top-5 paths per term. The parameters we focus on are: the subsequence length ($n$), the number of hypernyms used ($k$), and the rank-penalty (${\lambda}_1$) (cf. Equations~\ref{eqn:4} and~\ref{eqn:5}). Figure~\ref{fig:prbysl} shows the precision/recall values for varying subsequence lengths (before the expansion phase). Precision decreases and recall increases as the subsequence length increases. This can be intuitively explained by the observation that candidate hypernyms (cf. Table~\ref{tab:apple_hyp}) usually only contain hypernyms up to 3--4 levels of generality. Hence, longer subsequences would typically drift from the original term, thus causing a loss of precision. Figure~\ref{fig:prbyk} shows the effect of the number of candidate hypernyms used ($k$) for subsequence extraction. As $k$ increases, both precision and recall increase initially, but drop afterwards.
This shows the benefit of utilizing lower-ranked hypernyms for subsequence extraction. However, it also illustrates the significant noise present in candidate hypernyms beyond a certain $k$. Figure~\ref{fig:prbyl} shows the effect of the rank-penalty (${\lambda}_1$), the parameter used to penalize candidate hypernyms with lower frequency counts. Both precision and recall are low for lower values of ${\lambda}_1$ and peak at ${\lambda}_1$$=$$0.95$. We also evaluated the sensitivity to other parameters. We found that subsequence extraction is fairly stable across different values of beam width and length penalty (${\lambda}_2$). Moreover, we observed that the number of subsequences per term ($b$ in Equation~\ref{eqn:2}) is also inconsequential beyond a value of $4$, as irrelevant subsequences are filtered out by domain filtering (cf. Section~\ref{sec:tax}). \section{Related Work} \label{sec:related} Taxonomy induction is a well-studied task, and multiple different lines of work have been proposed in the prior literature. Early work on taxonomy induction aims to extend existing partial taxonomies (e.g., WordNet) by inserting missing terms at appropriate positions.~\citet{widdows2003unsupervised} places the missing terms in regions with the most semantically similar neighbors.~\citet{snow2006semantic} use a probabilistic model to attach novel terms in an incremental greedy fashion, such that the conditional probability of a set of relational evidence given a taxonomy is maximized.~\citet{yang2009metric} cluster terms incrementally using an ontology metric learnt from a set of heterogeneous features such as co-occurrence, context, and lexico-syntactic patterns. A different line of work aims to exploit collaboratively-built semi-structured content such as Wikipedia for inducing large-scale taxonomies. Wikipedia links millions of entities (e.g., \textit{Johnny Depp}) to a network of inter-connected categories of different granularity (e.g., \textit{Hollywood Actors}, \textit{Celebrities}). WikiTaxonomy \cite{ponzetto2007deriving,ponzetto2008wikitaxonomy} labels these links as hypernymy or non-hypernymy, using a cascade of heuristics based on the syntactic structure of Wikipedia category labels, the topology of the network and lexico-syntactic patterns for detecting subsumption and meronymy, similar to Hearst patterns~\cite{hearst1992automatic}. WikiNet \cite{nastase2010wikinet} extends WikiTaxonomy by expanding non-hypernymy relations into fine-grained relations such as \textit{part-of, located-in, etc}. YAGO induces a taxonomy by employing heuristics that link Wikipedia categories to corresponding synsets in WordNet \cite{hoffart2013yago2}. More recently,~\citet{flati2016multiwibi} and~\citet{gupta2017280} propose approaches towards multilingual taxonomy induction from Wikipedia, resulting in taxonomies for over 270 languages. However, as pointed out by~\citet{hovy2013collaboratively}, these taxonomy induction approaches are non-transferable, i.e., they only work for Wikipedia, because they employ lightweight heuristics that exploit the semi-structured nature of Wikipedia content. Although taxonomy induction approaches based on external lexical resources achieve high precision, they usually suffer from incomplete coverage over specific domains.
To address this issue, another line of work focuses on building lexical taxonomies automatically from a domain-specific corpus or the Web.~\citet{kozareva2010semi} start from an initial set of root terms and basic level terms and use Hearst-like lexico-syntactic patterns recursively to harvest new terms from the Web. Hypernymy relations between terms are induced by searching the Web again with surface patterns. The graph of extracted hypernyms is subsequently pruned using heuristics based on the out-degree of nodes and the path lengths between terms.~\citet{velardi2013ontolearn} extract hypernymy relations from textual definitions discovered on the Web, and further employ an optimal branching algorithm to induce a taxonomy. More recently,~\citet{task17semeval2015,task13semeval2016} introduced the first shared tasks on open-domain taxonomy extraction, thus providing a common ground for evaluation. INRIASAC, the top system in the 2015 task, uses features based on substrings and co-occurrence statistics~\citep{grefenstette2015inriasac}, whereas TAXI, the top system in the 2016 task, uses lexico-syntactic patterns, substrings and focused crawling~\citep{panchenko2016taxi}. In contrast to taxonomy induction approaches which use external resources, taxonomy induction approaches from a domain corpus or the Web typically face two main obstacles. First, they assume the availability of a clean input vocabulary of seed terms. This requirement is not satisfied for most domains, thus requiring a time-consuming manual cleaning of noisy input vocabularies. Second, they ignore the relationship between terms and senses. In contrast, taxonomies induced from WordNet or Wikipedia produce different hypernyms for each sense of the term \textit{apple} (e.g., \textit{apple} is a \textit{fruit} or a \textit{company}). To tackle the second obstacle, taxonomy induction approaches from a domain corpus employ domain filtering to perform implicit sense disambiguation. This is done by removing hypernyms corresponding to domain-irrelevant senses of the terms~\cite{velardi2013ontolearn}. Although taxonomies should ideally contain senses rather than terms, term taxonomies have shown significant efficacy in a variety of NLP tasks~\cite{biemann2005ontology,velardi2013ontolearn,bansal2014structured}. To put it in context, our approach is similar to the previous attempts at inducing taxonomies without using external resources such as WordNet or Wikipedia. One key differentiator, however, is that it is robust to the presence of significant noise in the input vocabulary, thus dealing with the first obstacle above. To deal with the second obstacle, our approach performs implicit sense disambiguation via domain filtering at two different steps: (i) domain filtering of subsequences (cf. Section~\ref{sec:agg}); (ii) assigning a lower cost to likely in-domain edges when applying the minimum-cost flow optimization (cf. Sections~\ref{sec:agg} and~\ref{sec:flow}). \section{Evaluation} \label{sec:eval} The aim of the empirical evaluation is to address the following questions: \begin{itemize}[leftmargin=0.35cm,noitemsep,topsep=1pt] \item How does our approach compare to the state-of-the-art approaches under the assumption of a clean input vocabulary? \item How does our approach perform on a noisy input vocabulary? \item What are the benefits of extracting longer hypernym subsequences compared to single hypernym edges? \end{itemize} \vspace{0.25cm} To this end, we perform two experiments.
In Section~\ref{sec:sota}, we compare our taxonomy induction approach against the state of the art, under the simplifying assumption of a clean input vocabulary. Evaluations are performed automatically by computing standard precision, recall and F1 measures against a gold standard. We then drop the simplifying assumption in Section~\ref{sec:noisy}, where we show that our taxonomy induction performs well even under the presence of significant noise in the input vocabulary. Evaluation is performed both manually as well as automatically against WordNet as the gold standard. We also demonstrate that the subsequence-based approach significantly outperforms an edge-based variant, thus demonstrating the utility of hypernym subsequences. In the remainder of this section, we use \textit{SubSeq} to refer to our approach towards taxonomy induction (cf. Section~\ref{sec:tax}). \subsection{Evaluation against the State of the Art} \label{sec:sota} \paragraph{\textbf{Setup.}}We use the setting of the SemEval 2016 task for taxonomy extraction~\citep{task13semeval2016}. The task provides 6 sets of input terminologies, related to three domains (food, environment and science), for four different languages (English, Dutch, French and Italian). The task requires participants to generate taxonomies for each (terminology, language) pair, which are further evaluated using a variety of techniques, including comparison against a gold standard. Except for a few restricted resources used to construct the gold standard, the participants are allowed to use external corpora for hypernymy extraction and taxonomy induction. Participants are compared against each other and against a high-precision string inclusion baseline. We compare SubSeq with TAXI, the system that achieved first place in all subtasks of the SemEval task~\citep{panchenko2016taxi}. TAXI harvests candidate hypernyms using substring inclusion and lexico-syntactic patterns from text corpora. It further utilizes an SVM trained with individual hypernymy edge features, such as frequency counts and substring inclusion, to classify edges as positive and negative. The positive edges are added to the taxonomy. \citet{panchenko2016taxi} also report that alternate configurations of TAXI with different term-level and edge-level features as well as different classifiers such as Logistic Regression, Gradient Boosted Trees, and Random Forest fail to provide improvements over their approach. In contrast to SubSeq, which discovers new hypernyms for the seed terms, the SemEval task makes the additional assumption that all the terms in the gold standard taxonomies (i.e., including leaf terms and non-leaf terms) are present in the input vocabulary. This would unfairly lower the performance of SubSeq, as SubSeq would find hypernyms that are possibly correct but not present in the gold standard. Hence, to ensure a fair comparison, we restrict the subsequence extraction and hypernym graph construction steps of SubSeq (cf. Section~\ref{sec:tax}) to candidate hypernyms present in the input vocabulary. Furthermore, since candidate hypernymy extraction is orthogonal to our work, we reuse the candidate hypernymy relations made available by TAXI. As a consequence, TAXI and SubSeq are identical in input data conditions as well as evaluation metrics, and only differ in the core taxonomy induction approach.
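Both systems are ultimately scored with edge-level precision, recall, and F1 against the gold standard. As a minimal sketch (a simplification for illustration, not the official SemEval scorer), these metrics can be computed over taxonomies represented as sets of (hyponym, hypernym) pairs:
\begin{verbatim}
def precision_recall_f1(pred_edges, gold_edges):
    # pred_edges, gold_edges: sets of (hyponym, hypernym) tuples
    tp = len(pred_edges & gold_edges)
    p = tp / len(pred_edges) if pred_edges else 0.0
    r = tp / len(gold_edges) if gold_edges else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
\end{verbatim}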
\paragraph{\textbf{Evaluation Results.}} \begin{table}[bt] \begin{tabular} {>{\scshape}r*{9}{>{\centering\arraybackslash}p{1.53em}}} \toprule & \multicolumn{3}{c}{\textsc{TAXI}} & \multicolumn{3}{c}{\textsc{SubSeq}} \\ \cmidrule(lr){2-4} \cmidrule(lr){5-7} & \textsc{P} & \textsc{R} & \textsc{F1} & \textsc{P} & \textsc{R} & \textsc{F1} \\ \midrule EN & 33.2 & 31.7 & 32.2 & \textbf{44.9} & \textbf{31.9} & \textbf{37.2}\\ NL & \textbf{48.0} & 19.7 & 27.6 & 42.3 & \textbf{20.7} & \textbf{27.9}\\ FR & 33.4 & 24.1 & 27.7 & \textbf{41.0} & \textbf{24.4} & \textbf{30.5}\\ IT & \textbf{53.7} & 20.7 & 29.1 & 49.0 & \textbf{21.8} & \textbf{29.9}\\ \bottomrule \end{tabular} \captionof{table}{Precision (P), Recall (R) and F1 metrics for TAXI vs. SubSeq across different languages. Results are aggregated over all domains per language.} \label{tab:lang_com} \end{table} Table~\ref{tab:lang_com} shows the language-wise precision, recall and F1 values computed against the gold standard for SubSeq and TAXI. Aggregated over all domains, SubSeq outperforms TAXI for all four languages. It achieves a $>$15\% relative improvement in F1 for English and a 7\% improvement overall. Both methods perform significantly better for English, which can be attributed to the higher accuracy of candidate hypernymy relations for English. Figure~\ref{fig:grouped_barplot} shows the performance of SubSeq compared to TAXI and the SemEval baseline across different domains and languages. SubSeq performs best for the food domain, where it outperforms TAXI across all the languages. SubSeq also performs best for English, where it outperforms TAXI across three of the four domains. In our experiments, we noticed that SubSeq achieves the largest improvements when a greater number of hypernym subsequences are found during the subsequence extraction step. For example, SubSeq achieves an average 32.23\% relative improvement in F1 over TAXI for the food domain, where, on average, 0.67 subsequences are found per term, compared to only 0.44 for the other domains. Similarly, SubSeq performs best for English datasets, where, on average, 1.09 subsequences are found per term, compared to only 0.32 for other languages. The variation in the number of extracted subsequences per term can be attributed to two factors: (i) the number of terms in the input vocabulary, and (ii) the number of candidate hypernymy relations available. Due to the assumption that all candidate hypernyms belong to the input vocabulary, the larger vocabularies of the food domain make it more likely for a candidate hypernym to be found, and hence for a subsequence to be extracted. In a similar fashion, the larger set of available candidate hypernyms for English ($\sim$65 million vs. $<$2.2 million for other languages) makes it more likely for a subsequence to be extracted for English datasets. Overall, this experiment shows that under the assumption of a clean input vocabulary, SubSeq is more effective than TAXI for most domains in English, and for domains with large vocabularies, such as food, in other languages. \begin{figure}[tb] \centering \includegraphics[width=0.8\linewidth]{figures/embed_grouped_barplot.pdf} \caption{Relative improvement \% in F1 for SubSeq, compared to TAXI (TX) and the SemEval Baseline (BL), for different domains and languages. $N$ is the average number of terms in the input vocabulary for that domain. \textit{Science eurovoc} datasets are shown separately, as they have significantly fewer input terms than other science datasets.
} \label{fig:grouped_barplot} \end{figure} \section{Taxonomy Induction} \label{sec:tax} Given a potentially-noisy vocabulary\footnote{In this work, we use terminology and vocabulary interchangeably.} of seed terms as an input, we define our goal as inducing a taxonomy consisting of these seed terms (and possibly other terms). This taxonomy is a directed acyclic graph with terms as the nodes and the edges indicating a hypernymy relationship between the terms. For our task, we assume the availability of a database of \textit{candidate} hypernymy relations. Multiple such resources have been compiled and made available publicly over the years. A prominent example of such a resource is WebIsA~\citep{seitner2016large}, a collection of more than 400 million hypernymy relations for English, extracted from the CommonCrawl web corpus using lexico-syntactic patterns. However, such resources come with a considerable number of noisy candidate hypernyms, typically containing a mixture of relations such as hyponymy, meronymy, synonymy and co-hyponymy. For example, WebIsA has more than 12,000 hypernyms for the term \textit{apple}, including noisy hypernyms such as \textit{orange}, \textit{everyone} and \textit{smartphone}. A sample set of candidate hypernyms and their occurrence frequencies for the term \textit{apple} taken from WebIsA is shown in Table~\ref{tab:apple_hyp}. Our approach to taxonomy induction consists of three main steps: \begin{enumerate} \item extracting hypernym subsequences for the given seed terms (Section~\ref{sec:hyp}), \item aggregating the extracted subsequences into an initial hypernym graph (Section~\ref{sec:agg}), \item pruning the hypernym graph using a minimum-cost flow approach to induce the final taxonomy (Section~\ref{sec:flow}). \end{enumerate} \subsection{Hypernym Subsequences Extraction} \label{sec:hyp} Unsupervised or semi-supervised approaches to taxonomy induction typically aim to extract \mbox{\textbf{single hypernym edges}} among terms from noisy candidate hypernyms \citep{kozareva2010semi,panchenko2016taxi}. In contrast, our approach consists of extracting \mbox{\textbf{hypernym subsequences}} (where a subsequence is a series of one or more individual hypernym edges). \begin{table}[t] \centering \footnotesize \begin{tabular}{cc} \toprule Candidate hypernym & Frequency \\ \midrule company & 5536 \\ fruit & 3898\\ apple & 2119\\ vegetable & 928 \\ orange & 797\\ tech company & 619 \\ brand & 463 \\ hardware company & 460 \\ technology company & 427 \\ food & 370 \\ \bottomrule \end{tabular} \caption{Candidate hypernyms for the term \textit{apple}.}% \label{tab:apple_hyp} \end{table} To motivate this, we first note that Table~\ref{tab:apple_hyp} includes hypernyms of \textit{apple} at different levels of generality, such as \textit{fruit} and \textit{food}. In fact, we observe this pattern in the candidate hypernyms of most terms. This suggests that we can leverage such information to extract not only the direct hypernyms of \textit{apple}, but also longer hypernym subsequences, such as \textit{apple}$\rightarrow$\textit{fruit}$\rightarrow$\textit{food}. \begin{figure}[t] \centering \includegraphics[width=0.7\linewidth]{figures/embed_wn_height_rank_tsv} \caption{Average rank and normalized frequency of WordNet edges vs. height of edge.
} \label{fig:rank_vs_height} \end{figure} This becomes even more important given the result by~\citet{velardi2013ontolearn}, who demonstrated that hypernym extraction becomes increasingly erroneous as the generality of terms increases, mainly due to the increase in term ambiguity. To further support this hypothesis, we perform an experiment where we first randomly sample 100 paths from WordNet. For each edge $a$$\rightarrow$$b$ in a sampled path, we plot the normalized frequency\footnote{Normalization is performed by dividing frequency counts by the maximum.} of ``$b$ as a candidate hypernym for $a$'' against the height of the edge, where frequencies are computed using lexico-syntactic patterns (cf. Table~\ref{tab:apple_hyp}). We also plot the average rank of $b$ among the candidate hypernyms of $a$, where candidate hypernyms are ranked by their normalized frequencies in decreasing order. Results of this experiment are shown in Figure~\ref{fig:rank_vs_height}. Since edges in WordNet are assumed to be ground truth, it is desirable that they have higher normalized frequencies and lower ranks. This small-scale experiment demonstrates that as the height of the edge increases, the normalized frequencies decrease whereas the average ranks increase. Therefore, the accuracy of pattern-based hypernymy detection decreases for more general terms that appear higher in generalization paths. Hence, for such terms, it makes sense not to base the hypernym selection solely on a noisy set of candidate hypernyms. We can potentially improve the accuracy of selected hypernyms for general terms (such as \textit{fruit}) by relying on extracted subsequences starting from more specific terms (such as \textit{apple}). Those subsequences would be evidenced by the less-noisy candidate hypernyms of the specific terms. In sum, extracting hypernym subsequences is both \textit{possible} and potentially \textit{beneficial}. The remainder of this section describes our model that realizes this intuition. \begin{figure}[t] \includegraphics[width=0.33\linewidth]{figures/embed_apple_hyps} \caption{An example DAG built using generalizations of the term \textit{apple}. } \label{fig:apple_hyp} \end{figure} \paragraph{\textbf{Model.}}We now describe our model for extracting hypernym subsequences for a given term. We begin with a general formulation using directed acyclic graphs (referred to as a DAG), and we make simplifying assumptions to derive a model for hypernym subsequences.
We use the following notations: \begin{itemize}[leftmargin=0.2cm,noitemsep,topsep=0pt] \setlength{\itemindent}{0.8em} \item $t_0$: a given seed term, e.g., \textit{apple}; \item $l_t$: lexical head of any term $t$, e.g., $l_t$=\textit{soup} for \mbox{$t$=\textit{chicken soup}}; \item $E$: Hypernym \underline{\textit{E}}vidence, i.e., the set of all the candidate hypernymy relations, in the form of 3-tuples (\textit{hyponym, hypernym, frequency}); \item $E_k(t)$: Hypernym \underline{\textit{E}}vidence for term $t$, i.e., the set of top-\underline{$k$} candidate hypernyms for term $t$ having the highest frequency counts (Table~\ref{tab:apple_hyp} shows a sample from $E_k(t)$ for $t$=\textit{apple}); \item $E_k(t, m)$: the \textit{m}$^{th}$-ranked candidate hypernym from $E_k(t)$, where $m\leq k$, and ranks are computed by sorting candidate hypernyms in decreasing order of frequency counts; \item $\text{sim}(t_i, t_j)$: a similarity measure between terms $t_i$ and $t_j$ estimated using evidence $E$; \item $G_{t}$: a DAG consisting of generalizations for a term $t$ (Figure~\ref{fig:apple_hyp} shows an example of a possible DAG for $t$$=$\textit{apple}). \end{itemize} \vspace{0.3cm} For a given term $t_0$, we define the goal of this step of our taxonomy induction approach as finding a DAG $\hat{G}_{t_0}$, which maximizes the conditional probability of $G_{t_0}$, given the evidence $E_k({t_0})$, for a fixed $k$: \begin{eqnarray} \hat{G}_{t_0}&=&\underset{G_{t_0}}{\text{argmax}}\;\text{Pr}(G_{t_0}|E_k({t_0})) \nonumber \\ &=&\underset{G_{t_0}}{\text{argmax}}\;\text{Pr}(E_k(t_0)|G_{t_0}) \times \text{Pr}(G_{t_0}) \label{eqn:1} \end{eqnarray} Due to the combinatorial nature of the search space of $G_{t_0}$, finding an exact solution to the above equation is intractable, even for a small $k$. Therefore, we make the following simplifying assumptions, which facilitate an efficient search through the search space of $G_{t_0}$: \begin{itemize}[leftmargin=0.2cm,itemsep=1pt,topsep=5pt] \setlength{\itemindent}{0.8em} \item$G_{t_0}$ can be approximated as a set of independent hypernym subsequences with possibly repeated hypernyms. In other words, $G_{t_0}=\bigcup_{i=1}^{b} S_{t_0}^i$ where $S_{t_0}^i$ is the $i^{\text{th}}$ subsequence and $b$ is a fixed constant. For example, the DAG shown in Figure~\ref{fig:apple_hyp} can be approximated as a set of three subsequences: (i) \textit{apple}$\rightarrow$\textit{fruit}$\rightarrow$\textit{food}, (ii) \textit{apple}$\rightarrow$\textit{hardware company}$\rightarrow$\textit{company}, and (iii) \textit{apple}$\rightarrow$\textit{technology company}$\rightarrow$\textit{company}. This assumption intuitively derives from the fact that any DAG can be represented by a finite number of subsequences. \item$\forall i$, the joint events $(E_k(t_0), S_{t_0}^i)$ are independent. Intuitively, this assumption implies that each subsequence independently contributes to the evidence $E_k(t_0)$. \item$\forall i$, the direct hypernyms of $t_0$ in $S_{t_0}^i$ are unique. In other words, for a candidate hypernym $h_c$ of a given term $t_0$, there is at most one subsequence with the first edge $t_0$$\rightarrow$$h_c$. Intuitively, this assumption implies that a candidate hypernym $h_c$ uniquely sense-disambiguates the term $t_0$, thus resulting in only one possible generalization subsequence. \end{itemize} \vspace{0.25cm} In conjunction, these assumptions imply that $G_{t_0}$ is composed of $b$ hypernym subsequences, where each subsequence independently attempts to generate $E_k({t_0})$.
Given these assumptions, Equation~\ref{eqn:1} transforms into: \begin{eqnarray} \hat{G}_{t_0}&=&\underset{\bigcup_{i=1}^{b} S_{t_0}^i}{\text{argmax}}\;\prod_{i=1}^{b}\text{Pr}(E_k({t_0})|S_{t_0}^i)\times \text{Pr}(S_{t_0}^i) \label{eqn:2} \end{eqnarray} \vspace{0.2cm} \paragraph{\textbf{Estimation.}}We now describe the estimation of $\text{Pr}(E_k({t_0})|S_{t_0}^i)$ and $\text{Pr}(S_{t_0}^i)$ for a hypernym subsequence ${S_{t_0}^i}$. In order to motivate the estimation of the conditional probability $\text{Pr}(E_k({t_0})|S_{t_0}^i)$, we start with an example. Consider a valid hypernym subsequence \textit{apple}$\rightarrow$\textit{fruit}$\rightarrow$\textit{food}$\rightarrow$\textit{substance}$\rightarrow$\textit{matter}$\rightarrow$\textit{entity} for the term \textit{apple} (whose candidate hypernyms are in Table~\ref{tab:apple_hyp}). At first sight, it might seem desirable for a candidate hypernym from $E_k(t_0)$ (e.g., \textit{fruit}) to have a high similarity with as many terms in the subsequence as possible. However, since the similarity measure is based on the hypernym evidence $E$, it is plausible that terms such as \textit{matter} and \textit{entity} have a low similarity with the candidate hypernym \textit{fruit}, simply because they are at a higher level of generality. To avoid penalizing such valid subsequences, we let the conditional probability $\text{Pr}(E_k({t_0})|S_{t_0}^i)$ be proportional to the maximum similarity possible between the candidate hypernym and \textit{any} term in the subsequence (e.g., for the candidate hypernym \textit{fruit}, the similarity is 1 as \textit{fruit} is in the subsequence). We aggregate those similarity values across the candidate hypernyms. More formally, assuming subsequence \mbox{$S_{t_0}^i$ = $t_0$$\rightarrow$$h_{i1}$$\rightarrow$$h_{i2}$\dots$h_{in}$}, where $n$ is the length of $S_{t_0}^i$, we compute the conditional probability as: \begin{eqnarray} \text{Pr}(E_k({t_0})|S_{t_0}^i)\propto\sum_{m=1}^{k}({\lambda}_1)^m\underset{j\in \lbrack 1,n\rbrack}{\max}\big(\text{sim}(E_k(t_0,m),h_{ij})\big)\label{eqn:4} \end{eqnarray} where $\lambda_1$ (a fixed parameter) serves as a rank-penalty to penalize candidate hypernyms with lower frequency counts.\\ \newline We now proceed to compute $\text{Pr}(S_{t_0}^i)$, the other constituent of Equation~\ref{eqn:2}. Towards that, we assume that $S_{t_0}^i$ is a collection of independent hypernym edges. Thus, $\text{Pr}(S_{t_0}^i)$ becomes the product of the individual edges' probabilities: \begin{eqnarray} \text{Pr}(S_{t_0}^i) \propto \text{Pr}_{e}({t_0},h_{i1})\times ({\lambda}_2)^n \prod_{j=1}^{n-1}\text{Pr}_{e}(h_{ij},h_{i(j+1)})\label{eqn:5} \end{eqnarray} where $\text{Pr}_{e}(x_1,x_2)$ is the probability of an individual hypernym edge $x_1$$\rightarrow$$x_2$ between terms $x_1$ and $x_2$; ${\lambda}_2$ is a length penalty parameter.
\newline Finally, we estimate $\text{Pr}_e(x_1,x_2)$ as a log-linear model using a set of features \textbf{\mbox{f}}, weighted by the learned weight vector \textbf{w}: \begin{eqnarray} \text{Pr}_e(x_1,x_2) &\propto& \exp\big(\textbf{w} \cdot \textbf{f}(x_1, x_2)\big)\label{eqn:features} \end{eqnarray} We also use this edge probability to compute the aforementioned similarity function ($\text{sim}$) as: \begin{eqnarray} \text{sim}(x_i,x_j) &=& \max\big(\text{Pr}_e(x_i, x_j), \text{Pr}_e(x_j, x_i)\big) \label{eqn:sim} \end{eqnarray} \newline Intuitively, $\text{Pr}(E_k(t_0)|S_{t_0}^i)$ promotes subsequences containing a larger number of candidate hypernyms from $E_k({t_0})$, whereas $\text{Pr}(S_{t_0}^i)$ promotes subsequences consisting of individual edges with a larger probability of hypernymy. \vspace{0.2cm} \paragraph{\textbf{Subsequence Extraction.}} After inserting Equations~\ref{eqn:4} and~\ref{eqn:5} into Equation~\ref{eqn:2} and taking the logarithm, the objective function becomes: \begin{eqnarray} \begin{aligned} &\hat{G}_{t_0}=\underset{\bigcup_{i=1}^{b} S_{t_0}^i}{\text{argmax}}\;\sum_{i=1}^{b} \Big[ \log \sum_{m=1}^{k}({\lambda}_1)^m \underset{j\in \lbrack 1,n\rbrack}{\max}\big(\text{sim}(E_k(t_0,m),h_{ij})\big) \\&+ \log\text{Pr}_e(t_0,h_{i1}) + n\log{\lambda}_2 + \sum_{j=1}^{n-1}\log\text{Pr}_e(h_{ij},h_{i(j+1)})\Big] \nonumber\label{eqn:6} \end{aligned} \end{eqnarray} This objective function leads to the following search algorithm for the extraction of subsequences: \begin{enumerate} \item For a given term $t_0$, iterate over all candidate hypernyms in $E_k(t_0)$. \item For each $h_c\in E_k(t_0)$, perform a depth-limited beam search over the space of possible subsequences by recursively exploring the candidate hypernyms of $h_c$ (i.e., $E_k(h_c)$). \item For each $h_c\in E_k(t_0)$, choose the subsequence $S$ with the highest score (i.e., $\log(\text{Pr}(E_k(t_0)|S)\times\text{Pr}(S))$). \item Choose the top-$b$ candidate hypernyms based on their corresponding subsequence scores. \end{enumerate} While, in theory, we can iterate over all candidate hypernyms in $E_k(t_0)$, in practice, we employ an alternative two-stage execution that significantly improves the running time and produces more meaningful subsequences: \vspace{0.2cm} \newline\leavevmode{\parindent=0.6em\indent}$\bullet$ \textit{Search phase}: Proceed as in the aforementioned steps. However, in the special case where a candidate hypernym $h_c$ is a compound term and its lexical head $l_{h_c}$ is also present in $E_k(t_0)$, skip $h_c$ in step (1) of the algorithm\footnote{Lexical heads of terms have consistently played a special role in taxonomy induction~\citep{ponzetto2011taxonomy,guptarevisiting}.}. For example, for $t_0$ $=$ \textit{apple}, the candidate hypernyms \textit{tech company}, \textit{software company} and \textit{hardware company} are skipped in step (1) due to the presence of \textit{company} in $E_k(t_0)$ (cf. Table~\ref{tab:apple_hyp}). \vspace{0.2cm} \newline\leavevmode{\parindent=0.6em\indent}$\bullet$ \textit{Expansion phase}: In this phase, we augment the subsequences extracted in the search phase to account for skipped compound terms. We focus on the case where the lexical head of the skipped compound terms occurs in a subsequence. In that case, we expand the incoming edge of the lexical head with zero or more of those compound terms.
For example, in the subsequence \textit{apple}$\rightarrow$\textit{company}$\rightarrow$\textit{organization}, a potential expansion of the edge \textit{apple}$\rightarrow$\textit{company} is: \textit{apple}$\rightarrow$\textit{American software company}$\rightarrow$\textit{software company}$\rightarrow$\textit{company}. However, special care has to be taken when generating these potential expansions. For example, the expansion \textit{apple}$\rightarrow$\textit{American software company}$\rightarrow$\textit{British software company}$\rightarrow$\textit{company} is invalid due to the co-hyponymy edge \textit{American software company}$\rightarrow$\textit{British software company}. In contrast, the expansion \textit{apple}$\rightarrow$\textit{American software company}$\rightarrow$\textit{software company}$\rightarrow\,$\textit{company} is a valid expansion. To avoid invalid expansions, we restrict the possible expansions to the case where the set of pre-modifiers of a compound term is a superset of its hypernym's pre-modifiers (e.g., \mbox{\{\textit{American, software}\}$\supset$\{\textit{software}\}}). We generate all possible expansions for each edge and rank them by averaging a TF-IDF-style metric across the pre-modifiers of compound terms in each expansion. Our aim in the ranking is two-fold: i) promoting pre-modifiers that frequently appear in the evidence $E_k(t_0)$, and ii) penalizing noisy pre-modifiers unrelated to $t_0$ that frequently occur in compound terms (e.g., \textit{several}, \textit{other}, etc.). Hence, we compute the TF score of a pre-modifier as its average frequency of occurrence in the candidate hypernyms $E_k(t_0)$. We compute IDF as the average frequency of occurrence of the pre-modifier in $E_k(t)$ for a random term $t$. Finally, we choose the top-ranked expansion per edge. To illustrate the result of the previous steps, we show in Table~\ref{tab:subseq} an example of extracted subsequences along with their expanded versions for the food domain. Intuitively, the two-stage execution serves to distinguish between two fundamentally different forms of generalization: \begin{enumerate}[leftmargin=0.8cm,itemsep=3pt,topsep=1pt] \item \textbf{type-based generalization}, which provides core types as generalizations (e.g., \textit{apple}$\rightarrow$\textit{company}$\rightarrow$\textit{organization}). \item \textbf{attribute-based generalization}, which enriches type-based generalization edges. For example, \textit{apple}$\rightarrow$\textit{American software company}$\rightarrow$\textit{software company}$\rightarrow$\textit{company} enriches the individual type-based edge \textit{apple}$\rightarrow$\textit{company}. \end{enumerate} In our experiments, models that distinguished between these two different forms of generalization consistently performed better than models that attempted to unify them. \paragraph{\textbf{Features.}} We now describe the edge features that we employ for estimating the probability of a hypernymy relation between two terms (cf. Equation~\ref{eqn:features}): \newline\leavevmode{\parindent=0.6em\indent}$\bullet$ \textit{Normalized Frequency Diff ($n_d$)}: Similar to~\cite{panchenko2016taxi}, this feature is an asymmetric hypernymy score based on frequency counts.
We compute $n_d(x_i, x_j)$ by first normalizing the frequency counts obtained for term $x_i$ (i.e., the counts in $E_k(x_i)$) as follows: $n_f(x_i, x_j) = \frac{\text{freq}(x_i,x_j)}{\underset{m}{\text{max}}\; \text{freq}(x_i,x_m)}$, where $\text{freq}(x_i,x_j)$ is the frequency count of candidate hypernym $x_j$ in $E_k(x_i)$. Further, we subtract the score in the opposite direction to downrank synonyms and co-hyponyms: $n_d(x_i, x_j) = n_f(x_i, x_j) - n_f(x_j, x_i)$. \newline\leavevmode{\parindent=0.6em\indent}$\bullet$ \textit{Generality Diff ($g_d$)}: We introduce a novel feature for explicitly incorporating the term generality (or abstractness) in our model. To this end, we first define the generality $g(t)$ of a term $t$ as the log of the number of distinct hyponyms present in all candidate hypernymy relations ($E$); i.e., $g(t) = \log(1 + \lvert \{x \mid x\rightarrow t \in E\} \rvert)$. We define the generality of an edge as the difference in generality between the hypernym and the hyponym: $g_e(x_i, x_j) = g(x_j) - g(x_i)$. \begin{table} \resizebox{0.5\textwidth}{!}{ \begin{tabular}{l} \toprule \textbf{Initial subsequences} \\ \midrule \textit{mortadella}$\rightarrow$\textit{sausage}$\rightarrow$\textit{meat}$\rightarrow$\textit{food}\\ \textit{laksa}$\rightarrow$\textit{soup}$\rightarrow$\textit{dish}$\rightarrow$\textit{food}\\ \toprule \textbf{Expanded subsequences} \\ \midrule \textit{mortadella}$\rightarrow$\textit{large Italian sausage}$\rightarrow$\textit{sausage}$\rightarrow$\textit{process meat}$\rightarrow$\textit{meat}$\rightarrow$\textit{food}\\ \textit{laksa}$\rightarrow$\textit{spicy noodle soup}$\rightarrow$\textit{noodle soup}$\rightarrow$\textit{soup}$\rightarrow$\textit{dish}$\rightarrow$\textit{food}\\ \end{tabular} } \caption{Examples of hypernym subsequences found during the search phase, and their expanded versions.} \label{tab:subseq} \vspace{-0.3cm} \end{table} \input{figures/taxonomy-fig} Intuitively, we aim to promote edges with the right level of generality and penalize edges that are either too general (e.g., \mbox{\textit{apple}$\rightarrow$\textit{thing}}) or too specific (i.e., edges between synonyms or co-hyponyms, such as \mbox{\textit{apple}$\rightarrow$\textit{orange}}). To realize this intuition, we first sample a random set of terms and collect the edges with the highest $n_d$ for these terms (hereafter referred to as \textit{top edges}). We compare the distribution of generality (i.e., $g_e$) for the top edges vs. the distribution of generality for a set of randomly sampled edges. The assumption is that it is more likely to sample the generality of a correct edge (i.e., an edge at the right level of generality) from the distribution of top edges than from that of random edges. Hence, given $D_t$ and $D_r$ as the Gaussian distributions estimated from the samples of generality for top edges and random edges respectively, we define the feature as: $g_d(x_i, x_j) = \text{Pr}_{D_t}\big(g_e(x_i, x_j)\big) - \text{Pr}_{D_r}\big(g_e(x_i, x_j)\big)$. \paragraph{\textbf{Parameter Tuning.}}We estimate the weights for the features (\textbf{w} in Equation~\ref{eqn:features}) using a support vector machine trained on a manually annotated set of 500 edges. For beam search in the search phase, we use a beam of width 20, and limit the search to subsequences of maximum length 4. We set the rest of the parameters by running grid-search over a manually-defined range of parameters using a small validation set\footnote{The validation set is excluded from the test set.}.
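To illustrate, the grid search can be sketched in a few lines of Python; the parameter ranges below are assumptions for illustration only, and \texttt{score\_on\_validation} is a hypothetical helper that runs subsequence extraction with a given setting and scores it on the validation set:
\begin{verbatim}
from itertools import product

# Illustrative ranges only; the actual grid was manually defined.
grid = {"k": [5, 10, 20], "b": [2, 4, 8],
        "lambda1": [0.85, 0.90, 0.95], "lambda2": [0.85, 0.90, 0.95]}

def grid_search(grid, score_on_validation):
    # Evaluate every combination and keep the best-scoring setting.
    settings = (dict(zip(grid, vals)) for vals in product(*grid.values()))
    return max(settings, key=score_on_validation)
\end{verbatim}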
The final values of the parameters are as follows: $k$$=$$10$, $b$$=$$4$, $\lambda_{1}$$=$$\lambda_{2}$$=$$0.95$. \subsection{Aggregation of Subsequences} \label{sec:agg} Up to now, we have described our methodology for generating hypernym subsequences starting from a given term. In this section, we aggregate the hypernym subsequences obtained for a set of seed terms, in order to construct an initial hypernym graph. For that, we undertake the following steps: \paragraph{\textbf{Domain Filtering.}} Given a term $t_0$, the usual case is that multiple hypernym subsequences corresponding to different senses of the term $t_0$ are extracted. For example, \textit{apple} can be a \textit{company} or a \textit{fruit}, thus resulting in the subsequences \textit{apple}$\rightarrow$\textit{fruit}$\rightarrow$\textit{food} and \textit{apple}$\rightarrow$\textit{software company}$\rightarrow$\textit{company}. However, many of these subsequences will not pertain to the domain of interest (as determined by the seed terms). To eliminate the irrelevant ones, we estimate a smoothed unigram model\footnote{We used a weighting function (i.e., a step function with cut-off at 50\% of the height of the subsequence) to favor terms at lower heights, as they are usually more domain-specific.} from all extracted subsequences, and we remove those with generation probabilities below a fixed threshold. \paragraph{\textbf{Hypernym Graph Construction.}} We now aggregate the filtered subsequences into an initial hypernym graph. We construct this graph by grouping together those edges from the filtered subsequences that share the same start and end nodes. The weight of each edge is computed as the sum of the scores of the subsequences it belongs to (i.e., $\log(\text{Pr}(E_k(t)|S)\times \text{Pr}(S))$). To increase the coverage for compound seed terms that do not yet have a hypernym, we simply add a hypernym edge to their lexical head with weight$=$$\infty$ (i.e., a very large value) whenever the lexical head is already present in the hypernym graph. Finally, for each cycle in the hypernym graph, we remove the edge with the smallest weight, hence resulting in a DAG. This DAG contains many noisy terms and edges, which are pruned in the next step of our approach. \subsection{Taxonomy Construction} \label{sec:flow} In this step, we aim to induce a tree-like taxonomy from the hypernym DAG obtained in the previous step. We cast this as an instance of the minimum-cost flow problem (MCFP). MCFP is an optimization problem that aims to find the cheapest way of sending a certain amount of flow through a flow network. It has been used to find the optimal solution in applications like the \textit{transportation problem}~\citep{klein1967primal}, where the goal is to find the cheapest paths to send commodities from a group of facilities to the customers via a transportation network. Analogously, we cast the problem of taxonomy induction as finding the cheapest way of sending the seed terms to the root terms through a carefully designed flow network $F$. We use the \textit{network simplex algorithm}~\citep{orlin1997polynomial} to compute the optimal flow for $F$, and we select all edges with positive flow as part of our final taxonomy. We now describe our method for constructing the flow network $F$. In what follows, we refer to Figure~\ref{fig:taxonomy-induction} throughout the different steps.
\paragraph{\textbf{Flow Network Construction.}} Let $V$ be the vocabulary of input seed terms (e.g., \textit{apple}, \textit{orange}, and \textit{Spain} in Figure~\ref{fig:taxonomy-induction}); $H$ is the noisy hypernym graph constructed in Section~\ref{sec:agg} (cf. Figure~\ref{fig:taxonomy-induction}(a)); $w(x, y)$ is the weight of the edge $x$$\rightarrow$$y$ in $H$; $D_x$ is the set of descendants of term $x$ in $H$ (e.g., \textit{apple} is a descendant of \textit{food}); $R$ is the set of given roots\footnote{If roots are not provided, a small set of upper terms can be used as roots~\citep{velardi2013ontolearn}.} (e.g., \textit{food} in Figure~\ref{fig:taxonomy-induction}). The construction of the flow network $F$ proceeds as follows (cf. Figure~\ref{fig:taxonomy-induction}(b)): \begin{enumerate}[label=\roman*),leftmargin=0.2cm,noitemsep,topsep=0pt] \setlength{\itemindent}{1.4em} \item For an edge $x$$\rightarrow$$y$ in $H$, add the edge $x$$\rightarrow$$y$ in $F$. Set the capacity ($c$) of the added edge as \mbox{$c(x,y)=\lvert D_x\cap V\rvert$}. Set the cost ($a$) of that edge as \mbox{$a(x,y)=1 / w(x,y)$}. \item Add a sentinel \textit{source} node $s$. $\forall v \in V$, add an edge $s$$\rightarrow$$v$ with $c(s,v)=a(s,v)=1$. \item Add a sentinel \textit{sink} node $t$. $\forall r \in R$, add an edge $r$$\rightarrow$$t$ with $c(r,t)=\lvert D_r\cap V\rvert$ and $a(r,t)=1$. \end{enumerate} \paragraph{\textbf{Minimum-cost Flow.}} Given a demand $d$ for the total flow to be sent from $s$ to $t$, the goal of MCFP is to find flow values ($f$) for each edge in $F$ that minimize the total cost of flow over all edges: $\underset{(u,v) \in F}{\sum}a(u,v)\cdot f(u,v)$. In our construction, the demand $d$ represents the maximum number of seed terms that can be included in the final taxonomy. Figures~\ref{fig:taxonomy-induction}(c) and~\ref{fig:taxonomy-induction}(d) show the minimum-cost flow for demands $d$$=$$3$ and $d$$=$$2$, respectively. In both cases, the edge \textit{apple}$\rightarrow$\textit{food} receives $f$$=$$0$ due to the presence of the edges \textit{apple}$\rightarrow$\textit{fruit} and \textit{fruit}$\rightarrow$\textit{food} with lower costs. For $d$$=$$2$, the edge \textit{source}$\rightarrow$\textit{Spain} has $f$$=$$0$, implying that the noisy term \textit{Spain} would be removed from the final taxonomy. Intuitively, the demand $d$ serves as a parameter for discarding potentially noisy terms in the input vocabulary. More formally, $d$ can be defined as $\alpha$$\lvert V\rvert$, where $\alpha$, a user-defined parameter, indicates the desired \textit{coverage} over the seed terms. If the vocabulary contains only accurate terms, $\alpha$ is set to 1. For a given $\alpha$, we run the network simplex algorithm with $d$$=$$\alpha$$\lvert V\rvert$ to compute the minimum-cost flow for $F$. The final taxonomy consists of all edges with flow $>0$.
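This construction maps directly onto off-the-shelf min-cost-flow solvers. The following minimal Python sketch uses \texttt{networkx} (an implementation choice for illustration, not part of the original method); it assumes the edge weights $w(x,y)$ of $H$ are positive, counts a seed term as belonging to its own descendant set when computing capacities, and scales the real-valued costs to integers, since the network simplex implementation expects integral weights:
\begin{verbatim}
import networkx as nx

def induce_taxonomy(H, vocab, roots, alpha, scale=10**6):
    # H: nx.DiGraph hypernym DAG with positive edge attribute "weight"
    # vocab: set of seed terms; roots: set of root terms in H
    desc = {x: nx.descendants(H, x) for x in H}   # D_x for every node
    F = nx.DiGraph()
    for x, y, data in H.edges(data=True):
        # capacity |D_x (incl. x itself) intersected with V|, cost 1/w(x,y)
        cap = len((desc[x] | {x}) & vocab)
        F.add_edge(x, y, capacity=cap, weight=int(scale / data["weight"]))
    for v in vocab & set(H):                      # source -> seed terms
        F.add_edge("SOURCE", v, capacity=1, weight=scale)
    for r in roots:                               # roots -> sink
        F.add_edge(r, "SINK", capacity=len(desc[r] & vocab), weight=scale)
    d = int(alpha * len(vocab & set(H)))          # demand ~ coverage * |V|
    F.add_node("SOURCE", demand=-d)
    F.add_node("SINK", demand=d)
    flow = nx.min_cost_flow(F)                    # network simplex inside
    return [(x, y) for x in flow for y, f in flow[x].items()
            if f > 0 and x != "SOURCE" and y != "SINK"]
\end{verbatim}
With $\alpha<1$, the solver is free to leave the hardest-to-connect (and typically noisiest) seed terms without flow, which is exactly how noisy vocabulary terms such as \textit{Spain} end up dropped from the final taxonomy.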
1,108,101,564,514
arxiv
\section{Introduction and motivation} Black holes are solutions of Einstein's general relativity that are constantly surprising. While black holes are classically black, in the quantum context they can radiate particles and have a well defined temperature, which allows us to treat them as thermodynamical objects. Bekenstein, Hawking, Bardeen, Carter and others nicely summed up the four laws of black hole thermodynamics in the 1970s \cite{Bekenstein1972,Bekenstein1973,Hawking1972,Hawking1974,Hawking1975,Bardeen1973}. \begin{itemize} \item The zeroth law: The horizon has a constant surface gravity for a stationary black hole. \item The first law: For perturbations of stationary black holes, the change of energy is related to the change of area, angular momentum, and electric charge by \begin{equation} dE =\frac{\kappa}{8\pi} dA +\Omega dJ +\Phi dQ \end{equation} where $E$ is the internal energy, equal to the mass $M$ in this case, $\kappa$ is the surface gravity, $A$ is the horizon area, $\Omega$ is the angular velocity, $J$ is the angular momentum, $\Phi$ is the electrostatic potential and $Q$ is the charge. \item The second law: The horizon area is a non-decreasing function of time: \begin{equation} \frac{dA}{dt}\geq 0. \end{equation} \item The third law: It is impossible to form a black hole with vanishing surface gravity. \end{itemize} These four laws directly correspond to the laws of classical thermodynamics, once the black hole temperature and entropy are assigned as \begin{equation} kT = \frac{\hbar \kappa}{2\pi} \text{, } S=\frac{A}{4\hbar G} \end{equation} where $k$ is the Boltzmann constant. However, these laws were formulated before the discovery of Hawking radiation. Once Hawking radiation is included, a black hole can reduce its area and hence reduce its entropy. Then, the second law must be modified to include the entropy of the black hole environment. In principle, the black hole environment depends on a cosmological model, but since we live in a de Sitter (dS) universe it is natural to include a cosmological constant in these laws, even though, in general, the physics of the black hole environment is not captured by the cosmological constant alone. Since there are many models in which the cosmological constant, $\Lambda$, is not a constant but can change in time (e.g. \cite{Henneaux1984,Caldarelli:1999xj}), we can treat the cosmological constant as a thermodynamical variable. A more general black hole first law of thermodynamics with a varying cosmological constant was proposed in \cite{Sekiwa:2006qj,Kastor:2009wy} as \begin{equation} d M = TdS +\Omega d J +\Phi dQ + V dP . \end{equation} Here, $P=\frac{-\Lambda}{8\pi G}$ and $V=\frac{4\pi r_h^3}{3}$, where $r_h$ is the black hole horizon radius. Thus, the cosmological constant $\Lambda$ is identified with pressure. This equation coincides with classical thermodynamics, in which $H=E+PV$, and therefore $M$ is considered to be the enthalpy in this proposal. This model has been extended to include extra variables and phase structures \cite{Dolan:2010ha,Kubiznak:2014zwa}, which further enrich the theory by assuming that the extra variables can be fine-tuned so that a phase transition is induced. However, virtually all of the previous extensions were based on modifications of the first law. Replacing the cosmological constant with pressure and discussing the evolution of a black hole alone (without taking the environment into account) is incomplete, since the cosmological constant cannot be tuned freely like pressure.
For a self-consistent description, one has to describe how these variables actually change dynamically. Without the entropy of the environment, the black hole entropy on its own is not a good indicator of whether a process is allowed or not. \begin{figure} \centering \includegraphics[width=5cm]{wall} \caption{A process in which a black hole changes its state. Initially, the black hole is in the state a. Then it emits (for simplicity) a spherically symmetric wave, and the system changes its state to b. The dashed line represents emitted matter or a field. The emitted matter keeps moving away from the black hole and the system changes its state to c. The matter completely escapes from the black hole and the state becomes d. This represents an emission process ($a\rightarrow b\rightarrow c\rightarrow d$). An absorption process can be obtained by reversing the order of the steps ($d\rightarrow c\rightarrow b\rightarrow a$). } \label{wall} \end{figure} In general, an isolated black hole can change its own state by emitting (or absorbing) matter/energy or a field (e.g. in Fig.~\ref{wall}). Eventually, the emitted matter escapes to infinity and the black hole becomes an isolated object once again. Without taking into account the middle steps (b and c in Fig.~\ref{wall}), all the information about the environment is neglected. The black hole entropy on its own can then increase or decrease, with no complete thermodynamical meaning. Here we want to study the complete system of the black hole plus its environment, in the context of a realistic model where the cosmological constant changes its value. We will verify that the total entropy of the system always increases. \section{Black hole induced vacuum decay} One of the ways to change the value of the cosmological constant is to tunnel from one de Sitter vacuum to another. [While we concentrate on a de Sitter space with positive cosmological constant, an identical analysis can be done in anti-de Sitter space with negative cosmological constant, which we will do at the end of the paper.] This tunneling can be spontaneously triggered by the presence of the black hole, and has been well studied in the literature \cite{Gregory:2013hja,Burda:2015yfa}. For our purpose, we adopt a simple scalar field model where the non-zero vacuum energy of the scalar field, $\varphi$, plays the role of the cosmological constant. The action of the model is \begin{equation} S=\frac{1}{16\pi G} \int_M R \sqrt{-g} d^4x +\int_M \left[\partial_\mu \varphi\partial^\mu\varphi -h(\varphi) \right]\sqrt{-g} d^4 x , \end{equation} where $R$ is the Ricci scalar, while $h(\varphi)$ is the scalar field potential. We set $\hbar=k=c=1$. To allow $\Lambda$ to change, we consider a potential like the one in Fig.~\ref{potential-f}. The scalar field is initially stuck in the false vacuum, $\varphi_f$, and then decays to the true vacuum, $\varphi_t$. The cosmological constant goes from $\Lambda_+ =8\pi G h(\varphi_f) $ to $\Lambda_- =8\pi G h(\varphi_t) $. Thus, the cosmological constant varies discretely in this model, and it can change only in one direction. A continuously varying $\Lambda$ model is possible by considering a quintessence-like potential, but for our purposes this discrete model will suffice. \begin{figure} \centering \includegraphics[width=8cm]{potential} \caption{The potential of the scalar field with two local minima. One is the false vacuum ($\varphi_f$) and the other is the true vacuum ($\varphi_t$).
} \label{potential-f} \end{figure} To simplify the discussion, we focus on a spherically symmetric metric as the background in which the scalar field $\varphi$ propagates \begin{eqnarray} ds^2 &=& -f(r) d t^2 +\frac{dr^2}{f(r)} +r^2 d\Omega \\ f(r) &=& 1-\frac{2GM_{\pm}}{r}-\frac{\Lambda_{\pm}r^2}{3} . \end{eqnarray} Here $M_+$ and $M_-$ are the values of the black hole mass before and after tunneling. In the process of the vacuum decay, a spherical bubble filled with the new vacuum is formed. This bubble is enveloped by a spherical domain wall separating the two vacua. Some of the original black hole mass/energy is invested into the phase transition, and its mass changes from $M_+$ to $M_-$. Thus, some of the energy of the domain wall comes from the energy of the original black hole, so the situation is similar to the configuration b in Fig.~\ref{wall}. To study the dynamics of the bubble (domain wall), we use the thin wall approximation. The energy density and wall tension depend on the potential \begin{equation} \sigma=\left| \int_{\varphi_f}^{\varphi_t} (2h)^{1/2} d\varphi \right| . \end{equation} The absolute value is taken to ensure that the energy density is positive and physical. The cosmological constant takes the values $\Lambda_-$ and $\Lambda_+$ inside and outside the bubble respectively, and also $f_-=1-\frac{2GM_{-}}{r}-\frac{\Lambda_{-}r^2}{3}$ and $f_+=1-\frac{2GM_{+}}{r}-\frac{\Lambda_{+}r^2}{3}$ inside and outside the bubble respectively. The equation of motion of the wall can be obtained from the junction condition \begin{equation} \label{Lorentz-motion} f_+ \dot{t}_+ - f_- \dot{t}_- =-4\pi G \sigma R , \end{equation} with $\dot{x} =dx/d\lambda$, where $\lambda$ is the proper time of an observer sitting on the wall enveloping the bubble. The wall is located at the radius $R$. Combining this equation with the normalization condition of the wall's trajectory in these coordinates, \begin{equation} f_{\pm} \dot{t}^2_{\pm} -\frac{\dot{R}_\pm^2}{f_\pm} =1, \end{equation} the equation of motion is simplified to \begin{equation} \Big(\frac{\dot{R}}{R}\Big)^2=\bar{\sigma}^2-\frac{\bar{f}}{R^2}+\frac{(\Delta f)^2}{16R^4\bar{\sigma}^2} . \end{equation} Here, $\bar{\sigma}=2\pi G \sigma$, $\bar{f}=(f_-+f_+)/2$ and $\Delta f=f_+-f_-$. The exact solution represents a bounce, i.e., the bubble first contracts and then expands. To describe the expanding case, the solution is usually cut at the bounce. Exactly at the bounce, the domain wall is generated with $\dot{R}_\pm=0$. Eq.~(\ref{Lorentz-motion}) then implies \begin{equation} \sqrt{f_-} -\sqrt{f_+} >0, \end{equation} or \begin{equation} M_+ - M_- > - \frac{R^{*3}}{6G} (\Lambda_+ -\Lambda_-). \end{equation} Here $R^*$ is the bubble radius of the bounce solution. Equivalently, $M_-$ must satisfy the inequality $ M_- < M_+ +\frac{R^{*3}}{6G} (\Lambda_+ -\Lambda_-)$. This implies that the final black hole (after the tunneling) can be either heavier or lighter than the initial black hole. Since $\sigma$ cannot be $0$, the actual constraint is more stringent. \begin{figure} \centering \includegraphics[width=8cm]{space} \caption{A bubble (domain wall) with radius $R$ is generated in Euclidean space. The entire space is between the black hole horizon, $r_h$, and the cosmological horizon, $r_c$. The coordinate $\tau$ is periodic, and its period is the inverse temperature. } \label{space} \end{figure} \section{The second law of thermodynamics for the whole system: black hole plus environment} In order to study the thermodynamical properties of the wall, one must perform the Wick rotation, $t=-i\tau$.
The action becomes the Euclidean action, and the metric becomes \begin{eqnarray} ds^2 &=& f(r) d\tau^2 +\frac{dr^2}{f(r)} +r^2 d\Omega . \end{eqnarray} The domain wall equation of motion is \begin{equation} -\Big(\frac{\dot{R}}{R}\Big)^2=\bar{\sigma}^2-\frac{\bar{f}}{R^2}+\frac{(\Delta f)^2}{16R^4\bar{\sigma}^2} . \end{equation} This equation can be written as \begin{equation} \label{EOM} \frac{\dot{R}^2}{2}+U=0 , \end{equation} with \begin{eqnarray} \label{potential} 2U&=& A R^2 -1+BR^{-1}+CR^{-4} \\ A&=&\bar{\sigma}^2 +\frac{1}{6}(\Lambda_+ +\Lambda_-)+\frac{(\Lambda_+-\Lambda_-)^2}{144\bar{\sigma}^2}\\ B&=&G(M_++M_-) +G\frac{(M_+-M_-)(\Lambda_+-\Lambda_-)}{12\bar{\sigma}^2}\\ C&=&G^2\frac{(M_+-M_-)^2}{4\bar{\sigma}^2} . \end{eqnarray} This equation is identical to that of a non-relativistic particle moving in a potential $U$ with total energy $0$. According to thermal quantum field theory, the period of $\tau$ is the inverse temperature, which can be read out from the period of motion of the object. This period can be calculated from $\int dR/\dot{R}$. In the case of small oscillations around the bounce condition (i.e., $U=U'=0$), a simple harmonic oscillator approximation can be applied. Eq.~(\ref{EOM}) is then approximated as \begin{eqnarray} &&\Delta E =\frac{\dot{R}^2}{2}+\frac{U''(R^*)}{2} \Delta R^2\\ &&U''(R)=A+BR^{-3}+10CR^{-6} , \end{eqnarray} where $R^*$ is the radius of the bubble in the bounce solution and $\Delta R=R-R^*$. The period of $\lambda$ is \begin{eqnarray} \beta_\lambda=\frac{2\pi}{\sqrt{U''}} . \end{eqnarray} The bubble temperature is \begin{eqnarray} T_{\pm} =\frac{\sqrt{f_{\pm}(R)}}{\beta_\lambda} =\frac{\sqrt{U'' f_\pm(R)}}{2\pi} . \end{eqnarray} Here, $T_+$ and $T_-$ are the values of the temperature at the outer and inner sides of the wall, respectively. In general, $\bar{\sigma}$ is determined by the scalar field potential. Since we do not have a precise form of $h(\varphi)$, we keep $\bar{\sigma}$ as a free parameter which will be determined by the initial and final states. We note from Eq.~(\ref{potential}) that $\bar{\sigma}$ cannot be $0$, which implies that there are no massless domain walls in this context. Here $R^*$ and $\bar{\sigma}$ are obtained from $M_{\pm}$ and $\Lambda_\pm$. The conditions $U=U'=0$ at $R^*$ read \begin{eqnarray} &&2AR^{*6}-BR^{*3}-4C=0\\ &&6AR^{*3}-4R^*+3B=0 . \end{eqnarray} If $M_+ =M_-$, there is a special solution $R^*=3G M_+$, where the value of $\bar{\sigma}$ determines whether this solution exists. The relation between $R^*$ and $T_+$ is shown in Fig.~\ref{radius}. Just like for black holes, a larger radius implies a lower temperature. There is a clear region of sharp change around $M_+=M_-$, which may be a sign of a phase transition. The same transition can also be seen from the relation between the energy density and temperature, shown in Fig.~\ref{sigma}, where a very quick transition appears around $M_-\approx M_+$. Physically, this transition may be explained as the critical point where a black hole releases or absorbs energy from the vacuum decay. \begin{figure} \centering \includegraphics[width=9cm]{radius} \caption{ The relation between the bubble radius and its temperature. The bubble temperature gets lower as the radius increases. We set $M_+=0.02$, $\Lambda_+=3$, $\Lambda_-=0.03$ and $G=1$, and change the black hole rest mass $M_-$.
The relation between $R^*$ and $T_+$ is shown in Fig.~\ref{radius}. Just like for black holes, a larger radius implies a lower temperature. There is a region of sharp change around $M_-=M_+$, which may be a sign of a phase transition. The same transition can also be seen in the relation between the energy density and the temperature, shown in Fig.~\ref{sigma}, where a very quick transition appears around $M_-\approx M_+$. Physically, this transition may be explained as the critical point where the black hole releases or absorbs energy in the vacuum decay. \begin{figure} \centering \includegraphics[width=9cm]{radius} \caption{ The relation between the bubble radius and its temperature. The bubble temperature decreases as the radius increases. We set $M_+=0.02$, $\Lambda_+=3$, $\Lambda_-=0.03$ and $G=1$, and vary the black hole rest mass $M_-$. } \label{radius} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{sigma} \caption{The relation between the bubble wall's energy density ($\bar{\sigma}$) and its temperature ($T_+$). There is a sharp change at the point where $M_-=M_+$, which might indicate a phase transition of some sort. Physically, this represents the critical point where the black hole releases or absorbs energy in the vacuum decay. We set $M_+=0.02$, $\Lambda_+=3$, $\Lambda_-=0.03$ and $G=1$, and vary the black hole rest mass $M_-$. } \label{sigma} \end{figure} Fig.~\ref{temperature} shows that the bubble has a lower temperature than the black hole horizon ($T_{Bh} = (1-\Lambda r_h^2)/(4\pi r_h)$) if the cosmological constant is reduced in the process. The change in entropy can be calculated directly from the initial black hole, the final black hole and the bubble's entropy as \begin{equation}\label{ds} \Delta S=\pi r_{h-}^2+\frac{4\pi \sigma R^{*2}}{T_+}f_+(R^*)^{1/2}-\pi r_{h+}^2, \end{equation} where we set $G=1$. The outer (cosmological) horizon stays the same in this instantaneous tunneling process, so its contribution cancels out. (The situation is very similar in the anti-de Sitter (AdS) case, though there the cosmological horizon is not involved.) The factor $f_+(R^*)^{1/2}$ compensates for the redshift. Fig.~\ref{entropy} and Fig.~\ref{entropy-ads} show how the entropy changes with $M_-$ in de Sitter and anti-de Sitter spaces respectively. As expected, the entropy always increases. As long as $M_-\lessapprox M_+$, $\Delta S$ decreases slowly with $M_-$. The decrease becomes very quick when $M_-\gtrapprox M_+$. We emphasize that the entropy of the bubble (the second term in Eq.~(\ref{ds})) is crucial for ensuring that the total entropy of the system always increases. Without it, the entropy could decrease, apparently violating the second law of thermodynamics. We note that this is the entropy created during the instanton process, when the wall has just been created but has not yet started expanding. The expansion also creates entropy, as will be discussed in the next section.
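To make Eq.~(\ref{ds}) concrete, the sketch below (Python/NumPy assumed) evaluates it schematically: the horizon radii $r_{h\pm}$ are the smallest positive roots of $f_\pm(r)=0$, and the values of $R^*$, $\bar{\sigma}$ and $U''$ are hypothetical placeholders standing in for the output of the bounce solver sketched in the previous section.
\begin{verbatim}
import numpy as np

G = 1.0
M_p, M_m = 0.02, 0.015
L_p, L_m = 3.0, 0.03

def f(r, M, Lam):
    return 1.0 - 2.0 * G * M / r - Lam * r**2 / 3.0

def r_h(M, Lam):
    # black hole horizon: smallest positive root of f(r) = 0,
    # i.e. of -(Lam/3) r^3 + r - 2GM = 0
    roots = np.roots([-Lam / 3.0, 0.0, 1.0, -2.0 * G * M])
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return min(r for r in real if r > 0)

# hypothetical placeholders for the bounce solution
R_star, sig_bar, Upp = 0.25, 0.3, 1.5
sigma = sig_bar / (2.0 * np.pi * G)
T_plus = np.sqrt(Upp * f(R_star, M_p, L_p)) / (2.0 * np.pi)

dS = (np.pi * r_h(M_m, L_m)**2
      + 4.0 * np.pi * sigma * R_star**2
        * np.sqrt(f(R_star, M_p, L_p)) / T_plus
      - np.pi * r_h(M_p, L_p)**2)
print("Delta S =", dS, "(should be positive)")
\end{verbatim}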
\begin{figure} \centering \includegraphics[width=9cm]{temperature} \caption{The temperatures of the bubble and the black hole as functions of the black hole rest mass. The solid, dashed, dotted and dot-dashed lines are the temperatures at the outer side of the bubble wall, the inner side of the wall, the initial black hole horizon and the final black hole horizon respectively. We set $M_+=0.02$, $\Lambda_+=3$, $\Lambda_-=0.03$ and $G=1$, and vary the black hole rest mass $M_-$. } \label{temperature} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{entropy} \caption{The change in the total entropy of the system as a function of $M_-$ in de Sitter space. The entropy is always increasing. As long as $M_-\lessapprox M_+$, $\Delta S$ decreases slowly with $M_-$. The decrease becomes very quick when $M_-\gtrapprox M_+$. This implies that the black hole prefers to reduce its energy while triggering the vacuum decay. We set $M_+=0.02$, $\Lambda_+=3$, $\Lambda_-=0.03$ and $G=1$, and vary the black hole rest mass $M_-$. } \label{entropy} \end{figure} \begin{figure} \centering \includegraphics[width=9cm]{entropy-ads} \caption{The change in the total entropy of the system as a function of $M_-$ in anti-de Sitter space. The situation is very similar to de Sitter space, i.e.\ the entropy is always increasing. As long as $M_-\lessapprox M_+$, $\Delta S$ decreases slowly with $M_-$. The decrease becomes very quick when $M_-\gtrapprox M_+$. This implies that the black hole prefers to reduce its energy while triggering the vacuum decay. We set $M_+=0.02$, $\Lambda_+=-0.03$, $\Lambda_-=-3$ and $G=1$, and vary the black hole rest mass $M_-$. } \label{entropy-ads} \end{figure} \section{Other mechanisms which increase entropy} Vacuum change, slow or abrupt, is always accompanied by particle production. A change in the value of the cosmological constant is no exception. In our case of the bubble production during tunneling, particles are generated during both the nucleation and the expansion phases. This particle production increases the entropy; however, it is not taken into account in the extended first law of black hole thermodynamics. First we consider the bubble nucleation phase, where particles are extracted from the vacuum during the formation of the domain wall (Fig.~\ref{wall} b). Though the vacuum decay involves gravitational effects, here we just want to clarify the essential physics of particle production and, therefore, in what follows we use the results from flat space. In flat space, the spectrum of created particles is obtained from the Bogoliubov transformation \cite{Yamamoto:1994te} \begin{equation} \label{creation} n_p=\Big| \frac{B_p}{\pi A_p} \Big|^2 , \end{equation} where $p$ is the particle momentum and \begin{eqnarray} A_p&=&\frac{\mu I'_{ip}(x)K_{ip}(y)-\mu_0 I_{ip}(x)K'_{ip}(y)}{\mu I'_{ip}(x)K_{ip}(x)-\mu I_{ip}(x)K'_{ip}(x)}\\ B_p&=&\frac{-\mu K'_{ip}(x)K_{ip}(y)+\mu_0 K_{ip}(x)K'_{ip}(y)}{\mu I'_{ip}(x)K_{ip}(x)-\mu I_{ip}(x)K'_{ip}(x)} , \end{eqnarray} with $x=\mu R$ and $y=\mu_0 R$. Here $\mu_0$ and $\mu$ are the masses of a particle in the false and true vacuum regions respectively. The functions $I_\nu$ and $K_\nu$ are the modified Bessel functions of the first and second kind, while $i$ is the imaginary unit. Evidently, particles are created whenever $\mu\neq \mu_0$, so this instanton process is irreversible. The entropy can be calculated from the Gibbs entropy formula \begin{equation} S_G=-N k\sum_i p_i \log (p_i), \end{equation} where $N$ is the particle number and $p_i$ is the probability for a particle to be in the state $i$. These particles also increase the entropy, ensuring that the second law of thermodynamics is satisfied. Second, the bubble expansion also creates particles (Fig.~\ref{wall} c) \cite{Yamamoto:1994te, Maziashvili:2003sk, Maziashvili:2003kj}. The particle spectrum can again be calculated using Eq.~(\ref{creation}), with $A_p$ and $B_p$ replaced by \begin{eqnarray} A_p&=&\frac{\mu H^{(1)}_{ip}{}'(x)H^{(2)}_{ip}(y)-\mu_0 H^{(1)}_{ip}(x)H^{(2)}_{ip}{}'(y)}{\mu_0 H^{(1)}_{ip}{}'(y)H^{(2)}_{ip}(y)-\mu_0 H^{(1)}_{ip}(y)H^{(2)}_{ip}{}'(y)}\\ B_p&=&e^{-p\pi}\frac{-\mu H^{(1)}_{ip}{}'(x)H^{(1)}_{ip}(y)+\mu_0 H^{(1)}_{ip}(x)H^{(1)}_{ip}{}'(y)}{\mu_0 H^{(1)}_{ip}{}'(y)H^{(2)}_{ip}(y)-\mu_0 H^{(1)}_{ip}(y)H^{(2)}_{ip}{}'(y)} . \end{eqnarray} Here $H_\nu^{(1)}$ and $H_\nu^{(2)}$ are the Hankel functions of the first and second kind. Thus, though the bubble expands like a classical object, the fields propagating in its background change their vacuum state, which leads to particle production. The particle spectrum is not completely thermal (Eq.~(\ref{creation})); nevertheless, the entropy increases during this process.
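To give a feel for the spectrum, Eq.~(\ref{creation}) with the nucleation-phase coefficients can be evaluated directly. The sketch below uses Python's arbitrary-precision \texttt{mpmath} library, which supports Bessel functions of imaginary order; the derivatives are obtained from the standard recurrences $2I'_\nu=I_{\nu-1}+I_{\nu+1}$ and $2K'_\nu=-(K_{\nu-1}+K_{\nu+1})$. The masses and the wall radius are purely illustrative.
\begin{verbatim}
import mpmath as mp

def Ip(nu, x):
    # I'_nu(x) from the recurrence 2 I'_nu = I_{nu-1} + I_{nu+1}
    return (mp.besseli(nu - 1, x) + mp.besseli(nu + 1, x)) / 2

def Kp(nu, x):
    # K'_nu(x) from the recurrence 2 K'_nu = -(K_{nu-1} + K_{nu+1})
    return -(mp.besselk(nu - 1, x) + mp.besselk(nu + 1, x)) / 2

def n_p(p, mu, mu0, R):
    # occupation number of Eq. (creation), nucleation phase
    nu = 1j * p
    x, y = mu * R, mu0 * R
    den = (mu * Ip(nu, x) * mp.besselk(nu, x)
           - mu * mp.besseli(nu, x) * Kp(nu, x))
    A = (mu * Ip(nu, x) * mp.besselk(nu, y)
         - mu0 * mp.besseli(nu, x) * Kp(nu, y)) / den
    B = (-mu * Kp(nu, x) * mp.besselk(nu, y)
         + mu0 * mp.besselk(nu, x) * Kp(nu, y)) / den
    return abs(B / (mp.pi * A))**2

for p in (0.5, 1.0, 2.0):   # illustrative momenta
    print(p, n_p(p, mu=1.0, mu0=2.0, R=1.0))
\end{verbatim}
As a consistency check, $B_p$ (and hence $n_p$) vanishes identically when $\mu=\mu_0$, i.e.\ no particles are created if the mass does not change.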
If we repeat the same procedure with $\Lambda$ increasing instead of decreasing, we find that the bubble wall's energy density cannot be positive, so such a configuration is unphysical. We can also extend the evolution of the bubble to extreme cases. As the bubble wall approaches the cosmological horizon size, the whole horizon volume gets converted into a new vacuum with a lower value of the cosmological constant, and the entropy increases further, by \begin{equation} \pi r_{c-}^2 -\pi r_{c+}^2 , \end{equation} where $r_{c+}$ and $r_{c-}$ are the radii of the cosmological horizon in the false and true vacuum respectively. Since the entropy increases, this is also an irreversible process.
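The size of this entropy jump is easy to estimate numerically. A short sketch (Python/NumPy assumed; the masses and cosmological constants are the illustrative values used in the figures):
\begin{verbatim}
import numpy as np

G = 1.0

def r_c(M, Lam):
    # cosmological horizon: largest positive root of f(r) = 0
    roots = np.roots([-Lam / 3.0, 0.0, 1.0, -2.0 * G * M])
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return max(r for r in real if r > 0)

# false vacuum (M_+, Lambda_+) -> true vacuum (M_-, Lambda_-)
gain = np.pi * r_c(0.015, 0.03)**2 - np.pi * r_c(0.02, 3.0)**2
print("horizon entropy gain =", gain)
\end{verbatim}
Since $\Lambda_-<\Lambda_+$, the new cosmological horizon is much larger, and with these parameters the horizon term dominates over all the earlier contributions.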
However, once the wall crosses the cosmological horizon and leaves it behind, it takes its entropy away with it. The domain wall's entropy then disappears and leaves no (classical) imprint on the matter inside the cosmological horizon. \begin{figure} \centering \includegraphics[width=7cm]{entropy-increasing} \caption{ A schematic picture of the entropy evolution in time. The black hole triggers the vacuum decay through an instanton process at $t_s$. The entropy increases according to Eq.~(\ref{ds}), and in addition due to the entropy of particles created during the vacuum tunneling. The bubble wall produced during the tunneling keeps expanding and generating more entropy, since more particles are created during the expansion phase. At $t_c$ the wall grows to the size of the old cosmic horizon and keeps pushing the horizon outward to its final size. After $t_c$, entropy is produced by both particle creation and the expansion of the cosmic horizon. Finally the wall leaves the new horizon and the entropy remains constant. The horizontal dashed line shows the entropy counted in the first law of black hole thermodynamics, which is always lower than the actual entropy. } \label{entropy-increasing} \end{figure} \section{Conclusions} In this letter we have studied the extended black hole thermodynamics by discussing a realistic model in which the cosmological constant is variable. We have found that the vacuum decay triggered by an instanton has a temperature lower than that of the black hole horizon. During this process more entropy is produced than the change in the black hole entropy alone would indicate. The modified second law of thermodynamics for black holes therefore gives a complete picture only if the entropy of the environment is included (see also \cite{Hu:2019lcy}). To summarize, the entropy increases with time schematically as shown in Fig.~\ref{entropy-increasing}. If we neglect Hawking radiation, the total entropy is constant at first. The black hole then triggers the vacuum decay through an instanton process at $t_s$. The entropy increases according to Eq.~(\ref{ds}), and in addition due to the entropy of particles created during the vacuum tunneling. The bubble wall produced during the tunneling keeps expanding and generating more entropy, since more particles are created during the expansion phase. At $t_c$ the wall grows to the size of the old cosmic horizon (associated with the higher value of the cosmological constant) and keeps pushing the horizon outward to its final size (associated with the lower value of the cosmological constant). After $t_c$, entropy is produced by both particle creation and the expansion of the cosmic horizon. Finally the wall leaves the new horizon and the entropy remains constant. The entropy created between $t_s$ and $t_c$ is not included in the first law of the extended black hole thermodynamics. The first law assumes that the black hole entropy comes purely from the horizon area, which is always lower than the total entropy found here. The situation is very similar in the anti-de Sitter case, though there the cosmological horizon is not involved. Our analysis thus suggests that environmental effects are crucial in the context of the extended black hole thermodynamics, and they call for a more careful interpretation of the holographic principle \cite{holography}. We note that such a sharpening of the idea of holography has recently been suggested in different contexts \cite{Freidel:2013jfa}, \cite{Almheiri:2020cfm}. Finally, we note that our analysis is valid both for anti-de Sitter (AdS) and de Sitter (dS) spaces, which we find significant, given their radically different causal structures and holographic formulations \cite{ads}, \cite{ds}. \begin{acknowledgments} We thank D.~Kubiznak and R.~B.~Mann for comments. D.~C.~Dai is supported by the National Natural Science Foundation of China (Grant No.\ 11775140). D.~M. is supported by the Julian Schwinger Foundation and the Department of Energy (under grant DE-SC0020262). D.~S. is partially supported by the US National Science Foundation, under Grant No.\ PHY-2014021. \end{acknowledgments}